False Positives in Azure Content Safety Image Moderation (Paid Plan)

Naresh Khuriwal 0 Reputation points
2024-09-24T16:59:22.3866667+00:00

We are using the paid version of Azure AI Content Safety to moderate images, but we’re encountering multiple false positives—images flagged as inappropriate when they shouldn’t be.

Is there a way to fine-tune or adjust the model to reduce these false positives? Are there specific settings or best practices to improve moderation accuracy while using the paid plan?

We’d like to cut down on these false positives without losing accuracy on genuinely harmful content. Has anyone else experienced this, and how did you resolve it? Any recommendations or steps to tune the service would be greatly appreciated.
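For context, one approach we’re considering is client-side severity thresholding: the image analysis response reports a severity per harm category (0, 2, 4, or 6), and we would only block when a category meets a per-category threshold. A minimal sketch of that post-processing step is below — the threshold values are purely illustrative, and the input mirrors the shape of the `analyze_image` response rather than calling the service itself:

```python
# Illustrative per-category thresholds; raising a threshold makes that
# category more permissive and should reduce false positives for it.
DEFAULT_THRESHOLDS = {"Hate": 4, "SelfHarm": 4, "Sexual": 4, "Violence": 4}

def should_block(categories_analysis, thresholds=DEFAULT_THRESHOLDS):
    """Return True if any category's severity meets or exceeds its threshold.

    categories_analysis: list of {"category": str, "severity": int} dicts,
    mirroring the categories returned by the image analysis response.
    """
    for item in categories_analysis:
        threshold = thresholds.get(item["category"])
        if threshold is not None and item["severity"] >= threshold:
            return True
    return False

# Example: severity 2 in Violence passes, severity 4 is blocked.
print(should_block([{"category": "Violence", "severity": 2}]))  # False
print(should_block([{"category": "Violence", "severity": 4}]))  # True
```

Is this the recommended way to tune sensitivity, or is there a server-side setting we’re missing?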

Thank you!

Azure AI services
