@Wong, Eric YW I think there are two parts to your question here: how content filtering handles your prompts, and how to log your questions or prompts.
Azure OpenAI does not use any information from users to train the models, but content filtering is always in place: prompts passed to the API are checked for harmful content, and generated content is screened before it is returned to the customer as a completion.
The behavior of content filtering can be summarized in the following key points:
- Prompts that are deemed inappropriate will return an HTTP 400 error.
- Non-streaming completions calls won't return any content when the content is filtered. The `finish_reason` value will be set to `content_filter`. In rare cases with long responses, a partial result can be returned; in these cases, the `finish_reason` will be updated.
- For streaming completions calls, segments will be returned to the user as they're completed. The service will continue streaming until a stop token is reached, the length limit is hit, or harmful content is detected. (A hedged code sketch of handling these responses follows below.)
On the second part of the question, about logging of prompts and completions: the service does not store any data unless diagnostic logging is turned on for the resource. There is a sample repo that helps users set up their resource to record this data. If you are looking to review all data passed to the API, you might want to set this up for your resource to record all requests.
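As a rough sketch of turning on diagnostic logging programmatically, assuming the `azure-identity` and `azure-mgmt-monitor` packages, an existing Log Analytics workspace, and placeholder resource IDs (the log category names here are assumptions; check the categories your resource actually exposes):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient

# Placeholder identifiers -- substitute your own subscription/resource details.
SUBSCRIPTION_ID = "<subscription-id>"
OPENAI_RESOURCE_ID = (
    "/subscriptions/<subscription-id>/resourceGroups/<rg>"
    "/providers/Microsoft.CognitiveServices/accounts/<openai-resource>"
)
WORKSPACE_ID = (
    "/subscriptions/<subscription-id>/resourceGroups/<rg>"
    "/providers/Microsoft.OperationalInsights/workspaces/<workspace>"
)

client = MonitorManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Create (or update) a diagnostic setting that sends the resource's logs
# to the Log Analytics workspace. The category names below are assumptions.
client.diagnostic_settings.create_or_update(
    resource_uri=OPENAI_RESOURCE_ID,
    name="openai-diagnostics",
    parameters={
        "workspace_id": WORKSPACE_ID,
        "logs": [
            {"category": "Audit", "enabled": True},
            {"category": "RequestResponse", "enabled": True},
        ],
    },
)
```

Once the setting is in place, the collected logs can be queried from the Log Analytics workspace.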
I have also answered a previous thread on data privacy that could be helpful if you have similar questions. Please feel free to check it out. Thanks!!
If this answers your query, do click Accept Answer and Yes for "Was this answer helpful". And if you have any further query, do let us know.