Hello,

Thanks for reaching out. Please refer to the following article on Azure OpenAI data, privacy, and security:
https://video2.skills-academy.com/en-us/legal/cognitive-services/openai/data-privacy
The article above explains how data you provide to the Azure OpenAI Service is processed, used, and stored. Azure OpenAI stores and processes data to provide the service, to monitor for abusive use, and to develop and improve the quality of Azure's Responsible AI systems. Please also see the Microsoft Products and Services Data Protection Addendum, which governs data processing by the Azure OpenAI Service except as otherwise provided in the applicable Product Terms.
Please also see Microsoft's General Data Protection Regulation (GDPR) documentation: https://video2.skills-academy.com/en-us/legal/gdpr
Regarding your question about what data Azure OpenAI processes and how it is processed:
What data does the Azure OpenAI Service process?
Azure OpenAI processes the following types of data:
- Text prompts, queries, and responses submitted by the user via the completions, search, and embeddings operations.
- Training and validation data. You can provide your own training data, consisting of prompt-completion pairs, for the purposes of fine-tuning an OpenAI model.
- Results data from the training process. After training a fine-tuned model, the service outputs metadata on the job, including tokens processed and validation scores at each step.
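To make the training-data format concrete, here is a minimal sketch of preparing a fine-tuning file as prompt-completion pairs. The pairs themselves and the file path are illustrative placeholders, not real values; the JSON Lines layout (one `{"prompt": ..., "completion": ...}` object per line) is the format the fine-tuning operation expects.

```python
import json
import os
import tempfile

# Illustrative prompt-completion pairs (placeholder data, not a real dataset).
pairs = [
    {"prompt": "Translate 'hello' to French ->", "completion": " bonjour"},
    {"prompt": "Translate 'goodbye' to French ->", "completion": " au revoir"},
]

def write_training_file(path, pairs):
    """Write prompt-completion pairs as JSON Lines: one JSON object per line."""
    with open(path, "w", encoding="utf-8") as f:
        for pair in pairs:
            f.write(json.dumps(pair) + "\n")

# Hypothetical output location for the training file.
training_path = os.path.join(tempfile.gettempdir(), "training.jsonl")
write_training_file(training_path, pairs)
```

You would then upload this file to the service and reference it when creating a fine-tuning job; the job's results metadata (tokens processed, validation scores) is returned once training completes.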
How does the Azure OpenAI Service process data?
The diagram in the linked article illustrates how your data is processed. It covers three different types of processing:
- How the Azure OpenAI Service creates a fine-tuned (custom) model with your training data;
- How the Azure OpenAI Service processes your text prompts to generate completions, embeddings, and search results; and
- How the Azure OpenAI Service and Microsoft personnel analyze prompts and completions for abuse, misuse, or harmful content generation.
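For context on the second processing type, here is a sketch of how a completions request to the Azure OpenAI REST API is shaped. The resource name, deployment name, API version, and key below are placeholders, not real values; the point is that the `prompt` field in the request body is the user-submitted data the service processes (and, by default, may log for abuse monitoring).

```python
# Placeholder values -- substitute your own resource, deployment, and key.
resource = "my-resource"        # assumed Azure OpenAI resource name
deployment = "my-deployment"    # assumed model deployment name
api_version = "2023-05-15"      # assumed API version

# Requests are routed to a deployment-specific endpoint on your resource.
url = (
    f"https://{resource}.openai.azure.com/openai/deployments/"
    f"{deployment}/completions?api-version={api_version}"
)

# The prompt in the body is the text data described above.
payload = {"prompt": "Say hello", "max_tokens": 16}
headers = {"api-key": "<your-key>", "Content-Type": "application/json"}

# Sending it would look like: requests.post(url, json=payload, headers=headers)
# (not executed here, since it requires a live resource and valid key).
```

The response contains the generated completion; both the prompt and the completion are the data covered by the storage and abuse-monitoring behavior described in the linked article.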
You may also be interested in this question: Can a customer opt out of the logging and human review process?
Some customers in highly regulated industries have low-risk use cases that process sensitive data with little likelihood of misuse. Because of the nature of the data or the use case, these customers do not want, or do not have the right, to permit Microsoft to process such data for abuse detection, due to their internal policies or applicable legal regulations.
To empower its enterprise customers and to strike a balance between regulatory/privacy needs and abuse prevention, the Azure OpenAI Service includes a set of Limited Access features that give customers the option to modify the following:
- abuse monitoring
- content filtering
These Limited Access features enable customers to opt out of the human review and data logging processes, subject to the eligibility criteria governed by Microsoft's Limited Access framework. Customers who meet those criteria and have a low-risk use case can apply to opt out of both data logging and the human review process. This gives trusted customers with low-risk scenarios the data and privacy controls they require, while also allowing Microsoft to offer Azure OpenAI models to all other customers in a way that minimizes the risk of harm and abuse.
If Microsoft approves a customer's request to access Limited Access features with the capability to (i) modify abuse monitoring and (ii) modify content filtering, then Microsoft will not store the associated request or response. Since no request or response data is stored at rest in the Service Results Store in this case, the human review process is no longer feasible. Therefore, both customer-managed keys (CMK) and Customer Lockbox are deemed out of scope for harm and abuse detection.
I hope this information is helpful. Please let me know if you have any other questions.
Regards,
Yutong
Please accept the answer and vote 'Yes' if you found it helpful, to support the community. Thanks a lot.