Follow the responsible AI standard playbook


Technological advances have made artificial intelligence an integral part of our daily lives. While AI opens up many opportunities and possibilities, it's also prone to errors that can lead to harm. To protect users from these potential risks, practical guidelines are needed to help steer the creation and application of AI toward more beneficial and fair outcomes.

Microsoft's Responsible AI Standard Playbook was developed to bridge the policy gap in AI, providing concrete guidance for upholding the company's AI principles. This living document, now in its second version, is part of an ongoing effort to refine AI norms and practices. It's designed to evolve with new insights and regulations, contributing to the global dialogue on responsible AI development. Microsoft encourages collaboration across sectors to further this initiative, emphasizing the need for principled, actionable standards in AI deployment.

The Responsible AI Standard also helps users determine whether an AI system is created and implemented with responsible AI principles in mind. It's composed of two key aspects:

Goals: Goals, or outcomes, are the conditions that must be achieved when creating an AI system. They break down each of the six responsible AI principles, such as accountability, into specific goals like impact assessment, data governance, and human oversight.

Accountability
    A1: Impact assessment
    A2: Oversight of significant adverse impacts
    A3: Fit for purpose
    A4: Data governance and management
    A5: Human oversight and control
Transparency
    T1: System intelligibility for decision making
    T2: Communication to stakeholders
    T3: Disclosure of AI interaction
Fairness
    F1: Quality of service
    F2: Allocation of resources and opportunities
    F3: Minimization of stereotyping, demeaning, and erasing outputs
Reliability & Safety
    RS1: Reliability and safety guidance
    RS2: Failures and remediations
    RS3: Ongoing monitoring, feedback, and evaluation
Privacy & Security
    PS1: Privacy Standard compliance
    PS2: Security Policy compliance
Inclusiveness
    I1: Accessibility Standards compliance
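To see the structure of the Standard at a glance, the principle-to-goal mapping above can be captured as a simple lookup table. This is a purely illustrative sketch, not an official tool or API; only the goal codes and names come from the Standard itself.

```python
# Illustrative sketch: the Responsible AI Standard's principles and goals
# expressed as a lookup table. The data comes from the table above; the
# code structure is hypothetical, not part of the Standard.
RESPONSIBLE_AI_GOALS = {
    "Accountability": {
        "A1": "Impact assessment",
        "A2": "Oversight of significant adverse impacts",
        "A3": "Fit for purpose",
        "A4": "Data governance and management",
        "A5": "Human oversight and control",
    },
    "Transparency": {
        "T1": "System intelligibility for decision making",
        "T2": "Communication to stakeholders",
        "T3": "Disclosure of AI interaction",
    },
    "Fairness": {
        "F1": "Quality of service",
        "F2": "Allocation of resources and opportunities",
        "F3": "Minimization of stereotyping, demeaning, and erasing outputs",
    },
    "Reliability & Safety": {
        "RS1": "Reliability and safety guidance",
        "RS2": "Failures and remediations",
        "RS3": "Ongoing monitoring, feedback, and evaluation",
    },
    "Privacy & Security": {
        "PS1": "Privacy Standard compliance",
        "PS2": "Security Policy compliance",
    },
    "Inclusiveness": {
        "I1": "Accessibility Standards compliance",
    },
}


def goals_for(principle: str) -> dict:
    """Return the goal codes and names for a given principle."""
    return RESPONSIBLE_AI_GOALS.get(principle, {})


print(goals_for("Transparency"))
```

A table like this makes it easy, for example, to enumerate every goal a review team must address under a given principle.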

Requirements: Each goal is supported by specific requirements: the steps a team must follow to ensure the AI system fulfills the goal at every stage of its lifecycle. The number of requirements differs from goal to goal. For instance, under the principle of Transparency, there's a goal titled 'Communication to stakeholders.' One requirement for achieving this goal is 'T2.1 Identify,' which involves two tasks: first, identifying the stakeholders who decide whether to deploy the system for particular tasks, and second, identifying the stakeholders who develop or integrate systems that work with it. These stakeholders should be documented in the Impact Assessment template.
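The goal-requirement-task hierarchy described above could be tracked as a simple checklist. The following sketch is hypothetical: the `Requirement` class and its methods are illustrative assumptions, while the T2.1 code and its two tasks come from the example in the text.

```python
from dataclasses import dataclass, field


# Illustrative sketch only: modeling a requirement and its tasks as a
# checklist. The class is hypothetical, not an official Microsoft tool.
@dataclass
class Requirement:
    code: str
    description: str
    tasks: list
    completed: set = field(default_factory=set)

    def complete(self, task_index: int) -> None:
        """Mark one task (by index) as done."""
        self.completed.add(task_index)

    @property
    def fulfilled(self) -> bool:
        """A requirement is fulfilled only when every task is done."""
        return len(self.completed) == len(self.tasks)


t2_1 = Requirement(
    code="T2.1",
    description="Identify stakeholders",
    tasks=[
        "Identify stakeholders who decide whether to deploy the system "
        "for particular tasks",
        "Identify stakeholders who develop or integrate systems that "
        "work with it",
    ],
)
t2_1.complete(0)
print(t2_1.fulfilled)  # False: the second task is still open
```

Structuring requirements this way makes it straightforward to report which goals remain unmet at each lifecycle stage.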

In addition to the two aspects previously mentioned, the standard also provides tools and resources to help people implement the requirements effectively and efficiently. The standard includes best practices, checklists, templates, training modules, and case studies.

Though the goals and requirements are already detailed, the Responsible AI Standard continues to evolve. It's regularly updated based on feedback from internal and external stakeholders, along with new research and insights. Users and creators of AI systems are expected to continuously monitor and measure their AI systems, and to compare the best practices they learn against these existing guidelines.

Want to learn more about the standards? Download the guide by visiting: Empowering responsible AI practices | Microsoft AI.