Summary

This module provided an introduction to AI security testing, from understanding the unique challenges that AI systems present to planning and executing effective red teaming exercises. As AI continues to evolve, so must the approaches used to secure it, making AI red teaming essential for maintaining the integrity and trustworthiness of AI solutions.

Further reading

The following articles and video presentations provide more information about Microsoft's approach to AI red teaming.

Azure OpenAI Service

Azure OpenAI Service content filtering

How Microsoft Approaches AI Red Teaming

Microsoft AI Red Team building future of safer AI

Red teams think like hackers to help keep AI safe