Quickstart: Use the Face service

Important

If you use Microsoft products or services to process biometric data, you're responsible for: (i) providing notice to data subjects, including with respect to retention periods and destruction; (ii) obtaining consent from data subjects; and (iii) deleting the biometric data, all as appropriate and required under applicable data protection requirements. "Biometric data" has the meaning set out in Article 4 of the GDPR and, if applicable, equivalent terms in other data protection requirements. For related information, see Data and privacy for Face.

Note

To support our Responsible AI principles, access to the Face service is limited based on eligibility and usage criteria. The Face service is only available to Microsoft managed customers and partners. Use the Face Recognition intake form to apply for access. For more information, see the Face limited access page.

Get started with facial recognition using the Face client library for .NET. The Azure AI Face service provides access to advanced algorithms for detecting and recognizing human faces in images. Follow these steps to install the package and try out the example code for basic face identification using remote images.

Reference documentation | Library source code | Package (NuGet) | Samples

Prerequisites

  • Azure subscription - Create one for free
  • The Visual Studio IDE or current version of .NET Core
  • Once you have your Azure subscription, create a Face resource in the Azure portal to get your key and endpoint. After it deploys, select Go to resource.
    • You need the key and endpoint from the resource you create to connect your application to the Face API.
    • You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.

Create environment variables

In this example, write your credentials to environment variables on the local machine that runs the application.

Go to the Azure portal. If the resource you created in the Prerequisites section deployed successfully, select Go to resource under Next Steps. You can find your key and endpoint under Resource Management on the Keys and Endpoint page. Your resource key isn't the same as your Azure subscription ID.
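
If you prefer the Azure CLI, you can look up the same endpoint and key from the command line; the resource and resource group names below are placeholders for your own:

az cognitiveservices account show --name <your-face-resource> --resource-group <your-resource-group> --query properties.endpoint
az cognitiveservices account keys list --name <your-face-resource> --resource-group <your-resource-group>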

To set the environment variables for your key and endpoint, open a console window and follow the instructions for your operating system and development environment.

  • To set the FACE_APIKEY environment variable, replace <your_key> with one of the keys for your resource.
  • To set the FACE_ENDPOINT environment variable, replace <your_endpoint> with the endpoint for your resource.

Important

If you use an API key, store it securely somewhere else, such as in Azure Key Vault. Don't include the API key directly in your code, and never post it publicly.

For more information about AI services security, see Authenticate requests to Azure AI services.

setx FACE_APIKEY <your_key>
setx FACE_ENDPOINT <your_endpoint>
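
The setx commands above apply to Windows. On Linux or macOS, a rough equivalent for the current shell session is shown below; add the lines to your shell profile to make them persistent:

export FACE_APIKEY=<your_key>
export FACE_ENDPOINT=<your_endpoint>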

After you add the environment variables, you may need to restart any running programs that will read them, including the console window.

Identify and verify faces

  1. Create a new C# application

    Use Visual Studio to create a new .NET Core application.

    Install the client library

    Once you've created a new project, install the client library by right-clicking on the project solution in Solution Explorer and selecting Manage NuGet Packages. In the package manager that opens, select Browse, check Include prerelease, and search for Azure.AI.Vision.Face. Select the latest version, and then select Install.
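
    If you prefer the command line, a rough equivalent using the .NET CLI (assuming a console project created with dotnet new console) is:

    dotnet add package Azure.AI.Vision.Face --prerelease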

  2. Add the following code to the Program.cs file.

    Note

    Some of these functions won't work if you haven't received access to the Face service through the intake form.

    using System.Net.Http.Headers;
    using System.Text;
    
    using Azure;
    using Azure.AI.Vision.Face;
    using Newtonsoft.Json;
    using Newtonsoft.Json.Linq;
    
    namespace FaceQuickstart
    {
        class Program
        {
            static readonly string largePersonGroupId = Guid.NewGuid().ToString();
    
            // URL path for the images.
            const string IMAGE_BASE_URL = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/";
    
            // From your Face subscription in the Azure portal, get your subscription key and endpoint.
            static readonly string SUBSCRIPTION_KEY = Environment.GetEnvironmentVariable("FACE_APIKEY") ?? "<apikey>";
            static readonly string ENDPOINT = Environment.GetEnvironmentVariable("FACE_ENDPOINT") ?? "<endpoint>";
    
            static void Main(string[] args)
            {
                // Recognition model 4 was released in 2021 February.
                // It is recommended since its accuracy is improved
                // on faces wearing masks compared with model 3,
                // and its overall accuracy is improved compared
                // with models 1 and 2.
                FaceRecognitionModel RECOGNITION_MODEL4 = FaceRecognitionModel.Recognition04;
    
                // Authenticate.
                FaceClient client = Authenticate(ENDPOINT, SUBSCRIPTION_KEY);
    
                // Identify - recognize a face(s) in a large person group (a large person group is created in this example).
                IdentifyInLargePersonGroup(client, IMAGE_BASE_URL, RECOGNITION_MODEL4).Wait();
    
                Console.WriteLine("End of quickstart.");
            }
    
            /*
             *	AUTHENTICATE
             *	Uses subscription key and region to create a client.
             */
            public static FaceClient Authenticate(string endpoint, string key)
            {
                return new FaceClient(new Uri(endpoint), new AzureKeyCredential(key));
            }
    
            // Detect faces from image url for recognition purposes. This is a helper method for other functions in this quickstart.
            // Parameter `returnFaceId` of `DetectAsync` must be set to `true` (by default) for recognition purposes.
            // Parameter `returnFaceAttributes` is set to include the QualityForRecognition attribute. 
            // Recognition model must be set to recognition_03 or recognition_04 as a result.
            // Result faces with insufficient quality for recognition are filtered out. 
            // The field `faceId` in returned `DetectedFace`s will be used in Verify and Identify.
            // It will expire 24 hours after the detection call.
            private static async Task<List<FaceDetectionResult>> DetectFaceRecognize(FaceClient faceClient, string url, FaceRecognitionModel recognition_model)
            {
                // Detect faces from image URL.
                Response<IReadOnlyList<FaceDetectionResult>> response = await faceClient.DetectAsync(new Uri(url), FaceDetectionModel.Detection03, recognition_model, returnFaceId: true, [FaceAttributeType.QualityForRecognition]);
                IReadOnlyList<FaceDetectionResult> detectedFaces = response.Value;
                List<FaceDetectionResult> sufficientQualityFaces = new List<FaceDetectionResult>();
                foreach (FaceDetectionResult detectedFace in detectedFaces)
                {
                    var faceQualityForRecognition = detectedFace.FaceAttributes.QualityForRecognition;
                    if (faceQualityForRecognition.HasValue && (faceQualityForRecognition.Value != QualityForRecognition.Low))
                    {
                        sufficientQualityFaces.Add(detectedFace);
                    }
                }
                Console.WriteLine($"{detectedFaces.Count} face(s) with {sufficientQualityFaces.Count} having sufficient quality for recognition detected from image `{Path.GetFileName(url)}`");
    
                return sufficientQualityFaces;
            }
    
            /*
             * IDENTIFY FACES
             * To identify faces, you need to create and define a large person group.
             * The Identify operation takes one or several face IDs from DetectedFace or PersistedFace and a LargePersonGroup and returns 
             * a list of Person objects that each face might belong to. Returned Person objects are wrapped as Candidate objects, 
             * which have a prediction confidence value.
             */
            public static async Task IdentifyInLargePersonGroup(FaceClient client, string url, FaceRecognitionModel recognitionModel)
            {
                Console.WriteLine("========IDENTIFY FACES========");
                Console.WriteLine();
    
                // Create a dictionary for all your images, grouping similar ones under the same key.
                Dictionary<string, string[]> personDictionary =
                    new Dictionary<string, string[]>
                        { { "Family1-Dad", new[] { "Family1-Dad1.jpg", "Family1-Dad2.jpg" } },
                          { "Family1-Mom", new[] { "Family1-Mom1.jpg", "Family1-Mom2.jpg" } },
                          { "Family1-Son", new[] { "Family1-Son1.jpg", "Family1-Son2.jpg" } }
                        };
                // A group photo that includes some of the persons you seek to identify from your dictionary.
                string sourceImageFileName = "identification1.jpg";
    
                // Create a large person group.
                Console.WriteLine($"Create a person group ({largePersonGroupId}).");
                HttpClient httpClient = new HttpClient();
                httpClient.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", SUBSCRIPTION_KEY);
                using (var content = new ByteArrayContent(Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(new Dictionary<string, object> { ["name"] = largePersonGroupId, ["recognitionModel"] = recognitionModel.ToString() }))))
                {
                    content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
                    await httpClient.PutAsync($"{ENDPOINT}/face/v1.0/largepersongroups/{largePersonGroupId}", content);
                }
                // The similar faces will be grouped into a single large person group person.
                foreach (var groupedFace in personDictionary.Keys)
                {
                    // Limit TPS
                    await Task.Delay(250);
                    string? personId = null;
                    using (var content = new ByteArrayContent(Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(new Dictionary<string, object> { ["name"] = groupedFace }))))
                    {
                        content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
                        using (var response = await httpClient.PostAsync($"{ENDPOINT}/face/v1.0/largepersongroups/{largePersonGroupId}/persons", content))
                        {
                            string contentString = await response.Content.ReadAsStringAsync();
                            personId = (string?)(JsonConvert.DeserializeObject<Dictionary<string, object>>(contentString)?["personId"]);
                        }
                    }
                    Console.WriteLine($"Create a person group person '{groupedFace}'.");
    
                    // Add face to the large person group person.
                    foreach (var similarImage in personDictionary[groupedFace])
                    {
                        Console.WriteLine($"Check whether image is of sufficient quality for recognition");
                        Response<IReadOnlyList<FaceDetectionResult>> response = await client.DetectAsync(new Uri($"{url}{similarImage}"), FaceDetectionModel.Detection03, recognitionModel, returnFaceId: false, [FaceAttributeType.QualityForRecognition]);
                        IReadOnlyList<FaceDetectionResult> detectedFaces1 = response.Value;
                        bool sufficientQuality = true;
                        foreach (var face1 in detectedFaces1)
                        {
                            var faceQualityForRecognition = face1.FaceAttributes.QualityForRecognition;
                            //  Only "high" quality images are recommended for person enrollment
                            if (faceQualityForRecognition.HasValue && (faceQualityForRecognition.Value != QualityForRecognition.High))
                            {
                                sufficientQuality = false;
                                break;
                            }
                        }
    
                        if (!sufficientQuality)
                        {
                            continue;
                        }
    
                        if (detectedFaces1.Count != 1)
                        {
                            continue;
                        }
    
                        // add face to the large person group
                        Console.WriteLine($"Add face to the person group person({groupedFace}) from image `{similarImage}`");
                        using (var content = new ByteArrayContent(Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(new Dictionary<string, object> { ["url"] = $"{url}{similarImage}" }))))
                        {
                            content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
                            await httpClient.PostAsync($"{ENDPOINT}/face/v1.0/largepersongroups/{largePersonGroupId}/persons/{personId}/persistedfaces?detectionModel=detection_03", content);
                        }
                    }
                }
    
                // Start to train the large person group.
                Console.WriteLine();
                Console.WriteLine($"Train person group {largePersonGroupId}.");
                await httpClient.PostAsync($"{ENDPOINT}/face/v1.0/largepersongroups/{largePersonGroupId}/train", null);
    
                // Wait until the training is completed.
                while (true)
                {
                    await Task.Delay(1000);
                    string? trainingStatus = null;
                    using (var response = await httpClient.GetAsync($"{ENDPOINT}/face/v1.0/largepersongroups/{largePersonGroupId}/training"))
                    {
                        string contentString = await response.Content.ReadAsStringAsync();
                        trainingStatus = (string?)(JsonConvert.DeserializeObject<Dictionary<string, object>>(contentString)?["status"]);
                    }
                    Console.WriteLine($"Training status: {trainingStatus}.");
                    if ("succeeded".Equals(trainingStatus)) { break; }
                }
                Console.WriteLine();
    
                Console.WriteLine("Pausing for 60 seconds to avoid triggering rate limit on free account...");
                await Task.Delay(60000);
    
                List<Guid> sourceFaceIds = new List<Guid>();
                // Detect faces from source image url.
                List<FaceDetectionResult> detectedFaces = await DetectFaceRecognize(client, $"{url}{sourceImageFileName}", recognitionModel);
    
                // Add detected faceId to sourceFaceIds.
                foreach (var detectedFace in detectedFaces) { sourceFaceIds.Add(detectedFace.FaceId.Value); }
    
                // Identify the faces in a large person group.
                List<Dictionary<string, object>> identifyResults = new List<Dictionary<string, object>>();
                using (var content = new ByteArrayContent(Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(new Dictionary<string, object> { ["faceIds"] = sourceFaceIds, ["largePersonGroupId"] = largePersonGroupId }))))
                {
                    content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
                    using (var response = await httpClient.PostAsync($"{ENDPOINT}/face/v1.0/identify", content))
                    {
                        string contentString = await response.Content.ReadAsStringAsync();
                        identifyResults = JsonConvert.DeserializeObject<List<Dictionary<string, object>>>(contentString) ?? [];
                    }
                }
    
                foreach (var identifyResult in identifyResults)
                {
                    string faceId = (string)identifyResult["faceId"];
                    List<Dictionary<string, object>> candidates = JsonConvert.DeserializeObject<List<Dictionary<string, object>>>(((JArray)identifyResult["candidates"]).ToString()) ?? [];
                    if (candidates.Count == 0)
                    {
                        Console.WriteLine($"No person is identified for the face in: {sourceImageFileName} - {faceId},");
                        continue;
                    }
    
                    string? personName = null;
                    using (var response = await httpClient.GetAsync($"{ENDPOINT}/face/v1.0/largepersongroups/{largePersonGroupId}/persons/{candidates.First()["personId"]}"))
                    {
                        string contentString = await response.Content.ReadAsStringAsync();
                        personName = (string?)(JsonConvert.DeserializeObject<Dictionary<string, object>>(contentString)?["name"]);
                    }
                    Console.WriteLine($"Person '{personName}' is identified for the face in: {sourceImageFileName} - {faceId}," +
                        $" confidence: {candidates.First()["confidence"]}.");
    
                    Dictionary<string, object> verifyResult = new Dictionary<string, object>();
                    using (var content = new ByteArrayContent(Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(new Dictionary<string, object> { ["faceId"] = faceId, ["personId"] = candidates.First()["personId"], ["largePersonGroupId"] = largePersonGroupId }))))
                    {
                        content.Headers.ContentType = new MediaTypeHeaderValue("application/json");
                        using (var response = await httpClient.PostAsync($"{ENDPOINT}/face/v1.0/verify", content))
                        {
                            string contentString = await response.Content.ReadAsStringAsync();
                            verifyResult = JsonConvert.DeserializeObject<Dictionary<string, object>>(contentString) ?? [];
                        }
                    }
                    Console.WriteLine($"Verification result: is a match? {verifyResult["isIdentical"]}. confidence: {verifyResult["confidence"]}");
                }
                Console.WriteLine();
    
                // Delete large person group.
                Console.WriteLine("========DELETE PERSON GROUP========");
                Console.WriteLine();
                await httpClient.DeleteAsync($"{ENDPOINT}/face/v1.0/largepersongroups/{largePersonGroupId}");
                Console.WriteLine($"Deleted the person group {largePersonGroupId}.");
                Console.WriteLine();
            }
        }
    }
    
  3. Run the application

    Run the application by clicking the Debug button at the top of the IDE window.
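
    If you created the project with the .NET CLI instead, you can run it from the project directory (a sketch assuming a standard console project):

    dotnet run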

Output

========IDENTIFY FACES========

Create a person group (18d1c443-a01b-46a4-9191-121f74a831cd).
Create a person group person 'Family1-Dad'.
Check whether image is of sufficient quality for recognition
Add face to the person group person(Family1-Dad) from image `Family1-Dad1.jpg`
Check whether image is of sufficient quality for recognition
Add face to the person group person(Family1-Dad) from image `Family1-Dad2.jpg`
Create a person group person 'Family1-Mom'.
Check whether image is of sufficient quality for recognition
Add face to the person group person(Family1-Mom) from image `Family1-Mom1.jpg`
Check whether image is of sufficient quality for recognition
Add face to the person group person(Family1-Mom) from image `Family1-Mom2.jpg`
Create a person group person 'Family1-Son'.
Check whether image is of sufficient quality for recognition
Add face to the person group person(Family1-Son) from image `Family1-Son1.jpg`
Check whether image is of sufficient quality for recognition
Add face to the person group person(Family1-Son) from image `Family1-Son2.jpg`

Train person group 18d1c443-a01b-46a4-9191-121f74a831cd.
Training status: succeeded.

Pausing for 60 seconds to avoid triggering rate limit on free account...
4 face(s) with 4 having sufficient quality for recognition detected from image `identification1.jpg`
Person 'Family1-Dad' is identified for the face in: identification1.jpg - ad813534-9141-47b4-bfba-24919223966f, confidence: 0.96807.
Verification result: is a match? True. confidence: 0.96807
Person 'Family1-Mom' is identified for the face in: identification1.jpg - 1a39420e-f517-4cee-a898-5d968dac1a7e, confidence: 0.96902.
Verification result: is a match? True. confidence: 0.96902
No person is identified for the face in: identification1.jpg - 889394b1-e30f-4147-9be1-302beb5573f3,
Person 'Family1-Son' is identified for the face in: identification1.jpg - 0557d87b-356c-48a8-988f-ce0ad2239aa5, confidence: 0.9281.
Verification result: is a match? True. confidence: 0.9281

========DELETE PERSON GROUP========

Deleted the person group 18d1c443-a01b-46a4-9191-121f74a831cd.

End of quickstart.

Tip

The Face API runs on a set of pre-built models that are static by nature (the model's performance won't regress or improve as the service is run). The results the models produce might change if Microsoft updates the model's backend without migrating to an entirely new model version. To take advantage of a newer model version, you can retrain your PersonGroup, specifying the newer model as a parameter with the same enrollment images.
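
As a rough sketch of what that looks like with the REST endpoints used in this quickstart (endpoint, key, group ID, and name are placeholders; the same enrollment faces still have to be re-added to the new group before training):

curl -X PUT "<your_endpoint>/face/v1.0/largepersongroups/<new_group_id>" \
  -H "Ocp-Apim-Subscription-Key: <your_key>" \
  -H "Content-Type: application/json" \
  -d '{"name": "<group_name>", "recognitionModel": "recognition_04"}'

curl -X POST "<your_endpoint>/face/v1.0/largepersongroups/<new_group_id>/train" \
  -H "Ocp-Apim-Subscription-Key: <your_key>"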

Clean up resources

If you want to clean up and remove an Azure AI services subscription, you can delete the resource or the resource group. Deleting the resource group also deletes any other resources associated with it.
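
For example, with the Azure CLI, deleting the resource group (the name below is a placeholder) removes the Face resource along with everything else in that group:

az group delete --name <your-resource-group>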

Next steps

In this quickstart, you learned how to use the Face client library for .NET to do basic face identification. Next, learn about the different face detection models and how to specify the right model for your use case.

Get started with facial recognition using the Face client library for Python. The Face service provides access to advanced algorithms for detecting and recognizing human faces in images. Follow these steps to install the package and try out the example code for basic face identification using remote images.

Reference documentation | Library source code | Package (PyPI) | Samples

Prerequisites

  • Azure subscription - Create one for free
  • Python 3.x
    • Your Python installation should include pip. You can check whether pip is installed by running pip --version on the command line. Get pip by installing the latest version of Python.
  • Once you have your Azure subscription, create a Face resource in the Azure portal to get your key and endpoint. After it deploys, select Go to resource.
    • You need the key and endpoint from the resource you create to connect your application to the Face API.
    • You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.

Create environment variables

In this example, write your credentials to environment variables on the local machine that runs the application.

Go to the Azure portal. If the resource you created in the Prerequisites section deployed successfully, select Go to resource under Next Steps. You can find your key and endpoint under Resource Management on the Keys and Endpoint page. Your resource key isn't the same as your Azure subscription ID.

To set the environment variables for your key and endpoint, open a console window and follow the instructions for your operating system and development environment.

  • To set the FACE_APIKEY environment variable, replace <your_key> with one of the keys for your resource.
  • To set the FACE_ENDPOINT environment variable, replace <your_endpoint> with the endpoint for your resource.

Important

If you use an API key, store it securely somewhere else, such as in Azure Key Vault. Don't include the API key directly in your code, and never post it publicly.

For more information about AI services security, see Authenticate requests to Azure AI services.

setx FACE_APIKEY <your_key>
setx FACE_ENDPOINT <your_endpoint>

After you add the environment variables, you may need to restart any running programs that will read them, including the console window.

Identify and verify faces

  1. Install the client library

    After installing Python, you can install the client library with the following command:

    pip install --upgrade azure-ai-vision-face
    
  2. Create a new Python application

    Create a new Python script, for example quickstart-file.py. Then open it in your preferred editor or IDE and paste in the following code.

    Note

    Some of these functions won't work if you haven't received access to the Face service through the intake form.

    import os
    import time
    import uuid
    import requests
    
    from azure.core.credentials import AzureKeyCredential
    from azure.ai.vision.face import FaceClient
    from azure.ai.vision.face.models import (
        FaceAttributeTypeRecognition04,
        FaceDetectionModel,
        FaceRecognitionModel,
        QualityForRecognition,
    )
    
    
    # This key will serve all examples in this document.
    KEY = os.environ["FACE_APIKEY"]
    
    # This endpoint will be used in all examples in this quickstart.
    ENDPOINT = os.environ["FACE_ENDPOINT"]
    
    # Used in the Large Person Group Operations and Delete Large Person Group examples.
    # LARGE_PERSON_GROUP_ID should be all lowercase and alphanumeric. For example, 'mygroupname' (dashes are OK).
    LARGE_PERSON_GROUP_ID = str(uuid.uuid4())  # assign a random ID (or name it anything)
    
    HEADERS = {"Ocp-Apim-Subscription-Key": KEY, "Content-Type": "application/json"}
    
    # Create an authenticated FaceClient.
    with FaceClient(endpoint=ENDPOINT, credential=AzureKeyCredential(KEY)) as face_client:
        '''
        Create the LargePersonGroup
        '''
        # Create empty Large Person Group. Large Person Group ID must be lower case, alphanumeric, and/or with '-', '_'.
        print("Person group:", LARGE_PERSON_GROUP_ID)
        response = requests.put(
            ENDPOINT + f"/face/v1.0/largepersongroups/{LARGE_PERSON_GROUP_ID}",
            headers=HEADERS,
            json={"name": LARGE_PERSON_GROUP_ID, "recognitionModel": "recognition_04"})
        response.raise_for_status()
    
        # Define woman friend
        response = requests.post(ENDPOINT + f"/face/v1.0/largepersongroups/{LARGE_PERSON_GROUP_ID}/persons", headers=HEADERS, json={"name": "Woman"})
        response.raise_for_status()
        woman = response.json()
        # Define man friend
        response = requests.post(ENDPOINT + f"/face/v1.0/largepersongroups/{LARGE_PERSON_GROUP_ID}/persons", headers=HEADERS, json={"name": "Man"})
        response.raise_for_status()
        man = response.json()
        # Define child friend
        response = requests.post(ENDPOINT + f"/face/v1.0/largepersongroups/{LARGE_PERSON_GROUP_ID}/persons", headers=HEADERS, json={"name": "Child"})
        response.raise_for_status()
        child = response.json()
    
        '''
        Detect faces and register them to each person
        '''
        # Find all jpeg images of friends in working directory (TBD pull from web instead)
        woman_images = [
            "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/Family1-Mom1.jpg",  # noqa: E501
            "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/Family1-Mom2.jpg",  # noqa: E501
        ]
        man_images = [
            "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/Family1-Dad1.jpg",  # noqa: E501
            "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/Family1-Dad2.jpg",  # noqa: E501
        ]
        child_images = [
            "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/Family1-Son1.jpg",  # noqa: E501
            "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/Family1-Son2.jpg",  # noqa: E501
        ]
    
        # Add to woman person
        for image in woman_images:
            # Check if the image is of sufficient quality for recognition.
            sufficientQuality = True
            detected_faces = face_client.detect_from_url(
                url=image,
                detection_model=FaceDetectionModel.DETECTION_03,
                recognition_model=FaceRecognitionModel.RECOGNITION_04,
                return_face_id=True,
                return_face_attributes=[FaceAttributeTypeRecognition04.QUALITY_FOR_RECOGNITION])
            for face in detected_faces:
                if face.face_attributes.quality_for_recognition != QualityForRecognition.HIGH:
                    sufficientQuality = False
                    break
    
            if not sufficientQuality:
                continue
    
            if len(detected_faces) != 1:
                continue
    
            response = requests.post(
                ENDPOINT + f"/face/v1.0/largepersongroups/{LARGE_PERSON_GROUP_ID}/persons/{woman['personId']}/persistedFaces",
                headers=HEADERS,
                json={"url": image})
            response.raise_for_status()
            print(f"face {face.face_id} added to person {woman['personId']}")
    
    
        # Add to man person
        for image in man_images:
            # Check if the image is of sufficient quality for recognition.
            sufficientQuality = True
            detected_faces = face_client.detect_from_url(
                url=image,
                detection_model=FaceDetectionModel.DETECTION_03,
                recognition_model=FaceRecognitionModel.RECOGNITION_04,
                return_face_id=True,
                return_face_attributes=[FaceAttributeTypeRecognition04.QUALITY_FOR_RECOGNITION])
            for face in detected_faces:
                if face.face_attributes.quality_for_recognition != QualityForRecognition.HIGH:
                    sufficientQuality = False
                    break
    
            if not sufficientQuality:
                continue
    
            if len(detected_faces) != 1:
                continue
    
            response = requests.post(
                ENDPOINT + f"/face/v1.0/largepersongroups/{LARGE_PERSON_GROUP_ID}/persons/{man['personId']}/persistedFaces",
                headers=HEADERS,
                json={"url": image})
            response.raise_for_status()
            print(f"face {face.face_id} added to person {man['personId']}")
    
        # Add to child person
        for image in child_images:
            # Check if the image is of sufficient quality for recognition.
            sufficientQuality = True
            detected_faces = face_client.detect_from_url(
                url=image,
                detection_model=FaceDetectionModel.DETECTION_03,
                recognition_model=FaceRecognitionModel.RECOGNITION_04,
                return_face_id=True,
                return_face_attributes=[FaceAttributeTypeRecognition04.QUALITY_FOR_RECOGNITION])
            for face in detected_faces:
                if face.face_attributes.quality_for_recognition != QualityForRecognition.HIGH:
                    sufficientQuality = False
                    break
            if not sufficientQuality:
                continue
    
            if len(detected_faces) != 1:
                continue
    
            response = requests.post(
                ENDPOINT + f"/face/v1.0/largepersongroups/{LARGE_PERSON_GROUP_ID}/persons/{child['personId']}/persistedFaces",
                headers=HEADERS,
                json={"url": image})
            response.raise_for_status()
            print(f"face {face.face_id} added to person {child['personId']}")
    
        '''
        Train LargePersonGroup
        '''
        # Train the large person group
        print(f"Train the person group {LARGE_PERSON_GROUP_ID}")
        response = requests.post(ENDPOINT + f"/face/v1.0/largepersongroups/{LARGE_PERSON_GROUP_ID}/train", headers=HEADERS)
        response.raise_for_status()
    
        while True:
            # Poll once per second until training completes (matches the C# and Java samples).
            time.sleep(1)
            response = requests.get(ENDPOINT + f"/face/v1.0/largepersongroups/{LARGE_PERSON_GROUP_ID}/training", headers=HEADERS)
            response.raise_for_status()
            training_status = response.json()["status"]
            if training_status == "succeeded":
                break
        print(f"The person group {LARGE_PERSON_GROUP_ID} is trained successfully.")
    
        '''
        Identify a face against a defined LargePersonGroup
        '''
        # Group image for testing against
        test_image = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/identification1.jpg"  # noqa: E501
    
        print("Pausing for 60 seconds to avoid triggering rate limit on free account...")
        time.sleep(60)
    
        # Detect faces
        face_ids = []
        # We use detection model 03 to get better performance, recognition model 04 to support quality for
        # recognition attribute.
        faces = face_client.detect_from_url(
            url=test_image,
            detection_model=FaceDetectionModel.DETECTION_03,
            recognition_model=FaceRecognitionModel.RECOGNITION_04,
            return_face_id=True,
            return_face_attributes=[FaceAttributeTypeRecognition04.QUALITY_FOR_RECOGNITION])
        for face in faces:
            # Only take the face if it is of sufficient quality.
            if face.face_attributes.quality_for_recognition != QualityForRecognition.LOW:
                face_ids.append(face.face_id)
    
        # Identify faces
        response = requests.post(
            ENDPOINT + f"/face/v1.0/identify",
            headers=HEADERS,
            json={"faceIds": face_ids, "largePersonGroupId": LARGE_PERSON_GROUP_ID})
        response.raise_for_status()
        results = response.json()
        print("Identifying faces in image")
        if not results:
            print("No person identified in the person group")
        for identifiedFace in results:
            if len(identifiedFace["candidates"]) > 0:
                print(f"Person is identified for face ID {identifiedFace['faceId']} in image, with a confidence of "
                      f"{identifiedFace['candidates'][0]['confidence']}.")  # Get topmost confidence score
    
                # Verify faces
                response = requests.post(
                    ENDPOINT + f"/face/v1.0/verify",
                    headers=HEADERS,
                    json={"faceId": identifiedFace["faceId"], "personId": identifiedFace["candidates"][0]["personId"], "largePersonGroupId": LARGE_PERSON_GROUP_ID})
                response.raise_for_status()
                verify_result = response.json()
                print(f"verification result: {verify_result['isIdentical']}. confidence: {verify_result['confidence']}")
            else:
                print(f"No person identified for face ID {identifiedFace['faceId']} in image.")
    
        print()
    
        # Delete the large person group
        response = requests.delete(ENDPOINT + f"/face/v1.0/largepersongroups/{LARGE_PERSON_GROUP_ID}", headers=HEADERS)
        response.raise_for_status()
        print(f"The person group {LARGE_PERSON_GROUP_ID} is deleted.")
    
        print()
        print("End of quickstart.")
    
    
  3. Run the face recognition app from the application directory with the python command.

    python quickstart-file.py
    

    Tip

    The Face API runs on a set of pre-built models that are static by nature (the model's performance won't regress or improve as the service is run). The results the models produce might change if Microsoft updates the model's backend without migrating to an entirely new model version. To take advantage of a newer model version, you can retrain your PersonGroup, specifying the newer model as a parameter with the same enrollment images.

Output

Person group: ad12b2db-d892-48ec-837a-0e7168c18224
face 335a2cb1-5211-4c29-9c45-776dd014b2af added to person 9ee65510-81a5-47e5-9e50-66727f719465
face df57eb50-4a13-4f93-b804-cd108327ad5a added to person 9ee65510-81a5-47e5-9e50-66727f719465
face d8b7b8b8-3ca6-4309-b76e-eeed84f7738a added to person 00651036-4236-4004-88b9-11466c251548
face dffbb141-f40b-4392-8785-b6c434fa534e added to person 00651036-4236-4004-88b9-11466c251548
face 9cdac36e-5455-447b-a68d-eb1f5e2ec27d added to person 23614724-b132-407a-aaa0-67003987ce93
face d8208412-92b7-4b8d-a2f8-3926c839c87e added to person 23614724-b132-407a-aaa0-67003987ce93
Train the person group ad12b2db-d892-48ec-837a-0e7168c18224
The person group ad12b2db-d892-48ec-837a-0e7168c18224 is trained successfully.
Pausing for 60 seconds to avoid triggering rate limit on free account...
Identifying faces in image
Person is identified for face ID bc52405a-5d83-4500-9218-557468ccdf99 in image, with a confidence of 0.96726.
verification result: True. confidence: 0.96726
Person is identified for face ID dfcc3fc8-6252-4f3a-8205-71466f39d1a7 in image, with a confidence of 0.96925.
verification result: True. confidence: 0.96925
No person identified for face ID 401c581b-a178-45ed-8205-7692f6eede88 in image.
Person is identified for face ID 8809d9c7-e362-4727-8c95-e1e44f5c2e8a in image, with a confidence of 0.92898.
verification result: True. confidence: 0.92898

The person group ad12b2db-d892-48ec-837a-0e7168c18224 is deleted.

End of quickstart.

Clean up resources

If you want to clean up and remove an Azure AI services subscription, you can delete the resource or the resource group. Deleting the resource group also deletes any other resources associated with it.

Next steps

In this quickstart, you learned how to use the Face client library for Python to do basic face identification. Next, learn about the different face detection models and how to specify the right model for your use case.

Get started with facial recognition using the Face client library for Java. The Face service provides access to advanced algorithms for detecting and recognizing human faces in images. Follow these steps to install the package and try out the example code for basic face identification using remote images.

Reference documentation | Library source code | Package (Maven) | Samples

Prerequisites

  • Azure subscription - Create one for free
  • The current version of the Java Development Kit (JDK)
  • Apache Maven installed. On Linux, install it from the distribution repositories if available.
  • Once you have your Azure subscription, create a Face resource in the Azure portal to get your key and endpoint. After it deploys, select Go to resource.
    • You need the key and endpoint from the resource you create to connect your application to the Face API.
    • You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.

Create environment variables

In this example, write your credentials to environment variables on the local machine that runs the application.

Go to the Azure portal. If the resource you created in the Prerequisites section deployed successfully, select Go to resource under Next Steps. You can find your key and endpoint under Resource Management on the Keys and Endpoint page. Your resource key isn't the same as your Azure subscription ID.

To set the environment variables for your key and endpoint, open a console window and follow the instructions for your operating system and development environment.

  • To set the FACE_APIKEY environment variable, replace <your_key> with one of the keys for your resource.
  • To set the FACE_ENDPOINT environment variable, replace <your_endpoint> with the endpoint for your resource.

Important

If you use an API key, store it securely somewhere else, such as in Azure Key Vault. Don't include the API key directly in your code, and never post it publicly.

For more information about AI services security, see Authenticate requests to Azure AI services.

setx FACE_APIKEY <your_key>
setx FACE_ENDPOINT <your_endpoint>

After you add the environment variables, you may need to restart any running programs that will read them, including the console window.

Identify and verify faces

  1. Install the client library

    Open a console window and create a new folder for your quickstart application. Copy the following content into a new file, and save it as pom.xml in your project directory.

    <project xmlns="http://maven.apache.org/POM/4.0.0"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
      <modelVersion>4.0.0</modelVersion>
      <groupId>com.example</groupId>
      <artifactId>my-application-name</artifactId>
      <version>1.0.0</version>
      <dependencies>
        <!-- https://mvnrepository.com/artifact/com.azure/azure-ai-vision-face -->
        <dependency>
          <groupId>com.azure</groupId>
          <artifactId>azure-ai-vision-face</artifactId>
          <version>1.0.0-beta.1</version>
        </dependency>
        <!-- https://mvnrepository.com/artifact/org.apache.httpcomponents/httpclient -->
        <dependency>
          <groupId>org.apache.httpcomponents</groupId>
          <artifactId>httpclient</artifactId>
          <version>4.5.13</version>
        </dependency>
        <!-- https://mvnrepository.com/artifact/com.google.code.gson/gson -->
        <dependency>
          <groupId>com.google.code.gson</groupId>
          <artifactId>gson</artifactId>
          <version>2.11.0</version>
        </dependency>
      </dependencies>
    </project>
    

    Install the SDK and dependencies by running the following command in the project directory:

    mvn clean dependency:copy-dependencies
    
  2. Create a new Java application

    Create a file named Quickstart.java, open it in a text editor, and paste in the following code:

    Note

    Some of these functions won't work if you haven't received access to the Face service through the intake form.

    import java.util.Arrays;
    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.stream.Collectors;
    import java.util.UUID;
    
    import com.azure.ai.vision.face.FaceClient;
    import com.azure.ai.vision.face.FaceClientBuilder;
    import com.azure.ai.vision.face.models.DetectOptions;
    import com.azure.ai.vision.face.models.FaceAttributeType;
    import com.azure.ai.vision.face.models.FaceDetectionModel;
    import com.azure.ai.vision.face.models.FaceDetectionResult;
    import com.azure.ai.vision.face.models.FaceRecognitionModel;
    import com.azure.ai.vision.face.models.QualityForRecognition;
    import com.azure.core.credential.KeyCredential;
    import com.google.gson.Gson;
    import com.google.gson.reflect.TypeToken;
    
    import org.apache.http.HttpHeaders;
    import org.apache.http.client.HttpClient;
    import org.apache.http.client.methods.HttpDelete;
    import org.apache.http.client.methods.HttpGet;
    import org.apache.http.client.methods.HttpPost;
    import org.apache.http.client.methods.HttpPut;
    import org.apache.http.client.utils.URIBuilder;
    import org.apache.http.entity.StringEntity;
    import org.apache.http.impl.client.HttpClients;
    import org.apache.http.message.BasicHeader;
    import org.apache.http.util.EntityUtils;
    
    public class Quickstart {
        // LARGE_PERSON_GROUP_ID should be all lowercase and alphanumeric. For example, 'mygroupname' (dashes are OK).
        private static final String LARGE_PERSON_GROUP_ID = UUID.randomUUID().toString();
    
        // URL path for the images.
        private static final String IMAGE_BASE_URL = "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/";
    
        // From your Face subscription in the Azure portal, get your subscription key and endpoint.
        private static final String SUBSCRIPTION_KEY = System.getenv("FACE_APIKEY");
        private static final String ENDPOINT = System.getenv("FACE_ENDPOINT");
    
        public static void main(String[] args) throws Exception {
            // Recognition model 4 was released in 2021 February.
            // It is recommended since its accuracy is improved
            // on faces wearing masks compared with model 3,
            // and its overall accuracy is improved compared
            // with models 1 and 2.
            FaceRecognitionModel RECOGNITION_MODEL4 = FaceRecognitionModel.RECOGNITION_04;
    
            // Authenticate.
            FaceClient client = authenticate(ENDPOINT, SUBSCRIPTION_KEY);
    
            // Identify - recognize a face(s) in a large person group (a large person group is created in this example).
            identifyInLargePersonGroup(client, IMAGE_BASE_URL, RECOGNITION_MODEL4);
    
            System.out.println("End of quickstart.");
        }
    
        /*
         *	AUTHENTICATE
         *	Uses subscription key and region to create a client.
         */
        public static FaceClient authenticate(String endpoint, String key) {
            return new FaceClientBuilder().endpoint(endpoint).credential(new KeyCredential(key)).buildClient();
        }
    
    
        // Detect faces from image url for recognition purposes. This is a helper method for other functions in this quickstart.
        // Parameter `returnFaceId` of `DetectOptions` must be set to `true` (by default) for recognition purposes.
        // Parameter `returnFaceAttributes` is set to include the QualityForRecognition attribute. 
        // Recognition model must be set to recognition_03 or recognition_04 as a result.
        // Result faces with insufficient quality for recognition are filtered out. 
        // The field `faceId` in returned `DetectedFace`s will be used in Verify and Identify.
        // It will expire 24 hours after the detection call.
        private static List<FaceDetectionResult> detectFaceRecognize(FaceClient faceClient, String url, FaceRecognitionModel recognitionModel) {
            // Detect faces from image URL.
            DetectOptions options = new DetectOptions(FaceDetectionModel.DETECTION_03, recognitionModel, true).setReturnFaceAttributes(Arrays.asList(FaceAttributeType.QUALITY_FOR_RECOGNITION));
            List<FaceDetectionResult> detectedFaces = faceClient.detect(url, options);
            List<FaceDetectionResult> sufficientQualityFaces = detectedFaces.stream().filter(f -> f.getFaceAttributes().getQualityForRecognition() != QualityForRecognition.LOW).collect(Collectors.toList());
            System.out.println(detectedFaces.size() + " face(s) with " + sufficientQualityFaces.size() + " having sufficient quality for recognition.");
    
            return sufficientQualityFaces;
        }
    
        /*
         * IDENTIFY FACES
         * To identify faces, you need to create and define a large person group.
         * The Identify operation takes one or several face IDs from DetectedFace or PersistedFace and a LargePersonGroup and returns
         * a list of Person objects that each face might belong to. Returned Person objects are wrapped as Candidate objects,
         * which have a prediction confidence value.
         */
        public static void identifyInLargePersonGroup(FaceClient client, String url, FaceRecognitionModel recognitionModel) throws Exception {
            System.out.println("========IDENTIFY FACES========");
            System.out.println();
    
            // Create a dictionary for all your images, grouping similar ones under the same key.
            Map<String, String[]> personDictionary = new LinkedHashMap<String, String[]>();
            personDictionary.put("Family1-Dad", new String[]{"Family1-Dad1.jpg", "Family1-Dad2.jpg"});
            personDictionary.put("Family1-Mom", new String[]{"Family1-Mom1.jpg", "Family1-Mom2.jpg"});
            personDictionary.put("Family1-Son", new String[]{"Family1-Son1.jpg", "Family1-Son2.jpg"});
            // A group photo that includes some of the persons you seek to identify from your dictionary.
            String sourceImageFileName = "identification1.jpg";
    
            // Create a large person group.
            System.out.println("Create a person group (" + LARGE_PERSON_GROUP_ID + ").");
            List<BasicHeader> headers = Arrays.asList(new BasicHeader("Ocp-Apim-Subscription-Key", SUBSCRIPTION_KEY), new BasicHeader(HttpHeaders.CONTENT_TYPE, "application/json"));
            HttpClient httpClient = HttpClients.custom().setDefaultHeaders(headers).build();
            createLargePersonGroup(httpClient, recognitionModel);
            // The similar faces will be grouped into a single large person group person.
            for (String groupedFace : personDictionary.keySet()) {
                // Limit TPS
                Thread.sleep(250);
                String personId = createLargePersonGroupPerson(httpClient, groupedFace);
                System.out.println("Create a person group person '" + groupedFace + "'.");
    
                // Add face to the large person group person.
                for (String similarImage : personDictionary.get(groupedFace)) {
                    System.out.println("Check whether image is of sufficient quality for recognition");
                    DetectOptions options = new DetectOptions(FaceDetectionModel.DETECTION_03, recognitionModel, false).setReturnFaceAttributes(Arrays.asList(FaceAttributeType.QUALITY_FOR_RECOGNITION));
                    List<FaceDetectionResult> detectedFaces1 = client.detect(url + similarImage, options);
                    if (detectedFaces1.stream().anyMatch(f -> f.getFaceAttributes().getQualityForRecognition() != QualityForRecognition.HIGH)) {
                        continue;
                    }
    
                    if (detectedFaces1.size() != 1) {
                        continue;
                    }
    
                    // add face to the large person group
                    System.out.println("Add face to the person group person(" + groupedFace + ") from image `" + similarImage + "`");
                    addFaceToLargePersonGroup(httpClient, personId, url + similarImage);
                }
            }
    
            // Start to train the large person group.
            System.out.println();
            System.out.println("Train person group " + LARGE_PERSON_GROUP_ID + ".");
            trainLargePersonGroup(httpClient);
    
            // Wait until the training is completed.
            while (true) {
                Thread.sleep(1000);
                String trainingStatus = getLargePersonGroupTrainingStatus(httpClient);
                System.out.println("Training status: " + trainingStatus + ".");
                if ("succeeded".equals(trainingStatus)) {
                    break;
                }
            }
            System.out.println();
    
            System.out.println("Pausing for 60 seconds to avoid triggering rate limit on free account...");
            Thread.sleep(60000);
    
            // Detect faces from source image url.
            List<FaceDetectionResult> detectedFaces = detectFaceRecognize(client, url + sourceImageFileName, recognitionModel);
            // Add detected faceId to sourceFaceIds.
            List<String> sourceFaceIds = detectedFaces.stream().map(FaceDetectionResult::getFaceId).collect(Collectors.toList());
    
            // Identify the faces in a large person group.
            List<Map<String, Object>> identifyResults = identifyFacesInLargePersonGroup(httpClient, sourceFaceIds);
    
            for (Map<String, Object> identifyResult : identifyResults) {
                String faceId = identifyResult.get("faceId").toString();
                List<Map<String, Object>> candidates = new Gson().fromJson(new Gson().toJson(identifyResult.get("candidates")), new TypeToken<List<Map<String, Object>>>(){});
                if (candidates.isEmpty()) {
                    System.out.println("No person is identified for the face in: " + sourceImageFileName + " - " + faceId + ".");
                    continue;
                }
    
                Map<String, Object> candidate = candidates.stream().findFirst().orElseThrow();
                String personName = getLargePersonGroupPersonName(httpClient, candidate.get("personId").toString());
                System.out.println("Person '" + personName + "' is identified for the face in: " + sourceImageFileName + " - " + faceId + ", confidence: " + candidate.get("confidence") + ".");
    
                Map<String, Object> verifyResult = verifyFaceWithLargePersonGroupPerson(httpClient, faceId, candidate.get("personId").toString());
                System.out.println("Verification result: is a match? " + verifyResult.get("isIdentical") + ". confidence: " + verifyResult.get("confidence"));
            }
            System.out.println();
    
            // Delete large person group.
            System.out.println("========DELETE PERSON GROUP========");
            System.out.println();
            deleteLargePersonGroup(httpClient);
            System.out.println("Deleted the person group " + LARGE_PERSON_GROUP_ID + ".");
            System.out.println();
        }
    
        private static void createLargePersonGroup(HttpClient httpClient, FaceRecognitionModel recognitionModel) throws Exception {
            HttpPut request = new HttpPut(new URIBuilder(ENDPOINT + "/face/v1.0/largepersongroups/" + LARGE_PERSON_GROUP_ID).build());
            request.setEntity(new StringEntity(new Gson().toJson(Map.of("name", LARGE_PERSON_GROUP_ID, "recognitionModel", recognitionModel.toString()))));
            httpClient.execute(request);
            request.releaseConnection();
        }
    
        private static String createLargePersonGroupPerson(HttpClient httpClient, String name) throws Exception {
            HttpPost request = new HttpPost(new URIBuilder(ENDPOINT + "/face/v1.0/largepersongroups/" + LARGE_PERSON_GROUP_ID + "/persons").build());
            request.setEntity(new StringEntity(new Gson().toJson(Map.of("name", name))));
            String response = EntityUtils.toString(httpClient.execute(request).getEntity());
            request.releaseConnection();
            return new Gson().fromJson(response, new TypeToken<Map<String, Object>>(){}).get("personId").toString();
        }
    
        private static void addFaceToLargePersonGroup(HttpClient httpClient, String personId, String url) throws Exception {
            URIBuilder builder = new URIBuilder(ENDPOINT + "/face/v1.0/largepersongroups/" + LARGE_PERSON_GROUP_ID + "/persons/" + personId + "/persistedfaces");
            builder.setParameter("detectionModel", "detection_03");
            HttpPost request = new HttpPost(builder.build());
            request.setEntity(new StringEntity(new Gson().toJson(Map.of("url", url))));
            httpClient.execute(request);
            request.releaseConnection();
        }
    
        private static void trainLargePersonGroup(HttpClient httpClient) throws Exception {
            HttpPost request = new HttpPost(new URIBuilder(ENDPOINT + "/face/v1.0/largepersongroups/" + LARGE_PERSON_GROUP_ID + "/train").build());
            httpClient.execute(request);
            request.releaseConnection();
        }
    
        private static String getLargePersonGroupTrainingStatus(HttpClient httpClient) throws Exception {
            HttpGet request = new HttpGet(new URIBuilder(ENDPOINT + "/face/v1.0/largepersongroups/" + LARGE_PERSON_GROUP_ID + "/training").build());
            String response = EntityUtils.toString(httpClient.execute(request).getEntity());
            request.releaseConnection();
            return new Gson().fromJson(response, new TypeToken<Map<String, Object>>(){}).get("status").toString();
        }
    
        private static List<Map<String, Object>> identifyFacesInLargePersonGroup(HttpClient httpClient, List<String> sourceFaceIds) throws Exception {
            HttpPost request = new HttpPost(new URIBuilder(ENDPOINT + "/face/v1.0/identify").build());
            request.setEntity(new StringEntity(new Gson().toJson(Map.of("faceIds", sourceFaceIds, "largePersonGroupId", LARGE_PERSON_GROUP_ID))));
            String response = EntityUtils.toString(httpClient.execute(request).getEntity());
            request.releaseConnection();
            return new Gson().fromJson(response, new TypeToken<List<Map<String, Object>>>(){});
        }
    
        private static String getLargePersonGroupPersonName(HttpClient httpClient, String personId) throws Exception {
            HttpGet request = new HttpGet(new URIBuilder(ENDPOINT + "/face/v1.0/largepersongroups/" + LARGE_PERSON_GROUP_ID + "/persons/" + personId).build());
            String response = EntityUtils.toString(httpClient.execute(request).getEntity());
            request.releaseConnection();
            return new Gson().fromJson(response, new TypeToken<Map<String, Object>>(){}).get("name").toString();
        }
    
        private static Map<String, Object> verifyFaceWithLargePersonGroupPerson(HttpClient httpClient, String faceId, String personId) throws Exception {
            HttpPost request = new HttpPost(new URIBuilder(ENDPOINT + "/face/v1.0/verify").build());
            request.setEntity(new StringEntity(new Gson().toJson(Map.of("faceId", faceId, "personId", personId, "largePersonGroupId", LARGE_PERSON_GROUP_ID))));
            String response = EntityUtils.toString(httpClient.execute(request).getEntity());
            request.releaseConnection();
            return new Gson().fromJson(response, new TypeToken<Map<String, Object>>(){});
        }
    
        private static void deleteLargePersonGroup(HttpClient httpClient) throws Exception {
            HttpDelete request = new HttpDelete(new URIBuilder(ENDPOINT + "/face/v1.0/largepersongroups/" + LARGE_PERSON_GROUP_ID).build());
            httpClient.execute(request);
            request.releaseConnection();
        }
    }
    
  3. Build and run the face recognition app from the application directory with the javac and java commands.

    javac -cp target\dependency\* Quickstart.java
    java -cp .;target\dependency\* Quickstart
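
    On Linux or macOS, the equivalent commands use forward slashes and a colon as the classpath separator (the quotes keep the shell from expanding the wildcard):

    javac -cp "target/dependency/*" Quickstart.java
    java -cp ".:target/dependency/*" Quickstart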
    

Output

========IDENTIFY FACES========

Create a person group (3761e61a-16b2-4503-ad29-ed34c58ba676).
Create a person group person 'Family1-Dad'.
Check whether image is of sufficient quality for recognition
Add face to the person group person(Family1-Dad) from image `Family1-Dad1.jpg`
Check whether image is of sufficient quality for recognition
Add face to the person group person(Family1-Dad) from image `Family1-Dad2.jpg`
Create a person group person 'Family1-Mom'.
Check whether image is of sufficient quality for recognition
Add face to the person group person(Family1-Mom) from image `Family1-Mom1.jpg`
Check whether image is of sufficient quality for recognition
Add face to the person group person(Family1-Mom) from image `Family1-Mom2.jpg`
Create a person group person 'Family1-Son'.
Check whether image is of sufficient quality for recognition
Add face to the person group person(Family1-Son) from image `Family1-Son1.jpg`
Check whether image is of sufficient quality for recognition
Add face to the person group person(Family1-Son) from image `Family1-Son2.jpg`

Train person group 3761e61a-16b2-4503-ad29-ed34c58ba676.
Training status: succeeded.

Pausing for 60 seconds to avoid triggering rate limit on free account...
4 face(s) with 4 having sufficient quality for recognition.
Person 'Family1-Dad' is identified for the face in: identification1.jpg - d7995b34-1b72-47fe-82b6-e9877ed2578d, confidence: 0.96807.
Verification result: is a match? true. confidence: 0.96807
Person 'Family1-Mom' is identified for the face in: identification1.jpg - 844da0ed-4890-4bbf-a531-e638797f96fc, confidence: 0.96902.
Verification result: is a match? true. confidence: 0.96902
No person is identified for the face in: identification1.jpg - c543159a-57f3-4872-83ce-2d4a733d71c9.
Person 'Family1-Son' is identified for the face in: identification1.jpg - 414fac6c-7381-4dba-9c8b-fd26d52e879b, confidence: 0.9281.
Verification result: is a match? true. confidence: 0.9281

========DELETE PERSON GROUP========

Deleted the person group 3761e61a-16b2-4503-ad29-ed34c58ba676.

End of quickstart.

Clean up resources

If you want to clean up and remove an Azure AI services subscription, you can delete the resource or the resource group. Deleting the resource group also deletes any other resources associated with it.

Next steps

In this quickstart, you learned how to use the Face client library for Java to do basic face identification. Next, learn about the different face detection models and how to specify the right model for your use case.

Get started with facial recognition using the Face client library for JavaScript. The Face service provides access to advanced algorithms for detecting and recognizing human faces in images. Follow these steps to install the package and try out the example code for basic face identification using remote images.

Reference documentation | Library source code | Package (npm) | Samples

Prerequisites

  • Azure subscription - Create one for free
  • The current version of Node.js
  • Once you have your Azure subscription, create a Face resource in the Azure portal to get your key and endpoint. After it deploys, select Go to resource.
    • You need the key and endpoint from the resource you create to connect your application to the Face API.
    • You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.

Create environment variables

In this example, write your credentials to environment variables on the local machine that runs the application.

Go to the Azure portal. If the resource you created in the Prerequisites section deployed successfully, select Go to resource under Next Steps. You can find your key and endpoint under Resource Management on the Keys and Endpoint page. Your resource key isn't the same as your Azure subscription ID.

To set the environment variables for your key and endpoint, open a console window and follow the instructions for your operating system and development environment.

  • To set the FACE_APIKEY environment variable, replace <your_key> with one of the keys for your resource.
  • To set the FACE_ENDPOINT environment variable, replace <your_endpoint> with the endpoint for your resource.

Important

If you use an API key, store it securely somewhere else, such as in Azure Key Vault. Don't include the API key directly in your code, and never post it publicly.

For more information about AI services security, see Authenticate requests to Azure AI services.

setx FACE_APIKEY <your_key>
setx FACE_ENDPOINT <your_endpoint>

After you add the environment variables, you may need to restart any running programs that will read them, including the console window.

Identify and verify faces

  1. Create a new Node.js application

    In a console window (such as cmd, PowerShell, or Bash), create a new directory for your app and navigate to it.

    mkdir myapp && cd myapp
    

    Run the npm init command to create a node application with a package.json file.

    npm init
    
  2. Install the @azure-rest/ai-vision-face npm package:

    npm install @azure-rest/ai-vision-face
    

    Your app's package.json file is updated with the dependencies.
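
    To double-check that the package was added, you can list it (optional):

    npm list @azure-rest/ai-vision-face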

  3. Create a file named index.js, open it in a text editor, and paste in the following code:

    Note

    Some of these functions won't work if you haven't received access to the Face service through the intake form.

    const { randomUUID } = require("crypto");
    
    const { AzureKeyCredential } = require("@azure/core-auth");
    
    const createFaceClient = require("@azure-rest/ai-vision-face").default,
      { getLongRunningPoller } = require("@azure-rest/ai-vision-face");
    
    const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));
    
    const main = async () => {
      const endpoint = process.env["FACE_ENDPOINT"] ?? "<endpoint>";
      const apikey = process.env["FACE_APIKEY"] ?? "<apikey>";
      const credential = new AzureKeyCredential(apikey);
      const client = createFaceClient(endpoint, credential);
    
      const imageBaseUrl =
        "https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/";
      const largePersonGroupId = randomUUID();
    
      console.log("========IDENTIFY FACES========");
      console.log();
    
      // Create a dictionary for all your images, grouping similar ones under the same key.
      const personDictionary = {
        "Family1-Dad": ["Family1-Dad1.jpg", "Family1-Dad2.jpg"],
        "Family1-Mom": ["Family1-Mom1.jpg", "Family1-Mom2.jpg"],
        "Family1-Son": ["Family1-Son1.jpg", "Family1-Son2.jpg"],
      };
    
      // A group photo that includes some of the persons you seek to identify from your dictionary.
      const sourceImageFileName = "identification1.jpg";
    
      // Create a large person group.
      console.log(`Creating a person group with ID: ${largePersonGroupId}`);
      await client.path("/largepersongroups/{largePersonGroupId}", largePersonGroupId).put({
        body: {
          name: largePersonGroupId,
          recognitionModel: "recognition_04",
        },
      });
    
      // The similar faces will be grouped into a single large person group person.
      console.log("Adding faces to person group...");
      await Promise.all(
        Object.keys(personDictionary).map(async (name) => {
          console.log(`Create a persongroup person: ${name}`);
          const createLargePersonGroupPersonResponse = await client
            .path("/largepersongroups/{largePersonGroupId}/persons", largePersonGroupId)
            .post({
              body: { name },
            });
    
          const { personId } = createLargePersonGroupPersonResponse.body;
    
          await Promise.all(
            personDictionary[name].map(async (similarImage) => {
              // Check if the image is of sufficient quality for recognition.
              const detectResponse = await client.path("/detect").post({
                contentType: "application/json",
                queryParameters: {
                  detectionModel: "detection_03",
                  recognitionModel: "recognition_04",
                  returnFaceId: false,
                  returnFaceAttributes: ["qualityForRecognition"],
                },
                body: { url: `${imageBaseUrl}${similarImage}` },
              });
    
              const sufficientQuality = detectResponse.body.every(
                (face) => face.faceAttributes?.qualityForRecognition === "high",
              );
              if (!sufficientQuality) {
                return;
              }
    
              if (detectResponse.body.length != 1) {
                return;
              }
    
              // Quality is sufficient, add to group.
              console.log(
                `Add face to the person group person: (${name}) from image: (${similarImage})`,
              );
              await client
                .path(
                  "/largepersongroups/{largePersonGroupId}/persons/{personId}/persistedfaces",
                  largePersonGroupId,
                  personId,
                )
                .post({
                  queryParameters: { detectionModel: "detection_03" },
                  body: { url: `${imageBaseUrl}${similarImage}` },
                });
            }),
          );
        }),
      );
      console.log("Done adding faces to person group.");
    
      // Start to train the large person group.
      console.log();
      console.log(`Training person group: ${largePersonGroupId}`);
      const trainResponse = await client
        .path("/largepersongroups/{largePersonGroupId}/train", largePersonGroupId)
        .post();
      const poller = await getLongRunningPoller(client, trainResponse);
      await poller.pollUntilDone();
      console.log(`Training status: ${poller.getOperationState().status}`);
      if (poller.getOperationState().status !== "succeeded") {
        return;
      }
    
      console.log("Pausing for 60 seconds to avoid triggering rate limit on free account...");
      await sleep(60000);
    
      // Detect faces from source image url and only take those with sufficient quality for recognition.
      const detectResponse = await client.path("/detect").post({
        contentType: "application/json",
        queryParameters: {
          detectionModel: "detection_03",
          recognitionModel: "recognition_04",
          returnFaceId: true,
          returnFaceAttributes: ["qualityForRecognition"],
        },
        body: { url: `${imageBaseUrl}${sourceImageFileName}` },
      });
      const faceIds = detectResponse.body.filter((face) => face.faceAttributes?.qualityForRecognition !== "low").map((face) => face.faceId);
    
      // Identify the faces in a large person group.
      const identifyResponse = await client.path("/identify").post({
        body: { faceIds, largePersonGroupId: largePersonGroupId },
      });
      await Promise.all(
        identifyResponse.body.map(async (result) => {
          try {
            const getLargePersonGroupPersonResponse = await client
              .path(
                "/largepersongroups/{largePersonGroupId}/persons/{personId}",
                largePersonGroupId,
                result.candidates[0].personId,
              )
              .get();
            const person = getLargePersonGroupPersonResponse.body;
            console.log(
              `Person: ${person.name} is identified for face in: ${sourceImageFileName} with ID: ${result.faceId}. Confidence: ${result.candidates[0].confidence}`,
            );
    
            // Verification:
            const verifyResponse = await client.path("/verify").post({
              body: {
                faceId: result.faceId,
                largePersonGroupId: largePersonGroupId,
                personId: person.personId,
              },
            });
            console.log(
              `Verification result between face ${result.faceId} and person ${person.personId}: ${verifyResponse.body.isIdentical} with confidence: ${verifyResponse.body.confidence}`,
            );
          } catch (error) {
            console.log(`No persons identified for face with ID ${result.faceId}`);
          }
        }),
      );
      console.log();
    
      // Delete large person group.
      console.log(`Deleting person group: ${largePersonGroupId}`);
      await client.path("/largepersongroups/{largePersonGroupId}", largePersonGroupId).delete();
      console.log();
    
      console.log("Done.");
    };
    
    main().catch(console.error);
    
  4. Run the application with the node command on your quickstart file.

    node index.js
    

Output

========IDENTIFY FACES========

Creating a person group with ID: a230ac8b-09b2-4fa0-ae04-d76356d88d9f
Adding faces to person group...
Create a persongroup person: Family1-Dad
Create a persongroup person: Family1-Mom
Create a persongroup person: Family1-Son
Add face to the person group person: (Family1-Dad) from image: (Family1-Dad1.jpg)
Add face to the person group person: (Family1-Mom) from image: (Family1-Mom1.jpg)
Add face to the person group person: (Family1-Son) from image: (Family1-Son1.jpg)
Add face to the person group person: (Family1-Dad) from image: (Family1-Dad2.jpg)
Add face to the person group person: (Family1-Mom) from image: (Family1-Mom2.jpg)
Add face to the person group person: (Family1-Son) from image: (Family1-Son2.jpg)
Done adding faces to person group.

Training person group: a230ac8b-09b2-4fa0-ae04-d76356d88d9f
Training status: succeeded
Pausing for 60 seconds to avoid triggering rate limit on free account...
No persons identified for face with ID 56380623-8bf0-414a-b9d9-c2373386b7be
Person: Family1-Dad is identified for face in: identification1.jpg with ID: c45052eb-a910-4fd3-b1c3-f91ccccc316a. Confidence: 0.96807
Person: Family1-Son is identified for face in: identification1.jpg with ID: 8dce9b50-513f-4fe2-9e19-352acfd622b3. Confidence: 0.9281
Person: Family1-Mom is identified for face in: identification1.jpg with ID: 75868da3-66f6-4b5f-a172-0b619f4d74c1. Confidence: 0.96902
Verification result between face c45052eb-a910-4fd3-b1c3-f91ccccc316a and person 35a58d14-fd58-4146-9669-82ed664da357: true with confidence: 0.96807
Verification result between face 8dce9b50-513f-4fe2-9e19-352acfd622b3 and person 2d4d196c-5349-431c-bf0c-f1d7aaa180ba: true with confidence: 0.9281
Verification result between face 75868da3-66f6-4b5f-a172-0b619f4d74c1 and person 35d5de9e-5f92-4552-8907-0d0aac889c3e: true with confidence: 0.96902

Deleting person group: a230ac8b-09b2-4fa0-ae04-d76356d88d9f

Done.

Clean up resources

If you want to clean up and remove an Azure AI services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.

Next steps

In this quickstart, you learned how to use the Face client library for JavaScript to do basic facial recognition. Next, learn about the different face detection models and how to specify the right model for your use case.

Get started with facial recognition using the Face REST API. The Face service provides you with access to advanced algorithms for detecting and recognizing human faces in images.

Note

This quickstart uses cURL commands to call the REST API. You can also call the REST API using a programming language. Complex scenarios like face identification are easier to implement using a language SDK. See the GitHub samples for examples in C#, Python, Java, JavaScript, and Go.

Prerequisites

  • Azure subscription - Create one for free
  • Once you have your Azure subscription, create a Face resource in the Azure portal to get your key and endpoint. After it deploys, select Go to resource.
    • You'll need the key and endpoint from the resource you create to connect your application to the Face API. You'll paste your key and endpoint into the code below later in the quickstart.
    • You can use the free pricing tier (F0) to try the service, and upgrade later to a paid tier for production.
  • PowerShell version 6.0 or later, or a similar command-line application.
  • cURL installed.

Identify and verify faces

Note

Some of the features won't work if you haven't received access to the Face service through the intake form.

  1. First, call the Detect API on the source face. This is the face that you'll try to identify from the larger group. Copy the following command to a text editor, insert your own key and endpoint, and then copy it into a shell window and run it.

    curl.exe -v -X POST "https://{resource endpoint}/face/v1.0/detect?returnFaceId=true&returnFaceLandmarks=false&recognitionModel=recognition_04&returnRecognitionModel=false&detectionModel=detection_03&faceIdTimeToLive=86400" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: {subscription key}" --data-ascii "{""url"":""https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/identification1.jpg""}"
    

    Save the returned face ID string to a temporary location. You'll use it again at the end.
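
    The Detect call returns a JSON array with one entry per detected face. An abridged, illustrative response (other fields, such as faceRectangle, are omitted here) looks like the following; the value of faceId is the string to save:

    [
      {
        "faceId": "<GUID of the detected source face>"
      }
    ]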

  2. Next, you need to create a LargePersonGroup and give it an arbitrary ID that matches the regex pattern ^[a-z0-9-_]+$. This object stores the aggregated face data of several people. Run the following command, inserting your own key. Optionally, change the group's name and metadata in the request body.

    curl.exe -v -X PUT "https://{resource endpoint}/face/v1.0/largepersongroups/{largePersonGroupId}" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: {subscription key}" --data-ascii "{
        ""name"": ""large-person-group-name"",
        ""userData"": ""User-provided data attached to the large person group."",
        ""recognitionModel"": ""recognition_04""
    }"
    

    Save the ID you specified for the created group to a temporary location.

  3. Next, you'll create Person objects that belong to the group. Run the following command, inserting your own key and the ID of the LargePersonGroup from the previous step. This command creates a Person named Family1-Dad.

    curl.exe -v -X POST "https://{resource endpoint}/face/v1.0/largepersongroups/{largePersonGroupId}/persons" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: {subscription key}" --data-ascii "{
        ""name"": ""Family1-Dad"",
        ""userData"": ""User-provided data attached to the person.""
    }"
    

    After you run this command, run it again with different input data to create more Person objects: Family1-Mom, Family1-Son, Family1-Daughter, Family2-Lady, and Family2-Man.

    Save the ID of each Person created; it's important to keep track of which person name has which ID.
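
    Each create-person call returns the ID of the new person. An illustrative response body (the GUID is a placeholder) looks like this:

    {
      "personId": "<GUID of the created person>"
    }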

  4. Next, you need to detect new faces and associate them with the existing Person objects. The following command detects a face from the image Family1-Dad1.jpg and adds it to the corresponding person. You need to specify the personId as the ID that was returned when you created the Family1-Dad Person object. The image name corresponds to the name of the created person. Also enter the LargePersonGroup ID and your key in the appropriate fields.

    curl.exe -v -X POST "https://{resource endpoint}/face/v1.0/largepersongroups/{largePersonGroupId}/persons/{personId}/persistedfaces?detectionModel=detection_03" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: {subscription key}" --data-ascii "{""url"":""https://raw.githubusercontent.com/Azure-Samples/cognitive-services-sample-data-files/master/Face/images/Family1-Dad1.jpg""}"
    

    Then, run the above command again with a different source image and target Person object. The images available are: Family1-Dad1.jpg, Family1-Dad2.jpg, Family1-Mom1.jpg, Family1-Mom2.jpg, Family1-Son1.jpg, Family1-Son2.jpg, Family1-Daughter1.jpg, Family1-Daughter2.jpg, Family2-Lady1.jpg, Family2-Lady2.jpg, Family2-Man1.jpg, and Family2-Man2.jpg. Be sure that the person whose ID you specify in the API call matches the name of the image file in the request body.

    At the end of this step, you should have multiple Person objects that each have one or more corresponding faces, detected directly from the provided images.
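
    Each successful add-face call returns an ID for the stored face. An illustrative response body (the GUID is a placeholder) looks like this:

    {
      "persistedFaceId": "<GUID of the added face>"
    }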

  5. Next, train the LargePersonGroup with the current face data. The training operation teaches the model how to associate facial features, sometimes aggregated from multiple source images, with each person. Insert the LargePersonGroup ID and your key before running the command.

    curl.exe -v -X POST "https://{resource endpoint}/face/v1.0/largepersongroups/{largePersonGroupId}/train" -H "Ocp-Apim-Subscription-Key: {subscription key}" --data ""
    
  6. Check whether the training status is succeeded. If it isn't, wait a while and query again.

    curl.exe -v "https://{resource endpoint}/face/v1.0/largepersongroups/{largePersonGroupId}/training" -H "Ocp-Apim-Subscription-Key: {subscription key}"
    
  7. Now you're ready to call the Identify API, using the source face ID from the first step and the LargePersonGroup ID. Insert these values into the appropriate fields in the request body, and insert your key.

    curl.exe -v -X POST "https://{resource endpoint}/face/v1.0/identify" -H "Content-Type: application/json" -H "Ocp-Apim-Subscription-Key: {subscription key}" --data-ascii "{
        ""largePersonGroupId"": ""INSERT_PERSONGROUP_ID"",
        ""faceIds"": [
            ""INSERT_SOURCE_FACE_ID""
        ],
        ""maxNumOfCandidatesReturned"": 1,
        ""confidenceThreshold"": 0.5
    }"
    

    The response returns a Person ID, indicating the person identified with the source face. It should be the ID that corresponds to the Family1-Dad person, because the source face is of that person.
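
    An illustrative response body (the GUIDs and confidence value are placeholders) looks like the following; each face entry lists candidate persons with a confidence score:

    [
      {
        "faceId": "<source face ID>",
        "candidates": [
          {
            "personId": "<ID of the matched person>",
            "confidence": 0.92
          }
        ]
      }
    ]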

  8. To do face verification, you'll use the person ID returned in the previous step, the LargePersonGroup ID, and the source face ID. Insert these values into the fields in the request body, and insert your key.

    curl.exe -v -X POST "https://{resource endpoint}/face/v1.0/verify" `
    -H "Content-Type: application/json" `
    -H "Ocp-Apim-Subscription-Key: {subscription key}" `
    --data-ascii "{
        ""faceId"": ""INSERT_SOURCE_FACE_ID"",
        ""personId"": ""INSERT_PERSON_ID"",
        ""largePersonGroupId"": ""INSERT_PERSONGROUP_ID""
    }"
    

    The response gives you a boolean verification result along with a confidence value.
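
    An illustrative verification response (the values are placeholders) looks like this:

    {
      "isIdentical": true,
      "confidence": 0.9
    }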

Clean up resources

To delete the LargePersonGroup you created in this exercise, run the LargePersonGroup - Delete call.

curl.exe -v -X DELETE "https://{resource endpoint}/face/v1.0/largepersongroups/{largePersonGroupId}" -H "Ocp-Apim-Subscription-Key: {subscription key}"

If you want to clean up and remove an Azure AI services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it.

Next steps

In this quickstart, you learned how to use the Face REST API to do basic facial recognition tasks. Next, learn about the different face detection models and how to specify the right model for your use case.