How to integrate a live stream endpoint with a machine learning endpoint?

YASH SHAH 0 Reputation points
2023-05-21T13:48:08.2066667+00:00

I have an Azure Media Services live stream endpoint playback URL. On the other side, I have a model deployed and registered on an Azure Machine Learning endpoint via automated ML. I want to use this model to perform object detection on the live stream and visualize the bounding boxes.

I'm new to Azure and have been working on this for a while, but I am not able to integrate these.

Can anyone help with steps on how to solve this? Or is there a better way to perform object detection on a live stream?

I would prefer to stick to the YOLOv5s model.

Thank you in advance!

Tags: Azure Machine Learning, Azure Virtual Machines, Azure Media Services, Azure Computer Vision

1 answer

  1. Sedat SALMAN 13,265 Reputation points
    2023-05-22T07:14:56.2133333+00:00

    To integrate a live stream endpoint from Azure Media Services with an object detection model deployed on an Azure Machine Learning endpoint, you can follow these steps:

    Set up Azure Media Services:

    • Create an Azure Media Services account and configure it to receive the live stream from your video source. This involves setting up the live event, its ingest settings, and a streaming endpoint (a minimal SDK sketch follows below).
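
    As a rough sketch, creating and starting a live event can be done with the azure-mgmt-media Python SDK. The resource names below (my-rg, my-ams-account, my-live-event) and the region are placeholders for your own values, and parameter names may differ slightly between SDK versions:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.media import AzureMediaServices
from azure.mgmt.media.models import LiveEvent, LiveEventInput, LiveEventInputProtocol

client = AzureMediaServices(DefaultAzureCredential(), "<subscription-id>")

# Define a live event that ingests RTMP from your encoder
live_event = LiveEvent(
    location="westus2",  # use the region of your Media Services account
    input=LiveEventInput(streaming_protocol=LiveEventInputProtocol.RTMP),
)

# begin_create returns a poller; .result() blocks until provisioning finishes
client.live_events.begin_create(
    resource_group_name="my-rg",
    account_name="my-ams-account",
    live_event_name="my-live-event",
    parameters=live_event,
    auto_start=True,
).result()

# The streaming endpoint must be running before the playback URL is reachable
client.streaming_endpoints.begin_start(
    resource_group_name="my-rg",
    account_name="my-ams-account",
    streaming_endpoint_name="default",
).result()
```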

    Deploy and register the object detection model:

    • Train or use an existing object detection model, such as YOLOv5, and deploy it as an endpoint using Azure Machine Learning. This step involves creating an inference pipeline, packaging the model, and deploying it to an Azure Machine Learning online endpoint (see the deployment sketch below).
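
    If the model is not already behind an online endpoint, a minimal sketch with the azure-ai-ml (SDK v2) package might look like the following. The endpoint and deployment names, the registered model and environment references, and score.py are placeholders for your own assets:

```python
from azure.ai.ml import MLClient
from azure.ai.ml.entities import (
    ManagedOnlineEndpoint,
    ManagedOnlineDeployment,
    CodeConfiguration,
)
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    DefaultAzureCredential(), "<subscription-id>", "<resource-group>", "<workspace>"
)

# Endpoint that will receive scoring requests
endpoint = ManagedOnlineEndpoint(name="yolov5-endpoint", auth_mode="key")
ml_client.online_endpoints.begin_create_or_update(endpoint).result()

# Deployment that serves the registered model behind the endpoint
deployment = ManagedOnlineDeployment(
    name="blue",
    endpoint_name="yolov5-endpoint",
    model="azureml:yolov5s:1",            # registered model name:version
    environment="azureml:yolov5-env:1",   # environment with torch/ultralytics
    code_configuration=CodeConfiguration(code="./src", scoring_script="score.py"),
    instance_type="Standard_DS3_v2",
    instance_count=1,
)
ml_client.online_deployments.begin_create_or_update(deployment).result()
```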

    Implement the integration:

    • Write code or a script that performs the following steps (a minimal end-to-end sketch follows this list):
    • Capture the live stream playback URL from the Azure Media Services endpoint.
    • Continuously retrieve video frames from the live stream using the playback URL.
    • Send the frames to the Azure Machine Learning endpoint for object detection inference using the deployed model.
    • Receive the inference results, including bounding box coordinates and class labels, from the Azure Machine Learning endpoint.
    • Overlay the bounding boxes and labels on the video frames to visualize the object detection results.
    • Display or stream the processed video frames with visualizations.
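
    A minimal sketch of that loop is shown below. It assumes the playback URL is an HLS manifest that OpenCV's FFmpeg backend can open, and that your scoring script accepts a base64-encoded JPEG and returns a JSON list of detections of the form {"box": [x1, y1, x2, y2], "label": ..., "score": ...}; adjust the request and response handling to match your actual score.py:

```python
import base64

import cv2
import requests

# Placeholders: your AMS playback URL, AML scoring URI, and endpoint key
PLAYBACK_URL = "https://<streaming-endpoint>/<locator>/manifest(format=m3u8-cmaf)"
SCORING_URI = "https://<endpoint-name>.<region>.inference.ml.azure.com/score"
API_KEY = "<endpoint-key>"

# OpenCV's FFmpeg backend can read HLS manifests directly
cap = cv2.VideoCapture(PLAYBACK_URL)
headers = {"Authorization": f"Bearer {API_KEY}"}

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break

    # Encode the frame as JPEG and send it to the online endpoint
    _, jpeg = cv2.imencode(".jpg", frame)
    payload = {"image": base64.b64encode(jpeg.tobytes()).decode("utf-8")}
    resp = requests.post(SCORING_URI, headers=headers, json=payload)
    detections = resp.json()  # response schema depends on your score.py

    # Overlay bounding boxes and labels on the frame
    for det in detections:
        x1, y1, x2, y2 = map(int, det["box"])
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
        cv2.putText(frame, f'{det["label"]} {det["score"]:.2f}',
                    (x1, y1 - 5), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)

    cv2.imshow("live object detection", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```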

    Scale and optimize the solution (optional):

    • Depending on your requirements and workload, you might need to consider scaling and optimizing the solution.
    • For increased scalability and performance, consider using Azure Virtual Machines or other compute resources with higher specifications.
    • If you anticipate high throughput, you might need to optimize the code or architecture to handle the volume of video frames efficiently (see the frame-dropping sketch below).
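
    One simple pattern for keeping up with a live stream when inference is slower than the frame rate is to score only the most recent frame and drop the rest. The sketch below uses a worker thread and a one-slot queue; score_frame is a hypothetical stand-in for the HTTP call to the endpoint shown earlier:

```python
import queue
import threading

import cv2

def score_frame(frame):
    # Hypothetical stand-in for the HTTP call to the AML endpoint shown above
    return []

frames = queue.Queue(maxsize=1)  # one-slot queue: only the newest frame waits

def scoring_worker():
    while True:
        frame = frames.get()
        if frame is None:                 # sentinel value stops the worker
            break
        detections = score_frame(frame)
        # ...overlay and display/publish the detections here...

threading.Thread(target=scoring_worker, daemon=True).start()

cap = cv2.VideoCapture("<playback-url>")
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    if frames.full():                     # inference is behind: drop the stale frame
        try:
            frames.get_nowait()
        except queue.Empty:
            pass
    frames.put(frame)

frames.put(None)                          # tell the worker to stop
cap.release()
```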

    It's important to note that this integration involves custom development and requires coding and implementation expertise. You can use Azure SDKs or APIs for Azure Media Services and Azure Machine Learning to facilitate the integration.
