Deploy a model to Azure Container Instances with CLI (v1)
Important
This article shows how to use the CLI and SDK v1 to deploy a model. For the recommended approach for v2, see Deploy and score a machine learning model by using an online endpoint.
Learn how to use Azure Machine Learning to deploy a model as a web service on Azure Container Instances (ACI). Use Azure Container Instances if you:
- Prefer not to manage your own Kubernetes cluster.
- Are OK with having only a single replica of your service, which might affect uptime.
For information on quota and region availability for ACI, see the Quotas and region availability for Azure Container Instances article.
Important
We highly advise that you debug locally before deploying to the web service. For more information, see Debug locally.
You can also refer to Azure Machine Learning - Deploy to Local Notebook.
Prerequisites
An Azure Machine Learning workspace. For more information, see Create an Azure Machine Learning workspace.
A machine learning model registered in your workspace. If you don't have a registered model, see How and where to deploy models.
The Azure CLI extension (v1) for Machine Learning service, Azure Machine Learning Python SDK, or the Azure Machine Learning Visual Studio Code extension.
Important
Some of the Azure CLI commands in this article use the `azure-cli-ml`, or v1, extension for Azure Machine Learning. Support for the v1 extension will end on September 30, 2025. You will be able to install and use the v1 extension until that date. We recommend that you transition to the `ml`, or v2, extension before September 30, 2025. For more information on the v2 extension, see Azure ML CLI extension and Python SDK v2.

The Python code snippets in this article assume that the following variables are set:

- `ws` - Set to your workspace.
- `model` - Set to your registered model.
- `inference_config` - Set to the inference configuration for the model.

For more information on setting these variables, see How and where to deploy models.

The CLI snippets in this article assume that you've created an `inferenceconfig.json` document. For more information on creating this document, see How and where to deploy models.
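As a rough illustration only, the following sketch writes a minimal `inferenceconfig.json` and reads it back to confirm it's valid JSON. The field names and values shown (`entryScript`, `sourceDirectory`) are assumptions for illustration; consult How and where to deploy models for the actual schema your deployment needs:

```python
import json

# A minimal inference configuration. These values are hypothetical
# placeholders -- your own entry script and source directory will differ.
inference_config_doc = {
    "entryScript": "score.py",
    "sourceDirectory": "./source_dir",
}

# Write the document that the CLI snippets in this article expect.
with open("inferenceconfig.json", "w") as f:
    json.dump(inference_config_doc, f, indent=4)

# Read it back to confirm that the file parses as JSON.
with open("inferenceconfig.json") as f:
    loaded = json.load(f)
print(loaded["entryScript"])
```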
Limitations
Note
- Deploying Azure Container Instances in a virtual network is not supported. Instead, for network isolation, consider using managed online endpoints.
- To ensure effective support, supply the necessary logs for your ACI containers; without these logs, technical support cannot be guaranteed. We recommend enabling log analytics by specifying `enable_app_insights=True` in your deployment configuration so that you can manage and analyze your ACI container logs efficiently.
Deploy to ACI
To deploy a model to Azure Container Instances, create a deployment configuration that describes the compute resources needed, such as the number of cores and the amount of memory. You also need an inference configuration, which describes the environment needed to host the model and web service. For more information on creating the inference configuration, see How and where to deploy models.
Note
- ACI is suitable only for small models that are under 1 GB in size.
- We recommend using single-node AKS to dev-test larger models.
- The number of models to be deployed is limited to 1,000 models per deployment (per container).
Using the SDK
APPLIES TO: Python SDK azureml v1
from azureml.core.webservice import AciWebservice, Webservice
from azureml.core.model import Model

# Configure the compute resources for the ACI container.
deployment_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)

# Deploy the registered model as a web service named "aciservice".
service = Model.deploy(ws, "aciservice", [model], inference_config, deployment_config)
service.wait_for_deployment(show_output=True)
print(service.state)
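Once deployment succeeds, you can send data to the service. As a minimal sketch, the `{"data": [...]}` payload shape below is an assumption that matches many scoring scripts; the actual schema depends on what your entry script's `run()` function expects:

```python
import json

# Build a JSON payload. The {"data": [...]} shape and the feature values
# are hypothetical -- match them to your own entry script's input schema.
payload = json.dumps({"data": [[1.0, 2.0, 3.0, 4.0]]})

# With the `service` object from the deployment above, you would call:
#   result = service.run(input_data=payload)
#   print(result)
print(payload)
```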
For more information on the classes, methods, and parameters used in this example, see the reference documentation for AciWebservice.deploy_configuration and Model.deploy.
Using the Azure CLI
APPLIES TO: Azure CLI ml extension v1
To deploy using the CLI, use the following command. Replace `mymodel:1` with the name and version of the registered model. Replace `myservice` with the name to give this service:
az ml model deploy -n myservice -m mymodel:1 --ic inferenceconfig.json --dc deploymentconfig.json
The entries in the `deploymentconfig.json` document map to the parameters for AciWebservice.deploy_configuration. The following table describes the mapping between the entities in the JSON document and the parameters for the method:
| JSON entity | Method parameter | Description |
| --- | --- | --- |
| `computeType` | NA | The compute target. For ACI, the value must be `ACI`. |
| `containerResourceRequirements` | NA | Container for the CPU and memory entities. |
| `cpu` | `cpu_cores` | The number of CPU cores to allocate. Default: `0.1`. |
| `memoryInGB` | `memory_gb` | The amount of memory (in GB) to allocate for this web service. Default: `0.5`. |
| `location` | `location` | The Azure region to deploy this Webservice to. If not specified, the workspace location is used. For more details on available regions, see ACI Regions. |
| `authEnabled` | `auth_enabled` | Whether to enable auth for this Webservice. Defaults to `False`. |
| `sslEnabled` | `ssl_enabled` | Whether to enable TLS for this Webservice. Defaults to `False`. |
| `appInsightsEnabled` | `enable_app_insights` | Whether to enable Application Insights for this Webservice. Defaults to `False`. |
| `sslCertificate` | `ssl_cert_pem_file` | The cert file needed if TLS is enabled. |
| `sslKey` | `ssl_key_pem_file` | The key file needed if TLS is enabled. |
| `cname` | `ssl_cname` | The CNAME to use if TLS is enabled. |
| `dnsNameLabel` | `dns_name_label` | The DNS name label for the scoring endpoint. If not specified, a unique DNS name label is generated for the scoring endpoint. |
The following JSON is an example deployment configuration for use with the CLI:
{
"computeType": "aci",
"containerResourceRequirements":
{
"cpu": 0.5,
"memoryInGB": 1.0
},
"authEnabled": true,
"sslEnabled": false,
"appInsightsEnabled": false
}
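As an optional sanity check before deploying, you can load the document and verify its fields against the mapping table above. This sketch is not part of the deployment itself; it simply parses the example configuration:

```python
import json

# The example deployment configuration from this article, as a string.
config_text = """
{
    "computeType": "aci",
    "containerResourceRequirements":
    {
        "cpu": 0.5,
        "memoryInGB": 1.0
    },
    "authEnabled": true,
    "sslEnabled": false,
    "appInsightsEnabled": false
}
"""

config = json.loads(config_text)

# The compute target for an ACI deployment (this article's example
# uses lowercase "aci").
assert config["computeType"].lower() == "aci"

# The CPU and memory requests live under containerResourceRequirements,
# mapping to the cpu_cores and memory_gb method parameters.
resources = config["containerResourceRequirements"]
print(resources["cpu"], resources["memoryInGB"])
```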
For more information, see the az ml model deploy reference.
Using VS Code
See how to manage resources in VS Code.
Important
You don't need to create an ACI container to test in advance. ACI containers are created as needed.
Important
We append a hashed workspace ID to all underlying ACI resources that are created, so all ACI names from the same workspace have the same suffix. The Azure Machine Learning service name is still the customer-provided "service_name", and the user-facing Azure Machine Learning SDK APIs don't need any change. We don't give any guarantees on the names of the underlying resources being created.
Next steps
- How to deploy a model using a custom Docker image
- Deployment troubleshooting
- Update the web service
- Use TLS to secure a web service through Azure Machine Learning
- Consume an Azure Machine Learning model deployed as a web service
- Monitor your Azure Machine Learning models with Application Insights
- Collect data for models in production