Upgrade data management to SDK v2
In V1, an Azure Machine Learning dataset can be either a `FileDataset` or a `TabularDataset`. In V2, an Azure Machine Learning data asset can be a `uri_folder`, `uri_file`, or `mltable`. Conceptually, you can map `FileDataset` to `uri_folder` and `uri_file`, and `TabularDataset` to `mltable`.
- URIs (`uri_folder`, `uri_file`) – a Uniform Resource Identifier is a reference to a storage location on your local computer or in the cloud, making it easy to access data in your jobs.
- MLTable – a method to abstract the schema definition for tabular data, so that consumers of the data can more easily materialize the table into a Pandas/Dask/Spark dataframe (see the sketch after this list).
This article compares data scenarios in SDK v1 and SDK v2.
Create a `FileDataset`/URI type of data asset
SDK v1 – Create a `FileDataset`

```python
from azureml.core import Workspace, Datastore, Dataset

# create a FileDataset pointing to files in 'animals' folder and its subfolders recursively
datastore_paths = [(datastore, 'animals')]
animal_ds = Dataset.File.from_files(path=datastore_paths)

# create a FileDataset from image and label files behind public web urls
web_paths = ['https://azureopendatastorage.blob.core.windows.net/mnist/train-images-idx3-ubyte.gz',
             'https://azureopendatastorage.blob.core.windows.net/mnist/train-labels-idx1-ubyte.gz']
mnist_ds = Dataset.File.from_files(path=web_paths)
```
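As a follow-up sketch, a v1 `FileDataset` can also be pulled to local disk with its `download` method; the target folder name here is illustrative:

```python
# download the files the FileDataset references to a local folder (v1 API);
# './mnist_data' is an illustrative target path
mnist_ds.download(target_path='./mnist_data', overwrite=True)
```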
SDK v2 – Create a `URI_FOLDER` type data asset

```python
from azure.ai.ml.entities import Data
from azure.ai.ml.constants import AssetTypes

# Supported paths include:
# local: './<path>'
# blob: 'https://<account_name>.blob.core.windows.net/<container_name>/<path>'
# ADLS gen2: 'abfss://<file_system>@<account_name>.dfs.core.windows.net/<path>/'
# Datastore: 'azureml://datastores/<data_store_name>/paths/<path>'
my_path = '<path>'

my_data = Data(
    path=my_path,
    type=AssetTypes.URI_FOLDER,
    description="<description>",
    name="<name>",
    version='<version>'
)

ml_client.data.create_or_update(my_data)
```
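After registration, the asset can be retrieved by name and version. A minimal sketch, assuming `ml_client` is an authenticated `MLClient` and the placeholders match the registration above:

```python
# retrieve the registered data asset by name and version
data_asset = ml_client.data.get(name="<name>", version="<version>")

# the storage URI the asset resolves to
print(data_asset.path)
```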
Create a `URI_FILE` type data asset.

```python
from azure.ai.ml.entities import Data
from azure.ai.ml.constants import AssetTypes

# Supported paths include:
# local: './<path>/<file>'
# blob: 'https://<account_name>.blob.core.windows.net/<container_name>/<path>/<file>'
# ADLS gen2: 'abfss://<file_system>@<account_name>.dfs.core.windows.net/<path>/<file>'
# Datastore: 'azureml://datastores/<data_store_name>/paths/<path>/<file>'
my_path = '<path>'

my_data = Data(
    path=my_path,
    type=AssetTypes.URI_FILE,
    description="<description>",
    name="<name>",
    version="<version>"
)

ml_client.data.create_or_update(my_data)
```
Create a tabular dataset/data asset
SDK v1
```python
from azureml.core import Workspace, Datastore, Dataset

datastore_name = 'your datastore name'

# get existing workspace
workspace = Workspace.from_config()

# retrieve an existing datastore in the workspace by name
datastore = Datastore.get(workspace, datastore_name)

# create a TabularDataset from 3 file paths in datastore
datastore_paths = [(datastore, 'weather/2018/11.csv'),
                   (datastore, 'weather/2018/12.csv'),
                   (datastore, 'weather/2019/*.csv')]
weather_ds = Dataset.Tabular.from_delimited_files(path=datastore_paths)
```
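The resulting v1 `TabularDataset` can then be materialized directly; a minimal sketch:

```python
# materialize the TabularDataset into a Pandas dataframe (v1 API)
weather_df = weather_ds.to_pandas_dataframe()
```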
SDK v2 – Create an `mltable` data asset via YAML definition

```yaml
type: mltable

paths:
  - pattern: ./*.txt
transformations:
  - read_delimited:
      delimiter: ,
      encoding: ascii
      header: all_files_same_headers
```
```python
from azure.ai.ml.entities import Data
from azure.ai.ml.constants import AssetTypes

# my_path must point to folder containing MLTable artifact (MLTable file + data)
# Supported paths include:
# local: './<path>'
# blob: 'https://<account_name>.blob.core.windows.net/<container_name>/<path>'
# ADLS gen2: 'abfss://<file_system>@<account_name>.dfs.core.windows.net/<path>/'
# Datastore: 'azureml://datastores/<data_store_name>/paths/<path>'
my_path = '<path>'

my_data = Data(
    path=my_path,
    type=AssetTypes.MLTABLE,
    description="<description>",
    name="<name>",
    version='<version>'
)

ml_client.data.create_or_update(my_data)
```
Use data in an experiment/job
SDK v1
```python
from azureml.core import ScriptRunConfig

src = ScriptRunConfig(source_directory=script_folder,
                      script='train_titanic.py',
                      # pass dataset as an input with friendly name 'titanic'
                      arguments=['--input-data', titanic_ds.as_named_input('titanic')],
                      compute_target=compute_target,
                      environment=myenv)

# Submit the run configuration for your training run
run = experiment.submit(src)
run.wait_for_completion(show_output=True)
```
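Inside the training script, the named input is available from the run context. A sketch of what `train_titanic.py` might do; the dataframe step assumes `titanic_ds` is a `TabularDataset`:

```python
# train_titanic.py (sketch)
from azureml.core import Run

run = Run.get_context()

# fetch the input registered above under the friendly name 'titanic'
titanic_ds = run.input_datasets['titanic']

# materializing to a dataframe assumes the input is a TabularDataset
df = titanic_ds.to_pandas_dataframe()
```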
SDK v2
```python
from azure.ai.ml import command
from azure.ai.ml.entities import Data
from azure.ai.ml import Input, Output
from azure.ai.ml.constants import AssetTypes

# Possible Asset Types for Data:
# AssetTypes.URI_FILE
# AssetTypes.URI_FOLDER
# AssetTypes.MLTABLE

# Possible Paths for Data:
# Blob: https://<account_name>.blob.core.windows.net/<container_name>/<folder>/<file>
# Datastore: azureml://datastores/paths/<folder>/<file>
# Data Asset: azureml:<my_data>:<version>

my_job_inputs = {
    "raw_data": Input(type=AssetTypes.URI_FOLDER, path="<path>")
}

my_job_outputs = {
    "prep_data": Output(type=AssetTypes.URI_FOLDER, path="<path>")
}

job = command(
    code="./src",  # local path where the code is stored
    command="python process_data.py --raw_data ${{inputs.raw_data}} --prep_data ${{outputs.prep_data}}",
    inputs=my_job_inputs,
    outputs=my_job_outputs,
    environment="<environment_name>:<version>",
    compute="cpu-cluster",
)

# submit the command
returned_job = ml_client.create_or_update(job)

# get a URL for the status of the job
returned_job.services["Studio"].endpoint
```
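At runtime, `${{inputs.raw_data}}` and `${{outputs.prep_data}}` resolve to local paths on the compute target, so the script reads and writes them like ordinary folders. A hypothetical sketch of `process_data.py` under that assumption:

```python
# process_data.py (sketch): the command above passes folder paths as arguments
import argparse
import os

parser = argparse.ArgumentParser()
parser.add_argument("--raw_data", type=str)
parser.add_argument("--prep_data", type=str)
args = parser.parse_args()

# a uri_folder input arrives as a mounted (or downloaded) local folder
print("raw data files:", os.listdir(args.raw_data))

# files written to the output folder are uploaded when the job completes
with open(os.path.join(args.prep_data, "prepped.txt"), "w") as f:
    f.write("processed")
```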
Mapping of key functionality in SDK v1 and SDK v2
| Functionality in SDK v1 | Rough mapping in SDK v2 |
| --- | --- |
| Method/API in SDK v1 | Method/API in SDK v2 |
Next steps

For more information, see the documentation here: