TranslationRecognizer Class
Performs translation on the speech input.
- Inheritance: Recognizer → TranslationRecognizer
Constructor
TranslationRecognizer(translation_config: SpeechTranslationConfig, auto_detect_source_language_config: AutoDetectSourceLanguageConfig | None = None, audio_config: AudioConfig | None = None)
Parameters
Name | Description |
---|---|
translation_config (Required) | The config for the translation recognizer. |
auto_detect_source_language_config | The auto detection source language config. Default value: None |
audio_config | The config for the audio input. Default value: None |
Methods
Method | Description |
---|---|
add_target_language | Add a language to the list of target languages for translation. Note: Added in version 1.7.0. |
recognize_once | Performs recognition in a blocking (synchronous) mode. Returns after a single utterance is recognized. The end of a single utterance is determined by listening for silence at the end or until a maximum of 15 seconds of audio is processed. The task returns the recognition text as the result. For long-running multi-utterance recognition, use start_continuous_recognition_async instead. |
recognize_once_async | Performs recognition in a non-blocking (asynchronous) mode. This will recognize a single utterance. The end of a single utterance is determined by listening for silence at the end or until a maximum of 15 seconds of audio is processed. For long-running multi-utterance recognition, use start_continuous_recognition_async instead. |
remove_target_language | Remove a language from the list of target languages for translation. Note: Added in version 1.7.0. |
start_continuous_recognition | Synchronously initiates a continuous recognition operation. The user has to connect to EventSignal to receive recognition results. Call stop_continuous_recognition_async to stop the recognition. |
start_continuous_recognition_async | Asynchronously initiates a continuous recognition operation. The user has to connect to EventSignal to receive recognition results. Call stop_continuous_recognition_async to stop the recognition. |
start_keyword_recognition | Synchronously configures the recognizer with the given keyword model. After calling this method, the recognizer is listening for the keyword to start the recognition. Call stop_keyword_recognition() to end the keyword initiated recognition. |
start_keyword_recognition_async | Asynchronously configures the recognizer with the given keyword model. After calling this method, the recognizer is listening for the keyword to start the recognition. Call stop_keyword_recognition_async() to end the keyword initiated recognition. |
stop_continuous_recognition | Synchronously terminates the ongoing continuous recognition operation. |
stop_continuous_recognition_async | Asynchronously terminates the ongoing continuous recognition operation. |
stop_keyword_recognition | Synchronously ends the keyword initiated recognition. |
stop_keyword_recognition_async | Asynchronously ends the keyword initiated recognition. |
add_target_language
Add a language to the list of target languages for translation.
Note
Added in version 1.7.0.
add_target_language(language: str)
Parameters
Name | Description |
---|---|
language (Required) | The language code to add. |
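A short sketch of adjusting target languages on the recognizer created in the constructor example above (requires SDK version 1.7.0 or later; the language code is illustrative, and the same pattern applies to remove_target_language below):

```python
# Add French as an additional translation target at runtime.
recognizer.add_target_language("fr")

# Later, drop it again; see remove_target_language below.
recognizer.remove_target_language("fr")
```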
recognize_once
Performs recognition in a blocking (synchronous) mode. Returns after a single utterance is recognized. The end of a single utterance is determined by listening for silence at the end or until a maximum of 15 seconds of audio is processed. The task returns the recognition text as the result. For long-running multi-utterance recognition, use start_continuous_recognition_async instead.
recognize_once() -> TranslationRecognitionResult
Returns
Type | Description |
---|---|
TranslationRecognitionResult | The result value of the synchronous recognition. |
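Continuing the construction example above, a sketch of a single blocking recognition and one way the translations might be read from the result:

```python
result = recognizer.recognize_once()

if result.reason == speechsdk.ResultReason.TranslatedSpeech:
    print("Recognized:", result.text)
    # translations maps each target language code to the translated text.
    for language, translation in result.translations.items():
        print(f"  {language}: {translation}")
elif result.reason == speechsdk.ResultReason.NoMatch:
    print("No speech could be recognized.")
```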
recognize_once_async
Performs recognition in a non-blocking (asynchronous) mode. This will recognize a single utterance. The end of a single utterance is determined by listening for silence at the end or until a maximum of 15 seconds of audio is processed. For long-running multi-utterance recognition, use start_continuous_recognition_async instead.
recognize_once_async() -> ResultFuture
Returns
Type | Description |
---|---|
ResultFuture | A future containing the result value of the asynchronous recognition. |
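A sketch of the non-blocking variant; the returned future's get() blocks until the single-utterance result is available, so other work can be done in between:

```python
# Start recognition without blocking the calling thread.
future = recognizer.recognize_once_async()

# ... do other work here ...

# get() blocks until the single-utterance result is ready.
result = future.get()
print(result.translations)
```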
remove_target_language
Remove a language from the list of target languages for translation.
Note
Added in version 1.7.0.
remove_target_language(language: str)
Parameters
Name | Description |
---|---|
language (Required) | The language code to remove. |
start_continuous_recognition
Synchronously initiates a continuous recognition operation. The user has to connect to EventSignal to receive recognition results. Call stop_continuous_recognition_async to stop the recognition.
start_continuous_recognition()
start_continuous_recognition_async
Asynchronously initiates a continuous recognition operation. The user has to connect to EventSignal to receive recognition results. Call stop_continuous_recognition_async to stop the recognition.
start_continuous_recognition_async() -> ResultFuture
Returns
Type | Description |
---|---|
ResultFuture | A future that is fulfilled once recognition has been initialized. |
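A sketch of a continuous-recognition loop built on the recognizer from the constructor example, wiring the recognized, session_stopped, and canceled signals; the polling loop is one simple way to wait for completion, not the only one:

```python
import time

done = False

def on_recognized(evt):
    # evt.result.translations maps each target language code to its translation.
    print("RECOGNIZED:", evt.result.translations)

def on_stopped(evt):
    global done
    done = True

recognizer.recognized.connect(on_recognized)
recognizer.session_stopped.connect(on_stopped)
recognizer.canceled.connect(on_stopped)

recognizer.start_continuous_recognition()
while not done:
    time.sleep(0.5)
recognizer.stop_continuous_recognition()
```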
start_keyword_recognition
Synchronously configures the recognizer with the given keyword model. After calling this method, the recognizer is listening for the keyword to start the recognition. Call stop_keyword_recognition() to end the keyword initiated recognition.
start_keyword_recognition(model: KeywordRecognitionModel)
Parameters
Name | Description |
---|---|
model (Required) | The keyword recognition model that specifies the keyword to be recognized. |
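A sketch of keyword-initiated recognition; it assumes you already have a keyword model file (for example, one created with Custom Keyword), and the path shown is a placeholder:

```python
# "my_keyword.table" is a placeholder path to a keyword model you have created.
model = speechsdk.KeywordRecognitionModel("my_keyword.table")

# The recognizer now waits for the keyword before starting recognition.
recognizer.start_keyword_recognition(model)

# ... later, end the keyword initiated recognition.
recognizer.stop_keyword_recognition()
```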
start_keyword_recognition_async
Asynchronously configures the recognizer with the given keyword model. After calling this method, the recognizer is listening for the keyword to start the recognition. Call stop_keyword_recognition_async() to end the keyword initiated recognition.
start_keyword_recognition_async(model: KeywordRecognitionModel)
Parameters
Name | Description |
---|---|
model (Required) | The keyword recognition model that specifies the keyword to be recognized. |
Returns
Type | Description |
---|---|
ResultFuture | A future that is fulfilled once recognition has been initialized. |
stop_continuous_recognition
Synchronously terminates the ongoing continuous recognition operation.
stop_continuous_recognition()
stop_continuous_recognition_async
Asynchronously terminates the ongoing continuous recognition operation.
stop_continuous_recognition_async()
Returns
Type | Description |
---|---|
ResultFuture | A future that is fulfilled once recognition has been stopped. |
stop_keyword_recognition
Synchronously ends the keyword initiated recognition.
stop_keyword_recognition()
stop_keyword_recognition_async
Asynchronously ends the keyword initiated recognition.
stop_keyword_recognition_async()
Returns
Type | Description |
---|---|
ResultFuture | A future that is fulfilled once recognition has been stopped. |
Attributes
authorization_token
The authorization token that will be used for connecting to the service.
Note
The caller needs to ensure that the authorization token is valid. Before the
authorization token expires, the caller needs to refresh it by calling this setter with a
new valid token. Otherwise, the recognizer will encounter errors during recognition.
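A sketch of refreshing the token before it expires; fetch_new_token() is a hypothetical helper standing in for however your application obtains tokens (for example, from your own token service):

```python
# fetch_new_token() is hypothetical -- obtain a fresh token from your own service.
new_token = fetch_new_token()

# Assign it before the current token expires so recognition is not interrupted.
recognizer.authorization_token = new_token
```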
canceled
Signal for events containing canceled recognition results (indicating a recognition attempt that was canceled as a result of a direct cancellation request or, alternatively, a transport or protocol failure).
Callbacks connected to this signal are called with a TranslationRecognitionCanceledEventArgs instance as the single argument.
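A sketch of a canceled handler; it assumes the event argument exposes cancellation_details with a reason and error_details, as the other recognizer canceled events do:

```python
def on_canceled(evt):
    details = evt.cancellation_details
    print("CANCELED:", details.reason)
    if details.reason == speechsdk.CancellationReason.Error:
        print("Error details:", details.error_details)

recognizer.canceled.connect(on_canceled)
```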
endpoint_id
The endpoint ID of a customized speech model that is used for recognition, or a custom voice model for speech synthesis.
properties
A collection of properties and their values defined for this Recognizer.
recognized
Signal for events containing final recognition results (indicating a successful recognition attempt).
Callbacks connected to this signal are called with a TranslationRecognitionEventArgs instance as the single argument.
recognizing
Signal for events containing intermediate recognition results.
Callbacks connected to this signal are called with a TranslationRecognitionEventArgs instance as the single argument.
session_started
Signal for events indicating the start of a recognition session (operation).
Callbacks connected to this signal are called with a SessionEventArgs instance as the single argument.
session_stopped
Signal for events indicating the end of a recognition session (operation).
Callbacks connected to this signal are called with a SessionEventArgs instance as the single argument.
speech_end_detected
Signal for events indicating the end of speech.
Callbacks connected to this signal are called with a RecognitionEventArgs instance as the single argument.
speech_start_detected
Signal for events indicating the start of speech.
Callbacks connected to this signal are called with a RecognitionEventArgs instance as the single argument.
synthesizing
The event signals that a translation synthesis result is received.
Callbacks connected to this signal are called with a TranslationSynthesisEventArgs instance as the single argument.
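A sketch of collecting synthesized translation audio. It assumes voice_name was set on the SpeechTranslationConfig before the recognizer in the earlier examples was created (the config is read at construction time), and the voice name shown is a placeholder:

```python
# Done before constructing the recognizer; the voice name is a placeholder.
translation_config.voice_name = "de-DE-KatjaNeural"

audio_chunks = []

def on_synthesizing(evt):
    # evt.result.audio carries the raw synthesized audio bytes for this chunk.
    audio_chunks.append(evt.result.audio)

recognizer.synthesizing.connect(on_synthesizing)
```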
target_languages
The target languages for translation.
Note
Added in version 1.7.0.
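For example, after the add_target_language call shown earlier, the currently configured targets can be inspected directly:

```python
# Prints the language codes currently configured as translation targets.
print(recognizer.target_languages)
```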