Note
Please see Azure Cognitive Services for Speech documentation for the latest supported speech solutions.
SpeechRecognitionEngine.SpeechDetected Event
Raised when the SpeechRecognitionEngine detects input that it can identify as speech.
Namespace: Microsoft.Speech.Recognition
Assembly: Microsoft.Speech (in Microsoft.Speech.dll)
Syntax
'Declaration
Public Event SpeechDetected As EventHandler(Of SpeechDetectedEventArgs)
'Usage
Dim instance As SpeechRecognitionEngine
Dim handler As EventHandler(Of SpeechDetectedEventArgs)
AddHandler instance.SpeechDetected, handler
public event EventHandler<SpeechDetectedEventArgs> SpeechDetected
Remarks
Each speech recognizer has an algorithm to distinguish between silence and speech. When the SpeechRecognitionEngine performs a speech recognition operation, it raises the SpeechDetected event when its algorithm identifies the input as speech. The AudioPosition property of the associated SpeechDetectedEventArgs object indicates the location in the input stream where the recognizer detected speech. The SpeechRecognitionEngine raises the SpeechDetected event before it raises any of the SpeechHypothesized, SpeechRecognized, or SpeechRecognitionRejected events.
For more information, see the Recognize, RecognizeAsync, EmulateRecognize, and EmulateRecognizeAsync methods.
When you create a SpeechDetected delegate, you identify the method that will handle the event. To associate the event with your event handler, add an instance of the delegate to the event. The event handler is called whenever the event occurs, unless you remove the delegate. For more information about event-handler delegates, see Events and Delegates.
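For example, the following minimal sketch (illustrative only; it assumes a SpeechRecognitionEngine named recognizer that has already been configured with a grammar and an input source) attaches a SpeechDetected handler and later removes it when notifications are no longer needed:
using System;
using Microsoft.Speech.Recognition;

class SpeechDetectedHandlerSketch
{
    static void Demo(SpeechRecognitionEngine recognizer)
    {
        // Create the delegate once and keep a reference to it,
        // so the same instance can later be removed from the event.
        EventHandler<SpeechDetectedEventArgs> handler =
            (sender, e) => Console.WriteLine(
                "Speech detected at AudioPosition = {0}", e.AudioPosition);

        // Associate the event with the handler.
        recognizer.SpeechDetected += handler;

        // ... perform recognition operations ...

        // Remove the delegate when the handler is no longer needed.
        recognizer.SpeechDetected -= handler;
    }
}
The complete example that follows uses a named method instead of a lambda, which serves the same purpose while keeping the handler easy to reference.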
Examples
The following example is part of a console application for choosing origin and destination cities for a flight. The application recognizes phrases such as "I would like to fly from Miami to Chicago." The example uses the SpeechDetected event to report the AudioPosition each time speech is detected.
using System;
using Microsoft.Speech.Recognition;

namespace SampleRecognition
{
    class Program
    {
        static void Main(string[] args)
        {
            // Initialize a SpeechRecognitionEngine object.
            using (SpeechRecognitionEngine recognizer =
                new SpeechRecognitionEngine())
            {
                // Create a grammar.
                Choices cities = new Choices(new string[] {
                    "Los Angeles", "New York", "Chicago", "San Francisco", "Miami", "Dallas" });

                GrammarBuilder gb = new GrammarBuilder();
                gb.Append("I would like to fly from");
                gb.Append(cities);
                gb.Append("to");
                gb.Append(cities);

                // Create a Grammar object and load it to the recognizer.
                Grammar g = new Grammar(gb);
                g.Name = "City Chooser";
                recognizer.LoadGrammarAsync(g);

                // Attach event handlers.
                recognizer.LoadGrammarCompleted +=
                    new EventHandler<LoadGrammarCompletedEventArgs>(recognizer_LoadGrammarCompleted);
                recognizer.SpeechDetected +=
                    new EventHandler<SpeechDetectedEventArgs>(recognizer_SpeechDetected);
                recognizer.SpeechRecognized +=
                    new EventHandler<SpeechRecognizedEventArgs>(recognizer_SpeechRecognized);

                // Set the input to the recognizer.
                recognizer.SetInputToDefaultAudioDevice();

                // Start recognition.
                recognizer.RecognizeAsync();

                // Keep the console window open.
                Console.ReadLine();
            }
        }

        // Handle the SpeechDetected event.
        static void recognizer_SpeechDetected(object sender, SpeechDetectedEventArgs e)
        {
            Console.WriteLine("  Speech detected at AudioPosition = {0}", e.AudioPosition);
        }

        // Handle the LoadGrammarCompleted event.
        static void recognizer_LoadGrammarCompleted(object sender, LoadGrammarCompletedEventArgs e)
        {
            Console.WriteLine("Grammar loaded: " + e.Grammar.Name);
        }

        // Handle the SpeechRecognized event.
        static void recognizer_SpeechRecognized(object sender, SpeechRecognizedEventArgs e)
        {
            Console.WriteLine("  Speech recognized: " + e.Result.Text);
        }
    }
}
See Also
Reference
SpeechRecognitionEngine Members