I love this example here https://github.com/Azure-Samples/cognitive-services-speech-sdk/tree/master/quickstart/csharp/uwp/virtual-assistant
Speech-to-text => Custom question answering (the old QnA Maker) => text-to-speech. Is this possible?
Using Cognitive Services Language Studio (Preview) and Speech, is it possible to "stack" projects?
Namely, can one create (1) a speech-to-text project; run the resulting text through (2) a custom question answering project; and then run that output through (3) a text-to-speech project for the user to hear? Or is a subset possible (e.g. 1+2 or 2+3)?
In short, create an app where the user asks a question/speaks and the app verbally answers back.
I'm sure I'm not the first to ask, but I've been researching and haven't found a full answer (this field is changing fast!).
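To be concrete, the "stacking" I have in mind is just sequential chaining: each project's output becomes the next project's input. A minimal sketch with stand-in functions (the function names, the FAQ dictionary, and the return values are illustrative assumptions; a real app would use the Azure Speech SDK's recognizer and synthesizer plus a call to the Custom question answering REST endpoint):

```python
# Sketch of the three-stage pipeline with stand-in functions.
# In a real app: speech_to_text -> SpeechRecognizer.recognize_once(),
# answer_question -> POST to the Custom question answering endpoint,
# text_to_speech -> SpeechSynthesizer.speak_text_async(text).get().

def speech_to_text(audio: bytes) -> str:
    # Stand-in: pretend we recognized this question from the audio.
    return "what are your opening hours"

def answer_question(question: str) -> str:
    # Stand-in: a hard-coded FAQ in place of the knowledge base query.
    faq = {"what are your opening hours":
           "We are open 9am to 5pm, Monday to Friday."}
    return faq.get(question, "Sorry, I don't know the answer to that.")

def text_to_speech(text: str) -> bytes:
    # Stand-in: return the answer as bytes instead of synthesized audio.
    return text.encode("utf-8")

def ask(audio: bytes) -> bytes:
    # The "stack": each stage's output feeds the next stage's input.
    question = speech_to_text(audio)
    answer = answer_question(question)
    return text_to_speech(answer)

print(ask(b"<microphone audio>").decode("utf-8"))
```

The point is just that nothing couples the three projects at the service level; the app glues them together.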
2 answers
Oscar Maqueda · 611 Reputation points · Microsoft Employee
2021-11-29T18:17:53.88+00:00
GiftA-MSFT · 11,171 Reputation points
2021-11-30T18:09:36.373+00:00

Hi, in addition to the above response, here's a tutorial showing how to voice-enable your bot. Hope this helps!
--- *Kindly Accept Answer if the information helps. Thanks.*