Lexicon is ignored in SSML when using the Microsoft Cognitive Services Speech service
I am trying to use a lexicon, but the TTS service ignores it. This is my lexicon XML file: <lexicon xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.w3.org/2005/01/pronunciation-lexicon…
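For context on this kind of issue: in Azure TTS SSML, a custom lexicon is referenced with a `<lexicon uri="…"/>` element placed inside the `<voice>` element, and the lexicon file must be hosted at a publicly reachable URL (for example in Azure Blob Storage), or the service silently falls back to default pronunciations. A minimal sketch of the SSML shape, where the voice name, lexicon URL, and text are placeholders:

```python
# Sketch: build SSML that references a hosted custom lexicon.
# The lexicon URI below is a placeholder; the real file must be
# publicly reachable, or the service ignores it without an error.

def build_ssml(text, voice="es-MX-DaliaNeural",
               lexicon_uri="https://example.com/lexicon.xml"):
    """Return SSML with a <lexicon uri=.../> inside the <voice> element."""
    return (
        '<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" '
        'xml:lang="es-MX">'
        f'<voice name="{voice}">'
        f'<lexicon uri="{lexicon_uri}"/>'
        f'{text}'
        '</voice></speak>'
    )

ssml = build_ssml("BTW significa por cierto")
```

The resulting string is then passed to the synthesizer's SSML method (e.g. `speak_ssml_async` in the Python SDK). Note that a `<lexicon>` element placed outside a `<voice>` element is ignored.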
Dalia - Mexican voice
The Dalia neural voice for Mexican Spanish is changing and taking on a Castilian accent. This is not good. Could this be an error affecting only my Speech account?
Speech to text => Custom QnA (old QnA Maker) => text to speech: is this possible?
Using Cognitive Services | Language Studio (Preview) and | Speech, is it possible to "stack" projects? Namely, can one create (1) a speech-to-text project; run the resulting text through a (2) custom question answering project; and then run…
Pronunciation API call parameters
Hello, I wanted to replicate the settings that are applied to API calls on the demo page, https://speech.microsoft.com/portal/1f34ee8f280d4b2bb4e31c0e4e752dec/pronunciationassessmenttool However, in my case the system returns slightly different results.…
Text-to-speech is not available
Unable to audition the voice; the text cannot be converted to speech, and the audio content cannot be exported.
"Did you update the subscription information?" error in Cognitive Services
I am trying to use the text-to-speech service in Unity. I get an authentication response [OK], then a token, but when I try to use the service I get forbidden errors such as "Forbidden", 403, or "Did you update the subscription…
Speech Studio will not accept my voice talent recording.
Hi. Every time I upload a speech file to Speech Studio to create a voice talent profile, it fails. The audio files sound fine to me. I would appreciate help with this issue and with figuring out how to fix it.
Text to speech not working after deploying to Azure App service bot framework v4
I am using Bot Framework v4. I need to send a voice message to the user in the initial greeting. It works in the emulator, but after deploying to the Azure App Service it does not. So I used ngrok to run my local code against the Direct Line…
Cannot Get Response from Custom Commands
I am trying to create a Custom Commands project in Azure Speech Studio. However, after having followed the instructions by adding a Speech service, a LUIS authoring resource, and a LUIS prediction resource to my resource group, and after creating a custom…
NO LONGER ABLE TO GENERATE NEURAL VOICES (error 403) after unsuspending my subscription
I have the F0 plan and I'm using the text-to-speech neural voice generation API. I am very far from having exceeded my quota, since I only generate a few short texts per month. My free trial subscription was suspended after 30 days, forcing me to pick a…
speech sdk Connection failed
It runs on my Windows machine, but when I move it to CentOS 7, the TTS demo prints an error: (Connection failed (no connection to the remote host). Internal error: 1. Error details: Code: -2. USP state: 2. Received audio size: 0 bytes.) What is the reason?
Cognitive Services Speech recognizes nothing
I am learning Azure Cognitive Services Speech in a VS 2019 and Xamarin C# environment for an Android app. I have obtained the subscription key. However, the speech engine does not recognize anything, whether the language is English, German, .... The…
Speech to text preprocessing audio wav
Hello, would it be possible to have more information on the preprocessing applied when using the speech-to-text service? For example, is the human voice automatically extracted from the audio? I would like to know whether it is necessary to do this preprocessing in…
Custom speech model: data upload error
I am trying to upload data in the 'audio + transcript' format. I get the following error every time: "Acoustic data import failed: Zero transcriptions could be parsed from the given input. Error: invalid input line format". I tried copying my human-labeled…
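For reference on this error: the human-labeled transcript file for an "audio + transcript" upload is plain text where each line holds an audio file name and its transcript separated by a single tab character; a space instead of a tab, or an empty field, produces exactly this "invalid input line format" failure. A small validator sketch (the file names are hypothetical):

```python
# Sketch: validate the transcript file for an "audio + transcript" upload.
# Expected line format (tab-separated):  <audio-file-name>\t<transcript>
# Lines with no tab, or with an empty field, fail the import with
# "invalid input line format".

def check_transcript_lines(lines):
    """Return a list of (line_number, problem) for malformed lines."""
    problems = []
    for i, line in enumerate(lines, start=1):
        if "\t" not in line:
            problems.append((i, "no tab separator"))
            continue
        name, text = line.split("\t", 1)
        if not name.strip() or not text.strip():
            problems.append((i, "empty file name or transcript"))
    return problems

good = ["clip1.wav\thello world", "clip2.wav\tgood morning"]
bad = ["clip3.wav hello world"]  # space instead of a tab
print(check_transcript_lines(good))  # []
print(check_transcript_lines(bad))   # [(1, 'no tab separator')]
```

Editors that convert tabs to spaces on save are a frequent source of this problem, so running a check like this before zipping the upload is worthwhile.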
speakSsmlAsync returns invalid audio file
We are using JavaScript to access the API via speakSsmlAsync on the SpeechSynthesizer. We are expecting MP3 files, but when we try to play them, most software (QuickTime, for example) won't play them. We are setting the audioConfig like this: const…
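Background on this symptom: the Speech SDK's default synthesis output format is RIFF/WAV PCM, so unless an MP3 output format is explicitly set on the speech config (in the JavaScript SDK, via the `speechSynthesisOutputFormat` property) before creating the synthesizer, a file saved with an `.mp3` extension actually contains WAV data, and players that trust the extension refuse it. A quick sketch for checking which container was actually received, based on the standard file signatures:

```python
# Sketch: the SDK's default output is RIFF/WAV. If the output format
# was never set to an MP3 variant, a file saved as ".mp3" actually
# holds WAV data, which players like QuickTime refuse to play.

def sniff_container(first_bytes):
    """Guess the audio container from the first few bytes of the file."""
    if first_bytes[:4] == b"RIFF":          # WAV files start with "RIFF"
        return "wav"
    if first_bytes[:3] == b"ID3":           # MP3 with an ID3v2 tag
        return "mp3"
    if first_bytes[:2] == b"\xff\xfb":      # bare MP3 frame sync
        return "mp3"
    return "unknown"

# A WAV file mis-saved with an .mp3 extension:
print(sniff_container(b"RIFF$\x00\x00\x00WAVE"))  # wav
# A real MP3 (ID3v2 tag header):
print(sniff_container(b"ID3\x04\x00"))            # mp3
```

If the first four bytes of the "MP3" read `RIFF`, the fix is in the config, not the playback software.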
Percept audio device not responding
I have set up a Percept Audio device, but it does not respond to the keyword or any command. The speech log files show a 401 error when I say the keyword (Computer). I also have the vision device set up, and it is working (i.e. it…
Train model Speech to text
Hello, I built an application that uses the Speech cognitive service to transcribe audio files. I use the Speech to Text API v3.0 ( https://westus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-0/operations/CopyModelToSubscription ) and the…
differentiation of speakers - speech to text
Hi! I want to know whether it is possible to differentiate speakers when converting an audio file to a text file. I don't want to define profiles or recognize who is speaking; I just want to know when one person is speaking and when another person starts…
Viseme Event time offsets in Custom Neural Voice are weird.
I found that the viseme event time offsets in Custom Neural Voice are strange (ko-KR in use). The following cases are the results of outputting the viseme events from speech synthesized from the same text with different VoiceNames (InJoonNeural,…
Speech to Text API v3.0: CREATE Transcription returns 201 Created, GET Transcription returns 401 PermissionDenied
Hello, I am trying to get a transcription of an audio file using the Cognitive Services API. When I do a Create Transcription, I get an apim-request id, but when I do a Get Transcription, I get an error message. Here are the screenshots. …