C# Azure SpeechSynthesizer Timeout

Burning Aurora 0 Reputation points
2024-06-23T15:20:19.51+00:00

I recently started running into a "Timeout while synthesizing" error when trying to use the OpenAI TTS voices, coming from the northcentralus region on an S0 plan. It works sporadically, but I'm not seeing anything in my logs about rate limits; even the simplest two-word sentence will fail. It had been working for weeks until a few days ago. I'm really scratching my head here, so any assistance would be great. A simplified version of the code I'm running is below.

Using the sample code from Speech Studio, without making any changes to what is provided, I get: "Connection was closed by the remote host. Error code: 1013. Error details: Status(StatusCode="ResourceExhausted", Detail="AOAI Resource Exhausted") USP state: Sending. Received audio size: 0 bytes."

            using System;
            using System.Text;
            using System.Threading;
            using Microsoft.CognitiveServices.Speech;
            using Microsoft.CognitiveServices.Speech.Audio;

            // To support Chinese characters on the Windows platform
            if (Environment.OSVersion.Platform == PlatformID.Win32NT)
            {
                Console.InputEncoding = Encoding.Unicode;
                Console.OutputEncoding = Encoding.Unicode;
            }

            // Creates an instance of a speech config with the specified subscription key and service region.
            var config = SpeechConfig.FromSubscription("subscriptionKey", "northcentralus");

            // Sets the synthesis output format. Note: this must be called on the config
            // *before* the synthesizer is created to have any effect.
            // The full list of supported formats can be found here:
            // https://docs.microsoft.com/azure/cognitive-services/speech-service/rest-text-to-speech#audio-outputs
            //config.SetSpeechSynthesisOutputFormat(SpeechSynthesisOutputFormat.Audio16Khz128KBitRateMonoMp3);

            // Creates a speech synthesizer that writes to a WAV file.
            var fileName = @"C:\temp\TestFile.wav";
            using var fileOutput = AudioConfig.FromWavFileOutput(fileName);
            using var synthesizer = new SpeechSynthesizer(config, fileOutput);

            // SpeakSsmlAsync expects a complete SSML document; "[pause]" is not valid SSML.
            // Use a <break> element inside a <speak> root instead, and specify a voice
            // (the voice name below is a placeholder -- substitute the voice you use).
            var ssml =
                "<speak version=\"1.0\" xmlns=\"http://www.w3.org/2001/10/synthesis\" xml:lang=\"en-US\">" +
                "<voice name=\"en-US-AvaMultilingualNeural\"><break time=\"500ms\"/>Test Output</voice></speak>";
            using var result = await synthesizer.SpeakSsmlAsync(ssml);
            if (result.Reason == ResultReason.SynthesizingAudioCompleted)
            {
                Console.WriteLine($"Successfully created {fileName}");
            }
            else if (result.Reason == ResultReason.Canceled)
            {
                var cancellation = SpeechSynthesisCancellationDetails.FromResult(result);
                var error = new StringBuilder();
                error.AppendLine($"Error creating {fileName}{Environment.NewLine}");
                error.AppendLine($"CANCELED: Reason={cancellation.Reason}");

                if (cancellation.Reason == CancellationReason.Error && cancellation.ErrorCode != CancellationErrorCode.TooManyRequests)
                {
                    error.AppendLine($"CANCELED: ErrorCode={cancellation.ErrorCode}");
                    error.AppendLine($"CANCELED: ErrorDetails=[{cancellation.ErrorDetails}]");
                }
                else if (cancellation.Reason == CancellationReason.Error && cancellation.ErrorCode == CancellationErrorCode.TooManyRequests)
                {
                    Console.WriteLine("Sleeping for 1 minute");
                    Thread.Sleep(60000);
                }
                throw new Exception(error.ToString());
            }
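Since the reported error is throttling-shaped (ResourceExhausted / TooManyRequests), a capped exponential backoff is a common alternative to the fixed one-minute sleep above. Below is a minimal sketch of just the delay schedule, independent of the Speech SDK; the class name, base delay, and cap are illustrative, not anything the SDK provides:

```csharp
using System;

public class BackoffDemo
{
    // Capped exponential backoff: baseSeconds * 2^attempt, never exceeding maxSeconds.
    // Hypothetical helper -- not part of the Speech SDK.
    public static double DelaySeconds(int attempt, double baseSeconds = 1, double maxSeconds = 60)
        => Math.Min(baseSeconds * Math.Pow(2, attempt), maxSeconds);

    public static void Main()
    {
        // Print the first few retry delays: 1, 2, 4, 8, ... capped at 60 seconds.
        for (int attempt = 0; attempt < 8; attempt++)
            Console.WriteLine($"attempt {attempt}: wait {DelaySeconds(attempt)}s");
    }
}
```

In the cancellation handler above, you would sleep for `DelaySeconds(attempt)` and re-issue the `SpeakSsmlAsync` call instead of throwing on the first `TooManyRequests`.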
Azure AI Speech
An Azure service that integrates speech processing into apps and services.

1 answer

  1. Azar 21,230 Reputation points MVP
    2024-06-23T20:24:54.16+00:00

    Hi there Burning Aurora,

    Thanks for using the Q&A platform.

    I guess this issue is due to the 10-minute audio duration limit; that may be why you are hitting the limits.

    You will have to use batch synthesis to synthesize larger content. Please see the referenced page.

    https://video2.skills-academy.com/en-in/azure/ai-services/speech-service/batch-synthesis

    If this helps, kindly accept the answer. Thanks much,
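    For reference, the batch synthesis API is a REST API rather than a Speech SDK call. The sketch below only *constructs* the request URL and JSON body without sending anything, so it runs standalone; the endpoint path, `api-version`, job ID, voice name, and output format are assumptions based on the linked docs and should be checked against the documentation for your region:

    ```csharp
    using System;
    using System.Text.Json;

    public class BatchSynthesisRequest
    {
        // Builds a minimal batch-synthesis request body. Property names follow
        // the shape described in the batch synthesis documentation (assumed).
        public static string BuildBody(string voice, string text)
        {
            var body = new
            {
                inputKind = "PlainText",
                synthesisConfig = new { voice },
                inputs = new[] { new { content = text } },
                properties = new { outputFormat = "riff-24khz-16bit-mono-pcm" }
            };
            return JsonSerializer.Serialize(body);
        }

        public static void Main()
        {
            // Hypothetical job ID and api-version; the real request is a PUT with an
            // Ocp-Apim-Subscription-Key header, then polling the job until it succeeds.
            var url = "https://northcentralus.api.cognitive.microsoft.com" +
                      "/texttospeech/batchsyntheses/my-job-1?api-version=2024-04-01";
            Console.WriteLine($"PUT {url}");
            Console.WriteLine(BuildBody("en-US-AvaMultilingualNeural", "Test Output"));
        }
    }
    ```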