The issue got fixed on its own somehow
Inference with my deployed Azure OpenAI model seems blocked, returning error 400
I used the completions service from a Node.js app. It was working, but then it started returning 400 errors. Even chat completions isn't working in AI Studio. I think I may have reached some limit, but I can't figure out where to see the reason for the issue or how to fix it.
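One way to see the reason behind a 400 instead of just the status code is to log the JSON error body the service returns. A minimal Node.js sketch, assuming the documented `{ error: { code, message } }` error shape and using placeholder endpoint/key names (`endpoint`, `apiKey` are assumptions, not values from this thread):

```javascript
// Turn a failed response into a readable explanation.
// Assumes the error body follows the { error: { code, message } } shape.
function explainApiError(status, body) {
  const code = body?.error?.code ?? "unknown";
  const message = body?.error?.message ?? "(no message)";
  if (status === 400) return `400 Bad Request (${code}): ${message}`;
  if (status === 401) return `401 Unauthorized (${code}): check your API key`;
  if (status === 429) return `429 Too Many Requests (${code}): rate or quota limit hit`;
  return `${status} (${code}): ${message}`;
}

// Hypothetical caller: POST a completions payload and surface the
// error body on failure instead of swallowing it.
async function callCompletions(endpoint, apiKey, payload) {
  const res = await fetch(endpoint, {
    method: "POST",
    headers: { "api-key": apiKey, "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
  if (!res.ok) {
    // Parse the error body if possible; fall back to an empty object.
    const body = await res.json().catch(() => ({}));
    throw new Error(explainApiError(res.status, body));
  }
  return res.json();
}
```

Logging the `code` and `message` fields usually distinguishes a malformed request or content-filter rejection (400) from a quota problem (429), which is the first thing to check before looking at limits.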
2 answers
YutongTie-MSFT 50,856 Reputation points
2024-09-05T01:02:48.2866667+00:00 Hello @Louenas Hamdi
Thanks for reaching out to us, and thank you for confirming. The issue appears to be resolved at the moment; please let us know if you see it again.
Feel free to open a new thread if you see other issues.
Also, you may want to check the following document on how to monitor Azure OpenAI: https://video2.skills-academy.com/en-us/azure/ai-services/openai/how-to/monitor-openai
Regards,
Yutong