Inference with my deployed OpenAI model seems blocked, returning error 400

Louenas Hamdi 0 Reputation points
2024-09-04T15:12:13.1233333+00:00

I used the completions service from a Node.js app. It was working, but then it started returning 400 errors. Even chat completions isn't working in AI Studio. I think I reached some limits, but I can't figure out where to see the reason for the issue or how to fix it.
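For anyone hitting this: a 400 from Azure OpenAI usually carries a JSON body of the form `{ "error": { "code": "...", "message": "..." } }` (for example a content-filter rejection or an over-long prompt), and logging that body rather than just the status code is the quickest way to see the cause. A minimal Node.js sketch, assuming placeholder endpoint, deployment name, and API version (substitute your own):

```javascript
// Turn an Azure OpenAI error body into a readable one-line summary.
function formatApiError(body) {
  const err = body && body.error;
  if (!err) return "unknown error";
  return `${err.code || "unknown_code"}: ${err.message || "no message"}`;
}

async function callChatCompletions(prompt) {
  // Assumed placeholders: endpoint/key env vars, deployment name, api-version.
  const endpoint = process.env.AZURE_OPENAI_ENDPOINT; // e.g. https://<resource>.openai.azure.com
  const url = `${endpoint}/openai/deployments/my-deployment/chat/completions?api-version=2024-02-01`;
  const res = await fetch(url, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "api-key": process.env.AZURE_OPENAI_API_KEY,
    },
    body: JSON.stringify({ messages: [{ role: "user", content: prompt }] }),
  });
  if (!res.ok) {
    // The body, not just the status, says why the request was rejected.
    const body = await res.json().catch(() => null);
    throw new Error(`HTTP ${res.status}: ${formatApiError(body)}`);
  }
  return res.json();
}

// Only attempt a real call when credentials are configured.
if (process.env.AZURE_OPENAI_ENDPOINT && process.env.AZURE_OPENAI_API_KEY) {
  callChatCompletions("Hello").then(
    (data) => console.log(data.choices[0].message.content),
    (err) => console.error(err.message)
  );
}
```

The surfaced `error.code` (e.g. a quota or content-filter code) tells you whether the fix is a smaller prompt, a different deployment, or a quota request.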

Azure OpenAI Service
An Azure service that provides access to OpenAI’s GPT-3 models with enterprise capabilities.
2,920 questions

2 answers

Sort by: Most helpful
  1. Louenas Hamdi 0 Reputation points
    2024-09-04T20:54:10.65+00:00

    The issue got fixed on its own somehow


  2. YutongTie-MSFT 50,856 Reputation points
    2024-09-05T01:02:48.2866667+00:00

    Hello @Louenas Hamdi

    Thanks for reaching out to us, and we appreciate the confirmation. The issue should be gone at this point; please let us know if you see it again.

    Feel free to open a new thread if you see other issues.

    Also, you may want to check the document below on how to monitor Azure OpenAI: https://video2.skills-academy.com/en-us/azure/ai-services/openai/how-to/monitor-openai

    Regards,

    Yutong

