Azure Computer Vision: "Resource not found" when called using the SDK or accessed via the browser
I created an Azure Computer Vision resource to explore its features. I'll be using it inside a mobile app, making use of the ImageAnalysis SDK: https://www.nuget.org/packages/Azure.AI.Vision.ImageAnalysis/1.0.0-beta.3.
After I created the resource, I noticed the SDK throws an error saying: "(hostname nor servname provided, or not known (<vision-resource-name>.cognitiveservices.azure.com:443))".
I also noticed that when I try to access the URL directly from the browser, it shows a JSON message:
{
  "error": {
    "code": "404",
    "message": "Resource not found"
  }
}
Is there anything else that needs to be set up when creating an (Azure) Computer Vision resource?
How can I use it with the ImageAnalysis SDK?
Azure Computer Vision
-
navba-MSFT 23,625 Reputation points • Microsoft Employee
2024-08-14T05:42:41.6533333+00:00 @Javi Guarin Welcome to Microsoft Q&A Forum, and thank you for posting your query here!
Firstly, directly testing or browsing the endpoint URL from a browser is not a valid test.
Ensure that the endpoint and the keys are entered correctly in your code. I used the same sample from the article, and it worked fine; I got a 200 success status code.
I am pasting my sample code here for your reference:
Image Analysis Caption
string endpoint = "https://MyVisionName.cognitiveservices.azure.com/";
string key = "337e4XXXXXXXXXXXX633cd7f";

// Create an Image Analysis client.
ImageAnalysisClient client = new ImageAnalysisClient(new Uri(endpoint), new AzureKeyCredential(key));

// Use a file stream to pass the image data to the analyze call
FileStream stream = new FileStream("image-analysis-sample.jpg", FileMode.Open);

// Get a caption for the image.
ImageAnalysisResult result = client.Analyze(
    BinaryData.FromStream(stream),
    VisualFeatures.Caption,
    new ImageAnalysisOptions { GenderNeutralCaption = true });

// Print caption results to the console
Console.WriteLine($"Image analysis results:");
Console.WriteLine($" Caption:");
Console.WriteLine($"   '{result.Caption.Text}', Confidence {result.Caption.Confidence:F4}");
Extracting text from image uploaded
string endpoint = "https://MyVisionName.cognitiveservices.azure.com/";
string key = "337e4XXXXXXXXXXXX633cd7f";

// Create an Image Analysis client.
ImageAnalysisClient client = new ImageAnalysisClient(new Uri(endpoint), new AzureKeyCredential(key));

FileStream stream = new FileStream("image-analysis-sample1.jpg", FileMode.Open);

// Extract text (OCR) from an image stream.
ImageAnalysisResult result = client.Analyze(
    BinaryData.FromStream(stream),
    VisualFeatures.Read);

// Print text (OCR) analysis results to the console
Console.WriteLine("Image analysis results:");
Console.WriteLine(" Read:");
foreach (DetectedTextBlock block in result.Read.Blocks)
    foreach (DetectedTextLine line in block.Lines)
    {
        Console.WriteLine($"   Line: '{line.Text}', Bounding Polygon: [{string.Join(" ", line.BoundingPolygon)}]");
        foreach (DetectedTextWord word in line.Words)
        {
            Console.WriteLine($"     Word: '{word.Text}', Confidence {word.Confidence.ToString("#.####")}, Bounding Polygon: [{string.Join(" ", word.BoundingPolygon)}]");
        }
    }
Console.ReadLine();
Hope this helps. If you have any follow-up questions, please let me know. I would be happy to help.
-
navba-MSFT 23,625 Reputation points • Microsoft Employee
2024-08-16T07:44:42.1733333+00:00 @Javi Guarin Just following up to check if my suggestion helped. Please let me know if you have any further queries. I would be happy to help.
-
Javi Guarin 20 Reputation points
2024-08-20T04:54:41.1033333+00:00 hi Navba, sorry for the late response. I was under the impression that Computer Vision requires an additional resource running before it would work. Additionally, the emulator I used didn't have a proper internet connection; apologies for that user error.
I can now connect to the resource without issues, except that I keep getting a 400 Bad Request response.
-
navba-MSFT 23,625 Reputation points • Microsoft Employee
2024-08-20T05:35:12.11+00:00 @Javi Guarin Thanks for getting back.
Questions:
- Could you please share your use case?
- Which feature of Computer Vision are you using? Is it to extract text from an image, get a caption, identify an image, etc.?
- Did you use my sample code above?
- If not, please share your sample code here.
- Also, please share the detailed error message of the HTTP 400 error.
Awaiting your reply.
-
Javi Guarin 20 Reputation points
2024-08-20T08:55:40.79+00:00 I'm using Computer Vision to extract text from an image. I used the sample code based on the NuGet C# SDK: https://www.nuget.org/packages/Azure.AI.Vision.ImageAnalysis.
The error message I'm getting says: "The image size is not allowed to be zero or larger than 20971520 bytes". Although I can confirm an image is generated, I'm looking for a way to see in the Azure resource what kind of request the Computer Vision resource received.
-
Javi Guarin 20 Reputation points
2024-08-20T09:01:05.8733333+00:00 I'm using Computer Vision to extract text from an image. I've used the sample code attached to the C# SDK: https://www.nuget.org/packages/Azure.AI.Vision.ImageAnalysis
I did, however, have to add other changes on top, as the image comes from Android's ImageAnalysis analyzer. When I pass the image to the SDK's ImageAnalysisClient.Analyze, I always get an error: "The image size is not allowed to be zero or larger than 20971520 bytes".
-
navba-MSFT 23,625 Reputation points • Microsoft Employee
2024-08-20T09:50:48.1066667+00:00 @Javi Guarin Thanks for your reply.
Regarding "The service is throwing an error because my image file is too large. How can I work around this?":
The file size limit for most Azure AI Vision features is 4 MB for version 3.2 of the API and 20 MB for version 4.0, and the client library SDKs can handle files up to 6 MB. For more information, see the Image Analysis input limits.
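As a client-side sanity check, the 20 MB (20971520-byte) limit from the error message can be validated before calling Analyze. Below is a minimal sketch; the `ImageSizeCheck` class and `IsWithinLimit` helper are illustrative names, not part of the SDK:

```csharp
using System;

class ImageSizeCheck
{
    // 20 MB limit for the 4.0 Image Analysis API, per the input limits above
    const long MaxBytes = 20971520;

    // Returns true only if the payload is non-empty and within the limit,
    // mirroring the service-side "zero or larger than 20971520 bytes" check.
    public static bool IsWithinLimit(byte[] imageBytes) =>
        imageBytes != null && imageBytes.Length > 0 && imageBytes.Length <= MaxBytes;

    static void Main()
    {
        Console.WriteLine(IsWithinLimit(new byte[1024]));      // True
        Console.WriteLine(IsWithinLimit(Array.Empty<byte>())); // False
    }
}
```

Running this check before the Analyze call lets you fail fast locally instead of waiting for the service's 400 response.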
If you want to preview the image before sending it to Azure Computer Vision, to ensure it meets all the requirements, here are a few steps you can follow:
- Convert ImageProxy to Bitmap: First, convert the ImageProxy from CameraX to a Bitmap.
- Display the Bitmap: Use an ImageView in your Android app to display the Bitmap.
Here’s a sample code snippet to help you with this:
// Convert ImageProxy to Bitmap
private Bitmap ImageProxyToBitmap(ImageProxy image)
{
    var planes = image.GetPlanes();
    var buffer = planes[0].Buffer;
    byte[] bytes = new byte[buffer.Remaining()];
    buffer.Get(bytes);
    return BitmapFactory.DecodeByteArray(bytes, 0, bytes.Length);
}

// Display the Bitmap in an ImageView
private void DisplayImage(Bitmap bitmap)
{
    ImageView imageView = FindViewById<ImageView>(Resource.Id.imageView);
    imageView.SetImageBitmap(bitmap);
}

// Usage in ImageAnalysis.Analyzer
public void Analyze(ImageProxy image)
{
    Bitmap bitmap = ImageProxyToBitmap(image);
    DisplayImage(bitmap);
    // Further processing...
    image.Close();
}
This code will help you convert the ImageProxy to a Bitmap and display it in an ImageView for preview before sending it to Azure Computer Vision.
-
Javi Guarin 20 Reputation points
2024-08-20T15:29:14.2533333+00:00 What if I want to pass the image from the ImageProxy directly?
-
navba-MSFT 23,625 Reputation points • Microsoft Employee
2024-08-21T06:33:39.69+00:00 @Javi Guarin If you want to pass the image directly from ImageProxy to Azure Computer Vision, you'll need to convert the ImageProxy to a format that Azure Computer Vision can accept, such as a byte array. Here's how you can do it:
- Convert ImageProxy to Byte Array: Extract the image data from ImageProxy and convert it to a byte array.
- Send Byte Array to Azure Computer Vision: Use the Azure SDK to send the byte array to the Computer Vision API.
Here is the sample code:
// Convert ImageProxy to Byte Array
private byte[] ImageProxyToByteArray(ImageProxy image)
{
    var planes = image.GetPlanes();
    var buffer = planes[0].Buffer;
    byte[] bytes = new byte[buffer.Remaining()];
    buffer.Get(bytes);
    return bytes;
}

// Send Byte Array to Azure Computer Vision
private async Task AnalyzeImageAsync(byte[] imageBytes)
{
    var client = new ComputerVisionClient(new ApiKeyServiceClientCredentials("YOUR_API_KEY"))
    {
        Endpoint = "YOUR_ENDPOINT"
    };
    using (var stream = new MemoryStream(imageBytes))
    {
        var features = new List<VisualFeatureTypes?> { VisualFeatureTypes.Description };
        var analysis = await client.AnalyzeImageInStreamAsync(stream, features);
        // Process the analysis results
    }
}

// Usage in ImageAnalysis.Analyzer
public async void Analyze(ImageProxy image)
{
    byte[] imageBytes = ImageProxyToByteArray(image);
    await AnalyzeImageAsync(imageBytes);
    image.Close();
}
Hope this answers.
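One hedged note on the earlier "image size is not allowed to be zero" symptom: a buffer or stream that has already been read reports zero remaining bytes, so reading the same ImageProxy plane buffer twice (or passing an already-consumed FileStream to BinaryData.FromStream) can send an empty payload. A pure-.NET analogue of the pitfall, with MemoryStream standing in for the plane buffer, is sketched below:

```csharp
using System;
using System.IO;

class StreamRewindDemo
{
    static void Main()
    {
        var stream = new MemoryStream(new byte[] { 1, 2, 3, 4 });

        // First read consumes the stream...
        byte[] first = new byte[stream.Length];
        stream.Read(first, 0, first.Length);
        Console.WriteLine(stream.Length - stream.Position); // prints 0: nothing left

        // ...so a second consumer would see an empty payload
        // unless the position is reset first:
        stream.Position = 0;
        Console.WriteLine(stream.Length - stream.Position); // prints 4: full payload again
    }
}
```

If the SDK reports a zero-byte image even though the picture exists on disk, checking that the source stream or buffer is rewound (or re-read from the original bytes) before each Analyze call is a cheap first diagnostic.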
-
navba-MSFT 23,625 Reputation points • Microsoft Employee
2024-08-22T07:25:40.1766667+00:00 @Javi Guarin Just following up to check if my suggestion helped. Please let me know if you have any further queries. I would be happy to help.