[Speed Problem] How to get the color 2D coordinates(x,y) of each pixel of the Depth camera

Hime Alt 31 Reputation points
2021-03-09T01:01:42.363+00:00

Hi, I'm developing a 3D visualization app for the Azure Kinect DK in C/C++.

With Kinect v2, I could get the color 2D coordinates (x, y) of each pixel of the depth camera at high speed using ICoordinateMapper->MapDepthFrameToColorSpace().
I used it to draw a body mesh textured with the color-camera-resolution image in real time.
(Refer 22sec-35sec : https://youtu.be/NERfvP4JwB0?t=22)

How can I get the same thing faster with the Azure Kinect DK?

My current code is below; this part alone takes 50-60 ms.

unsigned short *depthBuffFromK4a = (unsigned short *)k4a_image_get_buffer(image);
k4a_float2_t pixPosDin;
k4a_float2_t pixPosCout;
k4a_result_t apiResult;
int valid;
unsigned int buffPos = 0;
for (int y = 0; y < _bufferHeightD; ++y) {
    for (int x = 0; x < _bufferWidthD; ++x) {
        pixPosDin.xy.x = (float)x;
        pixPosDin.xy.y = (float)y;
        // 2d_2d function
        apiResult = k4a_calibration_2d_to_2d(
            &_calibration,
            &pixPosDin,
            static_cast<float>(depthBuffFromK4a[buffPos]),
            K4A_CALIBRATION_TYPE_DEPTH,
            K4A_CALIBRATION_TYPE_COLOR,
            &pixPosCout,
            &valid
        );
        if (apiResult == K4A_RESULT_SUCCEEDED && valid == 1) {
            //----description for valid value----//
        }
        else {
            //----error description----//
        }
        ++buffPos;
    }
}


I also tried multithreading, but 25ms was the limit.

Can you tell me how to get the data at high speed?

Thank you for reading.

Accepted answer
  1. QuantumCache 20,266 Reputation points
    2021-03-17T09:22:32.5+00:00

    Hello @Hime Alt, below is the response from the Microsoft product team. I hope this helps with your initial query!

    Please read the Azure Kinect Sensor SDK image transformations | Microsoft Learn article. The goal of the transformation functions is fast, GPU-accelerated RGB-D mapping and 2D depth image to 3D point cloud conversion. Also take a look at the Azure Kinect Viewer source code, which includes visualizing a 3D color point cloud.
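    As a rough, hedged sketch of what replacing the per-pixel loop might look like (untested here, since it needs the device and SDK): instead of calling k4a_calibration_2d_to_2d once per depth pixel, a single call to k4a_transformation_depth_image_to_color_camera resamples the whole depth frame into the color camera's geometry. The function and struct names below are from the k4a C API; the helper name `map_depth_to_color` is just for illustration.

```c
#include <k4a/k4a.h>
#include <stdint.h>

// Sketch: map an entire depth frame into the color camera's geometry with one
// SDK call instead of one k4a_calibration_2d_to_2d call per pixel. Assumes a
// valid `calibration` and a captured `depth_image` are already available.
static k4a_image_t map_depth_to_color(const k4a_calibration_t *calibration,
                                      k4a_image_t depth_image)
{
    k4a_transformation_t transformation = k4a_transformation_create(calibration);

    int color_w = calibration->color_camera_calibration.resolution_width;
    int color_h = calibration->color_camera_calibration.resolution_height;

    // Destination image: depth values resampled to color-camera resolution.
    k4a_image_t transformed_depth = NULL;
    if (K4A_RESULT_SUCCEEDED != k4a_image_create(K4A_IMAGE_FORMAT_DEPTH16,
                                                 color_w, color_h,
                                                 color_w * (int)sizeof(uint16_t),
                                                 &transformed_depth))
    {
        k4a_transformation_destroy(transformation);
        return NULL;
    }

    if (K4A_RESULT_SUCCEEDED != k4a_transformation_depth_image_to_color_camera(
            transformation, depth_image, transformed_depth))
    {
        k4a_image_release(transformed_depth);
        transformed_depth = NULL;
    }

    k4a_transformation_destroy(transformation);
    return transformed_depth;
}
```

    One practical note: the transformation handle is better created once at startup and reused across frames, rather than created and destroyed per frame as in this self-contained sketch.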

    Regarding mesh: there is no mesh API in the AKDK. If you mean a single-view mesh, you can compute one with an off-the-shelf algorithm that estimates surface normals and faces from the point cloud. If you mean Kinect Fusion-style mesh reconstruction with a moving camera, the AKDK does not include the Kinect Fusion API (however, there is a KinFu example in the samples repo that uses OpenCV).
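    For the single-view case, the point cloud that an off-the-shelf meshing algorithm would consume can be produced in one call with k4a_transformation_depth_image_to_point_cloud. A hedged sketch (untested here, hardware required; the helper name `depth_to_point_cloud` is illustrative, the k4a calls are real API):

```c
#include <k4a/k4a.h>
#include <stdint.h>

// Sketch: convert a depth frame to a 3D point cloud in one SDK call. Assumes
// `transformation` was created from the device calibration and `depth_image`
// was captured from the depth camera.
static k4a_image_t depth_to_point_cloud(k4a_transformation_t transformation,
                                        k4a_image_t depth_image)
{
    int w = k4a_image_get_width_pixels(depth_image);
    int h = k4a_image_get_height_pixels(depth_image);

    // Each pixel becomes three int16_t values: X, Y, Z in millimeters.
    k4a_image_t xyz_image = NULL;
    if (K4A_RESULT_SUCCEEDED != k4a_image_create(K4A_IMAGE_FORMAT_CUSTOM,
                                                 w, h,
                                                 w * 3 * (int)sizeof(int16_t),
                                                 &xyz_image))
    {
        return NULL;
    }

    if (K4A_RESULT_SUCCEEDED != k4a_transformation_depth_image_to_point_cloud(
            transformation, depth_image, K4A_CALIBRATION_TYPE_DEPTH, xyz_image))
    {
        k4a_image_release(xyz_image);
        xyz_image = NULL;
    }

    return xyz_image;
}
```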

    Please comment in the section below if you have any comments, suggestions, or feedback.
    If the response is helpful, please click "Accept Answer" and upvote it.

    1 person found this answer helpful.

0 additional answers
