About Image Rectification

JAMES MORGENSTERN 176 Reputation points
2021-01-11T17:41:35.947+00:00

I am using the Kinect DK for a robotic bin-picking task. I have mounted the Kinect so that it looks straight downward at a flat table top, and I have mechanically aligned and verified that the Kinect XY plane is parallel to the tabletop, to within a degree or so. I imaged the table top with the depth sensor (NFOV image attached):

[Image: 55429-backnfov]

Taking an excerpt from the middle of the depth image, I get the image shown below:

[Image: 55493-backroi]

Because the Kinect and the table top are parallel, and because the depth image reports the distance from the Kinect XY plane to the table top along the Kinect Z axis, the image should be uniform. But it is not: there is a decided change in depth correlated with changes in Y. I have extracted a plot of the profile of depth values roughly parallel to the Y axis, shown below:

[Image: 55449-backprofile300]

A crude calculation shows that the slope of the profile is roughly 11.5 to 12 degrees, which is not even close to the flat surface that should be expected. The Kinect documentation does, however, point out that the range sensor is rotated by 6 degrees about the X axis. So it seems to me that the processing that converts the range sensor data into a rectified depth image should include a rotation about the X axis to bring the depth image into conformance with the Kinect coordinate system. It looks as though, instead of rotating 6 degrees in the proper direction, the Microsoft processing rotates 6 degrees in the wrong direction, creating the total rotation of 12 degrees that I measured.
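For reference, the crude calculation I mean is just converting the profile's rise over run into an angle. The numbers below are hypothetical placeholders, not values read off my plot:

```python
import math

# Hypothetical numbers for illustration only (not taken from the plot):
# suppose the reported depth changes by dz millimetres over a span of
# dy millimetres measured along the camera's Y axis on the table.
dz = 106.0   # mm change in depth across the excerpt (hypothetical)
dy = 500.0   # mm extent of the excerpt along Y (hypothetical)

tilt_deg = math.degrees(math.atan2(dz, dy))
print(f"apparent tilt: {tilt_deg:.1f} degrees")   # ~12 degrees for these numbers
```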

My question, then, is this: is there in fact an error of rotation in the processing of the range data into the depth image?

Azure Kinect DK

2 answers

  1. António Sérgio Azevedo 7,666 Reputation points Microsoft Employee
    2021-01-20T01:04:12.86+00:00

    @JAMES MORGENSTERN ,
    Here is the response I received from Product Team:

    1. The coordinate system of the Azure Kinect depth and color cameras described in the docs is accurate: "The depth camera is tilted 6 degrees downwards of the color camera."
    2. Based on that, and given your described setup, the Z-depth measurements near the table front should be smaller than those for the rest of the table. This is illustrated in the image you shared ("55429-backnfov.jpg"), where the table front looks darker than the rest of the table.
    3. Generally, I would not trust the accuracy of a manual alignment of the camera with respect to the table. The right way to do this is to estimate the table pose in the camera coordinate system using the pose estimation functions in OpenCV.
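A minimal sketch of point 3, using a plain least-squares plane fit rather than OpenCV's pose estimation functions (the point cloud here is synthetic, and the 6-degree tilt and 600 mm standoff are illustrative values):

```python
import numpy as np

# Estimate the table's tilt relative to the camera Z axis by fitting a
# plane z = a*x + b*y + c to a depth-derived point cloud (units: mm).
# Synthetic data: a plane tilted 6 degrees about X, 600 mm from the camera.
rng = np.random.default_rng(0)
xy = rng.uniform(-300, 300, size=(1000, 2))          # x, y coordinates
tilt = np.deg2rad(6.0)
z = 600.0 + np.tan(tilt) * xy[:, 1]                  # depth grows with y

A = np.column_stack([xy, np.ones(len(xy))])          # design matrix [x, y, 1]
(a, b, c), *_ = np.linalg.lstsq(A, z, rcond=None)

normal = np.array([-a, -b, 1.0])                     # plane normal
normal /= np.linalg.norm(normal)
tilt_est = np.degrees(np.arccos(normal @ [0.0, 0.0, 1.0]))
print(f"estimated table tilt: {tilt_est:.2f} degrees")
```

With real data you would build the point cloud from the depth image (e.g. via the SDK's depth-to-point-cloud transformation) and fit robustly, since bin contents and edge noise violate the plane model.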

    Thank you so much for your time, and let me know if you have further questions.


  2. Quentin Miller 351 Reputation points
    2021-03-25T21:51:30.293+00:00

    @JAMES MORGENSTERN Assuming you use the k4a transformation functions, you can consider the two cameras to be on the same plane.
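    To illustrate the idea with plain rotation matrices (this is a synthetic sketch of the geometry, not SDK code; the 6-degree value is illustrative):

```python
import numpy as np

# The k4a transformation functions apply the factory-calibrated extrinsics,
# which include the ~6 degree rotation about X between the depth and color
# cameras. A plane that is flat in one frame looks tilted in the other, and
# applying the inverse extrinsic rotation makes it flat again.
theta = np.deg2rad(6.0)
R_x = np.array([[1.0, 0.0, 0.0],
                [0.0, np.cos(theta), -np.sin(theta)],
                [0.0, np.sin(theta),  np.cos(theta)]])

# Points on a table that is flat (constant z = 600 mm) in one camera frame:
pts_flat = np.column_stack([np.linspace(-300, 300, 5),
                            np.linspace(-200, 200, 5),
                            np.full(5, 600.0)])
# The same points expressed in the other (tilted) camera frame:
pts_tilted = pts_flat @ R_x.T        # z now varies with y
# Applying the inverse extrinsic rotation recovers a constant depth:
pts_back = pts_tilted @ R_x
print(np.ptp(pts_back[:, 2]))        # ~0: flat again
```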

