Azure Kinect DK: a few technical questions before buying

RShep33 11 Reputation points
2021-01-12T09:57:44.073+00:00

Good morning,

I work in a laboratory in Spain that studies language. We currently have two projects in which we would like to do full-body tracking and face tracking while participants gesture or use sign language. We ran some tests with a multi-camera setup and quickly replaced it with a Kinect (v1). The Kinect works quite well for our project, but two newer versions have come out since then: the Kinect v2 and the Azure Kinect DK. The latter seems to have the best sensors, so we are considering buying two (one for the body, one for the face). Before we make the purchase, however, we have a few technical questions.

1) In other posts, we've seen that the Azure Kinect DK is "meant for developers, not consumers". Moreover, the SDK on GitHub hasn't been updated in months. Does that mean the Azure Kinect team has stopped working on the project? And if so, does that mean the product is "left" for developers to build new applications, and not really intended for researchers?

2) Some people seem to have problems with the change of paradigm from the Kinect v2 to the Azure Kinect DK. From what we understand, the Azure Kinect DK uses a deep-learning model for real-time body tracking, while the Kinect v2 used a random forest model, which seems to give better results. A user asked for a legacy option almost a year ago, to which Microsoft replied "need feedback", and there has been no news since. Is there any way we can get an update on whether this feature will (or will not) be implemented?

3) Most of the issues with the Azure Kinect DK's new approach seem to come from online, real-time body tracking. We don't actually need real-time body tracking; we just need to record the positions of the body joints at a fairly constant frame rate. Would the Azure Kinect DK work "better", or at least differently, for offline body tracking? Does it even offer this option?

4) When we tested the Kinect v1 for our project, we noticed that the frame rate was quite variable, oscillating around 15 FPS. Would the frame rate be higher and more constant with the Azure Kinect DK?

5) It seems that if we want to use two Azure Kinect DK sensors (one for the face and one for the body), we need three computers: one for each device and one "host" computer for synchronization. The truth is, we don't really need the sensors to be synchronized, as we can "resynchronize" them afterwards. In that case, do we really need three computers, or would two be enough? This matters because we need to buy new computers for this purpose.

6) Regarding these computers, should we go for the minimum required hardware as shown here, or aim higher?

We would really appreciate answers to these technical questions, as customer service wasn't able to shed light on them (we were redirected more than five times in one call...), and a "Kinect expert" who was supposed to call us back never did.

Thanks for your answers!


3 answers

  1. Quentin Miller 351 Reputation points
    2021-01-29T23:08:30.123+00:00

    @RShep33 we are actively working on updates to both the Sensor and Body Tracking SDKs, which should be available by the end of March. The Azure Kinect DK is aimed at developers who want to use Kinect depth technology for proof-of-concept and pilot phases of development. Some customers are also using the device for light commercial use. There is a program with third parties (Analog Devices, SICK, Leopard Imaging, D3 Engineering and others) to develop third-party modules and cameras that will work with the Sensor SDK (and hence the Body Tracking SDK). I can connect you with that team if you are interested in learning more.

    There are no plans to support random forest. We chose the DNN path because it opens the door to other AI techniques we are actively developing. The current model is on par with the random forest model in terms of accuracy; the community issue is about performance. The next release of the Body Tracking SDK has a 3-4x performance boost and works on non-NVIDIA GPUs (Windows only).

    I am not sure what you mean by online vs. offline body tracking. The current SDK is offline, i.e. it requires no Internet connection.

    It is hard to comment on fps without knowing the compute performance. You can sustain 30 fps on the suggested minimum hardware.
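
    As a rough, unofficial sketch of what requesting 30 fps looks like with the Sensor SDK's C API: the depth mode and color resolution below are just example values, not a recommendation for your setup, and the loop simply prints depth timestamps so you can judge how steady the delivered frame rate is.

    ```c
    // Sketch: open the device, request 30 fps, and pull a few captures.
    // Depth mode and color resolution are illustrative choices only.
    #include <stdio.h>
    #include <stdbool.h>
    #include <k4a/k4a.h>

    int main(void)
    {
        k4a_device_t device = NULL;
        if (K4A_FAILED(k4a_device_open(K4A_DEVICE_DEFAULT, &device)))
        {
            printf("Failed to open device\n");
            return 1;
        }

        k4a_device_configuration_t config = K4A_DEVICE_CONFIG_INIT_DISABLE_ALL;
        config.depth_mode       = K4A_DEPTH_MODE_NFOV_UNBINNED;   // 640x576 depth
        config.color_resolution = K4A_COLOR_RESOLUTION_720P;
        config.color_format     = K4A_IMAGE_FORMAT_COLOR_MJPG;
        config.camera_fps       = K4A_FRAMES_PER_SECOND_30;       // request 30 fps
        config.synchronized_images_only = true;

        if (K4A_FAILED(k4a_device_start_cameras(device, &config)))
        {
            printf("Failed to start cameras\n");
            k4a_device_close(device);
            return 1;
        }

        // Pull 100 captures; the device timestamps on the depth images show
        // how constant the frame interval is on your hardware.
        for (int i = 0; i < 100; i++)
        {
            k4a_capture_t capture = NULL;
            if (k4a_device_get_capture(device, &capture, 1000) == K4A_WAIT_RESULT_SUCCEEDED)
            {
                k4a_image_t depth = k4a_capture_get_depth_image(capture);
                if (depth != NULL)
                {
                    printf("depth timestamp: %llu us\n",
                           (unsigned long long)k4a_image_get_device_timestamp_usec(depth));
                    k4a_image_release(depth);
                }
                k4a_capture_release(capture);
            }
        }

        k4a_device_stop_cameras(device);
        k4a_device_close(device);
        return 0;
    }
    ```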

    Without fully understanding your application I cannot comment on the amount of compute you require.


  2. QuantumCache 20,266 Reputation points
    2021-01-25T19:41:04.897+00:00

    Hello @RShep33, thanks for your interest in the Azure Kinect DK. Please refer to my inline comments below, and comment in the section at the bottom if you would like to discuss further.

    1) In other posts, we've seen that the Azure Kinect DK is "meant for developers, not consumers". Moreover, the SDK on GitHub hasn't been updated in months. Does that mean the Azure Kinect team has stopped working on the project? And if so, does that mean the product is "left" for developers to build new applications, and not really intended for researchers?

    Yes, the Azure Kinect DK device is meant for developers and researchers, not for consumers. Please follow microsoft/Azure-Kinect-Sensor-SDK for the official GitHub repository.

    2) Some people seem to have problems with the change of paradigm from the Kinect v2 to the Azure Kinect DK. From what we understand, the Azure Kinect DK uses a deep-learning model for real-time body tracking, while the Kinect v2 used a random forest model, which seems to give better results. A user asked for a legacy option almost a year ago, to which Microsoft replied "need feedback", and there has been no news since. Is there any way we can get an update on whether this feature will (or will not) be implemented?

    I agree with your comment! Sorry for the inconvenience around this topic and the feedback process. The Kinect DK product team prioritizes the feedback it receives; we will follow up on this and let you know the progress on this forum. Thanks for bringing it to our notice.

    3) Most of the issues with the Azure Kinect DK's new approach seem to come from online, real-time body tracking. We don't actually need real-time body tracking; we just need to record the positions of the body joints at a fairly constant frame rate. Would the Azure Kinect DK work "better", or at least differently, for offline body tracking? Does it even offer this option?

    I would need to double-check on this and will get back to you.

    4) When we tested the Kinect v1 for our project, we noticed that the frame rate was quite variable, oscillating around 15 FPS. Would the frame rate be higher and more constant with the Azure Kinect DK?

    I would suggest referring to the documentation on the depth camera's supported operating modes and the color camera's supported operating modes. Please comment below if those sections don't help.

    5) It seems that if we want to use two Azure Kinect DK sensors (one for the face and one for the body), we need three computers: one for each device and one "host" computer for synchronization. The truth is, we don't really need the sensors to be synchronized, as we can "resynchronize" them afterwards. In that case, do we really need three computers, or would two be enough? This matters because we need to buy new computers for this purpose.

    The host PC hardware requirement depends on the application, algorithm, sensor frame rate, and resolution running on that PC. The body-tracking host requirement is more stringent than the general host requirement. I don't recall any documentation that says a third PC is required to synchronize two devices.
    Please refer to this doc: Minimum host PC hardware requirements.
    You can do all of this with one computer if it has enough compute and two USB controllers; see the sketch below.
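
    As a rough, unofficial sketch of the single-PC case with the Sensor SDK's C API: two devices are opened from one process (each on its own USB controller). The modes below are illustrative, and hardware sync over the 3.5 mm sync cable is optional, so it is only indicated in a comment.

    ```c
    // Sketch: open and start two Azure Kinect devices from a single host process.
    // Device indices and camera modes are illustrative choices only.
    #include <stdio.h>
    #include <k4a/k4a.h>

    int main(void)
    {
        uint32_t count = k4a_device_get_installed_count();
        if (count < 2)
        {
            printf("Expected 2 devices, found %u\n", count);
            return 1;
        }

        k4a_device_t dev[2] = { NULL, NULL };
        for (uint32_t i = 0; i < 2; i++)
        {
            if (K4A_FAILED(k4a_device_open(i, &dev[i])))
            {
                printf("Failed to open device %u\n", i);
                return 1;
            }
        }

        k4a_device_configuration_t config = K4A_DEVICE_CONFIG_INIT_DISABLE_ALL;
        config.depth_mode = K4A_DEPTH_MODE_NFOV_UNBINNED;
        config.camera_fps = K4A_FRAMES_PER_SECOND_30;
        // Without a sync cable both devices run independently:
        config.wired_sync_mode = K4A_WIRED_SYNC_MODE_STANDALONE;
        // With a sync cable, one device would instead use
        // K4A_WIRED_SYNC_MODE_MASTER and the other K4A_WIRED_SYNC_MODE_SUBORDINATE.

        for (uint32_t i = 0; i < 2; i++)
        {
            if (K4A_FAILED(k4a_device_start_cameras(dev[i], &config)))
            {
                printf("Failed to start cameras on device %u\n", i);
                return 1;
            }
        }

        // ... pull captures from dev[0] and dev[1] here ...

        for (uint32_t i = 0; i < 2; i++)
        {
            k4a_device_stop_cameras(dev[i]);
            k4a_device_close(dev[i]);
        }
        return 0;
    }
    ```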

    6) Regarding these computers, should we go for the minimum required hardware as shown here, or aim higher?

    Lower-end or older CPUs may also work depending on your use case. Performance also differs between the Windows and Linux operating systems and the graphics drivers in use, so I highly recommend testing against your own use-case scenarios.


  3. Quentin Miller 351 Reputation points
    2021-02-01T22:04:52.777+00:00

    Yes, you can post-process recordings for body tracking. We are also developing an Azure body-tracking service that will support medium-to-high-latency and offline applications.
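
    As a rough, unofficial sketch of that "record first, track later" workflow with the C APIs: the recording itself can be produced with the k4arecorder tool that ships with the Sensor SDK (or the k4a_record_* API), and then fed to the Body Tracking SDK as below. "capture.mkv" is a placeholder file name.

    ```c
    // Sketch: run body tracking over a previously recorded MKV file instead of
    // a live device. "capture.mkv" is a placeholder path.
    #include <stdio.h>
    #include <k4a/k4a.h>
    #include <k4arecord/playback.h>
    #include <k4abt.h>

    int main(void)
    {
        k4a_playback_t playback = NULL;
        if (K4A_FAILED(k4a_playback_open("capture.mkv", &playback)))
        {
            printf("Failed to open recording\n");
            return 1;
        }

        k4a_calibration_t calibration;
        if (K4A_FAILED(k4a_playback_get_calibration(playback, &calibration)))
        {
            printf("Failed to read calibration from recording\n");
            return 1;
        }

        k4abt_tracker_t tracker = NULL;
        k4abt_tracker_configuration_t tracker_config = K4ABT_TRACKER_CONFIG_DEFAULT;
        if (K4A_FAILED(k4abt_tracker_create(&calibration, tracker_config, &tracker)))
        {
            printf("Failed to create body tracker\n");
            return 1;
        }

        k4a_capture_t capture = NULL;
        while (k4a_playback_get_next_capture(playback, &capture) == K4A_STREAM_RESULT_SUCCEEDED)
        {
            // Offline use: we can afford to block until the tracker accepts the
            // capture and produces a result, so no frames are dropped.
            if (k4abt_tracker_enqueue_capture(tracker, capture, K4A_WAIT_INFINITE) == K4A_WAIT_RESULT_SUCCEEDED)
            {
                k4abt_frame_t body_frame = NULL;
                if (k4abt_tracker_pop_result(tracker, &body_frame, K4A_WAIT_INFINITE) == K4A_WAIT_RESULT_SUCCEEDED)
                {
                    uint32_t num_bodies = k4abt_frame_get_num_bodies(body_frame);
                    printf("Bodies detected: %u\n", num_bodies);
                    // k4abt_frame_get_body_skeleton() would extract the joints here.
                    k4abt_frame_release(body_frame);
                }
            }
            k4a_capture_release(capture);
        }

        k4abt_tracker_shutdown(tracker);
        k4abt_tracker_destroy(tracker);
        k4a_playback_close(playback);
        return 0;
    }
    ```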

