3D hand kinematics prediction by a deep learning model with sEMG as input

Raul Sîmpetru, a student assistant in our lab, recently published the preprint "Sensing the Full Dynamics of the Human Hand with a Neural Interface and Deep Learning" on bioRxiv.

We would like to share a short preview video that illustrates the efficacy of the deep learning model he developed. In the top row of the video, a subject's hand is recorded simultaneously by four cameras while the subject follows a hand movement shown on an external display in front of them. The bottom row shows the predicted and the extracted 3D hand kinematics: the hand on the left shows the prediction of the deep learning model based on the HD-sEMG signal, while the hand on the right shows the 3D hand kinematics computed from the combined camera images. As can be seen, Raul's model predicts the 3D hand kinematics quite accurately.
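For readers curious how such a prediction is typically evaluated, here is a minimal sketch (not Raul's actual code) of comparing model-predicted joint positions against camera-derived kinematics by computing the mean 3D Euclidean error per joint; all array names, shapes, and values are illustrative assumptions.

```python
import numpy as np

# Hypothetical shapes: T time steps, J hand joints, 3 spatial coordinates.
T, J = 1000, 21

# Placeholder arrays standing in for the model output and the ground truth
# (in practice these would be loaded from the sEMG-based prediction and the
# multi-camera 3D reconstruction, respectively).
predicted_xyz = np.random.rand(T, J, 3)  # prediction from the HD-sEMG model
camera_xyz = np.random.rand(T, J, 3)     # kinematics computed from the four cameras

# Euclidean error for every joint at every time step, averaged over time.
per_joint_error = np.linalg.norm(predicted_xyz - camera_xyz, axis=-1).mean(axis=0)

print("Mean 3D error per joint:", np.round(per_joint_error, 3))
print("Overall mean 3D error:", round(per_joint_error.mean(), 3))
```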

We are keeping our fingers crossed for publication in a journal soon and eagerly await a new version of the deep learning model that can predict 3D hand kinematics in real time!

Check out the other prediction videos in the playlist.