The Microsoft Kinect sensor is widely used to detect and recognize body gestures and posture with good reliability, accuracy and precision in a fairly simple way.
However, the relatively low resolution of its optical sensors does not allow the device to detect gestures of smaller body parts, such as the fingers of a hand, with the same ease.
Given the clear applications of this technology to user interaction within immersive multimedia environments, there is a real need for a reliable and effective method to detect the pose of such body parts. In this paper we propose a neural-network-based method to detect the hand pose in real time, recognizing whether the hand is closed or not.
The neural network processes color, depth and skeleton information coming from the Kinect device. This information is preprocessed to extract significant features. The output of the neural network is then filtered with a time average to reduce the noise due to fluctuations in the input data. We analyze and discuss three possible implementations of the proposed method, obtaining an accuracy of 90% under good lighting and background conditions, and reaching 95% in the best cases, in real time.
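The time-average filtering described above can be sketched as a sliding-window average over the classifier's per-frame output, thresholded into an open/closed decision. This is a minimal illustrative sketch: the window size, threshold, and class names are assumptions, not the values or code from the paper.

```python
from collections import deque

class TemporalSmoother:
    """Sliding-window average over a classifier's per-frame scores.

    Hypothetical sketch: window size and threshold are illustrative
    assumptions, not the parameters used in the paper.
    """
    def __init__(self, window=5, threshold=0.5):
        self.scores = deque(maxlen=window)  # keeps only the last `window` scores
        self.threshold = threshold

    def update(self, score):
        """Add the network's output for the latest frame (0.0..1.0)
        and return the smoothed decision (True => hand closed)."""
        self.scores.append(score)
        avg = sum(self.scores) / len(self.scores)
        return avg >= self.threshold

# Usage: a noisy stream of per-frame scores with one spurious dip.
smoother = TemporalSmoother(window=5)
stream = [0.9, 0.8, 0.2, 0.9, 0.85]
decisions = [smoother.update(s) for s in stream]
```

Because the spurious low score on the third frame is averaged with its neighbors, the smoothed decision stays stable instead of flickering, which is the point of the temporal filter.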
This is one of the scientific articles published by one or more synbrAIn collaborators and data scientists.
If you are interested in learning more, read the full article here.