Abstract
In recent years, gestural data has become a key enabler for human-computer interaction (HCI) applications.
The growing diffusion of low-cost acquisition devices has led to the development of a class of middleware aimed at ensuring fast and easy integration of such devices within HCI applications. The purpose of this paper is to present a modular middleware for the management of gestural data and devices.
First, we provide a brief review of the state of the art of similar middleware. Then, we discuss the proposed architecture and the motivations behind its design choices.
Finally, we present a use case aimed at demonstrating the potential uses as well as the limitations of our middleware.
Conclusion and Future Work
In this paper we have described a modular middleware that aims to simplify the development of gesture-based interaction applications. In particular, the middleware provides basic communication features to access gestural input devices such as the Microsoft Kinect or the Intel RealSense cameras. All of the middleware's functionality is exposed through REST-based web services.
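As a concrete illustration, a client could poll such a service over HTTP to discover devices and read a stream. The sketch below is purely hypothetical: the host address, endpoint paths, and JSON fields are assumptions for illustration, since the actual API is not specified here.

```python
# Hypothetical sketch of a client reading a device stream through the
# middleware's REST interface. Host, paths, and JSON layout are assumed.
import requests

BASE_URL = "http://localhost:8080/api"  # assumed middleware address

# Discover the connected devices (e.g. a Kinect or RealSense camera).
devices = requests.get(f"{BASE_URL}/devices", timeout=5).json()
device_id = devices[0]["id"]

# Read the latest frame from the device's skeleton stream.
frame = requests.get(
    f"{BASE_URL}/devices/{device_id}/streams/skeleton/latest",
    timeout=5,
).json()
print(frame["timestamp"], len(frame["joints"]))
```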
We have conducted a study of the middleware's performance aimed at identifying its hardware and software limitations. First, the study reports the exact read/write bandwidth for each of the currently supported device streams, as well as the number of CPU instructions required by a typical gesture recognition task; end-users can use these figures to easily check the requirements of their own applications.
We have also tested the middleware's performance in a realistic scenario in order to check (i) how many clients can simultaneously read a single device stream under real-time constraints and (ii) how many recognition tasks can be run in parallel before the CPU load becomes unmanageable for the middleware.
The results obtained leave room for future improvements. First of all, we are planning to add support for multicast connections, which should allow many more simultaneously connected clients to read different device streams (a minimal illustration follows below). We are also working on the implementation of isolated gesture recognition, which is much less computationally expensive than its continuous counterpart; this would allow more simultaneous recognition tasks to be performed in real time.
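To illustrate the planned multicast direction, the following is a minimal sketch of a standard UDP multicast receiver in which many clients share a single outgoing stream. The group address and port are placeholders, and the actual middleware design may differ.

```python
# Minimal sketch of a UDP multicast receiver: every client that joins
# the group receives the same stream frames without extra server load.
import socket
import struct

GROUP, PORT = "239.0.0.1", 5007  # assumed multicast group and port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Join the multicast group on all interfaces.
mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

while True:
    data, _ = sock.recvfrom(65535)  # one encoded stream frame
    # decode the frame and hand it to the application...
```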
Finally, we are also working to add support for other classes of input devices (such as the Leap Motion Controller) as well as for new gesture recognition algorithms, such as Dynamic Time Warping or Support Vector Machines, in order to allow users to choose the most suitable recognition algorithm for their own gesture datasets.
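For reference, the sketch below shows the textbook O(nm) formulation of Dynamic Time Warping between two gesture trajectories, where each sequence is a list of feature vectors (e.g. joint positions). This is only the standard algorithm, not the middleware's implementation.

```python
# Textbook Dynamic Time Warping between two sequences of feature
# vectors; returns the minimal cumulative alignment cost.
import math

def dtw_distance(a, b):
    n, m = len(a), len(b)
    # cost[i][j] = minimal cost of aligning a[:i] with b[:j]
    cost = [[math.inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = math.dist(a[i - 1], b[j - 1])  # Euclidean distance
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

# A recognizer could then classify an observed gesture by picking the
# class whose template minimizes dtw_distance.
```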