Explore human–machine interaction: advanced technologies, AI, and solutions for improving the user experience.

Designing Touchless Gestural Interactions for Public Displays In-the-Wild

Public displays, typically equipped with touchscreens, are used for interaction in public spaces such as streets or fairs. Nowadays, low-cost visual sensing technologies, such as Kinect-like devices and high-quality cameras, make it easy to implement touchless interfaces.

Deep learning and wearable sensors for the diagnosis and monitoring of Parkinson’s disease: A systematic review

Parkinson’s disease (PD) is a neurodegenerative disorder that produces both motor and non-motor complications, degrading the quality of life of PD patients. Over the past two decades, the use of wearable devices in combination with machine learning algorithms has provided promising methods for more objective and continuous monitoring of PD.
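As a heavily simplified illustration of the kind of wearable-sensor pipeline such studies build on (not the method of any specific reviewed paper), the sketch below estimates the dominant oscillation frequency of an accelerometer window by counting zero crossings and checks it against the 4–6 Hz band typical of parkinsonian rest tremor. The sampling rate, the synthetic signal, and the screening rule are all illustrative assumptions.

```python
import math

def dominant_freq_hz(window, fs):
    """Estimate the dominant oscillation frequency of a 1-D signal
    window by counting zero crossings (two crossings per cycle)."""
    mean = sum(window) / len(window)
    centered = [x - mean for x in window]
    crossings = sum(1 for a, b in zip(centered, centered[1:]) if a * b < 0)
    duration_s = len(window) / fs
    return crossings / (2 * duration_s)

# Synthetic 3-second wrist-accelerometer trace: a 5 Hz sine, i.e.
# inside the 4-6 Hz band typical of parkinsonian rest tremor.
fs = 100  # Hz (illustrative sampling rate)
signal = [math.sin(2 * math.pi * 5 * t / fs) for t in range(3 * fs)]

freq = dominant_freq_hz(signal, fs)
in_tremor_band = 4.0 <= freq <= 6.0  # simple illustrative screening rule
```

Real systems replace this with richer spectral features and learned models, but the window-feature-decision structure is the same.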

Enhancing video game experience with playtime training and tailoring of virtual opponents: Using Deep Q-Network based Reinforcement Learning on a Multi-Agent Environment

When interacting with fictional environments, the users' sense of immersion can be broken when characters act in mechanical and predictable ways.
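The tailoring idea named in the title can be pictured in miniature with tabular Q-learning, a simplified stand-in for a Deep Q-Network (DQN replaces the table with a neural network). The toy environment below, where an opponent learns to keep its strength matched to the player's skill, is entirely invented for illustration and is not the paper's multi-agent game environment.

```python
import random

random.seed(0)

# Toy "difficulty tailoring" environment: the state is the gap between
# opponent strength and player skill (-2..2); actions nudge the opponent
# easier (-1), keep it (0), or harder (+1). Reward favours a small gap,
# i.e. an opponent matched to the player.
STATES = range(-2, 3)
ACTIONS = (-1, 0, 1)

def step(state, action):
    nxt = max(-2, min(2, state + action))
    return nxt, -abs(nxt)  # next state, reward

# Tabular Q-learning (a DQN would approximate Q with a neural network).
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.1

for _ in range(2000):
    s = random.choice(list(STATES))
    for _ in range(10):  # short episode
        if random.random() < eps:
            a = random.choice(ACTIONS)                 # explore
        else:
            a = max(ACTIONS, key=lambda x: Q[(s, x)])  # exploit
        nxt, r = step(s, a)
        best_next = max(Q[(nxt, x)] for x in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = nxt

# Greedy policy after training: from any gap, move toward zero,
# i.e. the opponent continually re-tailors itself to the player.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES}
```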

Detecting Emotions from Illustrator Gestures—The Italian Case

The evolution of computers in recent years has given a strong boost to research techniques aimed at improving human–machine interaction. These techniques tend to simulate the dynamics of the human–human interaction process, which is based on our innate ability to understand the emotions of other humans. In this work, we present the design of a classifier to recognize the emotions expressed by human beings, and we discuss the results of its testing in a culture-specific case study. The classifier relies exclusively on the gestures people perform, without needing access to additional information such as facial expressions, tone of voice, or the words spoken. The specific purpose is to test whether a computer can correctly recognize emotions starting from gestures alone. More generally, the aim is to allow interactive systems to automatically adapt their behaviour to the recognized mood, for example by adjusting the information content proposed or the flow of interaction, in analogy to what normally happens in interaction between humans. The document first introduces the operating context, giving an overview of emotion recognition and the approach used. Subsequently, the relevant bibliography is described and analysed, highlighting the strengths of the proposed solution. The document continues with a description of the design and implementation of the classifier and of the study we carried out to validate it. The paper ends with a discussion of the results and a short overview of possible implications.
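In heavily simplified form, a gesture-only pipeline of this kind reduces each gesture to a vector of numeric descriptors and classifies it. The sketch below is a minimal nearest-centroid classifier over two invented features (mean hand speed, gesture amplitude); the feature set, the training points, and the emotion labels are illustrative assumptions, not the paper's actual features, data, or method.

```python
import math
from collections import defaultdict

# Illustrative training data: (mean hand speed, gesture amplitude)
# per gesture sample, with an emotion label. Values and labels are
# invented for this sketch.
TRAIN = [
    ((0.9, 0.8), "joy"),
    ((0.8, 0.9), "joy"),
    ((0.2, 0.1), "sadness"),
    ((0.1, 0.2), "sadness"),
    ((0.9, 0.3), "anger"),
    ((0.8, 0.2), "anger"),
]

def centroids(samples):
    """Average the feature vectors of each emotion class."""
    sums = defaultdict(lambda: [0.0, 0.0])
    counts = defaultdict(int)
    for (x, y), label in samples:
        sums[label][0] += x
        sums[label][1] += y
        counts[label] += 1
    return {lbl: (s[0] / counts[lbl], s[1] / counts[lbl])
            for lbl, s in sums.items()}

def classify(features, cents):
    """Return the emotion whose centroid is closest in feature space."""
    return min(cents, key=lambda lbl: math.dist(features, cents[lbl]))

cents = centroids(TRAIN)
prediction = classify((0.85, 0.85), cents)  # a fast, wide gesture
```

A production classifier would use many more descriptors and a learned model, but the structure — feature extraction, then distance- or score-based labelling — is the same.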

Exploiting Cognitive Architectures to Design Storytelling Activities for NarRob

In this work, we exploited the potential of a cognitive architecture to model the characters of a story in an interactive storytelling system. The system is accessible through NarRob, a humanoid storyteller robot. Our main goal was to implement the cognitive processes of the agents played by the robot within a narrative context environment.

Child–display interaction: Lessons learned on touchless avatar-based large display interfaces

During the last decade, touchless gestural interfaces have been widely studied as one of the most promising interaction paradigms in the context of pervasive displays. In particular, avatars and silhouettes have proved effective at making the touchless capability of displays self-evident.

Predicting mid-air gestural interaction with public displays based on audience behaviour

Knowledge about the expected interaction duration and expected distance from which users will interact with public displays can be useful in many ways.
