Situated Analysis of Human-Machine Interaction in Smart Environments (SAHMI)
SAHMI is a collaborative research project between ISIR, UTC Compiègne, and the INSEAD behavioral lab. It started in September 2017. This page gives an overview of its objectives, the people involved, and its progress.
Project Description
Smart environments are physical environments augmented with sensors, actuators, displays, and computational elements, designed to extend human abilities and enhance human experience. One objective of smart environments is to make communication between humans and machines (avatars, robots) as similar as possible to natural interactions with other individuals. To achieve this, the system aggregates and interprets physical, cognitive, and emotional signals from a wide range of sensors. Users can therefore judge the quality of the interaction, and of the system itself, only by how well the system's feedback matches their actions, because most of the data remain hidden inside the system and are inaccessible to them. While this inaccessibility can be desirable in a perfectly calibrated smart environment aimed at novice end-users, it is less appropriate when designing or customizing a system, or when expert users need additional, task-relevant information.
This project investigates how to bind data from multiple sensors to spatially situated output forms (visual, haptic, auditory) in order to improve the design, implementation, and evaluation of multimodal interactions in smart environments. It involves enriching the physical (or virtual) environment with advanced representations that help users perceive the internal state of the system, depending on their perceptual and cognitive abilities (e.g., novice vs. expert users), the nature of the task (e.g., primary or secondary), and the complexity of the environment (e.g., the geometry of the room). In this project we plan to focus primarily on visual and haptic representations, as these provide a rich set of research questions. For instance, it is not clear how users perceive and interpret a chart displayed on non-flat devices (e.g., pliers) or objects (e.g., a workbench), how notifications interfere with primary tasks depending on the modality (visual or haptic), or how to distribute information across different sensory channels. It is also unclear whether perceptual and cognitive biases remain the same in physical environments, augmented reality (AR), and virtual reality (VR).
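To make the idea of binding sensor data to situated output forms concrete, the following is a minimal, purely illustrative sketch (not the project's actual system). All names, sensors, and the expertise-based filtering rule are hypothetical assumptions used only to show how such a binding could be expressed in code.

```python
# Hypothetical sketch: binding sensor readings to situated output modalities,
# filtered by user expertise. All identifiers here are illustrative assumptions.
from dataclasses import dataclass
from enum import Enum, auto
from typing import Callable, List


class Modality(Enum):
    VISUAL = auto()
    HAPTIC = auto()
    AUDITORY = auto()


class Expertise(Enum):
    NOVICE = auto()
    EXPERT = auto()


@dataclass
class SensorReading:
    sensor_id: str      # e.g., "imu_wrist", "eye_tracker" (made-up sensor names)
    value: float        # simplified to a single scalar
    timestamp: float


@dataclass
class Binding:
    """Maps one sensor to an output modality, gated by who should perceive it."""
    sensor_id: str
    modality: Modality
    min_expertise: Expertise
    render: Callable[[SensorReading], str]  # stand-in for a real renderer


def select_outputs(readings: List[SensorReading],
                   bindings: List[Binding],
                   user: Expertise) -> List[str]:
    """Return the representations a given user should perceive."""
    outputs = []
    for reading in readings:
        for binding in bindings:
            if binding.sensor_id != reading.sensor_id:
                continue
            # Hide expert-only representations from novice users.
            if binding.min_expertise is Expertise.EXPERT and user is Expertise.NOVICE:
                continue
            outputs.append(binding.render(reading))
    return outputs


if __name__ == "__main__":
    bindings = [
        Binding("imu_wrist", Modality.HAPTIC, Expertise.NOVICE,
                lambda r: f"vibrate proportionally to {r.value:.2f}"),
        Binding("eye_tracker", Modality.VISUAL, Expertise.EXPERT,
                lambda r: f"overlay gaze-dispersion chart ({r.value:.2f})"),
    ]
    readings = [SensorReading("imu_wrist", 0.8, 0.0),
                SensorReading("eye_tracker", 0.3, 0.0)]
    print(select_outputs(readings, bindings, Expertise.NOVICE))
    print(select_outputs(readings, bindings, Expertise.EXPERT))
```

In this sketch the same sensor stream can yield different representations for different users, which is one way to read the project's goal of adapting situated representations to perceptual and cognitive abilities, task, and environment.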
The expected outcomes of this project are:
- A better understanding of how action in 3D (physical and virtual) environments affects perception and cognition;
- The definition of the principles necessary to provide situated data representations in smart environments; and
- A system providing visual and haptic representations in smart environments.
People involved in this project
- ISIR
- Yvonne Jansen (coordinator)
- Gilles Bailly
- Sinan Haliyo
- Malika Auvray
- Steve Haroz (postdoc)
- Cedric Honnet (engineer)
- UTC
- Charles Lenay
- Gunnar Declerck
- Dominique Aubert
- INSEAD
- Liselotte Petterssen