Who we are

The HCI Sorbonne group was founded in October 2016. We are a young, dynamic group working on human-computer interaction (HCI) problems within the multidisciplinary Interaction team of the Institute for Intelligent Systems and Robotics (ISIR), which is part of Sorbonne Université and the Centre National de la Recherche Scientifique (CNRS).

HCI is a multidisciplinary research field at the intersection of computer science, psychology, design, and engineering. Being embedded in a larger multidisciplinary lab puts us in a favorable position to collaborate in areas relevant to HCI such as smart workspaces or interactive robotics.

What we do

The goal of the HCI group is to understand how people perceive and interact with information and technologies and how to augment these technologies to increase users’ expertise or to generally support their cognition. For more information, see our research page.

Previous Events

06/05/2019

3 papers at ACM CHI 2019

Our group presents three articles at this year’s CHI conference in Glasgow, UK.

27/03/2019

Guest Talk: Camille Jeunet from CNRS / Université de Toulouse

Title: Brain-Computer Interface: Learn how to use it and use it to learn.

Where: Room H20, ISIR, Sorbonne Université, Campus Jussieu

When: March 27, 2019, 10:00 AM

Who: Camille Jeunet

Host: Gilles Bailly

Language: English

Abstract: Brain-computer interfaces (BCIs) are technologies that allow users to control an application through their brain activity alone, most often measured with an ElectroEncephaloGraph (EEG). Although extremely promising, notably for controlling assistive technologies, BCIs remain little used outside research laboratories. A major reason is their lack of reliability: studies suggest that 15 to 30% of users are unable to use a BCI. Among the various factors that may explain this lack of reliability, we focus on one that has received little attention so far: human learning. Indeed, using a BCI requires acquiring specific skills and therefore appropriate training. Yet current standard training protocols are inappropriate and must be improved. My research therefore aims to improve BCI training so that users can more easily “learn how to use it”. To do so, we plan to use the information gathered during BCI use, in particular the learner’s EEG activity, to infer their state and dynamically adapt the training (e.g., the nature and difficulty of the tasks, the frequency and content of the feedback) as a function of that state. In other words, we divert the standard use of the BCI and “use it to learn”. That is, we want to combine an ‘active’ use of the BCI (i.e., using EEG signals as commands for an application) with a ‘passive’ use (i.e., using EEG signals as indicators of the learner’s state). The fundamental objective of this research is to understand and model the processes underlying BCI learning along its different dimensions: neurophysiological, cognitive, psychological and social. Behind this fundamental objective lies a more applied one: democratizing the use of BCIs for cognitive/motor training (improving athletes’ performance, rehabilitation of post-stroke patients, …) and for application control (assistive technologies, video games, …) in order to foster the autonomy of both everyday users and patients.

Bio: Camille Jeunet is a CNRS research scientist working in the CLLE Lab (Univ. Toulouse Jean Jaurès, CNRS, Toulouse, France). Her main topic of interest is Brain-Computer Interfaces (BCIs) & Neurofeedback (NF). She does her best to lead her research using an interdisciplinary approach combining cognitive sciences, psychology, neuroscience, computer science and sport sciences in order to better understand the mechanisms underlying human performance and learning in BCIs/NF.

13/03/2019

Guest Talk: Alvaro Cassinelli

Experiments on alternative locomotion techniques for micro-robots

Where: Room 304 (Campus Jussieu)
When: 2:00PM
Host: Nicolas Bredeche

Abstract: In this informal talk I will describe some experiments on alternative methods of locomotion for micro or macro objects that do not use wheels or other electro-mechanical methods. In particular, the idea is to harness vibrations from the environment and couple them with internal degrees of freedom of the system using machine learning. These techniques, along with others now being studied, can be extended to the study of swarming robots as well as to controlling the vibration phase of quantum hydrodynamic walkers.

The talk will not be theoretical in nature, but rather experimental and based on computer simulations. My background is quite particular: I hold a PhD from Paris XI on Optoelectronic Stochastic Parallel Processors, and I worked for 15 years as director of a human-computer interaction and robotics laboratory at the University of Tokyo (and as an independent Media Artist…). That is to say, I very much look forward to your feedback as well as to learning from you!

12/12/2018

Guest Talk: Anne Roudaut from University of Bristol

When: December 12th, 10AM

Where: Room H20, Pyramide (ISIR), Campus Jussieu, Sorbonne Université

Title: Toward Highly Reconfigurable Interactive Devices

Abstract: The static shape of computers is the bottleneck of today’s interactive systems. I argue that we need shape-changing computers that are malleable, reconfigure into any shape, and provide affordances that unleash users’ interactive potential. However, despite tremendous breakthroughs in advanced materials, their implementation is far off because we don’t understand how to support interactions with them. In this talk I will present my work toward creating malleable and reconfigurable interactive devices, as well as future challenges (more at http://anneroudaut.fr/).

Bio: Anne Roudaut is a Senior Lecturer, Leverhulme Trust fellow and co-leader of the Bristol Interaction Group at the University of Bristol (UK). Her research is rooted within Human-Computer Interaction, with leaves growing toward Material Engineering and Soft Robotics. Her research approach is a blend of theory, experimentation, and software/hardware design, and her goal is to help designers create the best possible interfaces and devices we will soon have in our hands. Before arriving in Bristol she spent two years as a research assistant at the Hasso Plattner Institute, and she did her Ph.D. at Télécom ParisTech.

Host: Gilles Bailly

26/10/2018

1 Paper at IEEE VIS 2018

Thursday 25/10

Mitigating the Attraction Effect with Visualizations
Evanthia Dimara, Gilles Bailly, Anastasia Bezerianos, and Steven Franconeri

Session: Perception & Cognition 2

Abstract
Human decisions are prone to biases, and this is no less true for decisions made within data visualizations. Bias mitigation strategies often focus on the person, by educating people about their biases, typically with little success. We focus instead on the system, presenting the first evidence that altering the design of an interactive visualization tool can mitigate a strong bias — the attraction effect. Participants viewed 2D scatterplots where choices between superior alternatives were affected by the placement of other suboptimal points. We found that highlighting the superior alternatives weakened the bias, but did not eliminate it. We then tested an interactive approach where participants completely removed locally dominated points from the view, inspired by the elimination by aspects strategy in the decision-making literature. This approach strongly decreased the bias, leading to a counterintuitive suggestion: tools that allow removing inappropriately salient or distracting data from a view may help lead users to make more rational decisions.

Links
https://hal.inria.fr/hal-01845004
https://aviz.fr/deletion

26/10/2018

1 Paper at IHM 2018

Wednesday 24/10

Best Paper Award 🏅
Caractérisation de la transition entre les menus et les raccourcis claviers
Gilles Bailly, Emmanouil Giannisakis, Marion Morel, Catherine Achard

Session: Adapter, s’adapter

Abstract
This paper aims at better understanding the transition from menus to shortcuts. We first discuss the limitations and the opportunities of the theoretical and empirical characterizations of this transition. We then consider keyboard shortcuts as a case study and manually annotate empirical data to estimate several behavioral markers such as the initial switch, the transition duration, or the performance dip. These markers serve to precisely characterize and compare three interaction techniques. Finally, we compare two methods to automatically characterize the transition from menus to shortcuts.

19/10/2018

PhD Defense – Emeline Brulé

When: October 19th, 2018, 9:00 AM

Where: Télécom ParisTech

Title: Understanding the experiences of schooling of visually impaired children: A French ethnographic and design inquiry.

Jury:

Madeleine Akrich, DR, CSI, Mines ParisTech, CNRS i3, PSL University (rapporteur)

Gilles Bailly, CR CNRS (HDR), Sorbonne Université, ISIR (PhD supervisor)

Annie Gentès, MCF (HDR), CNRS i3, Télécom ParisTech (PhD supervisor)

Ann Heylighen, Professor, KU Leuven (examiner)

Wendy Mackay, DR, INRIA Saclay (rapporteur)

Marine Royer, MCF, University of Nîmes (examiner)


Abstract: In 2005, France passed a law on equal rights and opportunities, participation and citizenship of people with disabilities. It consecrated the right of all children to attend their neighbourhood school and re-organized the provision of services to this population, including the provision of assistive technologies. This research, conducted a decade later, between 2014 and 2017, investigates visually impaired children’s experiences of schooling and the roles of technologies in supporting their well-being at school.

I developed a mixed-methods interdisciplinary approach, blending qualitative sociological research with Human-Computer Interaction experiments. Specifically, I conducted a two-year-long ethnographic study at a service provider for visually impaired children in the South of France, during which I made several design interventions. This fieldwork is contextualized by a critical review of the statistics on the schooling of visually impaired children provided by the Ministry of Education.

I use an ecological understanding of resilience to examine children’s narratives about school, across different schooling modalities (e.g., mainstream and special education school) and sociodemographic characteristics. I discuss the resources and strategies children and their carers use to open opportunities for well-being at school, including uses of technologies. I contextualize these by investigating desirable schooling outcomes that define who is resilient and what resilience is for.

From there, I propose to develop a non-visual approach to the (geography) curriculum inspired by the sensory turn. By changing what is considered a valued way of learning, this thesis aims at providing opportunities to develop a sense of belonging and the perception of self-efficacy in the classroom. It informs us on the uses of hearing, smell, taste, and kinesthesia in geography; it supports pupils in reshaping learning activities and spaces; finally, it opens opportunities for collective geographical knowledge rooted in experiences of social inequalities. More broadly, it opens a discussion on building collective well-being and resilience in schools.

Keywords: Meaning-making; Resilience; Geography; Geography Curriculum; Multisensory; Sensory turn; Sensory Knowledge; Disability; Education; School; Human-Computer Interaction; Design; Pupils; Children; Probes; Classroom.

16/07/2018

Guest Talk: Vir Phoha from Syracuse University

When: July 16th, 11:00 AM

Where: Sorbonne Université (Campus Jussieu), ISIR Lab (Pyramid), Room H20 (with air-conditioning)

Title: Machine Learning and Interpretation of Human Interactions from Mobile and Mixed Reality Systems

Speaker: Vir Phoha (Professor at Syracuse University)

Host: Gilles Bailly

Abstract: I posit the birth of a new research field focusing on a multi-faceted human-machine interface in any computing environment. Gestures, touch, gait, autonomic, and cognitive responses will form the basis of this emerging multi-faceted interface to computing devices. I will present results of learning and analysis of behavior streams resulting from gestures, typing, swiping, mouse movement, head and torso movement, and gait, as captured through sensors such as accelerometers, gyroscopes, and keyboard sensors on mobile devices including smartphones and Google Glass. These patterns provide rich information, which can be used for biometric authentication and predictive modelling.

As the multi-faceted interface begins to steadily replace the mouse and keyboard as the dominant human-machine interface, many phenomena relating to how humans interact with computers will have to be re-evaluated under the new interface. The recent introduction of devices for Holographic Computing, Virtual Reality (VR), and Augmented Reality (AR) has opened a new and yet unexplored facet of computing. I will address the question of whether authentication could indeed be practical on real devices using behavioral, autonomic, and natural body responses measured through smartphones, head-mounted displays and other input devices for AR, VR, and Holographic computing.

Bio: Dr. Vir Phoha is a Professor of Computer Science in the College of Engineering and Computer Science at Syracuse University. Before joining Syracuse University, Dr. Phoha was a W.W. Chew Endowed Professor of Computer Science at Louisiana Tech University. From 2007 to 2014, Professor Phoha directed the Center for Secure Cyberspace, which was started through an $8M grant from the State of Louisiana.

He has written six books and over 200 research papers. He has 14 US patents and his technology is widely used in industry. Professor Phoha received his MS (1990) and Ph.D. (1992) degrees in computer science from Texas Tech University, Lubbock. His research interests include the study of human behavioral patterns captured through wearable and mobile devices. In particular, he is interested in behavioral biometrics, authentication in networks, machine learning (Bayesian, reinforcement, evolutionary), and data mining.
Professor Phoha is an ACM Distinguished Scientist and is a Fellow of SDPS. Recently, he received the 2017 IEEE Region 1 Technical Innovation Award for Contributions towards Principles of Behavioral Biometrics.

21/06/2018

The 10th ACM SIGCHI Symposium on Engineering Interactive Computing Systems takes place at UPMC

EICS 2018 is the tenth international conference devoted to engineering usable and effective interactive computing systems. The event is co-organized by Yvonne Jansen (CNRS/UPMC, technical program chair) together with Emmanuel Pietriga (Inria, general chair) and Kris Luyten (Hasselt University, general co-chair).
http://eics.acm.org/2018

30/05/2018

Guest Talk: Edward Lank from the University of Waterloo

When: Wednesday, May 30th, 11:00 AM

Where: Sorbonne Université, Campus Jussieu, Room 211 (second floor) Tower 55-65

Title: WRiST: Wearables for Rich, Subtle, and Transient Interactions in Ubiquitous Environments

Speaker: Edward Lank

Host: Gilles Bailly

Abstract: Modern personal computing devices – tablets, smartphones, smartwatches, fitness trackers, remote controls – represent an ever-more-pervasive component of everyday life. Alongside this trend toward computing that you bring with you, we also continually encounter devices that exist both in tangible spaces and in the electronic domain. While this computationally augmented reality exists around us, interactions with this embedded, encountered computation rarely achieve the fluidity that one would desire. As a simple example, consider connecting your laptop or smartphone to a data projector. It almost always works well … except when it does not.

In this talk, I will detail some of our early work leveraging personal devices as computational proxies to control and manipulate external computation. Motivated by our early work on motion gestures for smartphone input, I will describe on-going research projects leveraging commodity smartwatches as proxies to support WIMP-style input, text input, and rich gestural input to external computation.  Because this work lies in the domain of free-space gestural input, I will also discuss our work on enhancing both the reliability and the perceived reliability of gesture recognition for end-users.  I will conclude by outlining our emerging vision of wearables and personal devices as a platform for the rich, subtle, and transient interactions necessary in an ever-more ubicomp-based reality.

Bio: Edward Lank is a faculty member in the David R. Cheriton School of Computer Science at the University of Waterloo where, effective July 1st, he will hold the rank of Professor. Edward joined Waterloo in 2006. With Michael Terry, he co-founded the Waterloo Human-Computer Interaction Lab, a lab he continues to co-direct. Prior to joining the faculty at Waterloo, Edward was an Assistant Professor at San Francisco State University, a Post-Doctoral Intern at Xerox’s Palo Alto Research Centre, the Chief Technical Officer of MediaShell Corporation, and a sessional lecturer at Queen’s University. Edward Lank received his Bachelor of Science degree (Honours Physics) from the University of Prince Edward Island in 1994 and a Ph.D. in Computer Science from Queen’s University in 2001. Edward’s current research interests are varied, and include intelligent user interfaces, surface and free-space gestural input, sketch and drawing interfaces, usable privacy and security, and other aspects of interactive system design.

You can find out more about his current research projects on his webpage.


16/05/2018

Guest Talk: Alix Goguey from Swansea University


When: May 16th, 2018, 11:00 AM

Where: Sorbonne Université, Campus Jussieu, Pyramid H20.

Title: Augmenting Touch Expressivity to Improve the Touch Modality

Abstract: During the last decades, touch surfaces have become more and more ubiquitous. Whether on tablets, smartphones or laptops, touch surfaces are used by a majority of us on a daily basis. However, the limited expressivity – the different channels used to convey information to the system – of the touch modality drastically restricts the number of features that can be controlled via touch only. For instance, a typical smartphone touchscreen only provides the absolute position of a contact on the screen, so applications usually offer only one way to carry out tasks (which can increase user frustration or cap performance) or restrict possibilities (e.g., Photoshop on desktop offers more than 600 commands but only about 40 on smartphones and tablets). In this talk, I will present an overview of my research and discuss different ways to tackle this problem and augment touch expressivity and user efficiency: from tools that help design better touch interfaces to the use of new input dimensions in original interaction techniques.

Speaker: Alix Goguey

Bio: Alix Goguey is a postdoctoral fellow working with Matt Jones in the FIT Lab at Swansea University. He previously worked with Carl Gutwin in the Interaction Lab at the University of Saskatchewan, Canada. He received his Ph.D. in Computer Science in October 2016 in the Mjolnir research group at Inria Lille – Nord Europe, France, under the supervision of Géry Casiez. His work focuses on understanding and designing interaction techniques on touch input devices, particularly through the use of new information such as finger identification. To learn more about Alix’s work: www.alixgoguey.fr


02/05/2018

Tiffany Wun joins the group as a M.Sc. exchange student

Tiffany Wun from the iLab at the University of Calgary (Canada) joins the group for 3 months.

23/04/2018

HCI Sorbonne @ CHI 2018

This year the HCI Sorbonne group presents 3 articles at CHI 2018 in Montréal.

30/03/2018

Guest Talk: Annabelle Goujon from Université de Bourgogne-Franche-Comté

When: March 30th, 2018, 2:00 PM
Where: ISIR Lab, Room 211 (Tower 55-65)
Title: Implicit and Explicit Statistical Learning during the Analysis of Visual Scenes: Evidence from Contextual Cueing
Speaker: Annabelle Goujon
Abstract: How does the visual system prioritize the relevant information for further processing? By structuring the world and making it coherent and predictable, Statistical Learning is thought to play a key role in object recognition, scene identification, attentional guidance and navigation in complex, dynamic environments. Statistical Learning refers to an unconscious cognitive process in which repeated patterns, or regularities, are extracted from sensory inputs. In this regard, the Contextual Cueing paradigm constitutes an elegant way to understand how learning mechanisms can detect contextual regularities during visual search, allowing an optimization of basic visual processing and/or attentional deployment in subsequent encounters. In this presentation, I will review and discuss the main mechanisms likely to be involved in contextual cueing phenomena, as well as the implicit vs. explicit nature of the learning that takes place.

Bio: Since September 2015, Annabelle Goujon has been a lecturer in Cognitive Sciences at the University of Bourgogne Franche-Comté. Her research covers several domains: the perception of visual scenes, implicit learning, statistical learning and contextual cueing. More recently, her work aims at investigating more specifically how implicit and explicit/declarative memory systems interact in the formation and consolidation of sensory memories in long-term memory.

15/03/2018

Guest Talk: Christophe Jouffrais from CNRS-IRIT and NUS, Singapore

When: March 15th, 2018, 2:00PM

Where: Sorbonne Université, towers 65/66 – room 304 (3rd floor)

Title: Accessible interactive graphics for visually impaired users

Speaker: Christophe Jouffrais

Abstract: STEM is an American acronym for science, technology, engineering and mathematics. These four disciplines rely on graphical representations and are considered central in technologically advanced societies. Obviously, graphs are inherently visual and therefore inaccessible to the visually impaired (approximately 5% of the world population). This has important consequences on education, social inclusion and quality of life.

Raised-line maps are the most common tool for providing access to tactile graphics but have numerous limitations (cost, limited number of elements displayed, knowledge of Braille, etc.). A few research projects have aimed to overcome these limitations by designing interactive systems for accessing digital images [see 1 for a review]. Based on this previous work, we developed a set of devices based on tactile exploration that allow non-visual access to images. The interactive audio-tactile device called Mappie [2] allows access to multiple levels of information. It is based on a raised-line map overlay placed over a tactile surface. In addition to map exploration, it provides advanced interaction functions (e.g., learning routes). We showed that it is effective for acquiring spatial concepts and is more usable than regular raised-line maps [3]. It is currently used by low-vision professionals and commercialization is planned. We have also designed a device allowing the visually impaired to build and explore tangible representations of digital graphics [4]. It is based on the design of tangible objects that represent important elements on a map, and which can be linked to each other in order to create interactive lines and areas. Adapted non-visual guidance assists users in placing and linking objects to build new graphical representations. We have shown that this device is usable by visually impaired users to build and explore graphs of various complexities. More recently, we designed a device based on a smartwatch and filtering functions that allows visually impaired users to explore virtual maps while mobile [5].

Bio: Dr. Christophe Jouffrais is with the IRIT Lab (UMR5505, CNRS & Univ. of Toulouse) in Toulouse, FR. Recently, he joined the IPAL research lab in Singapore. He is a senior CNRS researcher with a background in Cognitive Science. He holds a European PhD (2000) in cognitive neuroscience from the University of Lyon, FR and the University of Fribourg, CH. His current research focuses on non-visual spatial perception, action and cognition in visually impaired humans, with an emphasis on non-visual human-computer interaction and assistive technologies. Ongoing research projects aim at designing technologies that help visually impaired users understand and interact with maps.

More: CherchonsPourVoir.org

Video broadcast + archive:

01/03/2018

Jingjing Xie and Lucas Rodrigues join as M.Sc. interns

Jingjing Xie and Lucas Rodrigues, two M.Sc. students from University Paris-Saclay join the group as interns for 6 months to prepare their Master theses.

30/01/2018

HCI lecture at Sorbonne Université

Our HCI lecture starts January 30th.

20/12/2017

Guest Talk: Rebecca Kleinberger from MIT Media Lab

When: Dec 20th, 2017, 11:00AM
Where: ISIR Lab, Room H20
Title: Vocal explorations in HCI

Abstract: In my work at the Opera of the Future group I build tools and experiences to explore the relationship people have with their own voice and the voices of others. Our voice is an important part of our individuality. From the voices of others, we are able to understand a wealth of non-linguistic information, such as identity, socio-cultural clues and emotional state. But the relationship we have with our own voice is less obvious. We don’t hear it the same way others do, and our brain treats it differently from any other sound we hear. Yet its sonority is highly linked to our body and mind, and is deeply connected with how we are perceived by society and how we see ourselves. At the MIT Media Lab we design and create transformative experiences that merge science, art, engineering and design. I explore the use of new technologies (virtual reality, rapid prototyping, deep learning, real-time digital signal processing, lasers, wearable technologies and robotics) and HCI techniques to transform our perception of voices and better our interpersonal interactions. My previous projects include devices that exteriorize voices; wearable sensor masks that provide interactive visualization of vocal vibration patterns for vocal training or the treatment of speech disorders, stuttering, or prosody acquisition; a Bluetooth wearable haptic anchor for auditory hallucinations that helps schizophrenic patients feel external voices and easily distinguish internal hallucinatory voices; a deep-learning-based real-time speaker recognition system designed to be used in real-world settings and raise group intelligence; and an interactive web vocal application for mass musical collaboration.

Bio: Rebecca Kleinberger is a PhD candidate at the MIT Media Lab. Her work mixes science, engineering, design and art to explore ways to craft experiences for self-reflection and human connection. As part of the Opera of the Future group at the MIT Media Lab, she creates unique experiences to help people connect with themselves and with others. She accomplishes this using approaches that include virtual reality, rapid prototyping, deep learning, real-time digital signal processing, lasers, wearable technologies and robotics.

More:
– Portfolio: rebecca.media.mit.edu
– Personal page: media.mit.edu/people/rebklein/overview

13/12/2017

Guest Talk: Gonçalo Lopes from UCL / NeuroGears

When: Dec 13th, 2017, 11:00AM
Where: UPMC, towers 65/66 – room 304 (3rd floor)
Title: “What Neuroscience taught me about Robotics”

Abstract: I have been trying to build autonomous real-time intelligent systems for more than ten years. This took me from computer science academia into applied research, but the goal of synthesizing autonomous behavior adapted to the external environment remained elusive. An opportunity then presented itself to join a rising multidisciplinary PhD program in Systems Neuroscience. In this talk, I will present the two major outputs that resulted from this six-year experience. First, the engineering challenges of measuring brain physiology in freely moving animals resulted in the development of Bonsai, a visual programming language for the rapid prototyping of reactive systems. Second, I will discuss the results of our investigations into the role of motor cortex. We investigated the behavior of rats facing various motor problems, with or without motor cortex. Surprisingly, we found that rats lacking the entire motor cortex do not show any obvious movement impairments. Indeed, their performance is entirely on par with controls, even in dynamic environments, except when presented with unexpected motor challenges that demand fast and flexible readjustment of the entire motor system to a new situation — the kind of challenges where robots also dramatically fail. If given the opportunity to train and repeat the challenging situation over and over, even rats without motor cortex will be able to learn how to adapt and optimize their behavior to overcome the obstacles. It is the resilience and robustness to unexpected failures of control — when the actual motor problem needs to be framed all over again — that seems to be one of the primordial roles of mammalian motor cortex.

Bio: Gonçalo is a software engineer turned neuroscientist, fascinated by the behaviour of intelligent systems. With a background of applied research in virtual and augmented reality, parallel processing and autonomous agents, he joined the Champalimaud Neuroscience program in 2010, hoping to find better ways of building machines that learn by themselves. Gonçalo completed his PhD with Adam Kampff and Joe Paton, trying to understand the role of motor cortex in the control of movement in non-primate mammals. Along the way, he extended his experience making interactive systems to rodents and other animal models. Gonçalo developed the Bonsai visual programming language as a way to rapidly prototype interactive neuroscience experiments.

More:
– UCL Intelligent Systems Lab: kampff-lab.org
– His new company: neurogears.org

Video archive:

08/11/2017

Internships

See our summer internships

31/10/2017

UEIS 2017: New Trends in User Expertise and Interactive Systems

UEIS 2017 is the first International Symposium on User Expertise and Interactive Systems. UEIS is a place where researchers and practitioners from Human-Computer Interaction, Cognitive Science, and Experimental Psychology discuss the latest trends in command selection and user expertise on PCs, tablets, smartphones, smartwatches, tabletops and interactive walls.

UEIS 2017 will take place at Sorbonne Université, Paris (France) on October 31st, 2017.

website: www.hci.upmc.fr/hci/ueis17/


14/06/2017

Guest Talk: Ignacio Avellino from ExSitu, Inria Saclay

Title: Remote collaboration across large interactive spaces
Where: ISIR Lab, Room H20
When: June 14, 10:00AM

Abstract: In my thesis, I study communication for remote collaboration across wall-sized displays, a technology that allows two collaborators to move in a large space and interact using their bodies. These characteristics make traditional tools (e.g., a Skype call) fall short. Informed by structured observation, I built CamRay, a system that uses camera arrays to capture users’ faces as they move and displays their video across a remote tiled wall-sized display. I study how to support different types of communication in remote collaboration by leveraging collaborators’ movements. I perform two main studies that inform the design of future systems for collaboration across wall-sized displays.

Short bio: Ignacio Avellino is a PhD candidate at ExSitu, Inria Saclay, under the supervision of Michel Beaudouin-Lafon and Cédric Fleury. He was born in Uruguay, where he obtained his computer engineering degree and also worked in industry for one and a half years in software design and development. He completed a double-degree HCI master program jointly at RWTH Aachen and UNITN Trento, where he focused on using everyday objects as input devices. Currently he is in the last year of his PhD; his research focuses on enabling effective communication through technology.

11/06/2017

Pedagogy & Physicalization – Workshop at DIS’17

Designing Learning Activities around Physical Data Representations
The workshop is organized by Trevor Hogan, Uta Hinrichs, Yvonne Jansen, Samuel Huron, Pauline Gourlet, Eva Hornecker, and Bettina Nissen.
See the workshop website for more information.

31/05/2017

Guest Talk: Justin Mathew from INRIA

Title: Interaction techniques for 3D audio production and mixing
Where: ISIR Lab, Room H20
When: May 31, 4:00PM

Abstract: There has been significant interest in providing immersive listening experiences for a variety of applications, and recent improvements in audio reproduction have given 3D audio practitioners the capability to produce realistic and imaginative immersive auditory scenes. Even though technologies to reproduce 3D audio content are becoming readily available to consumers, producing and authoring this type of content is difficult due to the variety of rendering techniques, perceptual considerations, and limitations of available user interfaces. This presentation discusses work that investigated these issues from three different viewpoints: Ethnographic, Spatial Perception, and Interaction Design. From these three viewpoints, we identified design criteria required for 3D audio user interfaces and developed a framework of design spaces that can help designers better account for important dimensions in the design process, analyze functionalities, and significantly improve user interfaces for 3D audio production tools.

Short Bio: Justin D. Mathew is a PhD student in the Mjolnir team (Inria Lille), LIMSI-CNRS, and Université Paris-Saclay, working on visualization and interaction techniques for 3D audio production tools. He received his Bachelor’s degree in Computer and Electrical Engineering from the University of Rochester (Rochester, NY) and his Master’s in Music Technology from New York University (New York, NY). His main interests are the research and design of creative digital tools, 3D audio and graphics, and information retrieval applications.

17/03/2017

Journée Interaction Homme-Machine et Intelligence Artificielle

The Association Française pour l’Intelligence Artificielle (AFIA) and the Association Francophone d’Interaction Homme-Machine (AFIHM), with the support of the Labex SMART, organize the third joint workshop on human-computer interaction and artificial intelligence (journée IHM et IA).
When: March 17, 2017
Where: Amphi 25, UPMC (more)

Participation is free and open to all.

28/02/2017

Colloquium of Computer Science of UPMC Sorbonne Universités

Michel Beaudouin-Lafon (Université Paris-Sud – LRI) is the next invited speaker of the colloquium in computer science of UPMC Sorbonne University.

Title: Interfaces Homme-Machine : Unifier les Principes pour Diversifier l’Interaction
Where: Amphi 25, Université Pierre et Marie Curie (more)
When: Tuesday, February 28, 2017: 6:00PM
This event is free and open to the public. A cocktail will take place at 5:15 PM in front of Amphi 25.

23/02/2017

SAHMI project proposal accepted

Following the January 2017 call for collaborative projects within the SMART labex, our proposal for SAHMI – Situated Analysis of Human-Machine Interaction in Smart Environments – was accepted. The project involves researchers from ISIR and the Costech lab at UTC Compiègne, and receives research support from the INSEAD-Sorbonne University Behavioural Lab. The project will fund two post-doctoral researchers and one research engineer.

17/01/2017

Guest Talk: Paul Strohmeier from University of Copenhagen

Title: Coupling Motion and Perception for Haptic Interfaces
Where: Room H20, ISIR, UPMC
When: January 17, 2017, 11:00AM

Paul is a PhD candidate in the Department of Computer Science at the University of Copenhagen, working with Kasper Hornbæk.
His current research topics are eTextiles and haptic feedback, but his broader research interest is HCI that treats the physical human body as an active agent in the world.
He described his research interests in more detail in a recent Graduate Student Consortium at TEI.

06/01/2017

3 papers, 1 LBW, 1 demo, 1 T-Shirt Design Contest at ACM CHI 2017

25/10/2016

Two journal articles presented at IEEE VIS

Both articles will appear in the January issue of IEEE Transactions on Visualization and Computer Graphics, 23(1).