T13: Gesture Elicitation

Saturday, 27 July 2019, 13:30 – 17:30

 

Jean Vanderdonckt

Université catholique de Louvain, Belgium

Radu-Daniel Vatavu

Ștefan cel Mare University of Suceava, Romania

 

Objectives:

This tutorial will introduce participants to a key design stage of gesture interaction: how to conduct a gesture elicitation study. Such a study consists of prompting participants (e.g., future users of a system) to choose their preferred gestures for carrying out a series of tasks through a gesture user interface and to come up with a consensus set of preferred gestures through a structured protocol.

After attending this tutorial, delegates will be able:

  • To understand and manipulate the key concepts of gesture user interfaces and their influence on the user experience of graphical user interfaces.
  • To conduct a gesture elicitation study by themselves, based on provided resources (e.g., spreadsheets, questionnaires, forms, software).
  • To access software resources existing for gesture elicitation, recognition, analysis, and development.

 

Content and benefits:

A Gesture Elicitation Study (GES) is decomposed into six steps:

  1. Define a study: expresses parameters and specifications required to conduct a GES, such as the set of tasks (e.g., insert, search, compare), their referents (e.g., images or videos), the context of use, and the various data to be collected.
  2. Conduct a study: runs the previously defined GES, which results in a set of elicited gestures along with their associated data (e.g., goodness-of-fit).
  3. Classify gestures: classifies the previously elicited gestures according to a chosen classification method and its criteria.
  4. Measure gestures: computes any measure of interest (e.g., quality, agreement) on previously classified gestures.
  5. Discuss gestures: induces a discussion among the various stakeholders to come up with a consensus on elicited gestures (e.g., discard, keep).
  6. Export gestures: converts consensus gestures to integrate them into a gesture recognizer and/or any module for supporting gesture recognition.

The tutorial adopts a step-by-step approach: each step is first explained, then each delegate runs that step themselves for a GES of their choice (e.g., gestures for image browsing on a smartphone).
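As an illustration of the "Measure gestures" step, a common measure of interest is the agreement rate AR(r) for a referent r, proposed by Vatavu and Wobbrock. The sketch below assumes that identical gesture proposals have already been grouped under a shared label; the referent ("zoom in") and the labels are hypothetical examples, not data from the tutorial.

```python
from collections import Counter

def agreement_rate(proposals):
    """Agreement rate AR(r) for one referent r.

    `proposals` is a list of gesture labels, one per participant,
    where identical proposals carry the same label.
    """
    n = len(proposals)
    if n < 2:
        return 0.0
    # Sum |Pi| * (|Pi| - 1) over each group Pi of identical proposals,
    # normalized by |P| * (|P| - 1) for the full proposal set P.
    return sum(c * (c - 1) for c in Counter(proposals).values()) / (n * (n - 1))

# Hypothetical example: 20 participants proposing gestures for "zoom in".
proposals = ["pinch-out"] * 12 + ["double-tap"] * 5 + ["spread-hands"] * 3
print(round(agreement_rate(proposals), 3))  # prints 0.416
```

AR(r) ranges from 0 (every participant proposed a different gesture) to 1 (all participants agreed on the same gesture); such per-referent scores feed the consensus discussion in the "Discuss gestures" step.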

 

Target Audience:

Designers (primarily), developers, experimenters, researchers, investigators, and practitioners interested in using gestures in user interfaces.

Bio Sketches of Presenters:

Jean Vanderdonckt, Ph.D., works in Human-Computer Interaction with focus on Engineering Interactive Computing Systems (EICS). He has developed some gesture recognition algorithms, such as LVS (https://link.springer.com/chapter/10.1007%2F978-3-540-74796-3_14) and !FTL (https://dl.acm.org/citation.cfm?doid=3242969.3243032).
Jean Vanderdonckt is a Full Professor at Université catholique de Louvain (Belgium), where he teaches Human-Computer Interaction, Information Systems, Model-based User Interface Design and Development. He is an ACM Distinguished Speaker and ACM Distinguished Scientist.

Radu-Daniel Vatavu, Ph.D., works in Human-Computer Interaction with a focus on designing novel interactions and gesture user interfaces. He has developed several gesture recognition algorithms (the most recent being $Q), gesture analysis tools (e.g., GHoST), and gesture methodologies such as gesture elicitation studies. Radu-Daniel Vatavu is a Professor at the Computer Science Department of the University of Suceava, where he teaches Algorithms Design, Pattern Recognition, Advanced Programming, and Advanced Artificial Intelligence.