The 7th Workshop on Eye Gaze in Intelligent Human Machine Interaction
at ACM ICMI 2014, Istanbul, Turkey
November 16th, 2014
Papers deadline: July 22nd, 2014 (extended)

Eye gaze is one of the most important aspects of understanding and modeling human-human communication, and it also holds great potential for improving human-machine and human-robot interaction. In human face-to-face communication, eye gaze plays an important role in floor and turn management, grounding, and engagement in conversation. In human-computer interaction research, social gaze (gaze directed at an interaction partner) has received increasing attention.
This is the seventh workshop on Eye Gaze in Intelligent Human Machine Interaction. In the previous workshops we have discussed a wide range of issues concerning eye gaze in multimodal human-machine/robot/agent interaction: technologies for sensing human attentional behaviors; attentional behaviors in problem solving and task performance; and multimodal communication, interpretation, and generation. In addition to these topics, this workshop will focus on eye gaze in multiparty interaction and on real-world applications, especially on mobile platforms, where remarkable progress has been achieved in recent years.
This workshop aims to continue along these lines and to explore the growing area of gaze in intelligent interaction research by bringing together researchers from the domains of human sensing, multimodal processing, humanoid interfaces, intelligent user interfaces, and communication science. We will exchange ideas to develop and improve methodologies for this research area, with the long-term goal of establishing a strong interdisciplinary research community in “attention-aware interactive systems”.

This workshop solicits papers that address topics including, but not limited to, the following:
* Technologies and methods for sensing and interpretation of human mental state by gaze in dyadic / multiparty interaction
- Sensing, tracking, and interpreting attentional behaviors using bodily motions and multimodal and biometric signals in intelligent user interfaces
- Utilizing gaze behaviors and other potential modalities to read the human mind and to estimate personality and conversational attitude

* Eye gaze in multimodal generation and behavior production in conversational humanoids
- Selecting appropriate eye-gaze behaviors for virtual agents and communication robots in interaction with humans
- Effective multimodal expressions by combining eye gaze and other modalities

* Technologies and methods for utilizing gaze behaviors in the real world
- Applications utilizing gaze behaviors on mobile platforms
- Utilizing gaze behaviors in navigation or other human-vehicle interaction applications

* Empirical studies of attentional behaviors
- Attentional behaviors in dyads and multiparty face-to-face and remote conversations and collaboration
- Implications of analyses of human attentional behaviors for HCI design

* Evaluation and design issues for using eye gaze in multimodal interfaces
- Evaluation methods for gaze-based multimodal interfaces
- Designs of user studies to measure user experience and to identify the real impact of eye gaze in multimodal interfaces

* Any other new directions for gaze in multimodal interaction
- Pervasive and ubiquitous interaction
- Future scenarios

There are two categories of paper submissions.
Long paper: The maximum length is 6 pages.
Short paper: The maximum length is 3 pages.
Each submission will be reviewed by three members of the program committee. Accepted papers will be published in the workshop proceedings, and the best papers will be selected for inclusion in a journal special issue. Submitted papers should conform to the ACM publication format. For templates and examples, follow the link:
Please submit your papers from the following site:

Paper submission due: July 22nd, 2014 (extended from July 15th, 2014)
Notification of acceptance: August 25th, 2014 (extended from August 22nd, 2014)
Camera-ready due: September 5th, 2014
Workshop date: November 16th, 2014

Organizers:
Hung-Hsuan Huang – Ritsumeikan University, Japan
Roman Bednarik – University of Eastern Finland, Finland
Kristiina Jokinen – University of Helsinki, Finland
Yukiko Nakano – Seikei University, Japan

Program committee:
Samer Al Moubayed – Disney Research, United States
Sean Andrist – University of Wisconsin, United States; Disney Research, United States
Brennon Bortz – Virginia Polytechnic Institute and State University, United States
Hendrik Buschmeier – Bielefeld University, Germany
Marianne Carvalho Bezerra Cavalcante – Universidade Federal da Paraíba, Brazil
Jari Kangas – University of Tampere, Finland
Casey Kennington – Bielefeld University, Germany
Yoshinori Kuno – Saitama University, Japan
Bert Oben – University of Leuven, Belgium
Catharine Oertel Genannt Bierbach – KTH-Royal Institute of Technology, Sweden
Giorgio Roffo – University of Verona, Italy
Cristina Segalin – University of Verona, Italy
Ingo Siegert – Otto von Guericke University Magdeburg, Germany
Ramanathan Subramanian – University of Trento, Italy
Enrique Sánchez-Lozano – University of Vigo, Spain
Soroush Vosoughi – Massachusetts Institute of Technology, United States
Hana Vrzakova – University of Eastern Finland, Finland