How Pairs Interact Over a Multimodal Digital Table
Tse, E., Shen, C., Greenberg, S. and Forlines, C. (2007)
How Pairs Interact Over a Multimodal Digital Table. In Proc. ACM CHI Conference on Human Factors in Computing Systems. ACM Press, pages 215-218, April 27 - May 3. Tech Note.
View Publication and Related Materials
PDF Paper: 2007-HowPairsInteract.CHI.pdf
Abstract
Co-located collaborators often work over physical tabletops using combinations of expressive hand gestures and verbal utterances. This paper provides the first observations of how pairs of people communicated and interacted in a multimodal digital table environment built atop existing single-user applications. We contribute to the understanding of these environments in two ways. First, we saw that speech and gesture commands served double duty: as commands to the computer, and as implicit communication to others. Second, in spite of limitations imposed by the underlying single-user application, people were able to work together simultaneously, and they performed interleaving acts: the graceful mixing of inter-person speech and gesture actions as commands to the system. This work contributes to a detailed understanding of multi-user, multimodal digital table interaction.
Bibtex entry
@INPROCEEDINGS { 2007-HowPairsInteract.CHI,
  CLASS     = { CONFARTICLE },
  AUTHOR    = { Tse, E. and Shen, C. and Greenberg, S. and Forlines, C. },
  TITLE     = { How Pairs Interact Over a Multimodal Digital Table },
  BOOKTITLE = { Proc. ACM CHI Conference on Human Factors in Computing Systems },
  PAGES     = { 215--218 },
  YEAR      = { 2007 },
  MONTH     = { April 27 - May 3 },
  PUBLISHER = { ACM Press },
  NOTE      = { Tech Note },
}