Tangible review

An evaluation of reality tradeoffs in tangible and semaphoric interaction modalities
Katherine Rix
2015

Abstract

As more processing power becomes available for interaction-oriented computation, new interaction modalities are introduced and old alternatives revisited. We explore the benefits of combining complementary modalities in a multimodal user interface (MMUI). When designing a multimodal system, it is important to assess the appropriateness of each modality and its respective strengths and weaknesses. In this paper we review the reality-based interaction (RBI) framework as an evaluation tool for modalities that are grounded in everyday interactions. We then apply the framework's criteria to tangible and semaphoric modalities and consider the specific characteristics of each. Finally, we argue that there is enough potential synergy between these modalities to stimulate investigation into a semaphoric-tangible MMUI.

References

  1. Yacine Bellik, Issam Rebaï, Edyta Machrouh, Yasmin Barzaj, Christophe Jacquet, Gaëtan Pruvost, and Jean-Paul Sansonnet. 2009. Multimodal interaction within ambient environments: An exploratory study. Human-Computer Interaction (INTERACT 2009). Springer, Berlin, Heidelberg, 89-92.
  2. Richard A. Bolt. 1980. "Put-that-there": Voice and gesture at the graphics interface. Proceedings of the 7th annual conference on Computer graphics and interactive techniques (SIGGRAPH '80). ACM, New York, NY, USA, 262-270.
  3. Alexandre Gillet, Michel Sanner, Daniel Stoffler, David Goodsell, and Arthur Olson. 2004. Augmented reality with tangible auto-fabricated models for molecular biology applications. Proceedings of the conference on Visualization '04 (October 2004). IEEE Computer Society, Washington, DC, USA, 235-242.
  4. Juan David Hincapié-Ramos, Xiang Guo, Paymahn Moghadasian, and Pourang Irani. 2014. Consumed endurance: a metric to quantify arm fatigue of mid-air interactions. Proceedings of the 32nd annual ACM conference on Human factors in computing systems (CHI '14). ACM, New York, NY, USA, 1063-1072.
  5. Hiroshi Ishii. 2008. The tangible user interface and its evolution. Commun. ACM 51, 6 (June 2008), 32-36.
  6. Robert J.K. Jacob, Audrey Girouard, Leanne M. Hirshfield, Michael S. Horn, Orit Shaer, Erin Treacy Solovey, and Jamie Zigelbaum. 2008. Reality-based interaction: a framework for post-WIMP interfaces. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '08). ACM, New York, NY, USA, 201-210.
  7. Frederic Kerber, Pascal Lessel, and Antonio Krüger. 2015. Same-side Hand Interactions with Arm-placed Devices Using EMG. Proceedings of the 33rd Annual ACM Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA '15). ACM, New York, NY, USA, 1367-1372.
  8. Ga Won Kim, Ji Hyoun Lim, and Myung Hwan Yun. 2013. Identifying users' perceived values toward reality-based interaction, using semantic network analysis. The Japanese Journal of Ergonomics 49, Supplement (2013). Japan Ergonomics Society, S443-S446.
  9. Zhiyuan Lu, Xiang Chen, Qiang Li, Xu Zhang, and Ping Zhou. 2014. A hand gesture recognition framework and wearable gesture-based interaction prototype for mobile devices. IEEE Transactions on Human-Machine Systems 44, 2 (April 2014), 293-299.
  10. Ryan P. McMahan, Alexander Joel D. Alon, Shaimaa Lazem, Robert J. Beaton, David Machaj, Michael Schaefer, Mara G. Silva, Anamary Leal, Robert Hagan, and Doug A. Bowman. 2010. Evaluating natural interaction techniques in video games. 2010 IEEE Symposium on 3D User Interfaces (3DUI) (March 2010). IEEE, 11-14.
  11. Brad A. Myers. 1998. A brief history of human-computer interaction technology. interactions 5, 2 (March 1998), 44-54.
  12. Donald A. Norman. 2010. Natural user interfaces are not natural. interactions 17, 3 (May 2010), 6-10.
  13. Kenton O'Hara, Richard Harper, Helena Mentis, Abigail Sellen, and Alex Taylor. 2013. On the naturalness of touchless: Putting the "interaction" back into NUI. ACM Trans. Comput.-Hum. Interact. 20, 1, Article 5 (April 2013), 25 pages.
  14. Sharon Oviatt. 1999. Ten myths of multimodal interaction. Commun. ACM 42, 11 (November 1999), 74-81.
  15. Sharon Oviatt, Rachel Coulston, and Rebecca Lunsford. 2004. When do we interact multimodally?: cognitive load and multimodal communication patterns. Proceedings of the 6th international conference on Multimodal interfaces (ICMI '04). ACM, New York, NY, USA, 129-136.
  16. Stefan Profanter. 2014. Implementation and Evaluation of multimodal input/output channels for task-based industrial robot programming. Master's thesis. Technische Universität München. Retrieved March 28, 2015 from https://arxiv.org/pdf/1503.04967v1.pdf
  17. Orit Shaer, Consuelo Valdes, Sirui Liu, Kara Lu, Kimberley Chang, Wendy Xu, Traci L. Haddock, Swapnil Bhatia, Douglas Densmore, and Robert Kincaid. 2014. Designing reality-based interfaces for experiential bio-design. Personal and Ubiquitous Computing 18, 6 (2014), 1515-1532.
  18. Lucio Davide Spano, Antonio Cisternino, and Fabio Paternò. 2012. A compositional model for gesture definition. Human-Centered Software Engineering (HCSE 2012). Springer, Berlin, Heidelberg, 34-52.
  19. Priyamvada Tripathi and Sethuraman Panchanathan. 2008. Implication of multimodality in ambient interfaces. Proceedings of the 2008 Ambi-Sys workshop on Haptic user interfaces in ambient media systems (HAS '08). ICST (Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering), Brussels, Belgium, Article 10, 10 pages.
  20. John Underkoffler and Hiroshi Ishii. 1999. Urp: a luminous-tangible workbench for urban planning and design. Proceedings of the SIGCHI conference on Human Factors in Computing Systems (CHI '99). ACM, New York, NY, USA, 386-393.