Interactivity Co-Chairs:
Jan Borchers, RWTH Aachen University, Germany
Mitchell Gass, uLab | PDA, USA
Eric Lee, RWTH Aachen University, Germany

Session 1 - Touchy: Tangible Interfaces
Session Chair: Barry Brown, University of Glasgow, UK
Tuesday, 11:30-13:00, Room A105-A106

The Virtual Raft Project: A Mobile Interface for Interacting with Communities of Autonomous Characters
Bill Tomlinson, Man Lok Yau, Jessica O’Connell, Ksatria Williams, So Yamaoka, University of California Irvine, USA

This paper presents a novel and intuitive paradigm for interacting with autonomous animated characters. The paradigm uses a mobile device to let people transport characters among different virtual environments. Its central metaphor is that, for virtual characters, virtual space is like land and real space is like water. The tangible interface described here serves as a virtual raft with which people may carry characters across a sea of real space from one virtual island to another. By increasing participants' physical engagement with the autonomous characters, this interaction paradigm contributes to the believability of those characters.

GelForce: A Traction Field Tactile Interface
Kevin Vlack, Terukazu Mizota, Naoki Kawakami, Kazuto Kamiyama, Hiroyuki Kajimoto, Susumu Tachi, University of Tokyo, Japan

We propose a tactile sensor based on computer vision that measures a dense traction field, that is, a distribution of 3D force vectors over a 2D surface, which humans likewise sense through a dense array of mechanoreceptors in the skin.
The proposed "GelForce" tactile sensor has an elegant and organic design and can compute large and structurally rich traction fields in real time. We show how this sensor can serve as a powerful and intuitive computer interface for both existing and emerging desktop applications.

Magic Cubes for Social and Physical Family Entertainment
ZhiYing Zhou, Adrian David Cheok, Yu Li, National University of Singapore, Singapore, and Hirokazu Kato, Osaka University, Japan

In most present-day digital family entertainment systems, physical and social interactions are constrained and natural interaction is lost. Magic Cubes strives to bring computer storytelling, the doll's house, and the board game back into reality so that children can interact socially and physically as they did in the old days. Magic Cubes is a novel augmented reality system that explores the use of cubes to interact with a three-dimensional virtual fantasy world. It encourages discussion, idea exchange, collaboration, and social and physical interaction among family members.

Session 2 - Spaced Out: 3D Interaction Techniques
Session Chair: Jan Borchers, RWTH Aachen University, Germany
Tuesday, 14:30-16:00, Room A105-A106

Smart Laser-Scanner for 3D Human-Machine Interface
Alvaro Cassinelli, Stephane Perrin, Masatoshi Ishikawa, University of Tokyo, Japan

The problem of tracking hands and fingers in natural scenes has received much attention using passive-acquisition vision systems and computationally intense image processing. We are currently studying a simple active tracking system using a laser diode, steering mirrors, and a single non-imaging detector that is capable of acquiring three-dimensional coordinates in real time without the need for any image processing at all.
Essentially, it is a smart rangefinder scanner that, instead of continuously scanning over the full field of view, restricts its scanning area, based on real-time analysis of the backscattered signal, to a very narrow window precisely the size of the target. The complexity of the whole setup is equivalent to that of a portable laser-based barcode reader, making the system compatible with wearable computers.

TRIBA: A Cable Television Retrieval & Awareness System
Michael Tseng, Jon Kolko, Savannah College of Art and Design, USA

This paper discusses the design of a physical and digital system intended to allow easy manipulation of, and interaction with, the tremendous number of options present in advanced multimedia devices such as digital cable television. As user demand for access to large quantities of data increases, and cable companies offer more choices to their audiences, traditional content selection techniques become less useful and much more difficult to understand. TRIBA is the result of a ten-week research and design exploration investigating how users can easily manipulate and comprehend tremendously large data sets. The findings of this research indicate a need for interactive agents that bridge the gap between users and their goals. As new technology is created and consumer electronics become more integrated into our lives, devices speak a language that users are expected to learn.

Magic Land: Live 3D Human Capture Mixed Reality Interactive System
Tran Cong Thien Qui, Ta Huynh Duy Nguyen, Asitha Mallawaarachchi, Ke Xu, Wei Liu, Shang Ping Lee, ZhiYing Zhou, Sze Lee Teo, Hui Siang Teo, Le Nam Thang, Yu Li, Adrian David Cheok, Christopher Lindinger, Gernot Ziegler, Roland Haring, Wolfgang Ziegle, Markus Weilguny, National University of Singapore, Singapore, and Hirokazu Kato, Osaka University, Japan

"Magic Land" is a cross-section of art and technology.
It not only demonstrates the latest advances in human-computer interaction and human-human communication (mixed reality, tangible interaction, and 3D live human capture technology) but also defines new approaches to dealing with live mixed reality content for artists of any discipline. In this system, the user is captured by cameras from many angles, and her live 3D avatar is created to be confronted with 3D computer-generated virtual animations. The avatars and virtual objects can interact with each other in a virtual scenery in the mixed reality context, and users can tangibly interact with these characters using their own hands.

Session 3 - Light & Easy: Future Interfaces
Session Chair: Mitchell Gass, uLab | PDA, USA
Tuesday, 16:30-18:00, Room A105-A106

Curvature Dial: Eyes-Free Parameter Entry for GUIs
m.c. schraefel, Graham Smith, University of Southampton, UK, and Patrick Baudisch, Microsoft Research, USA

In this demonstration, we introduce "curvature dial," a technique designed to extend gesture-based interactions like FlowMenus with eyes-free parameter entry. FlowMenus let users enter numerical parameters with "dialing" strokes surrounding the center of a radial menu. This centering requires users to keep their eyes on the menu in order to align the pen with its center before initiating a gesture. Curvature dial instead tracks the curvature of the path created by the pen: since curvature is location-independent, curvature dialing does not require users to keep track of the menu center and is therefore eyes-free. We demonstrate curvature dial with a simple application that allows users to scroll through a document eyes-free.

Intelligent Lighting for a Better Gaming Experience
Magy Seif El-Nasr, Joseph Zupko, Penn State University, USA, and Keith Miron, University of Southern California, USA

Lighting assumes many aesthetic and communicative functions in game environments that affect attention, immersion, visibility, and emotions.
Game environments are dynamic and highly unpredictable; lighting such experiences to achieve desired visual goals is a very challenging problem. Current lighting methods rely on static manual techniques, which require designers to anticipate and account for all possible situations and user actions. Instead, we have developed ELE (Expressive Lighting Engine), an intelligent lighting system that automatically sets and adjusts scene lighting in real time to achieve desired aesthetic and communicative goals. We discuss ELE and its utility in dynamically manipulating the lighting in a scene to direct attention, stimulate tension, and maintain visual continuity. ELE has been integrated within Unreal Tournament 2003. The videos show a demonstration of a first-person shooter game developed using the Unreal 2.0 engine, in which ELE was configured to dynamically stimulate tension while maintaining other visual goals.

Session 4 - Can You Hear Me Now? Audio Interfaces
Session Chair: Eric Lee, RWTH Aachen University, Germany
Wednesday, 09:00-10:30, Room A105-A106

SonicTexting
Michal Rinott, Interaction Design Institute Ivrea, Italy

SonicTexting is a system for inputting text ("texting") using gestures and sound. As in musical instruments and everyday mechanical objects, sound in SonicTexting is synchronous and responsive to actions. SonicTexting explores people's hand-ear coordination and demonstrates the use of informative digital sound. It proposes that through touch and sound, a functional activity like text entry can become an experience on the borders between performing a task, playing an instrument, and playing a game.

In the Mixxx: Novel Digital DJ Interfaces
Tue Haste Andersen, University of Copenhagen, Denmark

We present an interactive system, Mixxx, for live DJ'ing using digital sound files.
The design of the system is approached from two directions: through Contextual Design, using contextual interviews and video recordings, and through open source development, where feedback and ideas are generated by developers and users in the open source community. Our contextual interviews show that DJs spend a significant amount of their time tracking and synchronizing songs with the traditional setup of turntables or CD players. By making beat information an integrated part of our DJ software Mixxx, synchronization is done automatically and DJs can devote more time to other parts of the mix. We provide an intuitive interface for mixing with beat information that allows the same level of flexibility as the traditional setup but facilitates new creative ways of mixing.
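The automatic synchronization the Mixxx abstract describes reduces to two adjustments once beat information is available: matching the track's tempo to the master deck, and nudging playback so the beat phases line up. A minimal illustrative sketch of that arithmetic (hypothetical helper functions for exposition, not Mixxx's actual code):

```python
def sync_rate(master_bpm: float, track_bpm: float) -> float:
    """Playback-rate multiplier that matches a track's tempo to the master deck.

    Illustrative only: a 120 BPM track synced to a 125 BPM master
    plays back at rate 125/120.
    """
    if master_bpm <= 0 or track_bpm <= 0:
        raise ValueError("BPM values must be positive")
    return master_bpm / track_bpm


def phase_nudge(master_phase: float, track_phase: float,
                beat_period: float) -> float:
    """Seconds to shift the track so its beats align with the master's.

    Each phase is the time in seconds since that deck's last beat;
    the result is wrapped into [-beat_period/2, beat_period/2] so the
    track always takes the shorter nudge toward alignment.
    """
    diff = (master_phase - track_phase) % beat_period
    if diff > beat_period / 2:
        diff -= beat_period
    return diff
```

For example, with a 125 BPM master and a 120 BPM track, `sync_rate` returns 125/120 (about 1.042); if the master's last beat was 0.1 s ago and the track's was 0.3 s ago with a 0.5 s beat period, `phase_nudge` returns -0.2, i.e. the track is pulled back 0.2 s onto the master's grid.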