Online proceedings with links to the ACM Digital Library are available.
Information for presenters:
- Presentation time (including Q&A): 25 minutes for full papers and 15 minutes for short papers.
- Input: VGA
- Screen: 4:3 aspect ratio
Applications & Technology (Oct. 15, 9:10-10:30)
Chair: Daisuke Iwai (Osaka University)
Thumbs-Up: 3D Spatial Thumb-Reachable Space for One-Handed Thumb Interaction on Smartphones
Khalad Hasan, Junhyeok Kim, David Ahlström, Pourang Irani
We define the Thumbs-Up space, the thumb-reachable in-air input space around a smartphone, and present a set of interaction techniques that use this space to resolve common limitations of one-handed smartphone input.
Moving Ahead with Peephole Pointing: Modelling Object Selection with Head-Worn Display Field of View Limitations
Barrett Ens, David Ahlström, Pourang Irani
Selecting virtual objects viewed through a head-worn display may require searching for objects that lie beyond the field of view. We model pointing performance when head motion is combined with direct pointing and raycasting.
Improving Interaction in HMD-Based Vehicle Simulators through Real Time Object Reconstruction
Michael Bottone, Kyle Johnsen
Traditionally, vehicle simulators use spatially-fixed displays to allow users to see vehicle controls. We present the design and usability evaluation of an HMD-based driving simulator that retains this feature.
Exploring Immersive Interfaces for Well Placement Optimization in Reservoir Models
Roberta Cabral Mota, Stephen Cartwright, Ehud Sharlin, Mario Costa Sousa
We contribute a multi-platform immersive application for reservoir engineers tasked with oil-and-gas well placement optimization. Our paper presents domain-expert reflections and our experiences designing an applied immersive application.
Input Device & Usability (Oct. 15, 10:50-12:10)
Chair: Wolfgang Stuerzlinger (Simon Fraser University)
A Metric for Short-Term Hand Comfort and Discomfort: Exploring Hand Posture Evaluation
Jonas Mayer, Nicholas Katzakis
To simplify the creation of expressive pointing-based interaction methods, we explore hand-posture comfort and discomfort and create a metric for their quantification.
Improving Gestural Interaction With Augmented Cursors
Ashley Dover, G Michael Poor, Darren Guinness, Alvin Jude
We examine two augmented cursors against the standard point cursor in gestural interaction. Both augmented cursors showed significant improvements.
Desktop Orbital Camera Motions using Rotational Head Movements
Thibaut Jacob, Gilles Bailly, Eric Lecolinet, Géry Casiez, Marc Teyssier
How can head movements serve to change the viewpoint in 3D applications to disambiguate the view? We study how to use head movements to perform orbital camera control and report on four user studies.
On Your Feet! Enhancing Vection in Leaning-Based Interfaces through Multisensory Stimuli
Ernst Kruijff, Alexander Marquardt, Christina Trepkowski, Robert Lindeman, Andre Hinkenjann, Jens Maiero, Bernhard E. Riecke
Using a custom-designed foot haptics system, we show that adding walking-related auditory cues (footstep sounds), visual cues (head-bobbing), and vibrotactile cues under participants' feet enhanced participants' self-motion sensation and involvement/presence.
Touch and Movement (Oct. 15, 13:10-14:30)
Chair: Ernst Kruijff (Bonn-Rhein-Sieg University of Applied Sciences)
A Non-grounded and Encountered-type Haptic Display Using a Drone
Kotaro Yamaguchi, Ginga Kato, Yoshihiro Kuroda, Kiyoshi Kiyokawa, Haruo Takemura
We propose a novel drone-based haptic device whose force-generation mechanism uses the drone's own airflow to realize a non-grounded, encountered-type haptic display.
Enhancement of Motion Sensation by Pulling Clothes
Erika Oishi, Masahiro Koge, Sugarragchaa Khurelbaatar, Hiroyuki Kajimoto
We propose a method of enhancing motion sensation by pulling clothing. Our system uses DC motors and force sensors to present traction force and cause skin deformation.
Impact of Motorized Projection Guidance on Spatial Memory
Hind Gacem, Gilles Bailly, James Eagan, Eric Lecolinet
Robotic guidance techniques that mimic human pointing improve not only localisation time but also spatial learning. Active control on its own is insufficient to improve memorization performance.
Inducing Body-Transfer Illusions in VR by Providing Brief Phases of Visual-Tactile Stimulation
Oscar Ariza, Jann Freiwald, Nadine Laage, Michaela Feist, Mariam Salloum, Gerd Bruder, Frank Steinicke
We evaluate a framework that includes a pair of gloves with vibrotactile feedback, able to reproduce body-transfer illusions in VR by providing automatic tactile stimuli instead of manually synchronized stimulation.
Interaction I (Oct. 16, 9:00-10:40)
Chair: Barrett Ens (University of Manitoba)
Interacting with Maps on Optical Head-Mounted Displays
David Rudi, Ioannis Giannopoulos, Peter Kiefer, Christian Peier, Martin Raubal
This paper explores the design space for interacting with maps on Optical (See-Through) Head-Mounted Displays and evaluates the interactions in an experiment with 31 participants.
Touching the Sphere: Leveraging Joint-Centered Kinespheres for Spatial User Interaction
Paul Lubos, Gerd Bruder, Oscar Ariza, Frank Steinicke
Joint-Centered User Interfaces are a user interface concept aiming to allow efficient, long-term use of spatial user interfaces for productive environments.
Optimising Free Hand Selection in Large Displays by Adapting to User's Physical Movements
Xiaolong Lou, Andol X. Li, Ren Peng, Preben Hansen
Users often change their positions during free-hand interaction with large displays. This work introduces a technique that adapts to the user's physical movements to improve free-hand selection performance.
Locomotion in Virtual Reality for Individuals with Autism Spectrum Disorder
Evren Bozgeyikli, Andrew Raij, Srinivas Katkoori, Rajiv Dubey
In this study, eight locomotion techniques were implemented in an immersive virtual reality environment and evaluated with high functioning individuals with ASD.
Interaction II (Oct. 16, 11:00-12:30)
Chair: Kyle Johnsen (University of Georgia)
SHIFT-Sliding and DEPTH-POP for 3D Positioning
Junwei Sun, Wolfgang Stuerzlinger, Dmitri Shuralyov
For mouse-based 3D object sliding, Shift-Sliding enables users to make objects float or collide. Depth-Pop maps mouse-wheel actions to all positions along the mouse ray where the object is visible, in contact, and collision-free.
Preference Between Allocentric and Egocentric 3D Manipulation in a Locally Coupled Configuration
Paul Issartel, Lonni Besançon, Florimond Guéniat, Tobias Isenberg, Mehdi Ammi
We study user preference between two opposite mappings for 3D object manipulation based on mobile device motion. Our results provide guidelines to select the most compatible mappings in this configuration.
Providing Assistance for Orienting 3D Objects Using Monocular Eyewear
Mengu Sukan, Carmine Elvezio, Steven Feiner, Barbara Tversky
We have designed and implemented a novel visualization approach and three additional visualizations representing different paradigms for guiding unconstrained manual 3DOF rotation, targeting monoscopic HWDs.
Combining Ring Input with Hand Tracking for Precise, Natural Interaction with Spatial Analytic Interfaces
Barrett Ens, Ahmad Byagowi, Teng Han, Juan David Hincapié-Ramos, Pourang Irani
To support effective interaction for everyday analytic tasks on head-worn displays, we combine hand tracking data from a depth camera with input from a ring device, and demonstrate interactions with virtual windows and contents.
Demo (Oct. 15, 14:50 Fast Forwards; 15:30-17:30 Demos)
Sharpen Your Carving Skills in Mixed Reality Space
Maho Kawagoe, Mai Otsuki, Fumihisa Shibata, Asako Kimura
Stickie: Mobile Device Supported Spatial Collaborations
Jaskirat S. Randhawa
Shift-Sliding and Depth-Pop for 3D Positioning
Junwei Sun, Wolfgang Stuerzlinger, Dmitri Shuralyov
Developing Interoperable Experiences with OpenUIX
Mikel Salazar, Carlos Laorden
TickTockRay Demo: Smartwatch Raycasting for Mobile HMDs
Daniel Kharlamov, Krzysztof Pietroszek, Liudmila Tahai
Poster (Oct. 15, 14:50 Fast Forwards; 15:30-17:30 Posters)
Mushi: A Generative Art Canvas for Kinect Based Tracking
Jennifer Weiler, Sudarshan Seshasayee
AR Tabletop Interface Using an Optical See-Through HMD
Nozomi Sugiura, Takashi Komuro
Coexistent Space: Collaborative Interaction in Shared 3D Space
Ji-Yong Lee, Joung-Huem Kwon, Sang-Hun Nam, Joong-Jae Lee, Bum-Jae You
Development of a Toolkit for Creating Kinetic Garments Based on Smart Hair Technology
Mage Xue, Masaru Ohkubo, Miki Yamamura, Hiroko Uchiyama, Takuya Nojima, Yael Friedman
Large Scale Interactive AR Display Based on a Projector-Camera System
Chun Xie, Yoshinari Kameda, Kenji Suzuki, Itaru Kitahara
TickTockRay: Smartwatch Raycasting for Mobile HMDs
Krzysztof Pietroszek, Daniel Kharlamov
3D Camera Pose History Visualization
Mayra Donaji Barrera Machuca, Wolfgang Stuerzlinger
Social Spatial Mashup for Place and Object-based Information Sharing
Choonsung Shin, Youngmin Kim, Jisoo Hong, Sunghee Hong, Hoonjong Kang
Real-time Sign Language Recognition with Guided Deep Convolutional Neural Networks
Zhengzhe Liu, Fuyang Huang, Wai Lan Tang, Felix Yim Binh Sze, Jing Qin, Xiaogang Wang, Qiang Xu
Window-Shaping: 3D Design Ideation in Mixed Reality
Ke Huo, Vinayak, Karthik Ramani
KnowWhat: Mid Field Sensemaking for the Visually Impaired
Sujeath Pareddy, Abhay Agarwal, Manohar Swaminathan
Katsukazan: An Intuitive iOS App for Informing People About Volcanic Activity in Japan
Paul Haimes, Tetsuaki Baba
Empirical Method for Detecting Pointing Gestures in Recorded Lectures
Xiaojie Zha, Marie-luce Bourguet
Arm-Hidden Private Area on an Interactive Tabletop System
Kai Li, Asako Kimura, Fumihisa Shibata
AnyOrbit: Fluid 6DOF spatial navigation of virtual environments using orbital motion
Benjamin Outram, Yun Suen Pai, Kevin Fan, Kouta Minamizawa, Kai Kunze
KnowHow: Contextual Audio-Assistance for the Visually Impaired in Performing Everyday Tasks
Abhay Agarwal, Sujeath Pareddy, Swaminathan Manohar
Effect of using Walk-In-Place Interface for Panoramic Video Play in VR
Azeem Syed Muhammad, Sang Chul Ahn, Jae-In Hwang
Using Area Learning in Spatially-Aware Ubiquitous Environments
Edwin Chan, Yuxi Wang, Teddy Seyed, Frank Maurer
MocaBit 2.0: A Gamified System to Examine Behavioral Patterns through Granger Causality
Sanghyun Yoo, Sudarshan Seshasayee
Fast and Accurate 3D Selection using Proxy with Spatial Relationship for Immersive Virtual Environments
Jun Lee, Ji-Hyung Park, JuYoung Oh, JoongHo Leek
Haptic Exploration of Remote Environments with Gesture-based Collaborative Guidance
Seokyeol Kim, Jinah Park
Subliminal Reorientation and Repositioning in Virtual Reality During Eye Blinks
Eike Langbehn, Gerd Bruder, Frank Steinicke
Multimodal Embodied Interface for Levitation and Navigation in 3D Space
Monica Perusquia-Hernandez, Tiago Martins, Takahisa Enomoto, Mai Otsuki, Hiroo Iwata, Kenji Suzuki
Acquario: A Tangible Spatially-Aware Tool for Information Interaction and Visualization
Sydney Pratte, Teddy Seyed, Frank Maurer
Grasp, Grab or Pinch? Identifying User Preference for In-Air Gestural Manipulation
Alvin Jude, G. Michael Poor, Darren Guinness
Biometric Authentication Using the Motion of a Hand
Satoru Imura, Hiroshi Hosobe