Interactions in the Air: Adding Further Depth to Interactive Tabletops

Subscribers: 344,000
Published on: 2016-10-11
Video Link: https://www.youtube.com/watch?v=93lDf-aAHaE
Duration: 4:09
Views: 437
Likes: 8


Although interactive surfaces have many unique and compelling qualities, the interactions they support are by their very nature bound to the display surface. In this paper we present a technique for users to seamlessly switch between interacting on the tabletop surface and above it. Our aim is to leverage the space above the surface in combination with the regular tabletop display to allow more intuitive manipulation of digital content in three dimensions. Our goal is to design a technique that closely resembles the ways we manipulate physical objects in the real world; conceptually, allowing virtual objects to be ‘picked up’ off the tabletop surface in order to manipulate their three-dimensional position or orientation. We chart the evolution of this technique, implemented on two rear projection-vision tabletops. Both use special projection screen materials to allow sensing at significant depths beyond the display. Existing and new computer vision techniques are used to sense hand gestures and postures above the tabletop, which can be used alongside more familiar multi-touch interactions. Interacting above the surface in this way opens up many interesting challenges. In particular, it breaks the direct interaction metaphor that most tabletops afford. We present a novel shadow-based technique to help alleviate this issue. We discuss the strengths and limitations of our technique based on our own observations and initial user feedback, and provide various insights from comparing and contrasting our tabletop implementations.
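The shadow-based feedback idea lends itself to a small illustration. The sketch below is not the paper's implementation (which senses hands through the projection screen of a rear projection-vision tabletop); it assumes instead a depth sensor that delivers a background-subtracted height map of the space above the table, and it uses OpenCV and NumPy with hypothetical threshold values. It splits the height map into touch and hover regions and renders a soft "shadow" of the hovering hand, blurrier and fainter the higher the hand is, which can be composited onto the tabletop display to restore some of the directness that above-surface interaction otherwise loses.

import numpy as np
import cv2

# Depth bands (millimetres above the tabletop surface); hypothetical values.
TOUCH_MAX_MM = 10    # anything closer than this counts as a touch
HOVER_MAX_MM = 300   # anything up to this height is treated as "in the air"

def segment_hand(depth_mm: np.ndarray):
    """Split a height map (mm above the surface) into touch and hover masks.
    Assumes the image is background-subtracted, so the empty table reads ~0."""
    touch = ((depth_mm > 0) & (depth_mm <= TOUCH_MAX_MM)).astype(np.uint8) * 255
    hover = ((depth_mm > TOUCH_MAX_MM) & (depth_mm <= HOVER_MAX_MM)).astype(np.uint8) * 255
    return touch, hover

def render_shadow(hover_mask: np.ndarray, depth_mm: np.ndarray) -> np.ndarray:
    """Render a soft shadow of the hovering hand for display on the tabletop.
    Higher hands cast blurrier, fainter shadows."""
    mean_height = depth_mm[hover_mask > 0].mean() if hover_mask.any() else 0.0
    blur = int(3 + (mean_height / HOVER_MAX_MM) * 30) | 1   # odd kernel size
    shadow = cv2.GaussianBlur(hover_mask, (blur, blur), 0)
    alpha = 1.0 - 0.7 * (mean_height / HOVER_MAX_MM)        # fade with height
    return (shadow * alpha).astype(np.uint8)

# Example with synthetic data: a 'hand' hovering 150 mm above the table.
depth = np.zeros((480, 640), dtype=np.float32)
depth[200:280, 300:380] = 150.0
touch_mask, hover_mask = segment_hand(depth)
shadow_img = render_shadow(hover_mask, depth)
cv2.imwrite("shadow.png", shadow_img)

Tying the blur radius and opacity to the hand's mean height mimics how a real shadow softens under diffuse light as an object rises, giving users a continuous visual cue of both where and how high their hand is above the surface.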




Other Videos By Microsoft Research


2016-10-11  Precise Selection Techniques for Multi-Touch Screens
2016-10-11  Robust Computer Vision-Based Detection of Pinching for One and Two-Handed Gesture Input
2016-10-11  Soap: a Pointing Device that Works in Mid-Air
2016-10-11  BlueTable: Connecting Wireless Mobile Devices on Interactive Surfaces Using Vision-Based Handshaking
2016-10-11  Depth-Sensing Video Cameras for 3D Tangible Tabletop Interaction
2016-10-11  SurfaceFusion: Unobtrusive Tracking of Everyday Objects in Tangible User Interfaces
2016-10-11  ShapeTouch: Leveraging Contact Shape on Interactive Surfaces
2016-10-11  Bringing Physics to the Surface
2016-10-11  Separability of Spatial Manipulations in Multi-touch Interfaces
2016-10-11  Augmenting Interactive Tables with Mice & Keyboards
2016-10-11  Interactions in the Air: Adding Further Depth to Interactive Tabletops
2016-10-11  Simulating Grasping Behavior on an Imaging Interactive Surface
2016-10-11  Design and Evaluation of Interaction Models for Multi Touch Mice
2016-10-11  Pen + Touch = New Tools
2016-10-11  Combining Multiple Depth Cameras and Projectors for Interactions On, Above, and Between Surfaces
2016-10-11  Using a Depth Camera as a Touch Sensor
2016-10-11  Data Miming: Inferring Spatial Object Descriptions from Human Gesture
2016-10-11  OmniTouch: Wearable Multitouch Interaction Everywhere
2016-10-11  Phone as a Pixel: Enabling Ad-hoc, Large-Scale Displays Using Mobile Devices
2016-10-07  Latin American Faculty Summit 2016 - Intelligent Devices
2016-10-07  Pixel based Interaction Techniques