HapticBots: Distributed Encountered-type Haptics for VR with Multiple Shape-changing Mobile Robots

Subscribers: 344,000
Published on: 2021-10-10
Video Link: https://www.youtube.com/watch?v=s5NVJMYhfjk
Duration: 2:36
Views: 2,940
Likes: 101


HapticBots introduces a novel encountered-type haptic approach for Virtual Reality (VR) based on multiple tabletop-size shape-changing robots. These robots move across a tabletop and change their height and orientation to haptically render various surfaces and objects on demand. Compared to previous encountered-type haptic approaches such as shape displays or robotic arms, our proposed approach has advantages in deployability, scalability, and generalizability: the robots are easy to deploy due to their compact form factor, and their distributed nature lets them support multiple concurrent touch points over a large area. We propose and evaluate a novel set of interactions enabled by these robots, including: 1) rendering haptics for VR objects by providing just-in-time touch points on the user's hand, 2) simulating continuous surfaces through concurrent height and position changes, and 3) enabling the user to pick up and move VR objects through graspable proxy objects. Finally, we demonstrate HapticBots with various applications, including remote collaboration, education and training, design and 3D modeling, and gaming and entertainment.
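The "just-in-time touch point" idea above amounts to dispatching the nearest available robot to where the user's hand is about to touch a virtual surface, then extending its lift to match that surface's height. The following is a minimal sketch of that dispatch logic; the `Robot` class, the `dispatch` function, and all field names are hypothetical illustrations, not the paper's actual implementation.

```python
import math
from dataclasses import dataclass

@dataclass
class Robot:
    x: float       # tabletop position (cm)
    y: float
    height: float  # current extension of the shape-changing lift (cm)

def dispatch(robots, target_x, target_y, surface_height):
    """Pick the robot closest to the predicted touch point, drive it
    there, and extend its lift to the virtual surface's height."""
    best = min(robots, key=lambda r: math.hypot(r.x - target_x,
                                                r.y - target_y))
    best.x, best.y = target_x, target_y   # drive to the touch point
    best.height = surface_height          # match the VR surface height
    return best
```

Choosing the nearest robot minimizes travel time so the physical proxy arrives before the hand does; with multiple robots, the same loop runs per predicted touch point, which is what enables the concurrent touch points described above.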




Other Videos By Microsoft Research


2021-11-08  Acrylic, metal & a means of preparation: Imagining & living Black life beyond the surveillance state
2021-11-01  FastNeRF: High-Fidelity Neural Rendering at 200FPS [Extended]
2021-10-29  Full-Body Motion from a Single Head-Mounted Device: Generating SMPL Poses from Partial Observations
2021-10-29  Litmus Predictor
2021-10-25  Interview and Q&A with Jenny Sabin, Creator of the Ada Installation in Microsoft Building 99
2021-10-20  Microsoft Research 2021 Global PhD Fellowship Recipients
2021-10-19  Precision agriculture uses computer science to make farms more efficient and reduce climate change
2021-10-19  Working at Microsoft Research Cambridge
2021-10-14  Accelerating AI Innovation by Optimizing Infrastructure. With Dr. Muthian Sivathanu
2021-10-11  In-Air Device Formations for Cross-Device Interaction via Multiple Spatially-Aware Armatures
2021-10-10  HapticBots: Distributed Encountered-type Haptics for VR with Multiple Shape-changing Mobile Robots
2021-10-10  X-Rings: A Hand-mounted 360 Degree Shape Display for Grasping in Virtual Reality [UIST 2021]
2021-10-07  Convergence between CV and NLP Modeling and Learning
2021-10-05  Safe Real-World Autonomy in Uncertain and Unstructured Environments
2021-10-05  Women of Color and the Digital Labor of Repair
2021-10-01  Fake It Till You Make It: Face Analysis In The Wild Using Synthetic Data Alone
2021-09-23  ALIGN: Scaling Up Visual and Vision-Language Representation Learning With Noisy Text Supervision
2021-09-23  Zero-Shot Detection via Vision and Language Knowledge Distillation
2021-09-17  Three Explorations on Pre-Training: an Analysis, an Approach, and an Architecture
2021-09-16  Visual Recognition beyond Appearances, and its Robotic Applications
2021-09-16  A Truly Unbiased Model