Mise-Unseen: Using Eye Tracking to Hide Virtual Reality Scene Changes in Plain Sight

Subscribers: 344,000
Published on: 2019-10-21
Video Link: https://www.youtube.com/watch?v=oNSt92DRkJA
Duration: 3:18
Views: 5,439
Likes: 65

Creating or arranging objects at runtime is needed in many virtual reality applications, but such changes are noticed when they occur inside the user's field of view. We present Mise-Unseen, a software system that applies such scene changes covertly inside the user's field of view. Mise-Unseen leverages gaze tracking to create models of user attention, intention, and spatial memory to determine if and when to inject a change. We present seven applications of Mise-Unseen to unnoticeably modify the scene within view (i) to hide that task difficulty is adapted to the user, (ii) to adapt the experience to the user's preferences, (iii) to time the use of low fidelity effects, (iv) to detect user choice for passive haptics even when lacking physical props, (v) to sustain physical locomotion despite a lack of physical space, (vi) to reduce motion sickness during virtual locomotion, and (vii) to verify user understanding during story progression. We evaluated Mise-Unseen and our applications in a user study with 15 participants and found that while gaze data indeed supports obfuscating changes inside the field of view, a change is best rendered unnoticeable by using gaze in combination with common masking techniques.

Learn more about Mise-Unseen and the latest in VR at Microsoft Research: https://www.microsoft.com/en-us/research/blog/a-new-era-of-spatial-computing-brings-fresh-challenges-and-solutions-to-vr/




Other Videos By Microsoft Research


2019-10-24  Microsoft PhD Summit 2019: Nathalie Riche [Short Talk]
2019-10-24  Microsoft PhD Summit 2019: Ken Hinckley [Short Talk]
2019-10-24  'The Global AI Supercomputer' by Donald Kossmann at Microsoft PhD Summit 2019 [Keynote]
2019-10-23  Learning to Map Natural Language to General Purpose Source Code
2019-10-23  Hand and User Detection with Multiple Users on Large Displays
2019-10-23  Machine teaching, LUIS and the democratization of custom AI with Dr. Riham Mansour [Podcast]
2019-10-21  Generalization in Reinforcement Learning with Selective Noise Injection
2019-10-21  DreamWalker: Substituting Real-World Walking Experiences with a Virtual Reality
2019-10-21  Learning Structured Models for Safe Robot Control
2019-10-21  RDMA: Provably More Powerful Communication
2019-10-21  Mise-Unseen: Using Eye Tracking to Hide Virtual Reality Scene Changes in Plain Sight
2019-10-21  CapstanCrunch: A Haptic VR Controller with User-supplied Force Feedback
2019-10-18  Social Computing for Social Good in Low-Resource Environments
2019-10-16  News from the front in the post-quantum crypto wars with Dr. Craig Costello [Podcast]
2019-10-14  Grounding Natural Language for Embodied Agents
2019-10-14  Towards Using Batch Reinforcement Learning to Identify Treatment Options in Healthcare
2019-10-14  Can quantum mechanics help us learn models of classical systems?
2019-10-14  Reinforcement Learning From Small Data in Feature Space
2019-10-14  Reward Machines: Structuring Reward Function Specifications and Reducing Sample Complexity...
2019-10-14  Safe and Fair Reinforcement Learning
2019-10-14  Scalable and Robust Multi-Agent Reinforcement Learning



Tags:
Mise-Unseen
Eye Tracking
Virtual Reality
VR
Virtual Reality Scene Changes
UIST
UIST 2019
HCI
human-computer interaction
Andy Wilson
Eyal Ofek
Mar Gonzalez Franco
Christian Holz
Microsoft Research
MSR