Escapement: A Tool for Interactive Prototyping with Video via Sensor-Mediated Abstraction of Time

Channel: Microsoft Research
Subscribers: 343,000
Published: 2023-05-03
Video link: https://www.youtube.com/watch?v=hjeBMj7nUMw
Duration: 4:06
Views: 680
Likes: 23
This CHI 2023 research video shows the Escapement interactive video-prototyping tool in action.

Escapement is a video prototyping tool built around a powerful new concept for prototyping screen-based interfaces: flexibly mapping sensor values to dynamic playback control of videos. This recasts the time dimension of video mock-ups as sensor-mediated interaction.
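The core mapping can be illustrated with a minimal sketch (an assumption for illustration, not Escapement's actual implementation): a raw sensor reading is normalized and linearly mapped onto a frame index, so moving the sensor scrubs the mock-up video's timeline.

```python
def sensor_to_frame(sensor_value, sensor_min, sensor_max, total_frames):
    """Linearly map a raw sensor reading onto a video frame index.

    Out-of-range readings are clamped so the playhead stays inside the clip.
    (Hypothetical helper for illustration; names are not from the paper.)
    """
    t = (sensor_value - sensor_min) / (sensor_max - sensor_min)  # normalize to [0, 1]
    t = max(0.0, min(1.0, t))                                    # clamp
    return round(t * (total_frames - 1))

# Example: a touch x-coordinate on a 1080-px-wide screen driving a 240-frame clip.
frame = sensor_to_frame(540, 0, 1080, 240)
```

Any scalar sensor (touch position, hinge angle, proximity) can drive the same mapping; only the min/max calibration changes.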

This abstraction of time as interaction, which we dub video-escapement prototyping, empowers designers to rapidly explore and viscerally experience direct touch or sensor-mediated interactions across one or more device displays. Our system affords cross-device and bidirectional remote (tele-present) experiences via cloud-based state sharing across multiple devices. This makes Escapement especially potent for exploring multi-device, dual-screen, or remote-work interactions for screen-based applications.
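A rough sketch of how cross-device state sharing could work (an assumed design for illustration, with an in-memory stand-in for the cloud channel; not Escapement's actual protocol): devices publish and read a single normalized playhead position, and each device maps it onto its own clip length.

```python
class SharedPlayhead:
    """In-memory stand-in for a cloud-backed shared state channel.

    All names here are hypothetical; the real system syncs state via the cloud.
    """

    def __init__(self):
        self.position = 0.0  # normalized playhead position, 0.0-1.0

    def publish(self, position):
        # Clamp so every subscriber sees a valid playhead position.
        self.position = max(0.0, min(1.0, position))

    def local_frame(self, total_frames):
        # Each device renders the shared logical time with its own video.
        return round(self.position * (total_frames - 1))

channel = SharedPlayhead()
channel.publish(0.5)                  # device A scrubs to the midpoint
frame_b = channel.local_frame(240)    # device B holds a 240-frame clip
frame_c = channel.local_frame(600)    # device C holds a 600-frame clip
```

Because only the normalized position is shared, devices with different clips, resolutions, or frame counts stay in lockstep at the same logical moment.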

We introduce the core concept of sensor-mediated abstraction of time for quickly generating video-based interactive prototypes of screen-based applications, share the results of observations of long-term usage of video-escapement techniques with experienced interaction designers, and articulate design choices for supporting a reflective, iterative, and open-ended creative design process.

See more at https://www.microsoft.com/en-us/research/video/escapement-a-tool-for-interactive-prototyping-with-video-via-sensor-mediated-abstraction-of-time/




Other Videos By Microsoft Research


2023-09-22 - Final intern talk: Improving Frechet Audio Distance for Generative Music Evaluation
2023-09-15 - Microsoft Research India - who we are.
2023-08-09 - Keypoint Detection for Measuring Body Size of Giraffes: Enhancing Accuracy and Precision
2023-08-04 - Scalable and Efficient AI: From Supercomputers to Smartphones
2023-07-18 - AI for Precision Health
2023-07-07 - Multilingual Evaluation of Generative AI (MEGA)
2023-07-07 - The Whole Truth and Nothing But the Truth: Faithful and Controllable Dialogue Response Generation...
2023-07-07 - Privacy-Preserving Domain Adaptation of Semantic Parsers
2023-05-30 - Microsoft’s Holoportation™ Communications Technology: Facilitating 3D Telemedicine
2023-05-05 - Human-Centered AI: Ensuring Human Control While Increasing Automation
2023-05-03 - Escapement: A Tool for Interactive Prototyping with Video via Sensor-Mediated Abstraction of Time
2023-05-03 - AdHocProx: Sensing Mobile, Ad-Hoc Collaborative Device Formations using Dual Ultra-Wideband Radios
2023-05-01 - MARI Grand Seminar - Large Language Models and Low Resource Languages
2023-04-27 - Innovating through uncertainty: Getting super curious and combining disparate elements
2023-04-13 - WiDS Career Panel: Gabriela de Queiroz, Juliet Hougland (Netflix), and Samantha Sifleet
2023-03-24 - Learning to Exploit Temporal Structure for Biomedical Vision-Language Processing
2023-03-23 - Foundation models and the next era of AI
2023-02-24 - Behind the label: Glimpses of data labelling labours for AI
2023-02-17 - Art of doing disruptive research
2023-02-17 - Fighting the Global Social Media Infodemic: from Fake News to Harmful Content
2023-02-15 - Responsible AI Tracker Tour