Learning Models of Language, Action and Perception for Human-Robot Collaboration

Subscribers: 344,000
Published on: 2018-06-25
Video Link: https://www.youtube.com/watch?v=lzB7TSSYw5U
Duration: 1:19:38
Views: 1,879
Likes: 16


Robots can act as a force multiplier for people, whether a robot assisting an astronaut with a repair on the International Space Station, a UAV taking flight over our cities, or an autonomous vehicle driving through our streets. To achieve complex tasks, it is essential for robots to move beyond merely interacting with people and toward collaboration, so that one person can easily and flexibly work with many autonomous robots. The aim of my research program is to create autonomous robots that collaborate with people to meet their needs by learning decision-theoretic models for communication, action, and perception.

Communication for collaboration requires models of language that map between sentences and aspects of the external world. My work enables a robot to learn compositional models for word meanings that allow it to explicitly reason and communicate about its own uncertainty, increasing the speed and accuracy of human-robot communication.

Action for collaboration requires models that match how people think and talk, because people communicate about all aspects of a robot's behavior, from low-level motion preferences (e.g., "Please fly up a few feet") to high-level requests (e.g., "Please inspect the building"). I am creating new methods for learning how to plan in very large, uncertain state-action spaces by using hierarchical abstraction.

Perception for collaboration requires the robot to detect, localize, and manipulate the objects in its environment that are most important to its human collaborator. I am creating new methods for autonomously acquiring perceptual models in situ, so the robot can perceive the objects most relevant to the human's goals.

My unified decision-theoretic framework supports data-driven training and robust, feedback-driven human-robot collaboration.
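The talk itself does not include code, but the core idea of a robot that "explicitly reasons and communicates about its own uncertainty" can be illustrated with a minimal sketch. The example below is an assumption-laden toy, not the speaker's system: it maintains a Bayesian belief over which object a spoken word refers to, and uses a simple decision rule to either act on the best grounding or ask a clarifying question when the belief is too spread out. The object names, likelihood numbers, and the 0.8 threshold are all invented for illustration.

# Hypothetical sketch (not from the talk): Bayesian belief update over
# candidate object groundings, plus an act-or-ask decision rule.

def update_belief(belief, likelihoods):
    """One Bayes update: P(object | word) is proportional to P(word | object) * P(object)."""
    posterior = {obj: belief[obj] * likelihoods.get(obj, 1e-6) for obj in belief}
    total = sum(posterior.values())
    return {obj: p / total for obj, p in posterior.items()}

def act_or_ask(belief, threshold=0.8):
    """Act on the most likely grounding, or ask a question to reduce uncertainty."""
    best_obj, best_p = max(belief.items(), key=lambda kv: kv[1])
    if best_p >= threshold:
        return ("act", best_obj)
    return ("ask", f"Did you mean the {best_obj}?")

# Illustrative numbers only: uniform prior over three candidate objects,
# and made-up likelihoods of hearing the word "wrench" for each object.
belief = {"wrench": 1 / 3, "screwdriver": 1 / 3, "drill": 1 / 3}
belief = update_belief(belief, {"wrench": 0.7, "screwdriver": 0.2, "drill": 0.1})
print(belief)              # the wrench now dominates (0.7), but not decisively
print(act_or_ask(belief))  # 0.7 < 0.8, so the robot asks a clarifying question

The point of the sketch is the decision-theoretic framing: the same belief that drives action selection also tells the robot when communicating its uncertainty back to the human is the better move.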

See more at https://www.microsoft.com/en-us/research/video/learning-models-language-action-perception-human-robot-collaboration/




Other Videos By Microsoft Research


2018-06-26 Hybrid Reward Architecture and the Fall of Ms. Pac-Man with Dr. Harm van Seijen
2018-06-26 Symbolic Automata for Static Specification Mining
2018-06-26 Getting Virtual with Dr. Mar Gonzalez Franco
2018-06-26 Visualizing Data and Other Big Ideas with Dr. Steven Drucker
2018-06-26 How Programming Languages Quietly Run the World with Dr. Ben Zorn
2018-06-26 Functional Programming Languages and the Pursuit of Laziness with Dr. Simon Peyton Jones
2018-06-26 The future is quantum with Dr. Krysta Svore
2018-06-26 Life at the Intersection of AI and Society with Dr. Ece Kamar
2018-06-26 Living, Learning and Creating with Social Robots
2018-06-25 Hardware Development at UCSC (Live)
2018-06-25 Learning Models of Language, Action and Perception for Human-Robot Collaboration
2018-06-24 PNW PLSE Workshop: Featured Talk: Continuously Integrated Verified Cryptography
2018-06-24 PNW PLSE Workshop: Project Everest: Theory meets Reality
2018-06-24 PNW PLSE Workshop: Welcome and Introductions
2018-06-20 Found in Translation: Achieving Human Parity on Chinese to English News Translation
2018-06-20 From Algorithms to Application Impact at Pacific Northwest National Lab (PNNL)
2018-06-13 Harry Shum Speaks at the 2018 Allen School of Computer Science & Engineering Graduation
2018-06-13 Teaching Computers to See with Dr. Gang Hua
2018-06-13 Ethics and Diversity in AI
2018-06-13 Mobile Sharing and Companion Experiences for Microsoft Teams Meetings
2018-06-13 Mobile Sharing and Companion Experiences for Microsoft Teams Meetings (Audio Description)



Tags: microsoft research