Efficient Learning from Diverse Sources of Information

Subscribers:
344,000
Published on 2020-04-20 ● Video Link: https://www.youtube.com/watch?v=5CpH8948P4I



Category:
Guide
Duration: 1:07:40
1,226 views
42


Although machine learning has witnessed rapid progress in the last decade, many current learning algorithms are very inefficient in terms of the amount of data they use and the time required to train a model. Humans, on the other hand, excel at many learning tasks with very limited data. Why are machines so inefficient, and why can humans learn so well? The key to the answer is that humans can learn from diverse sources of information and are able to apply past knowledge in new domains. In this talk, I will study learning from diverse sources of information to make ML algorithms more efficient. In the first part, I will talk about how to incorporate diverse forms of questions into the learning process. In particular, I will look at the problem of utilizing preference information for learning a regression function and show an interesting connection to nearest neighbors and isotonic regression. In the second part, I will talk about multitask and transfer learning from different domains for natural language understanding. I will explain a sample-reweighting scheme that uses language models to automatically weight external-domain samples according to how much they help the target task.
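
Below is a minimal Python sketch of the first idea: combine a ranking obtained from pairwise preference comparisons with a small set of direct labels, fit an isotonic (monotone) function of rank position, and answer new queries with a nearest-neighbor lookup. The toy data, variable names, and constants are invented for illustration; this is a sketch of the general idea, not necessarily the exact algorithm presented in the talk.

# Python sketch (illustrative only): regression from pairwise comparisons plus a
# few direct labels, via isotonic regression over the ranking and a 1-nearest-
# neighbor lookup. Toy data; not necessarily the algorithm from the talk.
import numpy as np
from sklearn.isotonic import IsotonicRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# Unlabeled pool, assumed already ranked by pairwise comparisons of the unknown
# target function f(x) = x1 + x2 + x3 (here the ranking is simulated directly).
X_pool = rng.uniform(size=(200, 3))
ranking = np.argsort(X_pool.sum(axis=1))
X_ranked = X_pool[ranking]

# A much smaller set of directly labeled points, indexed by rank position.
label_pos = np.sort(rng.choice(len(X_ranked), size=15, replace=False))
y_labeled = X_ranked[label_pos].sum(axis=1) + rng.normal(scale=0.05, size=15)

# Isotonic regression fits a monotone function of rank position to the labels,
# spreading the few labels over the entire ranked pool.
iso = IsotonicRegression(out_of_bounds="clip")
iso.fit(label_pos.astype(float), y_labeled)
y_hat = iso.predict(np.arange(len(X_ranked), dtype=float))

# A new query point inherits the estimate of its nearest neighbor in the pool.
nn = NearestNeighbors(n_neighbors=1).fit(X_ranked)
x_query = rng.uniform(size=(1, 3))
_, idx = nn.kneighbors(x_query)
print("predicted:", y_hat[idx[0, 0]], "true:", float(x_query.sum()))

And a similarly hedged sketch of the second idea: score external-domain samples by how target-like they look under a language model of the target domain, and turn the scores into training weights. A toy unigram model stands in for a real language model here; only the reweighting step is being illustrated.

# Python sketch (illustrative only): weight external-domain samples by their
# likelihood under a toy unigram language model of the target domain.
import math
from collections import Counter

def avg_log_prob(sentence, counts, total, vocab, alpha=1.0):
    # Add-alpha smoothed average log-probability per token.
    tokens = sentence.lower().split()
    logp = sum(math.log((counts[t] + alpha) / (total + alpha * vocab)) for t in tokens)
    return logp / max(len(tokens), 1)

# Target-domain text used to build the toy language model.
target_corpus = ["the movie was great", "i loved the movie", "great acting and plot"]
counts = Counter(t for s in target_corpus for t in s.lower().split())
total, vocab = sum(counts.values()), len(counts)

# External-domain samples to be reweighted for training on the target task.
external = ["the movie had a great plot", "stock prices fell sharply today"]
scores = [avg_log_prob(s, counts, total, vocab) for s in external]

# A softmax over the scores yields nonnegative sample weights that favor
# sentences resembling the target domain.
exps = [math.exp(s) for s in scores]
weights = [e / sum(exps) for e in exps]
for s, w in zip(external, weights):
    print(f"{w:.3f}  {s}")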

See more at https://www.microsoft.com/en-us/research/video/efficient-learning-from-diverse-sources-of-information/




Other Videos By Microsoft Research


2020-05-08 An Introduction to Graph Neural Networks: Models and Applications
2020-05-07 MSR Cambridge Lecture Series: Photonic-chip-based soliton microcombs
2020-05-07 Multi-level Optimization Approaches to Computer Vision
2020-05-05 How good is your classifier? Revisiting the role of evaluation metrics in machine learning
2020-05-05 Fast and Flexible Multi-Task Classification Using Conditional Neural Adaptive Processes
2020-05-05 Hypergradient descent and Universal Probabilistic Programming
2020-05-04 Learning over sets, subgraphs, and streams: How to accurately incorporate graph context
2020-05-04 An Ethical Crisis in Computing?
2020-04-21 Presentation on “Beyond the Prototype” by Rushil Khurana
2020-04-20 Understanding and Improving Database-backed Applications
2020-04-20 Efficient Learning from Diverse Sources of Information
2020-04-08 Project Orleans and the distributed database future with Dr. Philip Bernstein | Podcast
2020-04-07 Reprogramming the American Dream: A conversation with Kevin Scott and J.D. Vance, with Greg Shaw
2020-04-01 An interview with Microsoft President Brad Smith | Podcast
2020-03-30 Microsoft Rocketbox Avatar library
2020-03-27 Virtual reality without vision: A haptic and auditory white cane to navigate complex virtual worlds
2020-03-26 Statistical Frameworks for Mapping 3D Shape Variation onto Genotypic and Phenotypic Variation
2020-03-26 Can Machines Perceive Emotion?
2020-03-25 Microsoft’s AI Transformation, Project Turing and smarter search with Rangan Majumder | Podcast
2020-03-19 Enabling Rural Communities to Participate in Crowdsourcing, with Dr. Vivek Seshadri | Podcast
2020-03-19 Demo: Enhancing Smartphone Productivity and Reliability with an Integrated Display Cover



Tags:
machine learning
Matthew Hausknecht
Yichong Xu
ML algorithms
microsoft research