Machine Learning Day 2013 - Afternoon Sessions

Subscribers: 351,000
Video Link: https://www.youtube.com/watch?v=Oy1PCGOXp0g
Duration: 1:36:07
Views: 858

2:30 Ben Taskar (UW CSE), "Probabilistic Models of Diversity: Determinantal Point Processes"
3:00 Scott Yih (MSR), "Multi-Relational Latent Semantic Analysis"
3:30 Raj Rao (UW CSE), "Opportunities and Challenges for Machine Learning in Brain-Computer Interfacing"

Probabilistic Models of Diversity: Determinantal Point Processes in Machine Learning, Ben Taskar (UW CSE)
Many real-world problems involve negative interactions: we might want search results to be diverse, sentences in a summary to cover distinct aspects of the subject, or objects in an image to occupy different regions of space. However, traditional structured probabilistic models tend to deal poorly with these situations; Markov random fields, for example, become intractable even to approximate. Determinantal point processes (DPPs), which arise in random matrix theory and quantum physics, behave in a complementary fashion: while they cannot encode positive interactions, they define expressive models of negative correlations that come with surprising and exact algorithms for many types of inference, including conditioning, marginalization, and sampling. I'll present our recent work on a novel factorization and dual representation of DPPs that enables efficient and exact inference for exponentially sized structured sets. We develop an exact inference algorithm for DPPs conditioned on subset size and derive efficient parameter estimation for DPPs from several types of observations, as well as approximation algorithms for large-scale non-linear DPPs. I'll illustrate the advantages of DPPs on several natural language and computer vision tasks: document summarization, image search, and multi-person pose estimation in images. Joint work with Alex Kulesza, Jennifer Gillenwater, Raja Affandi, and Emily Fox.

Multi-Relational Latent Semantic Analysis, Scott Yih (MSR)
We present Multi-Relational Latent Semantic Analysis (MRLSA), which generalizes Latent Semantic Analysis (LSA). MRLSA provides an elegant approach to combining multiple relations between words by constructing a 3-way tensor. Similar to LSA, a low-rank approximation of the tensor is derived using a tensor decomposition. Each word in the vocabulary is thus represented by a vector in the latent semantic space, and each relation is captured by a latent square matrix. The degree to which two words have a specific relation can then be measured through simple linear algebraic operations. We demonstrate that by integrating multiple relations from both homogeneous and heterogeneous information sources, MRLSA achieves state-of-the-art performance on existing benchmark datasets for two relations, antonymy and is-a.

Opportunities and Challenges for Machine Learning in Brain-Computer Interfacing, Raj Rao (UW CSE)
The field of brain-computer interfacing has seen rapid advances in recent years, with applications ranging from cochlear implants for the deaf to brain-controlled prosthetic arms for the paralyzed. This talk will provide a brief overview of the various types of brain-computer interfaces (BCIs) and the techniques they use for mapping brain signals to control outputs. I will then highlight some opportunities as well as challenges for machine learning in helping facilitate the transition of BCIs from the laboratory to the real world.
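Taskar's abstract describes DPPs as models of negative correlation with exact inference via determinants. The core idea can be sketched in a few lines: in the standard L-ensemble form (not code from the talk, just the textbook definition), a subset S has probability det(L_S) / det(L + I), so similar items with large off-diagonal kernel entries shrink the determinant and are unlikely to co-occur. The toy kernel below is assumed for illustration.

```python
import numpy as np

def dpp_probability(L, subset):
    """P(Y = S) = det(L_S) / det(L + I) for an L-ensemble DPP with kernel L."""
    S = list(subset)
    L_S = L[np.ix_(S, S)]  # principal submatrix indexed by S
    return np.linalg.det(L_S) / np.linalg.det(L + np.eye(L.shape[0]))

# Toy kernel: two very similar items (large off-diagonal entry).
L = np.array([[1.0, 0.9],
              [0.9, 1.0]])

p_both = dpp_probability(L, [0, 1])  # det(L_S) = 1 - 0.81 = 0.19: small
p_one  = dpp_probability(L, [0])     # det(L_S) = 1.0: larger
```

Because the two items are nearly identical, selecting both is penalized (p_both < p_one), which is exactly the diversity-preferring behavior the abstract describes.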
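The MRLSA abstract states that each word becomes a latent vector, each relation a latent square matrix, and relation degree is then "simple linear algebraic operations". A toy sketch of that bilinear scoring follows; the 2-d vectors and relation matrices here are entirely hypothetical stand-ins, not MRLSA's learned tensor decomposition.

```python
import numpy as np

# Hypothetical 2-d latent word vectors (assumed for illustration only).
vec = {
    "hot":  np.array([1.0, 0.0]),
    "cold": np.array([0.0, 1.0]),
    "warm": np.array([0.9, 0.1]),
}

# Hypothetical latent matrices: antonymy flips a vector toward its
# opposite before the inner product; synonymy acts like the identity.
M_ant = np.array([[0.0, 1.0],
                  [1.0, 0.0]])
M_syn = np.eye(2)

def rel_score(w1, rel, w2):
    """Degree to which (w1, rel, w2) holds, as a bilinear form."""
    return float(vec[w1] @ rel @ vec[w2])

rel_score("hot", M_ant, "cold")  # high: antonyms
rel_score("hot", M_syn, "warm")  # high: near-synonyms
rel_score("hot", M_syn, "cold")  # low: neither relation
```

The design point the abstract makes is that one shared word space serves all relations; only the small square matrix changes per relation.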




Other Videos By Microsoft Research


2016-08-09  Data Structures for Efficient Inference and Optimization in Expressive Continuous Domains
2016-08-09  Towards A Holistic Approach to Performance Portability for Heterogeneous Systems
2016-08-09  Modular reasoning for modular concurrency
2016-08-09  Building Better Questionnaires with Probabilistic Modelling
2016-08-08  FlashBack: Immersive Virtual Reality on Mobile Devices via Rendering Memoization
2016-08-08  SwimTrain: Never swim alone with this cooperative “exergame” for group fitness
2016-08-08  Demo of Open Test Platform for LTE/LTE-U
2016-08-08  Deeparnab Chakrabarty: Provable Submodular Function Minimization via Fujishige Wolfe Algorithm
2016-08-08  Memristors: The Future of Computer Memory and Neuromorphic Circuits?
2016-08-08  Number theoretic methods in quantum compiling
2016-08-08  Machine Learning Day 2013 - Afternoon Sessions
2016-08-08  Discovering the Structure of Visual Categories from Weak Annotations
2016-08-08  The Importance of the Center for Computational Thinking
2016-08-08  Specification and Verification in Introductory Computer Science
2016-08-08  Programming Approximate Systems
2016-08-08  Typed functional probabilistic programming: ready for practical use?
2016-08-08  eScience in the Medical Domain
2016-08-08  Tutorial 2 - Kinect for Windows in Science Applications
2016-08-08  Big Planet Big Data Big Science - Deforestation, Roads, Birds, Carbon & Amazon Phenology
2016-08-08  From Smart Sensors to City OS (II) - Ryosuke Shibasaki
2016-08-08  How I Learned to Stop Worrying and Love the “DOM”



Tags:
microsoft research