How good is your classifier? Revisiting the role of evaluation metrics in machine learning

Subscribers: 344,000
Published on 2020-05-05 ● Video Link: https://www.youtube.com/watch?v=yiFWRcL-4Ts
Duration: 57:01
Views: 1,290
Likes: 41

With the increasing integration of machine learning into real systems, it is crucial that trained models are optimized to reflect real-world tradeoffs. Growing interest in proper evaluation has led to a wide variety of metrics being employed in practice, many of them specially designed by experts. However, modern training strategies have not kept up with the explosion of metrics, leaving practitioners to resort to heuristics.

To address this shortcoming, I will present a simple yet consistent post-processing rule which improves the performance of trained binary, multilabel, and multioutput classifiers. Building on these results, I will propose a framework for metric elicitation, which addresses the broader question of how one might select an evaluation metric for real-world problems so that it reflects true preferences.
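The abstract does not spell out the post-processing rule itself, but to make the general idea concrete: one common form of metric-aware post-processing is to replace the default 0.5 decision threshold with a threshold tuned on held-out data to maximize the target metric. The sketch below (a plain threshold sweep using numpy and scikit-learn; the function name tune_threshold and the grid search are illustrative choices, not necessarily the rule presented in the talk) shows this for binary F1:

    import numpy as np
    from sklearn.metrics import f1_score

    def tune_threshold(y_true, y_scores, metric=f1_score, grid_size=101):
        """Return the decision threshold that maximizes `metric` on validation data.

        A minimal instance of metric-aware post-processing: sweep candidate
        thresholds and keep the best-scoring one, rather than defaulting to 0.5.
        """
        thresholds = np.linspace(0.0, 1.0, grid_size)
        scores = [metric(y_true, (y_scores >= t).astype(int)) for t in thresholds]
        return thresholds[int(np.argmax(scores))]

    # Hypothetical usage with any trained probabilistic binary classifier `clf`:
    # t_star = tune_threshold(y_val, clf.predict_proba(X_val)[:, 1])
    # y_pred = (clf.predict_proba(X_test)[:, 1] >= t_star).astype(int)

The same recipe extends naturally to multilabel classifiers by tuning one threshold per label.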

See more at https://www.microsoft.com/en-us/research/video/how-good-is-your-classifier-revisiting-the-role-of-evaluation-metrics-in-machine-learning/




Other Videos By Microsoft Research


2020-05-26 Auditing Outsourced Services
2020-05-26 MSR Distinguished Lecture Series: First-person Perception and Interaction
2020-05-26 Large-scale live video analytics over 5G multi-hop camera networks
2020-05-26 Kristin Lauter's TED Talk on Private AI at Congreso Futuro during Panel 11 / SOLVE
2020-05-19 How an AI agent can balance a pole using a simulation
2020-05-19 How to build Intelligent control systems using new tools from Microsoft and simulations by Mathworks
2020-05-13 Diving into Deep InfoMax with Dr. Devon Hjelm | Podcast
2020-05-08 An Introduction to Graph Neural Networks: Models and Applications
2020-05-07 MSR Cambridge Lecture Series: Photonic-chip-based soliton microcombs
2020-05-07 Multi-level Optimization Approaches to Computer Vision
2020-05-05 How good is your classifier? Revisiting the role of evaluation metrics in machine learning
2020-05-05 Fast and Flexible Multi-Task Classification Using Conditional Neural Adaptive Processes
2020-05-05 Hypergradient descent and Universal Probabilistic Programming
2020-05-04 Learning over sets, subgraphs, and streams: How to accurately incorporate graph context
2020-05-04 An Ethical Crisis in Computing?
2020-04-21 Presentation on “Beyond the Prototype” by Rushil Khurana
2020-04-20 Understanding and Improving Database-backed Applications
2020-04-20 Efficient Learning from Diverse Sources of Information
2020-04-08 Project Orleans and the distributed database future with Dr. Philip Bernstein | Podcast
2020-04-07 Reprogramming the American Dream: A conversation with Kevin Scott and J.D. Vance, with Greg Shaw
2020-04-01 An interview with Microsoft President Brad Smith | Podcast



Tags:
machine learning
evaluation metrics
multioutput classifiers
Sanmi Koyejo
Katja Hofmann
microsoft research cambridge