Evidence-based Evaluation for Responsible AI

Subscribers: 351,000
Published on 2022-12-06 ● Video Link: https://www.youtube.com/watch?v=JhX1CXFzInA
Duration: 30:51
Views: 304

Research Talk
Jonathan Zhu, City University of Hong Kong

Current efforts on responsible AI have focused on why AI should be socially responsible and how to produce responsible AI. An equally important question that has not been adequately addressed is how responsible deployed AI products actually are. Most of the time the question is ignored; occasionally it is answered with anecdotal evidence or casual evaluation. We need to understand that good evaluations are not easy, quick, or cheap to carry out. On the contrary, good evaluations rely on evidence that is systematically collected using proven methods, completely independent of the process, data, and even the research staff responsible for the relevant AI products. The practice of evidence-based medicine over the last two decades provides a relevant and informative role model for the AI industry to follow.

Learn more about the Responsible AI Workshop: https://www.microsoft.com/en-us/research/event/responsible-ai-an-interdisciplinary-approach-workshop/

This workshop was part of the Microsoft Research Summit 2022: https://www.microsoft.com/en-us/research/event/microsoft-research-summit-2022/




Other Videos By Microsoft Research


2022-12-12  Thompson Sampling in Combinatorial Multi-armed Bandits with Independent Arms
2022-12-12  Combinatorial Pure Exploration with Limited Observation and Beyond
2022-12-12  Oblivious Online Contention Resolution Schemes
2022-12-12  Optimization from Structured Samples for Coverage and Influence Functions
2022-12-12  End-to-end Reinforcement Learning for the Large-scale Traveling Salesman Problem
2022-12-12  Deep Reinforcement Learning in Supply Chain Optimizations
2022-12-12  Inverse Game Theory for Stackelberg Games: The Blessing of Bounded Rationality
2022-12-06  Personality Predictions from Automated Video Interviews: Explainable or Unexplainable Models?
2022-12-06  Responsible AI: An Interdisciplinary Approach | Panel Discussion
2022-12-06  Personalizing Responsibility within AI Systems: A Case for Designing Diversity
2022-12-06  Evidence-based Evaluation for Responsible AI
2022-12-06  Towards Trustworthy Recommender Systems: From Shallow Models to Deep Models to Large Models
2022-12-06  Development of a Game-Based Assessment to Measure Creativity
2022-12-06  Interpretability, Responsibility and Controllability of Human Behaviors
2022-12-06  On the Adversarial Robustness of Deep Learning
2022-12-06  The Long March Towards AI Fairness
2022-12-06  Towards Human Value Based Natural Language Processing (NLP)
2022-12-06  Responsible AI Research at Microsoft Research Asia
2022-12-06  Responsible AI Workshop | Opening Remarks
2022-12-06  Low-latency, Real-time Insights from Space
2022-12-06  Next Generation Networking and its Platform Workshop | Panel Discussion