Responsible AI: An Interdisciplinary Approach | Panel Discussion

Subscribers: 343,000
Video Link: https://www.youtube.com/watch?v=hzCY-HT65vg
Category: Discussion
Duration: 1:00:45
Views: 312
Likes: 6

Host: Xing Xie, Microsoft Research Asia
Panelists:
Pascale Fung, Hong Kong University of Science & Technology
Rui Guo, Renmin University of China
Jun Zhu, Tsinghua University
Jonathan Zhu, City University of Hong Kong
Xiaohong Wan, Beijing Normal University

When studying responsible AI (artificial intelligence), we are mostly studying its impact on people and society. Sociologists, psychologists, and media scholars have accumulated deep expertise and research results in these areas. When we talk about fairness, we would do well to work with sociologists to analyze how AI could stratify society and polarize people's opinions. When we study interpretability, we also hope to discuss with psychologists why people fundamentally need more transparent models, and how best to expose the inner mechanisms of AI models. Communication scientists can help us gain a deeper understanding of how AI models are used in information distribution. From another perspective, we are also very interested in applying responsible AI within these disciplines to help solve their problems. In this workshop, we invited researchers from different disciplines to discuss with us how we can jointly advance research in responsible AI.

Learn more about the Responsible AI Workshop: https://www.microsoft.com/en-us/research/event/responsible-ai-an-interdisciplinary-approach-workshop/

This workshop was part of the Microsoft Research Summit 2022: https://www.microsoft.com/en-us/research/event/microsoft-research-summit-2022/




Other Videos By Microsoft Research


2022-12-12 Efficient Machine Learning at the Edge in Parallel
2022-12-12 Machine learning assisted hyper-heuristics for online combinatorial optimization problems
2022-12-12 Thompson Sampling in Combinatorial Multi-armed Bandits with Independent Arms
2022-12-12 Combinatorial Pure Exploration with Limited Observation and Beyond
2022-12-12 Oblivious Online Contention Resolution Schemes
2022-12-12 Optimization from Structured Samples for Coverage and Influence Functions
2022-12-12 End-to-end Reinforcement Learning for the Large-scale Traveling Salesman Problem
2022-12-12 Deep Reinforcement Learning in Supply Chain Optimizations
2022-12-12 Inverse Game Theory for Stackelberg Games: The Blessing of Bounded Rationality
2022-12-06 Personality Predictions from Automated Video Interviews: Explainable or Unexplainable Models?
2022-12-06 Responsible AI: An Interdisciplinary Approach | Panel Discussion
2022-12-06 Personalizing Responsibility within AI Systems: A Case for Designing Diversity
2022-12-06 Evidence-based Evaluation for Responsible AI
2022-12-06 Towards Trustworthy Recommender Systems: From Shallow Models to Deep Models to Large Models
2022-12-06 Development of a Game-Based Assessment to Measure Creativity
2022-12-06 Interpretability, Responsibility and Controllability of Human Behaviors
2022-12-06 On the Adversarial Robustness of Deep Learning
2022-12-06 The Long March Towards AI Fairness
2022-12-06 Towards Human Value Based Natural Language Processing (NLP)
2022-12-06 Responsible AI Research at Microsoft Research Asia
2022-12-06 Responsible AI Workshop | Opening Remarks