Interpretability, Responsibility and Controllability of Human Behaviors

Subscribers: 345,000
Video Link: https://www.youtube.com/watch?v=nYl1KGICWTQ
Duration: 33:29
Views: 243
Likes: 4

Research Talk
Xiaohong Wan, Beijing Normal University

When judging whether a person should take responsibility for their behavior, the judge typically evaluates whether that behavior is interpretable and under the person's control. However, such qualities are difficult for external observers to assess, because the processes and internal states inside the brain are intangible. It is also difficult for the person themselves, as the owner of the behavior, to evaluate these internal states in detail and to establish their causal roles. Many human behaviors are driven by fast, intuitive processes, leaving only post-hoc explanations of those processes. Even for controlled processes, the explanations remain largely unclear. In this talk, I discuss these issues in terms of the neural mechanisms underlying human behavior.

Learn more about the Responsible AI Workshop: https://www.microsoft.com/en-us/research/event/responsible-ai-an-interdisciplinary-approach-workshop/

This workshop was part of the Microsoft Research Summit 2022: https://www.microsoft.com/en-us/research/event/microsoft-research-summit-2022/




Other Videos By Microsoft Research


2022-12-12  Optimization from Structured Samples for Coverage and Influence Functions
2022-12-12  End-to-end Reinforcement Learning for the Large-scale Traveling Salesman Problem
2022-12-12  Deep Reinforcement Learning in Supply Chain Optimizations
2022-12-12  Inverse Game Theory for Stackelberg Games: The Blessing of Bounded Rationality
2022-12-06  Personality Predictions from Automated Video Interviews: Explainable or Unexplainable Models?
2022-12-06  Responsible AI: An Interdisciplinary Approach | Panel Discussion
2022-12-06  Personalizing Responsibility within AI Systems: A Case for Designing Diversity
2022-12-06  Evidence-based Evaluation for Responsible AI
2022-12-06  Towards Trustworthy Recommender Systems: From Shallow Models to Deep Models to Large Models
2022-12-06  Development of a Game-Based Assessment to Measure Creativity
2022-12-06  Interpretability, Responsibility and Controllability of Human Behaviors
2022-12-06  On the Adversarial Robustness of Deep Learning
2022-12-06  The Long March Towards AI Fairness
2022-12-06  Towards Human Value Based Natural Language Processing (NLP)
2022-12-06  Responsible AI Research at Microsoft Research Asia
2022-12-06  Responsible AI Workshop | Opening Remarks
2022-12-06  Low-latency, Real-time Insights from Space
2022-12-06  Next Generation Networking and its Platform Workshop | Panel Discussion
2022-12-06  OpenNetLab: An Open Platform for RL-based Congestion Control for Real-Time Communication
2022-12-06  Enhance Networking Education with OpenNetLab
2022-12-06  Coping with High Mobility for Beyond 5G Cellular Networks