Trustworthy AI: Towards Robust and Reliable Model Explanations | AI FOR GOOD DISCOVERY

Channel: AI for Good
Subscribers: 19,700
Published on: 2021-06-23
Video Link: https://www.youtube.com/watch?v=sRTqRYmI-40
Views: 316


As machine learning black boxes are increasingly deployed in domains such as healthcare and criminal justice, there is growing emphasis on building tools and techniques for explaining these black boxes in an interpretable manner. Such explanations are leveraged by domain experts to diagnose systematic errors and underlying biases of black boxes. In this talk, I will present some of our recent research that sheds light on the vulnerabilities of popular post hoc explanation techniques such as LIME and SHAP, and also introduce novel methods to address some of these vulnerabilities. More specifically, I will first demonstrate that these methods are brittle, unstable, and vulnerable to a variety of adversarial attacks. Then, I will discuss two solutions that address some of these vulnerabilities: (i) a framework based on adversarial training that is designed to make post hoc explanations more stable and robust to shifts in the underlying data; (ii) a Bayesian framework that captures the uncertainty associated with post hoc explanations and, in turn, allows us to generate explanations with user-specified levels of confidence. I will conclude the talk by discussing results on real-world datasets that demonstrate both the vulnerabilities of post hoc explanation techniques and the efficacy of the aforementioned solutions.
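The instability the abstract refers to is easy to observe first-hand: LIME fits its local surrogate model on randomly sampled perturbations of the input, so two runs on the very same instance can rank features differently. Below is a minimal sketch of this effect using the open-source `lime` and `scikit-learn` packages; it is illustrative only, not the speaker's code, and the dataset, model, and parameters are arbitrary choices.

```python
# Minimal sketch (illustrative only): probing the run-to-run stability of
# LIME explanations for a black-box tabular classifier.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_breast_cancer()
X, y = data.data, data.target
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    mode="classification",
)

# Explain the same instance twice: LIME samples perturbations at random,
# so the top-ranked features can differ between runs.
instance = X[0]
for run in range(2):
    exp = explainer.explain_instance(instance, model.predict_proba, num_features=5)
    print(f"run {run}:", [name for name, _ in exp.as_list()])
```

Comparing the two printed feature rankings (or their weights) gives a crude stability check of the kind the talk formalizes.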

🔴 Watch the latest #AIforGood videos:


Explore more #AIforGood content:
1️⃣ • Top Hits
2️⃣ • AI for Good Webinars
3️⃣ • AI for Good Keynotes

📅 Discover what's next on our programme: https://aiforgood.itu.int/programme/

Social Media:
Website: https://aiforgood.itu.int/
Twitter: https://twitter.com/ITU_AIForGood
LinkedIn Page: https://www.linkedin.com/company/26511907
LinkedIn Group: https://www.linkedin.com/groups/8567748
Instagram: https://www.instagram.com/aiforgood
Facebook: https://www.facebook.com/AIforGood

WHAT IS THE TRUSTWORTHY AI SERIES?
Artificial Intelligence (AI) systems have steadily grown in complexity, gaining predictive power often at the expense of interpretability, robustness and trustworthiness. Deep neural networks are a prime example of this development. While reaching “superhuman” performance on various complex tasks, these models are susceptible to errors when confronted with tiny (adversarial) variations of the input: variations that are either not noticeable to humans or that humans can handle reliably. This expert talk series discusses these challenges of current AI technology and presents new research aimed at overcoming these limitations and developing AI systems that can be certified to be trustworthy and robust.
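To make the “tiny (adversarial) variations” concrete, here is a minimal sketch, not material from the series: for a linear classifier, the loss gradient with respect to the input is proportional to the weight vector w, so a small fast-gradient-sign-style step along sign(w) can flip the prediction while changing every feature only slightly. The dataset and model below are arbitrary choices, assuming `scikit-learn` is installed.

```python
# Minimal sketch (illustrative only): flipping a linear classifier's
# prediction with a small per-feature perturbation.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X = StandardScaler().fit_transform(X)           # unit-variance features
clf = LogisticRegression(max_iter=1000).fit(X, y)

x = X[0]
w = clf.coef_[0]
margin = clf.decision_function([x])[0]          # signed score w.x + b
eps = 1.01 * abs(margin) / np.abs(w).sum()      # smallest L-inf step that crosses the boundary
x_adv = x - np.sign(margin) * eps * np.sign(w)  # fast-gradient-sign-style step

print("per-feature perturbation:", round(eps, 4))
print("prediction before:", clf.predict([x])[0], "after:", clf.predict([x_adv])[0])
```

For deep networks the same idea applies with the gradient computed by backpropagation instead of read off the weights, which is why such perturbations are hard to defend against.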

What is AI for Good?
The AI for Good series is the leading action-oriented, global and inclusive United Nations platform on AI. The Summit is organized all year round, always online, in Geneva by the ITU together with the XPRIZE Foundation, in partnership with over 35 sister United Nations agencies, Switzerland, and ACM. The goal is to identify practical applications of AI and to scale those solutions for global impact.

Disclaimer:
The views and opinions expressed are those of the panelists and do not reflect the official policy of the ITU.

#trustworthyAI #reliableAI




Other Videos By AI for Good


2021-07-11  Service Migration Algorithms in Distributed Edge Computing Systems | AI/ML IN 5G CHALLENGE
2021-07-07  Anomaly Detection Based on Log Analysis | AI/ML IN 5G CHALLENGE
2021-07-06  Accelerating Climate Science with AI | Ban Ki-moon, IPCC, MILA & Oxford Uni | AI FOR GOOD DISCOVERY
2021-07-04  AI for Climate Science | #AIforGood
2021-07-04  Radio-Strike: Reinforcement Learning Game in Unreal Engine 3-D Environments | AI/ML IN 5G CHALLENGE
2021-06-29  Innovation Factory - Live Pitching Session 3 | AI FOR GOOD INNOVATION FACTORY
2021-06-29  Lightning-Fast Modulation Classification Hardware-Efficient Neural Networks | AI/ML IN 5G CHALLENGE
2021-06-28  Can AI Save the Fashion Industry? | AI FOR GOOD WEBINARS
2021-06-27  Understanding How People Move using Modern Civilian Radar | AI/ML IN 5G CHALLENGE
2021-06-24  Big Data for Biodiversity: New Technologies in Accounting for Nature | AI FOR GOOD ON THE GO!
2021-06-23  Trustworthy AI: Towards Robust and Reliable Model Explanations | AI FOR GOOD DISCOVERY
2021-06-22  AI in the Middle East and North Africa: Visions and Realities | AI FOR GOOD WEBINARS
2021-06-21  Ignoring the Mirage of Disposable Clinician for Deployment of AI in Medicine | AI FOR GOOD DISCOVERY
2021-06-16  Radio Link Failure Prediction | AI/ML IN 5G CHALLENGE
2021-06-09  Combinatorial Optimization Challenge: Delivery Route Planning Optimization | AI/ML IN 5G CHALLENGE
2021-06-09  Trustworthy AI: Bayesian deep learning | AI FOR GOOD DISCOVERY
2021-06-02  Addressing the Dark Sides of AI | AI FOR GOOD WEBINARS
2021-06-01  AI Policy, Standard and Metrics for Automated Driving Safety | AI FOR GOOD WEBINARS
2021-05-31  Meet AI for Good African Startups – Live Pitching Session 2 | AI FOR GOOD INNOVATION FACTORY
2021-05-26  Trustworthy AI: XAI and Trust | AI FOR GOOD DISCOVERY
2021-05-25  ML for Joint Sensing and Communication in Future mm Wave IEEE 802.11 WLANs | AI/ML IN 5G CHALLENGE