Trustworthy AI: XAI and Trust | AI FOR GOOD DISCOVERY

Video Link: https://www.youtube.com/watch?v=_-mIeYqNA8U





As part of the Trustworthy AI series, Grégoire Montavon (TU Berlin) presents his research on eXplainable AI (XAI) and trust.

⏱ Shownotes:
00:00 Opening Remarks by ITU
01:00 Introduction by Wojciech Samek
01:58 Introduction by Grégoire Montavon
02:27 Why Do We Need Trustworthy AI?
03:37 Machine Learning Decisions
04:41 Detecting horse example
10:08 How do we get these heatmaps?
10:52 Layer-wise Relevance Propagation (LRP)
12:48 Can LRP be Justified Theoretically?
13:17 Deep Taylor Decomposition
14:36 LRP is More Stable than Gradient
16:00 LRP on Different Types of Data/Models
18:23 Advanced Explanation with GNN-LRP
19:00 Systematically Finding Clever Hans
19:58 Idea: Spectral Relevance Analysis (SpRAy)
22:08 The Revolution of Depth
23:08 Clever Hans on the VGG-16 Image Classifier
23:37 XAI Current Challenges
28:27 Towards Trustworthy AI
30:01 Explainable AI book
30:18 www.heatmapping.org
30:48 References
30:54 Q&A Session
31:10 How to measure trustworthiness and the certification process
33:32 How does your LRP compare with Google's XRAI algorithm?
34:33 What are your thoughts on explainability models?
35:33 Class discrimination in AI methods?
37:17 Do you think we can use explanation methods to detect vectors in poisoning attacks?
39:23 Where is explanation going? (the future)
41:07 Do you think there are limits to explanation, i.e., things that are hard to explain?
42:28 What do you think about using explanation techniques to detect potentially implausible/incorrect predictions?
43:10 Have you tried to calculate heatmaps for images which have been altered with adversarial perturbations?
45:49 Closing from ITU

🔴 Watch the latest #AIforGood videos:  

Explore more #AIforGood content:
1️⃣ • Top Hits
2️⃣ • AI for Good Webinars
3️⃣ • AI for Good Keynotes

📅 Discover what's next on our programme: https://aiforgood.itu.int/programme/

Social Media:
Website: https://aiforgood.itu.int/
Twitter: https://twitter.com/ITU_AIForGood
LinkedIn Page: https://www.linkedin.com/company/2651...
LinkedIn Group: https://www.linkedin.com/groups/8567748
Instagram: https://www.instagram.com/aiforgood
Facebook: https://www.facebook.com/AIforGood

WHAT IS TRUSTWORTHY AI SERIES?
Artificial Intelligence (AI) systems have steadily grown in complexity, gaining predictive power often at the expense of interpretability, robustness and trustworthiness. Deep neural networks are a prime example of this development. While reaching “superhuman” performance in various complex tasks, these models are susceptible to errors when confronted with tiny (adversarial) variations of the input – variations which are either not noticeable to humans or which humans can handle reliably. This expert talk series discusses these challenges of current AI technology and presents new research aimed at overcoming these limitations and at developing AI systems that can be certified to be trustworthy and robust.
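The "tiny adversarial variations" mentioned above can be made concrete with a minimal sketch of the fast-gradient-sign idea (a standard adversarial-perturbation technique, not one specific to this talk), using a hypothetical toy linear scorer rather than a real deep network:

```python
import numpy as np

# Toy linear "classifier": score = w @ x. Perturbing every feature by a
# tiny amount eps in the direction -sign(w) (the sign of the gradient of
# the score w.r.t. x) shifts the score by eps * sum(|w|), which for
# high-dimensional inputs is large even though no single feature moves
# noticeably -- the essence of the fast-gradient-sign attack.
rng = np.random.default_rng(0)
w = rng.normal(size=1000)        # model weights (and the score's gradient)
x = rng.normal(size=1000)        # clean input

eps = 0.05                       # per-feature perturbation budget
x_adv = x - eps * np.sign(w)     # nudge every feature against the score

print(f"clean score:        {w @ x:+.2f}")
print(f"adversarial score:  {w @ x_adv:+.2f}")
print(f"max per-feature change: {np.max(np.abs(x_adv - x)):.3f}")
```

With 1000 features, the score drops by roughly eps times the sum of the absolute weights (around 40 here), while no individual feature changes by more than 0.05 – the kind of imperceptible-but-consequential variation the series is concerned with.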

The Trustworthy AI series is moderated by Wojciech Samek, Head of AI Department at Fraunhofer HHI, one of the top 20 AI labs in the world: https://www.analyticsinsight.net/top-20-artificial-intelligence-research-labs-in-the-world-in-2021/

What is AI for Good?
The AI for Good series is the leading action-oriented, global & inclusive United Nations platform on AI. The Summit is organized all year, always online, in Geneva by the ITU with XPRIZE Foundation in partnership with over 35 sister United Nations agencies, Switzerland and ACM. The goal is to identify practical applications of AI and scale those solutions for global impact.

Disclaimer:
The views and opinions expressed are those of the panelists and do not reflect the official policy of the ITU.

#trustworthyAI #explainableAI




Other Videos By AI for Good


2021-06-24  Big Data for Biodiversity: New Technologies in Accounting for Nature | AI FOR GOOD ON THE GO!
2021-06-23  Trustworthy AI: Towards Robust and Reliable Model Explanations | AI FOR GOOD DISCOVERY
2021-06-22  AI in the Middle East and North Africa: Visions and Realities | AI FOR GOOD WEBINARS
2021-06-21  Ignoring the Mirage of Disposable Clinician for Deployment of AI in Medicine | AI FOR GOOD DISCOVERY
2021-06-16  Radio Link Failure Prediction | AI/ML IN 5G CHALLENGE
2021-06-09  Combinatorial Optimization Challenge: Delivery Route Planning Optimization | AI/ML IN 5G CHALLENGE
2021-06-09  Trustworthy AI: Bayesian deep learning | AI FOR GOOD DISCOVERY
2021-06-02  Addressing the Dark Sides of AI | AI FOR GOOD WEBINARS
2021-06-01  AI Policy, Standard and Metrics for Automated Driving Safety | AI FOR GOOD WEBINARS
2021-05-31  Meet AI for Good African Startups – Live Pitching Session 2 | AI FOR GOOD INNOVATION FACTORY
2021-05-26  Trustworthy AI: XAI and Trust | AI FOR GOOD DISCOVERY
2021-05-25  ML for Joint Sensing and Communication in Future mm Wave IEEE 802.11 WLANs | AI/ML IN 5G CHALLENGE
2021-05-25  AI and Health: Seeing the future: AI-based Risk Assessment Models | AI FOR GOOD DISCOVERY
2021-05-25  Ethical AI - AI for Peace and Information | AI FOR GOOD WEBINARS
2021-05-25  Ethical AI - Accountability and Transparency in AI | AI FOR GOOD WEBINARS
2021-05-25  Ethical AI - Fairness and Non-Discrimination in AI | AI FOR GOOD WEBINARS
2021-05-24  Developing Girl’s Digital and AI Skills for More Inclusive AI for All | AI FOR GOOD WEBINARS
2021-05-24  Inteligencia artificial para prevención de ataques cardíacos: Iker Casillas, ganador Copa del Mundo
2021-05-24  AI for Heart Attack Prevention: Iker Casillas World Cup Winning goalkeeper Testimonial | AI for Good
2021-05-23  Graph Neural Networking Challenge: Creating a Scalable Network Digital Twin | AI/ML IN 5G CHALLENGE
2021-05-20  Smart Cities, Smart Mobility: Exploring AI for Future Communities | AI FOR GOOD ON THE GO!