Interpretable Neural Networks for Computer Vision: Clinical Decisions | AI FOR GOOD DISCOVERY

Video Link: https://www.youtube.com/watch?v=gH-F7plbm4M


Let us consider a difficult computer vision challenge. Would you want an algorithm to determine whether you should get a biopsy, based on an x-ray? That’s usually a decision made by a radiologist, based on years of training. We know that algorithms haven’t worked perfectly for a multitude of other computer vision applications, and biopsy decisions are harder than just about any other application of computer vision that we typically consider. The interesting question is whether it is possible that an algorithm could be a true partner to a physician, rather than making the decision on its own. To do this, at the very least, we would need an interpretable neural network that is as accurate as its black box counterparts. In this talk, I will discuss two approaches to interpretable neural networks: (1) case-based reasoning, where parts of images are compared to other parts of prototypical images for each class, and (2) neural disentanglement, using a technique called concept whitening. The case-based reasoning technique is strictly better than saliency maps, and the concept whitening technique provides a strict advantage over the posthoc use of concept vectors.
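
For readers who want a concrete picture of the case-based reasoning approach described above, here is a minimal PyTorch sketch of a ProtoPNet-style prototype layer: patches of a CNN feature map are compared to learned class prototypes, and the strongest patch-to-prototype similarities feed a linear classifier. The layer sizes, the log-based similarity, and the final linear scoring are illustrative assumptions, not the exact architecture presented in the talk.

```python
# Minimal sketch (assumed ProtoPNet-style layer; shapes and hyperparameters are
# illustrative, not the exact model from the talk).
import torch
import torch.nn as nn
import torch.nn.functional as F

class PrototypeLayer(nn.Module):
    def __init__(self, num_classes=2, prototypes_per_class=5, channels=128, eps=1e-4):
        super().__init__()
        self.num_prototypes = num_classes * prototypes_per_class
        # One bank of learnable 1x1 prototype vectors, several per class.
        self.prototypes = nn.Parameter(torch.randn(self.num_prototypes, channels, 1, 1))
        self.eps = eps

    def forward(self, feature_map):
        # feature_map: (batch, channels, H, W) from any convolutional backbone.
        # Squared L2 distance between every spatial patch and every prototype,
        # using ||x - p||^2 = ||x||^2 - 2 x.p + ||p||^2.
        x2 = (feature_map ** 2).sum(dim=1, keepdim=True)                  # (B, 1, H, W)
        p2 = (self.prototypes ** 2).sum(dim=(1, 2, 3)).view(1, -1, 1, 1)  # (1, P, 1, 1)
        xp = F.conv2d(feature_map, self.prototypes)                       # (B, P, H, W)
        distances = F.relu(x2 - 2 * xp + p2)
        # Turn distances into similarities and keep the best-matching patch per
        # prototype: "this part of the image looks like that prototypical part".
        similarities = torch.log((distances + 1) / (distances + self.eps))
        best = F.max_pool2d(similarities, kernel_size=similarities.shape[-2:])
        return best.flatten(1)                                            # (B, P)

# Class scores are a linear combination of prototype similarities, so each
# prediction decomposes into "which prototypical parts this image resembled".
features = torch.randn(1, 128, 7, 7)          # stand-in for CNN backbone features
proto_layer = PrototypeLayer()
logits = nn.Linear(proto_layer.num_prototypes, 2)(proto_layer(features))
```

Because every class score is a sum of prototype similarities, a prediction can be traced back to "this part of the image looks like that part of a prototypical case", which is the sense in which the model is interpretable by design rather than explained post hoc.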

In Partnership with: @fraunhofer HHI

🎙 Speaker:
Cynthia Rudin, Professor of Computer Science, Electrical and Computer Engineering, and Statistical Science, Duke University

🎙 Moderator:
Wojciech Samek, Head of Department of Artificial Intelligence, Fraunhofer Heinrich Hertz Institute


🔴 Watch the latest #AIforGood videos!


Explore more #AIforGood content:
1️⃣ AI for Good Top Hits
   • Top Hits

2️⃣ AI for Good Webinars
   • AI for Good Webinars

3️⃣ AI for Good Keynotes
   • AI for Good Keynotes

📩 Stay updated and join our weekly AI for Good newsletter: http://eepurl.com/gI2kJ5

📅 Discover what's next on our programme: https://aiforgood.itu.int/programme/

🗞 Check out the latest AI for Good news: https://aiforgood.itu.int/newsroom/

📱 Explore the AI for Good blog: https://aiforgood.itu.int/ai-for-good-blog/

🌎 Connect on our social media:
Website: https://aiforgood.itu.int/
Twitter: https://twitter.com/ITU_AIForGood
LinkedIn Page: https://www.linkedin.com/company/26511907
LinkedIn Group: https://www.linkedin.com/groups/8567748
Instagram: https://www.instagram.com/aiforgood
Facebook: https://www.facebook.com/AIforGood

What is AI for Good?
The AI for Good series is the leading action-oriented, global & inclusive United Nations platform on AI. The Summit is organized all year, always online, in Geneva by the ITU with XPRIZE Foundation in partnership with over 35 sister United Nations agencies, Switzerland and ACM. The goal is to identify practical applications of AI and scale those solutions for global impact.

Disclaimer:
The views and opinions expressed are those of the panelists and do not reflect the official policy of the ITU.

#AIforGoodDiscovery #TrustworthyAI




Other Videos By AI for Good


2022-01-18  AI-Enabled Public Health from a Marginalized Perspective | AI FOR GOOD DISCOVERY
2022-01-12  Refusing AI Contact: Autism, Algorithms and the Dangers of ‘Technopsyence’ | AI FOR GOOD DISCOVERY
2022-01-11  How can AI improve Weather and Climate Prediction? | AI FOR GOOD DISCOVERY
2022-01-04  Purely Data-Driven Approaches to Weather Prediction: Promise and Perils | Suman Ravuri | DISCOVERY
2021-12-20  Chaesub Lee, Director, ITU TSB | AI for Good Keynote | Global Impact Week 2021
2021-12-16  Bringing ML to clinical use safely, ethically and cost-effectively | AI FOR GOOD DISCOVERY
2021-12-16  2021 #AIforGood Highlights
2021-12-15  Explainability and Robustness for trustworthy AI | AI FOR GOOD DISCOVERY
2021-12-14  Émission Spéciale | NAIA.R | Frederic Werner (ITU) | FORUM NÉO-AQUITAIN SUR L'IA ET LA ROBOTIQUE
2021-12-13  ITU AI/ML in 5G Grand Challenge Finale | AI/ML IN 5G CHALLENGE
2021-12-08  Interpretable Neural Networks for Computer Vision: Clinical Decisions | AI FOR GOOD DISCOVERY
2021-12-07  Living with AI: Past, Present and Future | AI FOR GOOD WEBINARS
2021-12-05  Fairness of machine learning classifiers in medical image analysis | AI FOR GOOD DISCOVERY
2021-12-05  Standardization Ensuring Trustworthy Digital Society Enabled by AI Tech Pt. 2 | AI FOR GOOD WEBINARS
2021-12-01  Algorithmic recourse: from theory to practice | AI FOR GOOD DISCOVERY
2021-11-30  AI in Weather and Climate: doing better, doing different | AI FOR GOOD DISCOVERY
2021-11-30  TestAIng.com: Making AI implementation trustworthy | AI FOR GOOD INNOVATION FACTORY
2021-11-30  Talov: Inclusive & accessible AI for visually & hearing impaired | AI FOR GOOD INNOVATION FACTORY
2021-11-30  Doctor On Call: Accessible & affordable AI-augmented healthcare | AI FOR GOOD INNOVATION FACTORY
2021-11-30  Wizard.AI: Empowering companies to launch trustworthy AI | AI FOR GOOD INNOVATION FACTORY
2021-11-30  Kettle: Deep learning to reshape climate change reinsurance | AI FOR GOOD INNOVATION FACTORY