Explainability and Robustness for trustworthy AI | AI FOR GOOD DISCOVERY

Video Link: https://www.youtube.com/watch?v=NCajz8h13uU
Today, thanks to advances in statistical machine learning, AI is once again enormously popular. However, two features need further improvement: a) robustness and b) explainability/interpretability/re-traceability, i.e. the ability to explain why a certain result was achieved. Disturbances in the input data can have a dramatic impact on the output and lead to completely different results.
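This sensitivity to input disturbances can be illustrated with a minimal sketch (the weights and inputs below are hypothetical, chosen only to place the example near a decision boundary): a tiny perturbation of one feature flips the predicted class of a simple logistic classifier.

```python
import math

# Hypothetical logistic classifier with fixed weights (illustration only).
W = [2.0, -3.0]
B = 0.1

def predict(x):
    """Return (class, probability) for a 2-feature input."""
    z = sum(w * xi for w, xi in zip(W, x)) + B
    p = 1.0 / (1.0 + math.exp(-z))
    return (1 if p >= 0.5 else 0), p

x = [0.30, 0.23]            # sits just on the positive side of the boundary
x_disturbed = [0.30, 0.27]  # a 0.04 disturbance in a single feature

print(predict(x)[0], predict(x_disturbed)[0])  # 1 0 -- the class flips
```

Inputs near a decision boundary are exactly where poor data quality matters most: a measurement error smaller than the noise floor of many real sensors is enough to change the result.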

This is relevant in all critical areas where we suffer from poor data quality, i.e. where we do not have independent and identically distributed (i.i.d.) data. Therefore, the use of AI in real-world areas that impact human life (agriculture, climate, forestry, health, …) has led to an increased demand for trustworthy AI. In sensitive areas where re-traceability, transparency, and interpretability are required, explainable AI (XAI) is now even mandatory due to legal requirements.

One approach to making AI more robust is to combine statistical learning with knowledge representations. For certain tasks, it may be beneficial to include a human in the loop. A human expert can sometimes – of course not always – bring experience and conceptual understanding to the AI pipeline. Such approaches are not only a solution from a legal perspective, but in many application areas, the “why” is often more important than a pure classification result. Consequently, both explainability and robustness can promote reliability and trust and ensure that humans remain in control, thus complementing human intelligence with artificial intelligence.
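One simple way to surface a "why" alongside a classification — sketched here for a linear model with invented weights and feature names, not a method from the talk — is to decompose the model score into per-feature contributions and rank them. Established XAI methods such as LRP or SHAP generalize this idea to non-linear models.

```python
# Hypothetical linear model (weights and feature names invented for
# illustration only).
W = {"rainfall": 1.8, "soil_ph": -0.9, "temperature": 0.4}
B = -0.2

def explain(x):
    """Return the model score and per-feature contributions, ranked by magnitude."""
    contributions = {f: W[f] * x[f] for f in W}
    score = sum(contributions.values()) + B
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return score, ranked

score, ranked = explain({"rainfall": 0.7, "soil_ph": 0.5, "temperature": 0.9})
for feature, c in ranked:
    print(f"{feature}: {c:+.2f}")   # largest contributor printed first
```

A ranked attribution like this gives a human expert something to check against domain knowledge, which is the kind of human-in-the-loop oversight described above.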

Speaker:
Andreas Holzinger, Head of Human-Centered AI Lab, Institute for Medical Informatics/Statistics
Medizinische Universität Graz

Moderator:
Wojciech Samek, Head of Department of Artificial Intelligence, Fraunhofer Heinrich Hertz Institute

Watch the latest #AIforGood videos!


Explore more #AIforGood content:
AI for Good Top Hits
AI for Good Webinars
AI for Good Keynotes

Stay updated and join our weekly AI for Good newsletter:
http://eepurl.com/gI2kJ5

Discover what's next on our programme!
https://aiforgood.itu.int/programme/

Check out the latest AI for Good news:
https://aiforgood.itu.int/newsroom/

Explore the AI for Good blog:
https://aiforgood.itu.int/ai-for-good-blog/

Connect on our social media:
Website: https://aiforgood.itu.int/
Twitter: https://twitter.com/ITU_AIForGood
LinkedIn Page: https://www.linkedin.com/company/26511907
LinkedIn Group: https://www.linkedin.com/groups/8567748
Instagram: https://www.instagram.com/aiforgood
Facebook: https://www.facebook.com/AIforGood

What is AI for Good?
The AI for Good series is the leading action-oriented, global & inclusive United Nations platform on AI. The Summit is organized all year, always online, in Geneva by the ITU with XPRIZE Foundation in partnership with over 35 sister United Nations agencies, Switzerland and ACM. The goal is to identify practical applications of AI and scale those solutions for global impact.

Disclaimer:
The views and opinions expressed are those of the panelists and do not reflect the official policy of the ITU.

#AIforGoodDiscovery #TrustworthyAI



