Don’t Trust Your AI System: Model Its Validity Instead | Trustworthy AI | AI FOR GOOD

Channel: AI for Good
Subscribers: 19,500
Video Link: https://www.youtube.com/watch?v=x3SJpF93Pqo
Views: 461
Machine behaviour is becoming cognitively very complex but rarely human-like. Efforts to fully understand machine learning and other AI systems are falling short, despite the progress of explainable AI (XAI) techniques. In many cases, especially when using post-hoc explanations, we get a false sense of understanding, and the initial sceptical (and prudent) stance towards an AI system turns into overconfidence, a dangerous delusion of trust.

In this AI for Good Discovery, Prof. Hernández-Orallo argues that for many AI systems of today and tomorrow we should not vainly try to understand what they do, but rather explain and predict when and why they fail. We should model their user-aligned validity rather than their full behaviour. This is precisely what a robust, cognitively-inspired AI evaluation can do. Instead of maximising performance on a contingent dataset and extrapolating that volatile aggregate metric equally to every instance, we can anticipate the validity of the AI system for each specific instance and user.

Prof. Hernández-Orallo illustrates how this can be done in practice: identifying the relevant dimensions of the task at hand, deriving capabilities from the system's characteristic grid, and building well-calibrated assessor models at the instance level. His normative vision is that every deployed AI system in the future should only be allowed to operate if it comes with a capability profile or an assessor model that anticipates the user-aligned system validity before running each instance. Only by fine-tuning trust to each operating condition will we truly calibrate our expectations of AI.

Speaker:
José Hernández-Orallo
Professor
Universitat Politècnica de València

Moderator:
Wojciech Samek
Head of AI Department
Fraunhofer Heinrich Hertz Institute

Join the Neural Network!
https://aiforgood.itu.int/neural-network/
The AI for Good networking community platform powered by AI.
Designed to help users build connections with innovators and experts, link innovative ideas with social impact opportunities, and bring the community together to advance the SDGs using AI.

Watch the latest #AIforGood videos and explore more #AIforGood content:
- AI for Good Top Hits
- AI for Good Webinars
- AI for Good Keynotes

Stay updated and join our weekly AI for Good newsletter:
http://eepurl.com/gI2kJ5

Discover what's next on our programme!
https://aiforgood.itu.int/programme/

Check out the latest AI for Good news:
https://aiforgood.itu.int/newsroom/

Explore the AI for Good blog:
https://aiforgood.itu.int/ai-for-good-blog/

Connect on our social media:
Website: https://aiforgood.itu.int/
Twitter: https://twitter.com/ITU_AIForGood
LinkedIn Page: https://www.linkedin.com/company/26511907
LinkedIn Group: https://www.linkedin.com/groups/8567748
Instagram: https://www.instagram.com/aiforgood
Facebook: https://www.facebook.com/AIforGood

What is AI for Good?
We have less than 10 years to solve the UN SDGs, and AI holds great promise to advance many of the Sustainable Development Goals and targets.
More than a summit, more than a movement, AI for Good is a year-round digital platform where AI innovators and problem owners learn, build and connect to identify practical AI solutions that advance the United Nations Sustainable Development Goals.
AI for Good is organized by ITU in partnership with 40 UN sister agencies and co-convened with Switzerland.

Disclaimer:
The views and opinions expressed are those of the panelists and do not reflect the official policy of the ITU.
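The assessor-model idea from the abstract (predicting, per instance, how likely the system's output is to be valid before running it) can be illustrated with a minimal toy sketch. Everything here is an assumption for illustration, not code from the talk: a hypothetical noisy classifier, a one-dimensional "task dimension" (distance from a decision boundary), and a frequency-based assessor calibrated on held-out samples.

```python
import random

random.seed(0)

def truth(x):
    # Ground-truth label for the toy task: is x at least 5?
    return "big" if x >= 5 else "small"

def ai_system(x):
    # Hypothetical noisy classifier: reliable far from the decision
    # boundary at 5, unreliable near it.
    return "big" if x + random.gauss(0, 1.5) >= 5 else "small"

def fit_assessor(n_samples=5000):
    # Estimate, per distance-from-boundary bucket, the empirical
    # probability that the system's answer is valid: a simple,
    # well-calibrated instance-level assessor.
    counts = {d: [0, 0] for d in range(5)}  # bucket -> [correct, total]
    for _ in range(n_samples):
        x = random.uniform(0, 10)
        d = min(int(abs(x - 5)), 4)
        counts[d][0] += ai_system(x) == truth(x)
        counts[d][1] += 1
    return {d: c / t for d, (c, t) in counts.items()}

assessor = fit_assessor()

def anticipated_validity(x):
    # Anticipate validity for this instance BEFORE running the system.
    return assessor[min(int(abs(x - 5)), 4)]

# Trust is tuned to the operating condition: instances far from the
# boundary get a much higher anticipated validity than borderline ones.
print(round(anticipated_validity(9.5), 2), round(anticipated_validity(5.1), 2))
```

The point of the sketch is the normative workflow: a gatekeeper could consult `anticipated_validity` for each incoming instance and refuse to deploy the system on instances whose anticipated validity falls below a threshold, rather than trusting a single aggregate benchmark score.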




Other Videos By AI for Good


2022-11-28Machine learning for scientific discovery with examples in fluid mechanics | AI for Good Webinar
2022-11-27How to make AI more fair and unbiased | AI FOR GOOD DISCOVERY
2022-11-24AI for Good, or just better, cybersecurity | AI for Good Webinar
2022-11-23In AI we trust | AI for Good Webinar
2022-11-22Securing digital inclusion through AI | AI for Good Webinars
2022-11-21Towards human-centered social robots for good | AI for Good | Robotics for Good
2022-11-20Machine learning supporting ecology | AI for Good GeoAI Challenge
2022-11-17Project Resilience: Next steps for the Minimum Viable Product (MVP) | AI for Good Webinar
2022-11-16Multi Modal Beam Prediction Challenge 2022
2022-11-14Systematic deviations in data and model outputs in healthcare | AI for Good Discovery
2022-11-13Don’t Trust Your AI System: Model Its Validity Instead | Trustworthy AI | AI FOR GOOD
2022-11-09Meet the Robotics for Good start-ups advancing sustainable development | Robotics for Good
2022-11-08Responsible AI in practice | AI for Good Keynote
2022-11-07Bringing machine learning models to the bedside | AI FOR GOOD DISCOVERY
2022-11-06Industrial AI for Worry-Free Manufacturing | UNIDO | Discovery
2022-11-02How can AI help protect and sustain global forest ecosystems? | AI for Good Webinars
2022-10-26Accelerating the solar transition with autonomous robots | AI FOR GOOD WEBINARS
2022-10-23Towards human-understandable explanations with XAI 2.0 | Trustworthy AI | AI for Good
2022-10-16How AI will shape human-robot collaboration | UNIDO | AI for Good Discovery
2022-10-12Unleashing autonomous drones for disaster risk reduction | AI FOR GOOD WEBINARS
2022-10-11What makes datafication (legally) wrongful? | Salomé Viljoen, Michigan Law School | AI FOR GOOD