Don’t Trust Your AI System: Model Its Validity Instead | Trustworthy AI | AI FOR GOOD
Machine behaviour is becoming cognitively very complex but rarely human-like. Efforts to fully understand machine learning and other AI systems are falling short, despite the progress of explainable AI (XAI) techniques. In many cases, especially when post-hoc explanations are used, we get a false sense of understanding, and the initial sceptical (and prudent) stance towards an AI system turns into overconfidence, a dangerous delusion of trust.

In this AI for Good Discovery, Prof. Hernández-Orallo argues that, for many AI systems of today and tomorrow, we should not vainly try to understand everything they do, but rather explain and predict when and why they fail. We should model their user-aligned validity rather than their full behaviour. This is precisely what a robust, cognitively inspired AI evaluation can do. Instead of maximising contingent dataset performance and extrapolating a volatile aggregate metric equally to every instance, we can anticipate the validity of the AI system for each specific instance and user.

Prof. Hernández-Orallo illustrates how this can be done in practice: identifying the relevant dimensions of the task at hand, deriving capabilities from the system's characteristic grid, and building well-calibrated assessor models at the instance level. His normative vision is that, in the future, every deployed AI system should only be allowed to operate if it comes with a capability profile or an assessor model that anticipates the user-aligned validity of the system before each instance is run. Only by fine-tuning trust to each operating condition will we truly calibrate our expectations of AI.
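To make the idea of an instance-level assessor model more concrete, the sketch below shows one possible way to build one, assuming a held-out log of the AI system's per-instance successes and failures and a couple of task-difficulty features. The features, data, and thresholds are illustrative assumptions, not taken from the talk.

```python
# Minimal sketch of an instance-level assessor model (illustrative only).
# Assumes a log of past runs: per-instance features describing the task
# (difficulty-related dimensions) and whether the AI system's output was
# valid (acceptable to the user) on that instance.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.calibration import CalibratedClassifierCV
from sklearn.model_selection import train_test_split
from sklearn.metrics import brier_score_loss

rng = np.random.default_rng(0)

# Synthetic stand-in data: two hypothetical task dimensions (e.g. input
# length, nesting depth); the system's success probability degrades with
# difficulty, which is what the assessor has to learn.
X = rng.uniform(0, 1, size=(2000, 2))
p_success = 1 / (1 + np.exp(4 * (X[:, 0] + 0.5 * X[:, 1] - 0.8)))
y = rng.binomial(1, p_success)          # 1 = system was valid on this instance

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Assessor: predicts, before running an instance, the probability that the
# system's output will be valid. Calibration matters more than raw accuracy,
# hence the calibrated wrapper around a simple base classifier.
assessor = CalibratedClassifierCV(LogisticRegression(), method="isotonic", cv=5)
assessor.fit(X_train, y_train)

probs = assessor.predict_proba(X_test)[:, 1]
print("Brier score (lower means better calibrated):",
      round(brier_score_loss(y_test, probs), 3))

# Deployment policy sketch: only run the system when anticipated validity
# is high enough; otherwise defer. The 0.7 threshold is an assumption.
new_instance = np.array([[0.2, 0.3]])
p_valid = assessor.predict_proba(new_instance)[0, 1]
print("Anticipated validity:", round(p_valid, 2),
      "-> run" if p_valid > 0.7 else "-> defer to a human")
```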
Speaker:
José Hernández-Orallo
Professor
Universitat Politècnica de València

Moderator:
Wojciech Samek
Head of AI Department
Fraunhofer Heinrich Hertz Institute

Join the Neural Network!
https://aiforgood.itu.int/neural-network/
The AI for Good networking community platform powered by AI.
Designed to help users build connections with innovators and experts, link innovative ideas with social impact opportunities, and bring the community together to advance the SDGs using AI.

Watch the latest #AIforGood videos!

Explore more #AIforGood content:
• AI for Good Top Hits
• AI for Good Webinars
• AI for Good Keynotes

Stay updated and join our weekly AI for Good newsletter:
http://eepurl.com/gI2kJ5

Discover what's next on our programme!
https://aiforgood.itu.int/programme/

Check out the latest AI for Good news:
https://aiforgood.itu.int/newsroom/

Explore the AI for Good blog:
https://aiforgood.itu.int/ai-for-good-blog/

Connect on our social media:
Website: https://aiforgood.itu.int/
Twitter: https://twitter.com/ITU_AIForGood
LinkedIn Page: https://www.linkedin.com/company/26511907
LinkedIn Group: https://www.linkedin.com/groups/8567748
Instagram: https://www.instagram.com/aiforgood
Facebook: https://www.facebook.com/AIforGood

What is AI for Good?
We have less than 10 years to achieve the UN SDGs, and AI holds great promise to advance many of the Sustainable Development Goals and targets.
More than a Summit, more than a movement, AI for Good is presented as a year-round digital platform where AI innovators and problem owners learn, build and connect to help identify practical AI solutions to advance the United Nations Sustainable Development Goals.
AI for Good is organized by ITU in partnership with 40 UN Sister Agencies and co-convened with Switzerland.

Disclaimer:
The views and opinions expressed are those of the panelists and do not reflect the official policy of the ITU.