Dissecting Algorithmic Bias | Ziad Obermeyer | AI FOR GOOD DISCOVERY
Algorithms can reproduce and even scale up racial biases. A major mechanism by which bias enters algorithms is label choice: the specific target variable an algorithm is trained to predict. In this talk, I will show that a widely used family of algorithms in health care predicts health care costs as a proxy for health needs. But because of unequal access to care, Black patients generate lower costs than White patients with the same needs. So when the algorithm is trained to predict cost, it de-prioritizes Black patients relative to their needs. Crucially, label choice bias is fixable: retraining algorithms to predict less biased proxies can turn algorithms into a force for good, targeting resources to those who need them and reducing disparities rather than perpetuating them.
Playbook: https://www.chicagobooth.edu/research/center-for-applied-artificial-intelligence/research/algorithmic-bias
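The mechanism described in the abstract can be illustrated with a minimal simulation sketch. This is a hypothetical toy model, not the study's actual data or method: the population size, the need distribution, and the access-gap factor are all invented for illustration. It shows how ranking patients by observed cost (a proxy depressed by unequal access) under-selects Black patients, while ranking by the underlying need does not.

```python
import random

random.seed(0)

# Simulate a patient population. Each patient has a true health need;
# because of a hypothetical access-to-care gap, Black patients generate
# lower observed costs than White patients with the same need.
patients = []
for _ in range(10_000):
    race = random.choice(["Black", "White"])
    need = random.uniform(0, 10)                 # true health need (the thing we care about)
    access = 0.6 if race == "Black" else 1.0     # invented access gap for illustration
    cost = need * access + random.gauss(0, 0.5)  # observed cost: a biased proxy for need
    patients.append({"race": race, "need": need, "cost": cost})

def black_share_of_top(label, k=1000):
    """Share of Black patients among the top-k patients ranked by `label`."""
    top = sorted(patients, key=lambda p: p[label], reverse=True)[:k]
    return sum(p["race"] == "Black" for p in top) / k

# Ranking on the biased label (cost) de-prioritizes Black patients;
# ranking on the less biased label (need) selects them at their base rate.
print(f"Black share of top 1000, ranked by cost: {black_share_of_top('cost'):.2f}")
print(f"Black share of top 1000, ranked by need: {black_share_of_top('need'):.2f}")
```

In this toy setup the population is roughly half Black, so a fair ranking should select Black patients at about that rate; the cost-ranked list falls far short of it, which is the label choice bias the talk describes.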
Discover what's next on our programme!
https://aiforgood.itu.int/programme/
Social Media:
Website: https://aiforgood.itu.int/
Twitter: https://twitter.com/ITU_AIForGood
LinkedIn Page: https://www.linkedin.com/company/26511907
LinkedIn Group: https://www.linkedin.com/groups/8567748
Instagram: https://www.instagram.com/aiforgood
Facebook: https://www.facebook.com/AIforGood
What is AI for Good?
The AI for Good series is the leading action-oriented, global, and inclusive United Nations platform on AI. The Summit is organized all year round, always online, in Geneva, by the ITU and the XPRIZE Foundation in partnership with over 35 sister United Nations agencies, Switzerland, and ACM. The goal is to identify practical applications of AI and scale those solutions for global impact.
Disclaimer:
The views and opinions expressed are those of the panelists and do not reflect the official policy of the ITU.
#AIforHealth #AIforGoodDiscovery