Tutorial: Best practices for prioritizing fairness in AI systems

Published: 2022-02-08
Video: https://www.youtube.com/watch?v=ZUCcB-uV4lw



Duration: 30:04

Speakers:
Amit Deshpande, Senior Researcher, Microsoft Research India
Amit Sharma, Senior Researcher, Microsoft Research India

As artificial intelligence (AI) continues to transform people’s lives, new opportunities bring new challenges. Most notably, when we assess the societal impact of AI systems, it’s important to be aware of their benefits, which we should strive to amplify, and their harms, which we should work to reduce. Developing and deploying AI systems in a responsible manner means prioritizing fairness. This is especially important for AI systems that will be used in high-stakes domains like education, employment, finance, and healthcare. This tutorial will walk you through a variety of fairness-related harms caused by AI systems and their most common causes. We will then dive into the precautions we need to take to mitigate fairness-related harms when developing and deploying AI systems. Together, we’ll explore examples of fairness-related harms and their causes; fairness dashboards for quantitatively assessing allocation harms and quality-of-service harms; and algorithms for mitigating fairness-related harms. We’ll discuss when these algorithms should and shouldn’t be used, along with their advantages and disadvantages.
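To make the idea of quantitatively assessing quality-of-service harms concrete, here is a minimal sketch (not the tutorial's own code) of the core computation a fairness dashboard performs: disaggregating a performance metric by sensitive group and reporting the gap between the best- and worst-served groups. All data and names below are made up for illustration; the Fairlearn library linked in the resources provides a production version of this pattern via `MetricFrame`.

```python
# Sketch of group-disaggregated metric reporting, as used to surface
# quality-of-service harms. Labels, predictions, and group memberships
# here are hypothetical.

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def accuracy_by_group(y_true, y_pred, groups):
    """Compute accuracy separately for each sensitive group."""
    by_group = {}
    for g in set(groups):
        idx = [i for i, gi in enumerate(groups) if gi == g]
        by_group[g] = accuracy([y_true[i] for i in idx],
                               [y_pred[i] for i in idx])
    return by_group

# Hypothetical labels, model predictions, and group memberships
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

scores = accuracy_by_group(y_true, y_pred, groups)
# The accuracy disparity: a large gap indicates the model serves one
# group noticeably worse than another.
gap = max(scores.values()) - min(scores.values())
```

A dashboard would display `scores` per group alongside `gap`; the same disaggregation applies to selection rate (for allocation harms), false-negative rate, or any other metric of interest.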

Resources:
https://www.microsoft.com/en-us/research/group/reliable-machine-learning/

https://fairlearn.org/

Learn more about the 2021 Microsoft Research Summit: https://Aka.ms/researchsummit




Other Videos By Microsoft Research


2022-02-08  Opening remarks: The Future of Privacy and Security
2022-02-08  Tutorial: Create human-centered AI with the Human-AI eXperience (HAX) Toolkit
2022-02-08  Panel: Maximizing benefits and minimizing harms with language technologies
2022-02-08  Lightning talks: Advances in fairness in AI: New directions
2022-02-08  Closing remarks: Tech for resilient communities
2022-02-08  Lightning talks: Advances in fairness in AI: From research to practice
2022-02-08  Panel: Content moderation beyond the ban: Reducing toxic, misleading, and low-quality content
2022-02-08  Technology demo: Using technology to combat human trafficking
2022-02-08  Technology demo: Project Eclipse: Hyperlocal air quality monitoring for cities
2022-02-08  Research talk: Bucket of me: Using few-shot learning to realize teachable AI systems
2022-02-08  Tutorial: Best practices for prioritizing fairness in AI systems
2022-02-08  Demo: RAI Toolbox: An open-source framework for building responsible AI
2022-02-08  Opening remarks: Responsible AI
2022-02-08  Closing remarks: Deep Learning and Large Scale AI
2022-02-08  Roundtable discussion: Beyond language models: Knowledge, multiple modalities, and more
2022-02-08  Research talk: Closing the loop in natural language interfaces to relational databases
2022-02-08  Just Tech: Bringing CS, the social sciences, and communities together for societal resilience
2022-02-08  Research talk: WebQA: Multihop and multimodal
2022-02-08  Opening remarks: Tech for resilient communities
2022-02-08  Research talk: Towards Self-Learning End-to-end Dialog Systems
2022-02-08  Research talk: Focal Attention: Towards local-global interactions in vision transformers



Tags:
fair AI systems
reliable AI systems
responsible AI
social inequities in AI
societal implications of AI
societal impact
machine learning
natural language processing
microsoft research summit