Lightning talks: Advances in fairness in AI: From research to practice

Video Link: https://www.youtube.com/watch?v=rLq5KXlGzFU



Duration: 20:46


Over the past few years, we’ve seen that artificial intelligence (AI) and machine learning (ML) provide us with new opportunities, but they also raise new challenges. Most notably, these challenges have highlighted the various ways in which AI systems can promote unfairness or reinforce existing societal stereotypes. While we can often spot fairness-related harms in AI systems when we see them, there’s no one-size-fits-all definition of fairness that applies to all AI systems in all contexts. Additionally, there are many reasons why AI systems can behave unfairly. In this session, we explain the diversity of work taking place on fairness in AI systems at Microsoft and in the broader community. We highlight how we’re applying fairness principles in real-world AI systems by measuring and mitigating different kinds of fairness-related harms in vision, speech-to-text, and natural language systems.

Introduction
Speaker: Amit Sharma, Senior Researcher, Microsoft Research India

Fairness in speech-to-text
Speakers:
Michael Amoako, RAIL Program Manager - Quality of Service Fairness Lead, Microsoft
Kristen Laird, Program Manager, Microsoft Cognitive Services Responsible AI

Representational harms in image tagging
Speaker: Solon Barocas, Principal Researcher, Microsoft Research NYC

SAVII: Measuring fairness-related harms in NL services
Speaker: Chad Atalla, Applied Scientist & Tech Lead, MSAI Responsible AI VTeam






