Panel: Maximizing benefits and minimizing harms with language technologies

Subscribers: 344,000
Video Link: https://www.youtube.com/watch?v=x58gXO0vMTI
Duration: 44:39
Views: 120


Speakers:
Hal Daumé III, Sr Principal Researcher, Microsoft Research NYC
Steven Bird, Professor, Charles Darwin University
Su Lin Blodgett, Postdoctoral Researcher, Microsoft Research Montréal
Margaret Mitchell, CEO & Research Scientist, Ethical AI LLC
Hanna Wallach, Partner Research Manager, Microsoft Research NYC

Language is one of the main ways in which people understand and construct the social world. Current language technologies can contribute to this process positively, by challenging existing power dynamics, or negatively, by reproducing or exacerbating existing social inequities. In this panel, we will discuss existing concerns and opportunities related to the fairness, accountability, transparency, and ethics (FATE) of language technologies and the data they ingest or generate. It's important to address these matters because language technologies might surface, replicate, exacerbate, or even cause a range of computational harms, from exposing offensive speech or reinforcing stereotypes to more subtle issues, such as nudging users toward undesirable patterns of behavior or triggering memories of traumatic events. In this session, we'll cover such critical questions as: How can we reliably measure fairness-related and other computational harms? Whose data is included in training a model, and who is excluded as a result? How do we better foresee potential computational harms from language technologies?

Learn more about the 2021 Microsoft Research Summit: https://Aka.ms/researchsummit




Other Videos By Microsoft Research


2022-02-08  Demo: Generating formally proven low-level parsers with EverParse
2022-02-08  Demo: EverParse: Automatic generation of formally verified secure parsers for cloud integrity
2022-02-08  Research talk: DARPA SafeDocs: an approach to secure parsing and information interchange formats
2022-02-08  Research talk: Privacy in machine learning research at Microsoft
2022-02-08  Research talk: Towards bridging between legal and technical approaches to data protection
2022-02-08  Research talk: Building towards a responsible data economy
2022-02-08  Keynote: Unlocking exabytes of training data through privacy preserving machine learning
2022-02-08  Closing remarks: Responsible AI
2022-02-08  Opening remarks: The Future of Privacy and Security
2022-02-08  Tutorial: Create human-centered AI with the Human-AI eXperience (HAX) Toolkit
2022-02-08  Panel: Maximizing benefits and minimizing harms with language technologies
2022-02-08  Lightning talks: Advances in fairness in AI: New directions
2022-02-08  Closing remarks: Tech for resilient communities
2022-02-08  Lightning talks: Advances in fairness in AI: From research to practice
2022-02-08  Panel: Content moderation beyond the ban: Reducing toxic, misleading, and low-quality content
2022-02-08  Technology demo: Using technology to combat human trafficking
2022-02-08  Technology demo: Project Eclipse: Hyperlocal air quality monitoring for cities
2022-02-08  Research talk: Bucket of me: Using few-shot learning to realize teachable AI systems
2022-02-08  Tutorial: Best practices for prioritizing fairness in AI systems
2022-02-08  Demo: RAI Toolbox: An open-source framework for building responsible AI
2022-02-08  Opening remarks: Responsible AI



Tags:
fair AI systems
reliable AI systems
responsible AI
social inequities in AI
societal implications of AI
societal impact
machine learning
natural language processing
microsoft research summit