Panel: Content moderation beyond the ban: Reducing toxic, misleading, and low-quality content

Subscribers: 351,000
Published on: 2022-02-08
Video Link: https://www.youtube.com/watch?v=wOEfp69Uko4
Duration: 40:59
Views: 424

Speakers:
Tarleton Gillespie, Senior Principal Researcher, Microsoft Research New England
Zoe Darmé, Senior Manager, Google
Ryan Calo, Lane Powell and D. Wayne Gittinger Professor, University of Washington School of Law
Sarita Schoenebeck, Associate Professor, University of Michigan
Charlotte Willner, Executive Director, Trust & Safety Professional Association

Public debate about content moderation focuses almost exclusively on removal: what is deleted and who is suspended. But what about content identified as “borderline,” which almost, but not quite, violates the guidelines? Faced with an expanding sense of responsibility, many platform companies have started identifying this type of content, as well as content that may be toxic, misleading, or harmful in the aggregate. Rather than remove it, they can minimize its effects by reducing its visibility in recommendations, limiting its discoverability in search, adding labels or warnings, or providing fact-checks or additional context. Tarleton Gillespie (Senior Principal Researcher at Microsoft) and Zoe Darmé (Senior Manager of Search at Google) host a panel that includes Sarita Schoenebeck (Associate Professor, School of Information, University of Michigan), Ryan Calo (Professor of Law, University of Washington), and Charlotte Willner (Founding Executive Director of the Trust & Safety Professional Association).

Join us as they discuss these techniques and the questions they raise, such as: How is such content being identified? Are these approaches effective? How do users respond? How can platforms be transparent and accountable for such interventions? What are the ethical and practical implications of these approaches?

Learn more about the 2021 Microsoft Research Summit: https://Aka.ms/researchsummit




Other Videos By Microsoft Research

2022-02-08 | Research talk: Towards bridging between legal and technical approaches to data protection
2022-02-08 | Research talk: Building towards a responsible data economy
2022-02-08 | Keynote: Unlocking exabytes of training data through privacy preserving machine learning
2022-02-08 | Closing remarks: Responsible AI
2022-02-08 | Opening remarks: The Future of Privacy and Security
2022-02-08 | Tutorial: Create human-centered AI with the Human-AI eXperience (HAX) Toolkit
2022-02-08 | Panel: Maximizing benefits and minimizing harms with language technologies
2022-02-08 | Lightning talks: Advances in fairness in AI: New directions
2022-02-08 | Closing remarks: Tech for resilient communities
2022-02-08 | Lightning talks: Advances in fairness in AI: From research to practice
2022-02-08 | Panel: Content moderation beyond the ban: Reducing toxic, misleading, and low-quality content
2022-02-08 | Technology demo: Using technology to combat human trafficking
2022-02-08 | Technology demo: Project Eclipse: Hyperlocal air quality monitoring for cities
2022-02-08 | Research talk: Bucket of me: Using few-shot learning to realize teachable AI systems
2022-02-08 | Tutorial: Best practices for prioritizing fairness in AI systems
2022-02-08 | Demo: RAI Toolbox: An open-source framework for building responsible AI
2022-02-08 | Opening remarks: Responsible AI
2022-02-08 | Closing remarks: Deep Learning and Large Scale AI
2022-02-08 | Roundtable discussion: Beyond language models: Knowledge, multiple modalities, and more
2022-02-08 | Research talk: Closing the loop in natural language interfaces to relational databases
2022-02-08 | Just Tech: Bringing CS, the social sciences, and communities together for societal resilience



Tags:
fair AI systems
reliable AI systems
responsible AI
social inequities in AI
societal implications of AI
societal impact
machine learning
natural language processing
microsoft research summit