Platform-supported Auditing Of Social Media Algorithms For Public Interest

Published on 2022-07-01 ● Video Link: https://www.youtube.com/watch?v=3gLzaOnCGM4



Duration: 32:15
335 views


Aleksandra Korolova (University of Southern California) [REMOTE]
https://simons.berkeley.edu/talks/panel-interpretability-physical-sciences
Interpretable Machine Learning in Natural and Social Sciences

Relevance estimators are the algorithms social media platforms use to determine what content is shown to users and in what order. These algorithms aim to personalize the platform experience, increasing engagement and, therefore, platform revenue. However, many have raised concerns that relevance estimation and personalization algorithms are opaque and can produce outcomes that are harmful to individuals or society. Legislation has been proposed in both the U.S. and the E.U. that would mandate auditing of social media algorithms by external researchers. But auditing at scale risks disclosing users' private data and platforms' proprietary algorithms, and thus far there has been no concrete technical proposal that can provide such auditing. We propose a new method for platform-supported auditing that can meet the goals of the proposed legislation.
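To make the abstract's notion of a "relevance estimator" concrete, here is a minimal Python sketch of a personalized ranking step: score each candidate post for a user, then show posts in descending score order. This is only an illustration; it is not any platform's actual system and not the auditing method proposed in the talk, and every signal, field name, and weight below is hypothetical.

```python
# Toy relevance estimator (hypothetical): rank candidate posts for one user.
from dataclasses import dataclass


@dataclass
class Post:
    post_id: str
    predicted_click_prob: float  # hypothetical engagement signal for this user
    author_followed: bool        # whether the user follows the post's author
    recency_hours: float         # hours since the post was created


def relevance_score(post: Post) -> float:
    """Combine hypothetical personalization signals into a single score."""
    score = 2.0 * post.predicted_click_prob
    score += 0.5 if post.author_followed else 0.0
    score -= 0.01 * post.recency_hours  # older posts rank slightly lower
    return score


def rank_feed(candidates: list[Post]) -> list[Post]:
    """Return candidates in the order they would be displayed to the user."""
    return sorted(candidates, key=relevance_score, reverse=True)


if __name__ == "__main__":
    feed = rank_feed([
        Post("a", predicted_click_prob=0.30, author_followed=True, recency_hours=2.0),
        Post("b", predicted_click_prob=0.55, author_followed=False, recency_hours=1.0),
        Post("c", predicted_click_prob=0.10, author_followed=True, recency_hours=48.0),
    ])
    print([p.post_id for p in feed])
```

Because the score depends on per-user signals and proprietary weights, the resulting ordering is exactly the kind of opaque, personalized output that the proposed platform-supported auditing aims to examine without exposing user data or the platform's model.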




Other Videos By Simons Institute for the Theory of Computing


2022-07-11 Using Large-Scale Clinico-Genomics Data for in silico Clinical Trials and Precision Oncology
2022-07-11 A Statistical, Reference-Free Algorithm Subsumes Myriad Problems in Genome Science
2022-07-11 Machine Learning for Single-Cell 3D Epigenomics
2022-07-11 Understanding Molecular Complexity for Precision Medicine
2022-07-11 Genomics of Cancer
2022-07-11 Formatting Biological Big Data to Enable (Personalized) Systems Pharmacology
2022-07-11 Landscapes of Human cis-regulatory Elements and Transcription Factor Binding Sites...
2022-07-11 Spatial Transcriptomics Identifies Neighbourhoods and Molecular Markers of Alveolar Damage...
2022-07-11 BANKSY: A Spatial Omics Algorithm that Unifies Cell Type Clustering and Tissue Domain Segmentation
2022-07-01 Panel on Interpretability in the Law
2022-07-01 Platform-supported Auditing Of Social Media Algorithms For Public Interest
2022-07-01 Legal Barriers To Interpretable Machine Learning
2022-06-30 Interpretability and Algorithmic Fairness
2022-06-30 Panel on Interpretability in the Biological Sciences
2022-06-30 Machine Learning, Deep Networks and Interpretability in Systems, Cognitive and...
2022-06-30 Interpreting Deep Learning Models Of Functional Genomics Data To Decode Regulatory Sequence...
2022-06-30 Panel on Interpretability in the Physical Sciences
2022-06-30 Interpreting Machine Learning From the Perspective of Nonequilibrium Systems
2022-06-29 Interpretability In Atomic-Scale Machine Learning
2022-06-29 Panel on Causality
2022-06-29 Explanation: A(N Abridged) Survey



Tags:
Simons Institute
theoretical computer science
UC Berkeley
Computer Science
Theory of Computation
Theory of Computing
Interpretable Machine Learning in Natural and Social Sciences
Aleksandra Korolova