Research talk: Causal ML and fairness

Video link: https://www.youtube.com/watch?v=TQGvTdI1G7Y





Speaker: Allison Koenecke, Postdoc, Microsoft Research New England

Observing heterogeneous treatment effects across different demographic groups is an important mechanism for evaluating fairness. However, relatively little data is available for certain demographics, in which case researchers may combine multiple data sources to increase statistical power. The stakes are especially high in healthcare, where it is imperative to accurately measure the effectiveness of treatments for diseases that may disproportionately impact underrepresented patient subgroups. Join researcher Allison Koenecke, from the Machine Learning & Statistics Group at Microsoft Research New England, to discuss federated causal inference. Because legal and privacy considerations may restrict individual-level information sharing across data sets, she introduces federated methods for treatment effect estimation that use only summary-level statistics from each data set. These methods come with asymptotic guarantees, providing variance estimates and doubly robust treatment effect estimates under modeling assumptions on heterogeneous data sets.
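To make the summary-statistics idea concrete, here is a minimal sketch (not the talk's actual method) of federated treatment effect estimation: each site computes a local inverse-propensity-weighted ATE estimate and its variance, and only those two numbers cross site boundaries, where they are pooled by inverse-variance weighting. The helper names `site_summary` and `federated_ate` are hypothetical, and the sketch assumes randomized treatment within each site rather than the doubly robust estimator discussed in the talk.

```python
import numpy as np

def site_summary(y, t, p=None):
    """Summary-level statistics for one site (hypothetical helper):
    an inverse-propensity-weighted ATE estimate and its variance.
    y: outcomes, t: binary treatment indicator, p: propensity scores
    (defaults to the site's mean treatment rate, i.e. assumes randomization)."""
    y, t = np.asarray(y, float), np.asarray(t, float)
    if p is None:
        p = np.full_like(y, t.mean())  # assumption: randomized within site
    # Per-unit IPW scores whose mean estimates E[Y(1)] - E[Y(0)]
    scores = t * y / p - (1 - t) * y / (1 - p)
    n = len(y)
    return scores.mean(), scores.var(ddof=1) / n  # (estimate, variance)

def federated_ate(summaries):
    """Pool (estimate, variance) pairs across sites by inverse-variance
    weighting -- only summary statistics are shared, never unit-level data."""
    est = np.array([e for e, _ in summaries])
    var = np.array([v for _, v in summaries])
    weights = (1.0 / var) / (1.0 / var).sum()
    return float(weights @ est), float(1.0 / (1.0 / var).sum())

# Example: two sites of different sizes, true treatment effect = 2.0
rng = np.random.default_rng(0)
summaries = []
for n in (500, 2000):
    t = rng.integers(0, 2, n)
    y = 2.0 * t + rng.normal(0.0, 1.0, n)
    summaries.append(site_summary(y, t))
pooled_ate, pooled_var = federated_ate(summaries)
```

Inverse-variance weighting automatically gives larger sites (with smaller estimator variance) more influence on the pooled estimate, which is one simple way heterogeneous sample sizes across demographic subgroups can be handled.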

Learn more about the 2021 Microsoft Research Summit: https://Aka.ms/researchsummit




Other Videos By Microsoft Research


2022-02-08  Tutorial: Translating real-world data into evidence
2022-02-08  Future of technology in combatting disease and disparities in treatment: A cardiovascular case study
2022-02-08  Research talks: Showcasing health equity, access, and resilience collaborations
2022-02-08  Opening remarks: Health & Life Sciences - Discovery
2022-02-08  The role of tech in decreasing health inequities, improving access, and strengthening resilience
2022-02-08  Opening remarks: Health and Life Sciences - Delivery
2022-02-08  Closing remarks: Causal Machine Learning
2022-02-08  Demo: Enabling end-to-end causal inference at scale
2022-02-08  Research Talk: Enhancing the robustness of massive language models via invariant risk minimization
2022-02-08  Research talk: Post-contextual-bandit inference
2022-02-08  Research talk: Causal ML and fairness
2022-02-08  Research talk: Causal learning: Discovering causal relations for out-of-distribution generalization
2022-02-08  Research talk: Can causal learning improve the privacy of ML models?
2022-02-08  Research talk: Causal ML and business
2022-02-08  Research talk: Challenges and opportunities in causal machine learning
2022-02-08  Opening remarks: Causal Machine Learning
2022-02-08  Closing remarks: The Future of Privacy and Security
2022-02-08  Demo: Generating formally proven low-level parsers with EverParse
2022-02-08  Demo: EverParse: Automatic generation of formally verified secure parsers for cloud integrity
2022-02-08  Research talk: DARPA SafeDocs: an approach to secure parsing and information interchange formats
2022-02-08  Research talk: Privacy in machine learning research at Microsoft