Demo: RAI Toolbox: An open-source framework for building responsible AI

Video Link: https://www.youtube.com/watch?v=tGgJCrA-MZU
Duration: 20:10

Speakers:
Besmira Nushi, Principal Researcher, Microsoft Research
Mehrnoosh Sameki, Senior Program Manager, Microsoft Azure Machine Learning
Amit Sharma, Senior Researcher, Microsoft Research India

Assessing and investigating machine learning (ML) models prior to deployment remains at the core of developing trustworthy and responsible artificial intelligence (AI). While different open-source tools have been proposed for assessing the fairness, explainability, or errors of an ML model, these properties are not independent, and ML practitioners may need several of these functionalities together to fully identify, diagnose, and mitigate issues and take action in the real world. In this session, we will demonstrate the Responsible AI Toolbox. The toolbox was built with two intentions: to accelerate the ML development lifecycle in a way that implements and applies Responsible AI principles, and to serve as a collaboration framework for research in the Responsible AI field. We will introduce the overall workflow, from configuring the interoperable dashboards through to the intended end-user experience. We will showcase how the toolbox can be used to assess models through a responsible AI lens and to analyze data for causal decision-making, with the goal of identifying actions that can influence desirable outcomes in the real world. Attendees will be able to access the different parts of the demo through online interactive deployments of the toolbox on illustrative datasets and models.

Resources: https://github.com/microsoft/responsible-ai-widgets/

Learn more about the 2021 Microsoft Research Summit: https://Aka.ms/researchsummit




Other Videos By Microsoft Research


2022-02-08 Tutorial: Create human-centered AI with the Human-AI eXperience (HAX) Toolkit
2022-02-08 Panel: Maximizing benefits and minimizing harms with language technologies
2022-02-08 Lightning talks: Advances in fairness in AI: New directions
2022-02-08 Closing remarks: Tech for resilient communities
2022-02-08 Lightning talks: Advances in fairness in AI: From research to practice
2022-02-08 Panel: Content moderation beyond the ban: Reducing toxic, misleading, and low-quality content
2022-02-08 Technology demo: Using technology to combat human trafficking
2022-02-08 Technology demo: Project Eclipse: Hyperlocal air quality monitoring for cities
2022-02-08 Research talk: Bucket of me: Using few-shot learning to realize teachable AI systems
2022-02-08 Tutorial: Best practices for prioritizing fairness in AI systems
2022-02-08 Demo: RAI Toolbox: An open-source framework for building responsible AI
2022-02-08 Opening remarks: Responsible AI
2022-02-08 Closing remarks: Deep Learning and Large Scale AI
2022-02-08 Roundtable discussion: Beyond language models: Knowledge, multiple modalities, and more
2022-02-08 Research talk: Closing the loop in natural language interfaces to relational databases
2022-02-08 Just Tech: Bringing CS, the social sciences, and communities together for societal resilience
2022-02-08 Research talk: WebQA: Multihop and multimodal
2022-02-08 Opening remarks: Tech for resilient communities
2022-02-08 Research talk: Towards Self-Learning End-to-end Dialog Systems
2022-02-08 Research talk: Focal Attention: Towards local-global interactions in vision transformers
2022-02-08 Research talk: Knowledgeable pre-trained language models



Tags:
fair AI systems
reliable AI systems
responsible AI
social inequities in AI
societal implications of AI
societal impact
machine learning
natural language processing
microsoft research summit