AI Fairness and Adversarial Debiasing

Video Link: https://www.youtube.com/watch?v=DtoKMrrAkLE



Duration: 57:27


Speaker(s): David Van Bruwaene
Facilitator(s):


Find the recording, slides, and more info at https://ai.science/e/how-the-board-of-directors-got-their-start-with-adversarial-debiasing--4tJcydU0OxWYpORF6cPl


Motivation / Abstract
Designing governance systems for AI is challenging on multiple fronts. Machine learning models are inscrutable to the best of us, yet it is frequently non-technical members of senior management who set budget, scope, and quality targets, with final sign-off on product release. I work through a governance use case of setting and enforcing policy relating to protected category labels (e.g. race, age, and gender). This demonstrates the need for a difficult conversation between data scientists and senior management covering bias mitigation techniques, standards, regulations, and business strategy. I propose a solution that relies on the notion of a multi-layer policy with adaptive verification subprocesses. Using this construct, I show how oversight committees truly can work hand-in-hand with data scientists to bring responsible AI systems into production.


What was discussed?
- How can we define "algorithmic fairness", and how does it differ from our everyday understanding of fairness?
- Who should decide which definition of fairness to use? How can engineering teams contribute to that decision?
- What is the economic incentive for fairness?


What are the key takeaways?
- There are many algorithmic definitions of fairness; choosing among them still requires open debate and human judgement before fairness decisions can be delegated to machines.
- There are ways to frame incentives for fairness, such as cost avoidance (reputational damage, legal action, ...) or positive gains (reaching a wider customer base), but the business case for companies is not always clear, so legislation and regulation are still needed to protect users' interests.
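To make the first takeaway concrete, here is a minimal sketch (not from the talk; the data and function names are illustrative) of two common algorithmic fairness definitions, demographic parity and equal opportunity. The point is that they measure different things and can disagree, which is why choosing among them remains a human judgement call.

```python
# Two common algorithmic fairness metrics on toy binary-classification data.
# Groups "A" and "B" stand in for a protected attribute (e.g. a demographic label).

def demographic_parity_diff(y_pred, group):
    """Absolute difference in positive-prediction rates between groups:
    |P(pred=1 | group=A) - P(pred=1 | group=B)|"""
    a = [p for p, g in zip(y_pred, group) if g == "A"]
    b = [p for p, g in zip(y_pred, group) if g == "B"]
    return abs(sum(a) / len(a) - sum(b) / len(b))

def equal_opportunity_diff(y_true, y_pred, group):
    """Absolute difference in true-positive rates between groups:
    |P(pred=1 | y=1, group=A) - P(pred=1 | y=1, group=B)|"""
    def tpr(grp):
        preds = [p for t, p, g in zip(y_true, y_pred, group)
                 if g == grp and t == 1]
        return sum(preds) / len(preds)
    return abs(tpr("A") - tpr("B"))

# Illustrative data: first four examples from group A, last four from group B.
y_true = [1, 0, 1, 1, 0, 1, 0, 1]
y_pred = [1, 0, 1, 0, 0, 1, 1, 1]
group  = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(demographic_parity_diff(y_pred, group))          # 0.25
print(equal_opportunity_diff(y_true, y_pred, group))   # ~0.333
```

Here the model violates both criteria, but by different amounts; a debiasing method (adversarial or otherwise) that optimizes one metric does not automatically satisfy the other.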


------
#AISC hosts 3-5 live sessions like this on various AI research, engineering, and product topics every week! Visit https://ai.science for more details




Other Videos By LLMs Explained - Aggregate Intellect - AI.SCIENCE


2020-08-19Pink Diamond - Data Driven Prediction of Venture Success | Workshop Capstone
2020-08-19Review Nuggets - Mining Insight from Consumer Product Reviews | Workshop Capstone
2020-08-19Fast Film - Emotionally Aware Movie Recommender | Workshop Capstone
2020-08-19Acetock - Stock Prediction Tool for Amateur Investors | Workshop Capstone
2020-08-19Saramsh - Patent Document Summarization using BART | Workshop Capstone
2020-08-19MindfulZen - Data Driven Stress Buster | Workshop Capstone
2020-08-14Machine Learning and the Earth: Applying AI to address some of the world’s greatest challenges
2020-08-13Xun Wang (GEICO): 7 Job Profiles to Demystify the Data Science Career Landscape
2020-08-12Gossip-based Actor-Learner Architectures for Deep Reinforcement Learning | AISC
2020-08-12Computer vs. Human visual system | AISC
2020-08-12AI Fairness and Adversarial Debiasing
2020-08-11Joint Policy-Value Learning for Recommendation | AISC
2020-08-11Operationalizing the AI Canvas for AI Product Success (and profit) | AISC
2020-08-07Overview of Bias and Fairness in AI
2020-08-06Subexponential-Time Algorithms for Sparse PCA | AISC
2020-08-05Inverse design of nanoporous crystalline reticular materials with deep generative models | AISC
2020-08-04ChemOS: An orchestration software to democratize autonomous discovery | AISC
2020-07-30Recurrent Neural Network for Quantum Wave Function | AISC
2020-07-30Bounded Rationality in Las Vegas: Probabilistic Finite Automata Play Multi-Armed Bandits | AISC
2020-07-30Information Retrieval for Price Consistency Monitoring - Liu Yang (Amazon)
2020-07-29Quantum Technologies: State of Play | AISC



Tags:
deep learning
machine learning
ai ethics
ai fairness
inclusion
diversity