Explaining AI

Channel:
Subscribers: 139,000
Video Link: https://www.youtube.com/watch?v=rI_L95qnVkM



Duration: 16:27
Views: 10,743


From movie recommendations to medical diagnoses, people are increasingly comfortable with AI making recommendations, or even decisions. However, AI often inherits bias from the datasets that train it, so how do we know we can trust it? Dr. Harry Shum, Head of Microsoft's AI and Research, breaks down some of the biases present in current AI models. He then calls for us to open the "black box" in order to develop the transparency, fairness, and trust needed for continued AI adoption.

Highlights
The latest AI breakthroughs [0:24]
Xiaoice, the Chinese AI with EQ (as well as IQ) [2:42]
Why EQ leads to better digital assistants and chat bots [3:50]
How Japanese and Chinese businesses are using Xiaoice for sales and financial reports [4:51]
Gender bias in current AI models [6:22]
Mapping the gender bias with word pairings [8:33]
Harry Shum makes the case for transparent AI [12:21]
Three reasons why we need explainable AI [12:58]
The tradeoff between accuracy and explainability in AI models [14:20]

Pull Quote
"...with IQ, we're helping people to accomplish tasks. And with EQ, we have empathy, we have the social skills, and the understanding of human beings' feelings and emotions."

Tags:
artificial intelligence
ai
machine learning
machine learning projects
artificial intelligence robot
a16z summit
a16z fintech
future technology
Xiaoice
social chatbot
augmented reality
open source
open source intelligence
explainable ai
explainable ai ethics
ai bias
ai gender bias
ai bias ted talk
ai training data and bias
deep learning
deep learning ai
xiaoice demo
xiaoice chatbot
xiaoice microsoft