I talk to the new Facebook Blender Chatbot

Subscribers: 291,000
Published on: 2020-05-03
Video Link: https://www.youtube.com/watch?v=wTIPGoHLw_8
Duration: 11:21
Views: 15,071
Likes: 506


This is what a 9-billion-parameter transformer can do. I take a look at FAIR's new paper "Recipes for building an open-domain chatbot" and try out their chatbot live!

Jump to 3:00 to see the chatbot in action.

Paper: https://arxiv.org/abs/2004.13637
Blog: https://ai.facebook.com/blog/state-of-the-art-open-source-chatbot/
Code: https://parl.ai/projects/blender/
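If you want to poke at the bot yourself, the Code link above is the place to start. Below is a minimal sketch of loading the smallest released model through ParlAI's Python API; the zoo path and exact interface are assumptions that can vary across ParlAI versions, so check the project page for the officially documented commands.

```python
# A minimal sketch (not the video's exact setup) of talking to a Blender
# model through ParlAI's Python API. The zoo key and API are assumptions;
# see https://parl.ai/projects/blender/ for the project's own instructions.
from parlai.core.agents import create_agent_from_model_file

# 90M-parameter variant; the 2.7B and 9.4B checkpoints load the same way
# but need far more memory.
agent = create_agent_from_model_file("zoo:blender/blender_90M/model")

# Standard ParlAI observe/act loop: feed one message, get one reply.
agent.observe({"text": "Hi! Do you have any hobbies?", "episode_done": False})
reply = agent.act()
print(reply["text"])
```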

Abstract:
Building open-domain chatbots is a challenging area for machine learning research. While prior work has shown that scaling neural models in the number of parameters and the size of the data they are trained on gives improved results, we show that other ingredients are important for a high-performing chatbot. Good conversation requires a number of skills that an expert conversationalist blends in a seamless way: providing engaging talking points and listening to their partners, and displaying knowledge, empathy and personality appropriately, while maintaining a consistent persona. We show that large scale models can learn these skills when given appropriate training data and choice of generation strategy. We build variants of these recipes with 90M, 2.7B and 9.4B parameter models, and make our models and code publicly available under the collective name Blender. Human evaluations show our best models are superior to existing approaches in multi-turn dialogue in terms of engagingness and humanness measurements. We then discuss the limitations of this work by analyzing failure cases of our models.
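One of the abstract's "other ingredients" is the choice of generation strategy: the paper finds that unconstrained beam search tends to produce short, dull replies, and that forcing a minimum response length makes conversations noticeably more engaging. Here is a hedged sketch of that idea using the Hugging Face port of a Blender-family checkpoint; the model name is an assumption (a later distilled variant, not one of the paper's 90M/2.7B/9.4B originals), and min_length only approximates the paper's minimum-beam-length constraint.

```python
# Sketch: beam search with a minimum response length, the generation
# strategy the paper credits for more engaging replies. Checkpoint name
# is an assumption; swap in whatever Blender-family model you have.
from transformers import BlenderbotTokenizer, BlenderbotForConditionalGeneration

name = "facebook/blenderbot-400M-distill"  # assumed HF checkpoint
tokenizer = BlenderbotTokenizer.from_pretrained(name)
model = BlenderbotForConditionalGeneration.from_pretrained(name)

inputs = tokenizer("Hello, how are you today?", return_tensors="pt")
# min_length forces the decoder past the short, generic replies that
# plain beam search prefers.
reply_ids = model.generate(**inputs, num_beams=10, min_length=20, max_length=60)
print(tokenizer.decode(reply_ids[0], skip_special_tokens=True))
```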

Authors: Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M. Smith, Y-Lan Boureau, Jason Weston

Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher




Other Videos By Yannic Kilcher


2020-05-13  Faster Neural Network Training with Data Echoing (Paper Explained)
2020-05-12  Group Normalization (Paper Explained)
2020-05-11  Concept Learning with Energy-Based Models (Paper Explained)
2020-05-10  [News] Google’s medical AI was super accurate in a lab. Real life was a different story.
2020-05-09  Big Transfer (BiT): General Visual Representation Learning (Paper Explained)
2020-05-08  Divide-and-Conquer Monte Carlo Tree Search For Goal-Directed Planning (Paper Explained)
2020-05-07  WHO ARE YOU? 10k Subscribers Special (w/ Channel Analytics)
2020-05-06  Reinforcement Learning with Augmented Data (Paper Explained)
2020-05-05  TAPAS: Weakly Supervised Table Parsing via Pre-training (Paper Explained)
2020-05-04  Chip Placement with Deep Reinforcement Learning (Paper Explained)
2020-05-03  I talk to the new Facebook Blender Chatbot
2020-05-02  Jukebox: A Generative Model for Music (Paper Explained)
2020-05-01  [ML Coding Tips] Separate Computation & Plotting using locals
2020-04-30  The AI Economist: Improving Equality and Productivity with AI-Driven Tax Policies (Paper Explained)
2020-04-29  Deconstructing Lottery Tickets: Zeros, Signs, and the Supermask (Paper Explained)
2020-04-28  [Rant] Online Conferences
2020-04-27  Do ImageNet Classifiers Generalize to ImageNet? (Paper Explained)
2020-04-26  [Drama] Schmidhuber: Critique of Honda Prize for Dr. Hinton
2020-04-25  How much memory does Longformer use?
2020-04-24  Supervised Contrastive Learning
2020-04-23  Thinking While Moving: Deep Reinforcement Learning with Concurrent Control



Tags:
deep learning
machine learning
arxiv
explained
neural networks
ai
artificial intelligence
paper
nlp
chatbot
dialogue
persona
vegan
turing test
natural language processing
transformer
generator
context