Learning to summarize from human feedback (Paper Explained)

Subscribers: 284,000
Published on: 2020-09-07
Video Link: https://www.youtube.com/watch?v=vLTmnaMpQCs
Duration: 45:30
Views: 17,363
Likes: 628


#summarization #gpt3 #openai

Text summarization is a hard task, both to train for and to evaluate. Training is usually done by maximizing the log-likelihood of a human-generated reference summary, while evaluation is performed using overlap-based metrics like ROUGE. Both significantly undervalue the breadth and intricacies of language and the nature of the information contained in text summaries. This paper by OpenAI incorporates direct human feedback both in evaluation and, via reward model proxies, in training. The final model even outperforms single humans when judged by other humans, making this an interesting application of reinforcement learning with humans in the loop.
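
To make the reward-model idea concrete, here is a minimal sketch of a pairwise preference loss in PyTorch. This is my own illustration, not OpenAI's code: the function and argument names are hypothetical, and details like model architecture, pooling, and batching are simplified. The idea is simply that the reward model should score the summary the human labelers preferred higher than the one they rejected.

```python
import torch
import torch.nn.functional as F

def reward_model_loss(reward_model, post, summary_preferred, summary_rejected):
    """Pairwise preference loss (sketch, hypothetical interface).

    `reward_model` is assumed to map (post, summary) -> scalar reward
    per example; this is an illustration, not the paper's implementation.
    """
    r_pref = reward_model(post, summary_preferred)   # shape: (batch,)
    r_rej = reward_model(post, summary_rejected)     # shape: (batch,)
    # Maximize the log-probability that the preferred summary wins,
    # i.e. log-sigmoid of the reward margin (Bradley-Terry style).
    return -F.logsigmoid(r_pref - r_rej).mean()
```

Once trained on human comparisons like this, the reward model stands in as a proxy for the human raters during the RL fine-tuning stage.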

OUTLINE:
0:00 - Intro & Overview
5:35 - Summarization as a Task
7:30 - Problems with the ROUGE Metric
10:10 - Training Supervised Models
12:30 - Main Results
16:40 - Including Human Feedback with Reward Models & RL
26:05 - The Unknown Effect of Better Data
28:30 - KL Constraint & Connection to Adversarial Examples
37:15 - More Results
39:30 - Understanding the Reward Model
41:50 - Limitations & Broader Impact

Paper: https://arxiv.org/abs/2009.01325
Blog: https://openai.com/blog/learning-to-summarize-with-human-feedback/
Code: https://github.com/openai/summarize-from-feedback
Samples: https://openaipublic.blob.core.windows.net/summarize-from-feedback/website/index.html#/

My Video on GPT-3: https://youtu.be/SY5PvZrJhLE
My Video on GPT-2: https://youtu.be/u1_qMdb0kYU

Abstract:
As language models become more powerful, training and evaluation are increasingly bottlenecked by the data and metrics used for a particular task. For example, summarization models are often trained to predict human reference summaries and evaluated using ROUGE, but both of these metrics are rough proxies for what we really care about---summary quality. In this work, we show that it is possible to significantly improve summary quality by training a model to optimize for human preferences. We collect a large, high-quality dataset of human comparisons between summaries, train a model to predict the human-preferred summary, and use that model as a reward function to fine-tune a summarization policy using reinforcement learning. We apply our method to a version of the TL;DR dataset of Reddit posts and find that our models significantly outperform both human reference summaries and much larger models fine-tuned with supervised learning alone. Our models also transfer to CNN/DM news articles, producing summaries nearly as good as the human reference without any news-specific fine-tuning. We conduct extensive analyses to understand our human feedback dataset and fine-tuned models. We establish that our reward model generalizes to new datasets, and that optimizing our reward model results in better summaries than optimizing ROUGE according to humans. We hope the evidence from our paper motivates machine learning researchers to pay closer attention to how their training loss affects the model behavior they actually want.
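
For reference, the RL stage described in the abstract (and discussed around the "KL Constraint" chapter above) does not optimize the reward model's score alone: it subtracts a KL penalty that keeps the policy close to the supervised fine-tuned baseline, so the policy cannot drift into regions where the reward model is no longer trustworthy. Below is a minimal sketch of that combined per-summary reward, with hypothetical names, an illustrative beta, and all of the PPO machinery omitted.

```python
def kl_penalized_reward(rm_score, logprob_policy, logprob_sft, beta=0.05):
    """KL-penalized reward for RL fine-tuning (sketch, hypothetical names).

    rm_score:        reward model score r(x, y) for the sampled summary
    logprob_policy:  sum over summary tokens of log pi_RL(y | x)
    logprob_sft:     same quantity under the supervised (SFT) model
    beta:            KL penalty coefficient (illustrative value)
    """
    # R(x, y) = r(x, y) - beta * log[ pi_RL(y|x) / pi_SFT(y|x) ]
    # The penalty discourages summaries that fool the reward model,
    # which is the connection to adversarial examples made in the video.
    return rm_score - beta * (logprob_policy - logprob_sft)
```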

Authors: Nisan Stiennon, Long Ouyang, Jeff Wu, Daniel M. Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario Amodei, Paul Christiano

Links:
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
Discord: https://discord.gg/4H8xxDF
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher
Parler: https://parler.com/profile/YannicKilcher
LinkedIn: https://www.linkedin.com/in/yannic-kilcher-488534136/

If you want to support me, the best thing to do is to share out the content :)

If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this):
SubscribeStar: https://www.subscribestar.com/yannickilcher
Patreon: https://www.patreon.com/yannickilcher
Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq
Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2
Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m
Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n




Other Videos By Yannic Kilcher


2020-11-15 [News] Soccer AI FAILS and mixes up ball and referee's bald head.
2020-11-10 Underspecification Presents Challenges for Credibility in Modern Machine Learning (Paper Explained)
2020-11-02 Language Models are Open Knowledge Graphs (Paper Explained)
2020-10-26 Rethinking Attention with Performers (Paper Explained)
2020-10-17 LambdaNetworks: Modeling long-range Interactions without Attention (Paper Explained)
2020-10-11 Descending through a Crowded Valley -- Benchmarking Deep Learning Optimizers (Paper Explained)
2020-10-04 An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale (Paper Explained)
2020-10-03 Training more effective learned optimizers, and using them to train themselves (Paper Explained)
2020-09-18 The Hardware Lottery (Paper Explained)
2020-09-13 Assessing Game Balance with AlphaZero: Exploring Alternative Rule Sets in Chess (Paper Explained)
2020-09-07 Learning to summarize from human feedback (Paper Explained)
2020-09-02 Self-classifying MNIST Digits (Paper Explained)
2020-08-28 Axial-DeepLab: Stand-Alone Axial-Attention for Panoptic Segmentation (Paper Explained)
2020-08-26 Radioactive data: tracing through training (Paper Explained)
2020-08-23 Fast reinforcement learning with generalized policy updates (Paper Explained)
2020-08-20 What Matters In On-Policy Reinforcement Learning? A Large-Scale Empirical Study (Paper Explained)
2020-08-18 [Rant] REVIEWER #2: How Peer Review is FAILING in Machine Learning
2020-08-14 REALM: Retrieval-Augmented Language Model Pre-Training (Paper Explained)
2020-08-12 Meta-Learning through Hebbian Plasticity in Random Networks (Paper Explained)
2020-08-09 Hopfield Networks is All You Need (Paper Explained)
2020-08-06 I TRAINED AN AI TO SOLVE 2+2 (w/ Live Coding)



Tags:
deep learning
machine learning
arxiv
explained
neural networks
ai
artificial intelligence
paper
openai
nlp
transformer
gpt
gpt3
gpt-3
gpt-2
natural language processing
summarization
extractive
reddit
attention mechanism
language model
natural language understanding
human feedback
human in the loop
active learning
reward
reward model
reinforcement learning
deep reinforcement learning
deep rl
ppo
proximal policy optimization
adversarial example
broader impact