[OpenAI GPT2] Language Models are Unsupervised Multitask Learners | TDLS Trending Paper

Video Link: https://www.youtube.com/watch?v=n_UlVuFAzbU



Duration: 1:29:32
5,087 views


Toronto Deep Learning Series - Fast Track Stream
https://tdls.a-i.science/events/2019-03-07

Language Models are Unsupervised Multitask Learners

"Natural language processing tasks, such as question answering, machine translation, reading comprehension, and summarization, are typically approached with supervised learning on task-specific datasets. We demonstrate that language models begin to learn these tasks without any explicit supervision when trained on a new dataset of millions of webpages called WebText. When conditioned on a document plus questions, the answers generated by the language model reach 55 F1 on the CoQA dataset - matching or exceeding the performance of 3 out of 4 baseline systems without using the 127,000+ training examples. The capacity of the language model is essential to the success of zero-shot task transfer and increasing it improves performance in a log-linear fashion across tasks. Our largest model, GPT-2, is a 1.5B parameter Transformer that achieves state of the art results on 7 out of 8 tested language modeling datasets in a zero-shot setting but still underfits WebText. Samples from the model reflect these improvements and contain coherent paragraphs of text. These findings suggest a promising path towards building language processing systems which learn to perform tasks from their naturally occurring demonstrations."




Other Videos By LLMs Explained - Aggregate Intellect - AI.SCIENCE


2019-04-08 - ACT: Adaptive Computation Time for Recurrent Neural Networks | AISC
2019-04-04 - [FFJORD] Free-form Continuous Dynamics for Scalable Reversible Generative Models (Part 1) | AISC
2019-04-01 - [DOM-Q-NET] Grounded RL on Structured Language | AISC Author Speaking
2019-03-31 - 5-min [machine learning] paper challenge | AISC
2019-03-28 - [Variational Autoencoder] Auto-Encoding Variational Bayes | AISC Foundational
2019-03-25 - [GQN] Neural Scene Representation and Rendering | AISC
2019-03-21 - Towards Interpretable Deep Neural Networks by Leveraging Adversarial Examples | AISC
2019-03-18 - Understanding the Origins of Bias in Word Embeddings
2019-03-14 - [Original Style Transfer] A Neural Algorithm of Artistic Style | TDLS Foundational
2019-03-11 - [RecSys 2018 Challenge winner] Two-stage Model for Automatic Playlist Continuation at Scale | TDLS
2019-03-07 - [OpenAI GPT2] Language Models are Unsupervised Multitask Learners | TDLS Trending Paper
2019-03-04 - You May Not Need Attention | TDLS Code Review
2019-02-28 - [DDQN] Deep Reinforcement Learning with Double Q-learning | TDLS Foundational
2019-02-25 - [AlphaGo Zero] Mastering the game of Go without human knowledge | TDLS
2019-02-21 - Transformer XL | AISC Trending Papers
2019-02-19 - Computational prediction of diagnosis & feature selection on mesothelioma patient records | AISC
2019-02-18 - Support Vector Machine (original paper) | AISC Foundational
2019-02-11 - Tensor Field Networks | AISC
2019-02-07 - ACAI: Understanding and Improving Interpolation in Autoencoders via an Adversarial Regularizer
2019-02-04 - Code Review: Transformer - Attention Is All You Need | AISC
2019-02-04 - [StyleGAN] A Style-Based Generator Architecture for GANs, part 2 (results and discussion) | TDLS



Tags:
machine learning
deep learning
gpt2
language models
natural language processing
transformer
attention
gpt-2
open ai