Does GPT-3 lie? - Misinformation and fear-mongering around the TruthfulQA dataset

Subscribers: 284,000
Video Link: https://www.youtube.com/watch?v=aX8phGhG8VQ
Duration: 13:19
Views: 19,311

#gpt-3 #truth #conspiracy

A new benchmark paper has created quite an uproar in the community. TruthfulQA is a dataset of 817 questions probing for imitative falsehoods, cases where language models become less truthful the larger they get. This surprising, counter-intuitive finding seems to validate many people's criticisms of large language models, but is it really the correct conclusion?

OUTLINE:
0:00 - Intro
0:30 - Twitter Paper Announcement
4:10 - Large Language Models are to blame!
5:50 - How was the dataset constructed?
9:25 - The questions are adversarial
12:30 - Are you surprised?!

Paper: https://arxiv.org/abs/2109.07958

Links:
TabNine Code Completion (Referral): http://bit.ly/tabnine-yannick
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
Discord: https://discord.gg/4H8xxDF
BitChute: https://www.bitchute.com/channel/yannic-kilcher
Minds: https://www.minds.com/ykilcher
Parler: https://parler.com/profile/YannicKilcher
LinkedIn: https://www.linkedin.com/in/ykilcher
BiliBili: https://space.bilibili.com/1824646584

If you want to support me, the best thing to do is to share out the content :)

If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this):
SubscribeStar: https://www.subscribestar.com/yannickilcher
Patreon: https://www.patreon.com/yannickilcher
Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq
Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2
Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m
Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n




Other Videos By Yannic Kilcher


2021-10-24 Symbolic Knowledge Distillation: from General Language Models to Commonsense Models (Explained)
2021-10-21 I took a Swiss train and it was awesome! Train Seat Review - SBB InterCity 1 - Geneva to St. Gallen
2021-10-20 [ML News] Microsoft trains 530B model | ConvMixer model fits into single tweet | DeepMind profitable
2021-10-07 [ML News] DeepMind does Nowcasting | The Guardian's shady reporting | AI finishes Beethoven's 10th
2021-10-06 Grokking: Generalization beyond Overfitting on small algorithmic datasets (Paper Explained)
2021-10-02 How far can we scale up? Deep Learning's Diminishing Returns (Article Review)
2021-09-29 [ML News] Plagiarism Case w/ Plot Twist | CLIP for video surveillance | OpenAI summarizes books
2021-09-27 Inconsistency in Conference Peer Review: Revisiting the 2014 NeurIPS Experiment (Paper Explained)
2021-09-26 100K Subs AMA (Ask Me Anything)
2021-09-24 [ML News] New ImageNet SOTA | Uber's H3 hexagonal coordinate system | New text-image-pair dataset
2021-09-21 Does GPT-3 lie? - Misinformation and fear-mongering around the TruthfulQA dataset
2021-09-20 Topographic VAEs learn Equivariant Capsules (Machine Learning Research Paper Explained)
2021-09-16 [ML News] Roomba Avoids Poop | Textless NLP | TikTok Algorithm Secrets | New Schmidhuber Blog
2021-09-14 Celebrating 100k Subscribers! (w/ Channel Statistics)
2021-09-10 [ML News] AI predicts race from X-Ray | Google kills HealthStreams | Boosting Search with MuZero
2021-09-06 ∞-former: Infinite Memory Transformer (aka Infty-Former / Infinity-Former, Research Paper Explained)
2021-09-03 [ML News] Blind Chess AI Competition | Graph NNs for traffic | AI gift suggestions
2021-09-02 ALiBi - Train Short, Test Long: Attention with linear biases enables input length extrapolation
2021-08-27 [ML News] Stanford HAI coins Foundation Models & High-profile case of plagiarism uncovered
2021-08-26 Fastformer: Additive Attention Can Be All You Need (Machine Learning Research Paper Explained)
2021-08-23 PonderNet: Learning to Ponder (Machine Learning Research Paper Explained)



Tags:
deep learning
machine learning
arxiv
explained
neural networks
ai
artificial intelligence
paper
gpt-3
truthful
truthfulqa
conspiracy
conspiracy theories
large language models
ezra klein
inverse scaling
openai
gpt-j
gpt-neo
imitative falsehoods
adversarial
informativeness
evaluation
trustworthy
ml bias
are language models biased
is gpt-3 truthful
question answering
harmful prompt
helpful prompt