LLaMA: Open and Efficient Foundation Language Models (Paper Explained)

Video: https://www.youtube.com/watch?v=E5OnoYF2oAk
Duration: 41:07


#ai #meta #languagemodel

LLaMA is a series of large language models from 7B to 65B parameters, trained by Meta AI. The models are trained for longer on more data, showing that a model like GPT-3 can be outperformed by significantly smaller models when trained this way. Meta also releases the trained models to the research community.
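The core claim — smaller models trained on far more tokens can beat much larger ones — can be made concrete with the common back-of-the-envelope estimate of training compute, FLOPs ≈ 6 · N · D (N parameters, D tokens). The token counts below (~300B for GPT-3, ~1T for LLaMA-13B) come from the respective papers; this is an illustrative sketch, not a calculation from the LLaMA paper itself.

```python
# Rough training-compute comparison using the common approximation
# FLOPs ~= 6 * N_parameters * D_tokens.
def train_flops(n_params, n_tokens):
    return 6 * n_params * n_tokens

gpt3 = train_flops(175e9, 300e9)    # GPT-3: 175B params, ~300B tokens
llama13b = train_flops(13e9, 1e12)  # LLaMA-13B: 13B params, ~1T tokens

print(f"GPT-3:     {gpt3:.2e} FLOPs")   # ~3.15e+23
print(f"LLaMA-13B: {llama13b:.2e} FLOPs")  # ~7.80e+22
print(f"ratio:     {gpt3 / llama13b:.1f}x")  # ~4.0x
```

By this estimate LLaMA-13B uses roughly a quarter of GPT-3's training compute, yet — per the abstract — outperforms it on most benchmarks, which is the paper's main efficiency argument.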

OUTLINE:
0:00 - Introduction & Paper Overview
4:30 - Rant on Open-Sourcing
8:05 - Training Data
12:40 - Training Hyperparameters
14:50 - Architecture Modifications
17:10 - Optimizer
19:40 - Efficient Implementation
26:15 - Main Results
38:00 - Some more completions
40:00 - Conclusion
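One of the architecture modifications covered around the 14:50 mark is LLaMA's use of RMSNorm (pre-normalization of each transformer sub-layer's input) in place of standard LayerNorm. A minimal NumPy sketch of RMSNorm for illustration — not the paper's actual implementation:

```python
import numpy as np

def rms_norm(x, weight, eps=1e-6):
    # Normalize by the root-mean-square of the features (no mean
    # subtraction, no bias), then apply a learned per-feature gain.
    # Cheaper than LayerNorm, which also centers the activations.
    rms = np.sqrt(np.mean(x ** 2, axis=-1, keepdims=True) + eps)
    return (x / rms) * weight

x = np.array([[1.0, 2.0, 3.0, 4.0]])
w = np.ones(4)  # learned gain, initialized to 1
print(rms_norm(x, w))  # each row rescaled to unit RMS
```

After normalization the row has RMS ≈ 1, and the learned gain `w` restores per-feature scale during training.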


Paper: https://arxiv.org/abs/2302.13971
Website: https://ai.facebook.com/blog/large-language-model-llama-meta-ai/

Abstract:
We introduce LLaMA, a collection of foundation language models ranging from 7B to 65B parameters. We train our models on trillions of tokens, and show that it is possible to train state-of-the-art models using publicly available datasets exclusively, without resorting to proprietary and inaccessible datasets. In particular, LLaMA-13B outperforms GPT-3 (175B) on most benchmarks, and LLaMA-65B is competitive with the best models, Chinchilla-70B and PaLM-540B. We release all our models to the research community.

Authors: Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, Guillaume Lample

Links:
Homepage: https://ykilcher.com
Merch: https://ykilcher.com/merch
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
Discord: https://ykilcher.com/discord
LinkedIn: https://www.linkedin.com/in/ykilcher

If you want to support me, the best thing to do is to share out the content :)

If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this):
SubscribeStar: https://www.subscribestar.com/yannickilcher
Patreon: https://www.patreon.com/yannickilcher
Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq
Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2
Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m
Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n




Other Videos By Yannic Kilcher


2023-05-23  Tree of Thoughts: Deliberate Problem Solving with Large Language Models (Full Paper Review)
2023-05-21  OpenAI suggests AI licenses (US Senate hearing on AI regulation w/ Sam Altman)
2023-05-12  [ML News] Geoff Hinton leaves Google | Google has NO MOAT | OpenAI down half a billion
2023-04-27  Scaling Transformer to 1M tokens and beyond with RMT (Paper Explained)
2023-04-15  OpenAssistant RELEASED! The world's best open-source Chat AI!
2023-04-10  AI Alignment Livestream (aka OpenAssistant "Just Chatting")
2023-04-06  OpenAssistant First Models are here! (Open-Source ChatGPT)
2023-03-18  The biggest week in AI (GPT-4, Office Copilot, Google PaLM, Anthropic Claude & more)
2023-03-15  GPT-4 is here! What we know so far (Full Analysis)
2023-03-11  This ChatGPT Skill will earn you $10B (also, AI reads your mind!) | ML News
2023-03-02  LLaMA: Open and Efficient Foundation Language Models (Paper Explained)
2023-02-24  Open Assistant Inference Backend Development (Hands-On Coding)
2023-02-04  OpenAssistant - ChatGPT's Open Alternative (We need your help!)
2022-12-31  Open Assistant Live Coding (Open-Source ChatGPT Replication)
2022-12-29  AI Essay Competition (lab42)
2022-12-26  Open Assistant Live Coding (Open-Source ChatGPT Replication)
2022-12-07  ChatGPT: This AI has a JAILBREAK?! (Unbelievable AI Progress)
2022-11-27  [ML News] GPT-4 Rumors | AI Mind Reading | Neuron Interaction Solved | AI Theorem Proving
2022-11-25  CICERO: An AI agent that negotiates, persuades, and cooperates with people
2022-11-19  Galactica: A Large Language Model for Science (Drama & Paper Review)
2022-11-13  [ML News] Multiplayer Stable Diffusion | OpenAI needs more funding | Text-to-Video models incoming



Tags:
deep learning
machine learning
arxiv
explained
neural networks
ai
artificial intelligence
paper
what is deep learning
deep learning tutorial
introduction to deep learning
meta ai
meta llama
llama llm
gpt-3
large language models
transformers
chatgpt
instruction tuning
llama-i
llama paper