Efficient Streaming Language Models with Attention Sinks (Paper Explained)

Subscribers: 284,000
Published on: 2023-10-14
Video Link: https://www.youtube.com/watch?v=409tNlaByds
Duration: 32:27
Views: 33,164


#llm #ai #chatgpt

How does one run inference with a generative autoregressive language model that has been trained with a fixed context size? Streaming LLMs keep the efficiency of windowed attention but avoid its drop in performance by using attention sinks - an interesting phenomenon where the token at position 0 acts as an absorber of "extra" attention.
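At its core, the trick can be viewed as a KV-cache eviction policy: keep the KV states of a handful of initial "sink" tokens plus a sliding window of the most recent tokens, and evict everything in between. Below is a minimal sketch of that idea, assuming a (batch, heads, seq, head_dim) tensor layout; the class name SinkKVCache and the default sizes are my own illustrative choices, not the authors' code.

```python
# Minimal sketch of a sink-plus-window KV cache (illustrative, not the paper's code).
import torch


class SinkKVCache:
    def __init__(self, num_sink_tokens: int = 4, window_size: int = 1020):
        self.num_sink_tokens = num_sink_tokens
        self.window_size = window_size
        self.keys = None    # (batch, heads, seq, head_dim)
        self.values = None

    def append(self, k: torch.Tensor, v: torch.Tensor) -> None:
        # Concatenate the new token's K/V onto the cache along the sequence axis.
        if self.keys is None:
            self.keys, self.values = k, v
        else:
            self.keys = torch.cat([self.keys, k], dim=2)
            self.values = torch.cat([self.values, v], dim=2)
        self._evict()

    def _evict(self) -> None:
        seq_len = self.keys.shape[2]
        budget = self.num_sink_tokens + self.window_size
        if seq_len <= budget:
            return
        # Keep the initial sink tokens and the most recent window; drop the middle.
        sink_k = self.keys[:, :, : self.num_sink_tokens]
        sink_v = self.values[:, :, : self.num_sink_tokens]
        recent_k = self.keys[:, :, -self.window_size :]
        recent_v = self.values[:, :, -self.window_size :]
        self.keys = torch.cat([sink_k, recent_k], dim=2)
        self.values = torch.cat([sink_v, recent_v], dim=2)
```

The paper keeps only a few initial tokens (four in their default setup) plus the recent window, which is enough to keep perplexity stable over very long streams, whereas a pure sliding window degrades once the first tokens are evicted.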

OUTLINE:
0:00 - Introduction
1:20 - What is the problem?
10:30 - The hypothesis: Attention Sinks
15:10 - Experimental evidence
18:45 - Streaming LLMs
20:45 - Semantics or position?
22:30 - Can attention sinks be learned?
27:45 - More experiments
30:10 - Comparison to Big Bird


Paper: https://arxiv.org/abs/2309.17453

Abstract:
Deploying Large Language Models (LLMs) in streaming applications such as multi-round dialogue, where long interactions are expected, is urgently needed but poses two major challenges. Firstly, during the decoding stage, caching previous tokens' Key and Value states (KV) consumes extensive memory. Secondly, popular LLMs cannot generalize to longer texts than the training sequence length. Window attention, where only the most recent KVs are cached, is a natural approach -- but we show that it fails when the text length surpasses the cache size. We observe an interesting phenomenon, namely attention sink, that keeping the KV of initial tokens will largely recover the performance of window attention. In this paper, we first demonstrate that the emergence of attention sink is due to the strong attention scores towards initial tokens as a "sink" even if they are not semantically important. Based on the above analysis, we introduce StreamingLLM, an efficient framework that enables LLMs trained with a finite length attention window to generalize to infinite sequence lengths without any fine-tuning. We show that StreamingLLM can enable Llama-2, MPT, Falcon, and Pythia to perform stable and efficient language modeling with up to 4 million tokens and more. In addition, we discover that adding a placeholder token as a dedicated attention sink during pre-training can further improve streaming deployment. In streaming settings, StreamingLLM outperforms the sliding window recomputation baseline by up to 22.2x speedup. Code and datasets are provided at this https URL.
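The abstract's last idea - adding a placeholder token as a dedicated attention sink during pre-training - amounts to prepending one learnable, content-free token to every training sequence so the model can dump "extra" attention there instead of on whatever token happens to come first. A minimal sketch of what that could look like at the embedding layer; SinkTokenEmbedding and the zero initialization are my own hypothetical choices, not the paper's training code.

```python
# Sketch of prepending a learnable sink token to every sequence (illustrative only).
import torch
import torch.nn as nn


class SinkTokenEmbedding(nn.Module):
    def __init__(self, vocab_size: int, d_model: int):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        # One extra learnable embedding acting as the dedicated attention sink.
        self.sink_emb = nn.Parameter(torch.zeros(1, 1, d_model))

    def forward(self, input_ids: torch.Tensor) -> torch.Tensor:
        # input_ids: (batch, seq) -> embeddings with the sink token prepended,
        # giving shape (batch, 1 + seq, d_model).
        x = self.tok_emb(input_ids)
        sink = self.sink_emb.expand(x.shape[0], 1, -1)
        return torch.cat([sink, x], dim=1)
```

The design intent, as the video discusses around the "Can attention sinks be learned?" chapter, is that the softmax over attention scores must always distribute probability mass somewhere; a dedicated sink token gives it a harmless place to go.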

Authors: Guangxuan Xiao, Yuandong Tian, Beidi Chen, Song Han, Mike Lewis

Links:
Homepage: https://ykilcher.com
Merch: https://ykilcher.com/merch
YouTube: https://www.youtube.com/c/yannickilcher
Twitter: https://twitter.com/ykilcher
Discord: https://ykilcher.com/discord
LinkedIn: https://www.linkedin.com/in/ykilcher

If you want to support me, the best thing to do is to share out the content :)

If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this):
SubscribeStar: https://www.subscribestar.com/yannickilcher
Patreon: https://www.patreon.com/yannickilcher
Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq
Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2
Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m
Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n




Other Videos By Yannic Kilcher


2023-12-11 Did Google fake their Gemini Video?
2023-12-09 Text Embeddings Reveal (Almost) As Much As Text
2023-12-03 Scalable Extraction of Training Data from (Production) Language Models (Paper Explained)
2023-12-03 Just Chatting (OpenAssistant Goodbye Stream)
2023-11-25 What is Q-Learning (back to basics)
2023-11-23 Greg & Sam are BACK! (+ Q-Star is AGI) (Also Memes)
2023-11-19 Is Sam Altman coming back? (OpenAI drama continues)
2023-11-18 OpenAI just fired CEO Sam Altman
2023-11-08 I built the most expensive CPU ever! (Every instruction is a prompt)
2023-10-24 OpenAssistant is Completed
2023-10-14 Efficient Streaming Language Models with Attention Sinks (Paper Explained)
2023-10-07 Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution (Paper Explained)
2023-09-12 Retentive Network: A Successor to Transformer for Large Language Models (Paper Explained)
2023-09-03 Reinforced Self-Training (ReST) for Language Modeling (Paper Explained)
2023-08-15 [ML News] LLaMA2 Released | LLMs for Robots | Multimodality on the Rise
2023-08-14 How Cyber Criminals Are Using ChatGPT (w/ Sergey Shykevich)
2023-08-13 Recipe AI suggests FATAL CHLORINE GAS Recipe
2023-08-12 DeepFloyd IF - Pixel-Based Text-to-Image Diffusion (w/ Authors)
2023-06-20 [ML News] GPT-4 solves MIT Exam with 100% ACCURACY | OpenLLaMA 13B released
2023-06-06 Tree-Ring Watermarks: Fingerprints for Diffusion Images that are Invisible and Robust (Explained)
2023-06-02 RWKV: Reinventing RNNs for the Transformer Era (Paper Explained)



Tags:
deep learning
machine learning
arxiv
explained
neural networks
ai
artificial intelligence
paper
machine learning tutorial
gpt-4
chatgpt
streaming llms
streaming llm
attention sink
attention sinks
window attention
windowed attention