Energy-Based Transformers are Scalable Learners and Thinkers (Paper Review)

Video Link: https://www.youtube.com/watch?v=RAEy3JZmIaA





Paper: https://arxiv.org/abs/2507.02092
Code: https://github.com/alexiglad/EBT
Website: https://energy-based-transformers.github.io/

Abstract:
Inference-time computation techniques, analogous to human System 2 Thinking, have recently become popular for improving model performances. However, most existing approaches suffer from several limitations: they are modality-specific (e.g., working only in text), problem-specific (e.g., verifiable domains like math and coding), or require additional supervision/training on top of unsupervised pretraining (e.g., verifiers or verifiable rewards). In this paper, we ask the question "Is it possible to generalize these System 2 Thinking approaches, and develop models that learn to think solely from unsupervised learning?" Interestingly, we find the answer is yes, by learning to explicitly verify the compatibility between inputs and candidate-predictions, and then re-framing prediction problems as optimization with respect to this verifier. Specifically, we train Energy-Based Transformers (EBTs) -- a new class of Energy-Based Models (EBMs) -- to assign an energy value to every input and candidate-prediction pair, enabling predictions through gradient descent-based energy minimization until convergence. Across both discrete (text) and continuous (visual) modalities, we find EBTs scale faster than the dominant Transformer++ approach during training, achieving an up to 35% higher scaling rate with respect to data, batch size, parameters, FLOPs, and depth. During inference, EBTs improve performance with System 2 Thinking by 29% more than the Transformer++ on language tasks, and EBTs outperform Diffusion Transformers on image denoising while using fewer forward passes. Further, we find that EBTs achieve better results than existing models on most downstream tasks given the same or worse pretraining performance, suggesting that EBTs generalize better than existing approaches. Consequently, EBTs are a promising new paradigm for scaling both the learning and thinking capabilities of models.
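
To make the core idea from the abstract concrete, here is a minimal sketch of prediction as gradient-descent energy minimization: a learned verifier assigns an energy to each (context, candidate-prediction) pair, and inference refines the candidate by descending that energy. This is an illustrative assumption-laden sketch, not the authors' implementation; the `energy_model` callable, the fixed step count, and the step size are all placeholders.

```python
import torch

def ebm_predict(energy_model, context, init_prediction, steps=10, step_size=0.1):
    """Refine a candidate prediction by descending a learned energy E(context, prediction).

    energy_model: any callable returning a per-sample scalar energy (hypothetical here).
    steps / step_size: illustrative hyperparameters; the paper runs minimization until convergence.
    """
    prediction = init_prediction.detach().clone().requires_grad_(True)
    for _ in range(steps):
        energy = energy_model(context, prediction).sum()   # scalar total energy
        grad, = torch.autograd.grad(energy, prediction)    # dE / d(prediction)
        with torch.no_grad():
            prediction = prediction - step_size * grad     # one "thinking" step downhill
        prediction.requires_grad_(True)
    return prediction.detach()
```

In this framing, spending more inference-time compute simply means taking more gradient steps on the energy, which is how the paper casts System 2 Thinking as optimization against the learned verifier.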

Authors: Alexi Gladstone, Ganesh Nanduru, Md Mofijul Islam, Peixuan Han, Hyeonjeong Ha, Aman Chadha, Yilun Du, Heng Ji, Jundong Li, Tariq Iqbal

Links:
Homepage: https://ykilcher.com/
Merch:
YouTube:
Twitter: https://twitter.com/ykilcher
Discord: https://ykilcher.com/discord
LinkedIn: https://www.linkedin.com/in/ykilcher

If you want to support me, the best thing to do is to share out the content :)

If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this):
SubscribeStar: https://www.subscribestar.com/yannickilcher
Patreon: https://www.patreon.com/yannickilcher
Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq
Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2
Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m
Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n




Other Videos By Yannic Kilcher


2 days ago | Context Rot: How Increasing Input Tokens Impacts LLM Performance (Paper Analysis)
6 days ago | Energy-Based Transformers are Scalable Learners and Thinkers (Paper Review)
2025-05-03 | On the Biology of a Large Language Model (Part 2)
2025-04-05 | On the Biology of a Large Language Model (Part 1)
2025-01-26 | [GRPO Explained] DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models
2024-12-26 | Traditional Holiday Live Stream
2024-12-24 | Byte Latent Transformer: Patches Scale Better Than Tokens (Paper Explained)
2024-12-10 | Safety Alignment Should be Made More Than Just a Few Tokens Deep (Paper Explained)
2024-11-23 | TokenFormer: Rethinking Transformer Scaling with Tokenized Model Parameters (Paper Explained)
2024-10-19 | GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models
2024-10-12 | Were RNNs All We Needed? (Paper Explained)
2024-10-05 | Scaling LLM Test-Time Compute Optimally can be More Effective than Scaling Model Parameters (Paper)
2024-08-04 | Privacy Backdoors: Stealing Data with Corrupted Pretrained Models (Paper Explained)
2024-07-08 | Scalable MatMul-free Language Modeling (Paper Explained)
2024-06-26 | Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools (Paper Explained)
2024-06-01 | xLSTM: Extended Long Short-Term Memory
2024-05-21 | [ML News] OpenAI is in hot waters (GPT-4o, Ilya Leaving, Scarlett Johansson legal action)
2024-05-01 | ORPO: Monolithic Preference Optimization without Reference Model (Paper Explained)
2024-04-30 | [ML News] Chips, Robots, and Models
2024-04-28 | TransformerFAM: Feedback attention is working memory
2024-04-27 | [ML News] Devin exposed | NeurIPS track for high school students