Reasoning without Language - Deep Dive into the 27M-Parameter Hierarchical Reasoning Model
Hierarchical Reasoning Model (HRM) is a very interesting work that shows how recurrent thinking in latent space can help convey ideas that language may find hard to express.
On ARC-AGI, HRM achieves 40.3% despite being trained from scratch on only the official dataset (~1000 examples), with just 27M parameters and a 30x30 grid context (900 tokens). This substantially surpasses leading CoT-based models such as o3-mini-high (34.5%) and Claude 3.7 with 8K context (21.2%).
This latent space thinking reminds me of the MemOS parameter/activation memory, which can be used to convey context without needing to express that context in language as input.
HRM updates its reasoning state (hidden vectors) at two timescales: the low-level vector is updated every timestep, while the high-level vector is updated once every T timesteps. This gives us a blueprint for reasoning across multiple timescales and levels of hierarchy (a minimal sketch of this two-timescale loop is below).
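A minimal PyTorch sketch of this two-timescale loop is shown here. It is an illustrative assumption only: I use GRU cells and made-up names (TwoTimescaleReasoner, z_L, z_H), whereas the official hrm_act_v1.py uses transformer blocks, a one-step gradient approximation, and ACT halting.

import torch
import torch.nn as nn

class TwoTimescaleReasoner(nn.Module):
    # Illustrative sketch only, not the official HRM implementation.
    def __init__(self, dim, T, n_cycles):
        super().__init__()
        self.T, self.n_cycles = T, n_cycles
        self.low = nn.GRUCell(2 * dim, dim)   # low-level cell: sees input + high-level state
        self.high = nn.GRUCell(dim, dim)      # high-level cell: sees latest low-level state

    def forward(self, x):
        z_L = torch.zeros_like(x)             # low-level state, updated every timestep
        z_H = torch.zeros_like(x)             # high-level state, updated once every T timesteps
        for _ in range(self.n_cycles):
            for _ in range(self.T):
                z_L = self.low(torch.cat([x, z_H], dim=-1), z_L)
            z_H = self.high(z_L, z_H)         # slow update once per cycle of T fast steps
        return z_H

# Usage with hypothetical shapes: batch of 8 examples, hidden size 128
model = TwoTimescaleReasoner(dim=128, T=4, n_cycles=3)
summary = model(torch.randn(8, 128))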
Moving ahead, I think it would be even more interesting to combine this latent thinking with the language thinking of the current Chain of Thought (CoT) paradigm. Effectively, instead of only passing output tokens from the transformer between subtasks, we would also pass a latent-space output into the next call of the LLM (see the hybrid-loop sketch below).
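A rough sketch of such a hybrid loop is below. The llm_step interface, the StepOutput structure, and the latent size are hypothetical assumptions to illustrate the idea, not an existing API.

from dataclasses import dataclass
import torch

@dataclass
class StepOutput:
    text: str              # ordinary CoT tokens for this subtask
    latent: torch.Tensor   # continuous state that words may fail to capture

def hybrid_cot(llm_step, task, n_steps, dim=128):
    # Hypothetical loop: each call receives the previous text AND a latent
    # vector, and returns both, so reasoning can flow through either channel.
    text, latent = task, torch.zeros(dim)
    for _ in range(n_steps):
        out = llm_step(text, latent)
        text, latent = out.text, out.latent
    return text

One way the latent channel could be realised is as prefix embeddings prepended to the next call, similar in spirit to the latent-space reasoning reference listed below.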
We could also scale this hybrid approach up to cover more kinds of latent spaces, such as images, videos, audio, and sensorimotor signals, and allow interfacing with tools, with the tool output fed back into the model.
~~~
References:
Slides: https://github.com/tanchongmin/john-youtube/blob/main/Discussion_Sessions/Hierarchical Reasoning Model.pdf
Paper: https://www.arxiv.org/pdf/2506.21734
Code: https://github.com/sapientinc/HRM/blob/main/models/hrm/hrm_act_v1.py
Other resources:
Chain of Thought: https://arxiv.org/pdf/2201.11903
Reasoning and Acting (ReAct): https://arxiv.org/pdf/2210.03629
Thinking in Latent Space: https://arxiv.org/html/2412.06769v2
MemOS (Multiple memory spaces in an overall architecture): https://arxiv.org/pdf/2505.22101
Learning, Fast and Slow (learning from any start and goal state along the trajectory of experience): https://arxiv.org/pdf/2301.13758
~~~
0:00 Introduction
3:38 Impressive results on ARC-AGI, Sudoku and Maze
12:10 Experimental Tasks
17:17 Hierarchical Model Design Insights
29:21 Neuroscience Inspiration
33:25 Clarification on pre-training for HRM
38:20 Performance for HRM could be due to data augmentation
49:30 Visualizing Intermediate Thinking Steps
1:00:05 Traditional Chain of Thought (CoT)
1:02:50 Language may be limiting
1:09:03 New paradigm for thinking
1:25:52 Traditional Transformers do not scale depth well
1:30:59 Truncated Backpropagation Through Time
1:34:04 Towards a hybrid language/non-language thinking
~~~
AI and ML enthusiast. Likes to think about the essence behind AI breakthroughs and explain them in a simple and relatable way. Also an avid game creator.
Discord: https://discord.gg/bzp87AHJy5
LinkedIn: https://www.linkedin.com/in/chong-min-tan-94652288/
Online AI blog: https://delvingintotech.wordpress.com/
Twitter: https://twitter.com/johntanchongmin
Try out my games here: https://simmer.io/@chongmin