Tokenization in DeepSeek R1

Video link: https://www.youtube.com/watch?v=tUpErbDUQes


In this video, we dissect token boundary bias, a common pitfall in tokenizers such as Stability AI's, where unexpected spaces or punctuation splits can derail model outputs, and we look at how DeepSeek addressed it in their R1 model.

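Token boundary bias arises because subword tokenizers fold leading spaces and punctuation into adjacent tokens, so a prompt that ends in a stray space or an odd punctuation split forces the model to continue from a token boundary it rarely saw during training. As a rough illustration (not taken from the video), the sketch below assumes the Hugging Face `transformers` library and the GPT-2 BPE tokenizer, chosen purely for familiarity, to show how a single space changes the token sequence:

```python
# Minimal sketch of token boundary bias, assuming the Hugging Face
# `transformers` library and the GPT-2 BPE tokenizer (an illustrative
# choice, not the tokenizer discussed in the video).
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")

# The same word tokenizes differently depending on whether it follows a space:
print(tok.tokenize("world"))         # ['world']
print(tok.tokenize(" world"))        # ['Ġworld']  (leading space folded into the token)

# A prompt with a trailing space leaves a dangling space token, so the model
# must continue from a boundary it rarely encountered in training:
print(tok.tokenize("Hello world"))   # ['Hello', 'Ġworld']
print(tok.tokenize("Hello world "))  # ['Hello', 'Ġworld', 'Ġ']
```

In practice this is why prompts ending in whitespace or mid-punctuation often yield worse completions: the expected continuation token (for example `Ġworld`) no longer matches the fragments the prompt left behind.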

Where else to find us:
https://www.linkedin.com/in/amirfzpr/
https://aisc.substack.com/
https://www.youtube.com/@ai-science
https://lu.ma/aisc-llm-school
https://maven.com/aggregate-intellect/