NLP · 2017 · Intermediate · Featured

Attention Is All You Need

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, Illia Polosukhin · 2017

The foundational Transformer paper. It introduces multi-head self-attention and dispenses with recurrence and convolutions entirely, the blueprint behind every modern large language model.
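
To give a taste of the core idea before you open the notebook, here is a minimal NumPy sketch of the paper's scaled dot-product attention, Attention(Q, K, V) = softmax(QKᵀ/√d_k)V, the building block inside each attention head. The function name and toy shapes are illustrative, not taken from any reference implementation; multi-head attention runs several of these in parallel over learned linear projections of Q, K, and V.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V, as defined in the paper."""
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-2, -1) / np.sqrt(d_k)      # (seq_q, seq_k) similarity scores
    scores -= scores.max(axis=-1, keepdims=True)        # subtract row max for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)      # softmax over the key dimension
    return weights @ V                                  # weighted sum of value vectors

# Toy self-attention example (illustrative shapes): 4 tokens, d_k = d_v = 8.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(x, x, x)  # self-attention uses Q = K = V
print(out.shape)  # (4, 8)
```

The exercises build this up step by step, from the attention function above to a full multi-head layer.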

What you'll get

  • Outline: a plain-English breakdown of the paper's core idea, prerequisites, and the concepts you'll need to implement it.
  • Exercises: five to ten hands-on tasks, each with a concept card, a prompt, a starter code stub, and a collapsible reference solution.
  • Runnable notebook: a single .ipynb you can download and open in Jupyter or VS Code to work through every exercise.
  • Extensions: suggested follow-up experiments so you don't stop at a faithful reimplementation.