NLP · 2018 · Intermediate · Featured

BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding

Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova · 2018

Bidirectional masked language modelling that reshaped NLP benchmarks and established the pretrain-then-fine-tune pattern for years to come.
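
To give a flavour of the core pretraining objective, here is a minimal, hypothetical sketch of BERT-style masked-language-model corruption: 15% of positions are selected, and of those, 80% become [MASK], 10% become a random token, and 10% are left unchanged, per the paper. Names such as MASK_ID and mask_tokens are illustrative placeholders, not from any particular library.

```python
import random

MASK_ID = 103        # stand-in id for the [MASK] token (assumed, not tied to a real vocabulary)
VOCAB_SIZE = 30522   # BERT-base WordPiece vocabulary size

def mask_tokens(token_ids, mask_prob=0.15, seed=None):
    """Return (corrupted_ids, labels); labels are -100 where no prediction is required
    (mirroring the common ignore-index convention)."""
    rng = random.Random(seed)
    corrupted = list(token_ids)
    labels = [-100] * len(token_ids)
    for i, tok in enumerate(token_ids):
        if rng.random() < mask_prob:        # select roughly 15% of positions
            labels[i] = tok                 # the model must recover the original token here
            roll = rng.random()
            if roll < 0.8:                  # 80%: replace with [MASK]
                corrupted[i] = MASK_ID
            elif roll < 0.9:                # 10%: replace with a random token
                corrupted[i] = rng.randrange(VOCAB_SIZE)
            # remaining 10%: keep the original token unchanged
    return corrupted, labels

# Toy usage: corrupt a short sequence of token ids
ids = [7592, 1010, 2026, 3899, 2003, 10140, 1012]
print(mask_tokens(ids, seed=0))
```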

What you'll get

  • Outline: a plain-English breakdown of the paper's core idea, prerequisites, and the concepts you'll need to implement it.
  • Exercises: five to ten hands-on tasks, each with a concept card, a prompt, a starter code stub, and a collapsible reference solution.
  • Runnable notebook: a single .ipynb you can download and open in Jupyter or VS Code to work through every exercise.
  • Extensions: suggested follow-up experiments so you don't stop at a faithful reimplementation.