Self-Supervised Learning
2021
INTERMEDIATE

Masked Autoencoders Are Scalable Vision Learners

Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, Ross Girshick · 2021

MAE. Mask 75% of image patches and train the model to reconstruct them, a BERT-style objective that yields strong, scalable vision representations.
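The core masking step can be sketched in a few lines: shuffle the patch indices, keep a random 25%, and record a binary mask over the rest. This is a minimal NumPy sketch for illustration (the function name and array shapes are assumptions; the paper's released code is in PyTorch):

```python
import numpy as np

def random_masking(patches, mask_ratio=0.75, rng=None):
    """Randomly drop a fraction of patches, MAE-style.

    patches: (N, D) array of N patch embeddings.
    Returns the visible patches, the kept indices, and a binary
    mask over the original sequence (1 = masked, 0 = kept).
    """
    rng = rng or np.random.default_rng(0)
    n = patches.shape[0]
    n_keep = int(n * (1 - mask_ratio))
    # Shuffle all patch indices and keep the first n_keep of them.
    perm = rng.permutation(n)
    keep = np.sort(perm[:n_keep])
    mask = np.ones(n, dtype=np.int64)
    mask[keep] = 0
    return patches[keep], keep, mask

# Example: 196 patches (a 14x14 grid) of dim 8; the encoder
# only ever sees the 49 visible patches.
patches = np.random.default_rng(1).normal(size=(196, 8))
visible, keep_idx, mask = random_masking(patches)
```

The decoder later receives the visible tokens plus learned mask tokens at the masked positions, and the loss is computed only on the masked patches.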

What you'll get

  • Outline: a plain-English breakdown of the paper's core idea, prerequisites, and the concepts you'll need to implement it.
  • Exercises: five to ten hands-on tasks, each with a concept card, a prompt, a starter code stub, and a collapsible reference solution.
  • Runnable notebook: a single .ipynb you can download and open in Jupyter or VS Code to work through every exercise.
  • Extensions: suggested follow-up experiments so you can go beyond a faithful reimplementation.