Training & Optimization
2015
Beginner
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
Sergey Ioffe, Christian Szegedy · 2015
Batch Normalization. Normalising activations over each mini-batch stabilises and speeds up training, and is a now-standard building block in deep networks.
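For orientation, here is a minimal NumPy sketch of the training-time forward pass described in the paper: normalise each feature to zero mean and unit variance over the mini-batch, then apply a learned scale and shift. The function name, shapes, and example values are illustrative, not the notebook's actual exercise stubs.

```python
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    """Training-time batch norm: normalise over the batch axis, then scale and shift."""
    mu = x.mean(axis=0)                     # per-feature mean over the mini-batch
    var = x.var(axis=0)                     # per-feature variance over the mini-batch
    x_hat = (x - mu) / np.sqrt(var + eps)   # zero-mean, unit-variance activations
    return gamma * x_hat + beta             # learned scale (gamma) and shift (beta)

# Example: 32 samples, 4 features, deliberately off-centre and badly scaled
x = np.random.randn(32, 4) * 3.0 + 5.0
gamma, beta = np.ones(4), np.zeros(4)
y = batch_norm_forward(x, gamma, beta)
print(y.mean(axis=0))   # approximately 0 for every feature
print(y.std(axis=0))    # approximately 1 for every feature
```

At inference time the paper replaces the per-batch statistics with running estimates collected during training, which the exercises in this module cover.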
What you'll get
- Outline: a plain-English breakdown of the paper's core idea, prerequisites, and the concepts you'll need to implement it.
- Exercises: five to ten hands-on tasks, each with a concept card, a prompt, a starter code stub, and a collapsible reference solution.
- Runnable notebook: a single .ipynb you can download and open in Jupyter or VS Code to work through every exercise.
- Extensions: suggested follow-up experiments so you don't stop at a faithful reimplementation.