Implement classic papers from scratch
Pick any paper below and PaperNova generates a guided workbook: an outline, exercise-by-exercise explanations that progress from beginner to advanced, and a downloadable Jupyter notebook you can run locally.
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
Jacob Devlin, Ming-Wei Chang, Kenton Lee, +1 more
Bidirectional masked-language modelling that reshaped NLP benchmarks and set the pretraining-then-finetuning pattern for years to come.
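The masking objective at the heart of BERT can be sketched in a few lines: hide a random subset of tokens and ask the model to recover them from both left and right context. This is a minimal illustration, not BERT's actual preprocessing (which uses WordPiece subwords and an 80/10/10 mask/random/keep split); the helper name is ours.

```python
import random

MASK = "[MASK]"

def mask_tokens(tokens, mask_prob=0.15):
    """Replace a random ~15% of tokens with [MASK]; the model's task is
    to predict the originals from the surrounding (bidirectional) context."""
    masked, targets = [], {}
    for i, tok in enumerate(tokens):
        if random.random() < mask_prob:
            targets[i] = tok          # position -> original token to predict
            masked.append(MASK)
        else:
            masked.append(tok)
    return masked, targets

sentence = "the quick brown fox jumps over the lazy dog".split()
masked, targets = mask_tokens(sentence)
```

The workbook's early exercises build exactly this kind of toy pipeline before moving on to the Transformer encoder that consumes it.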
Attention Is All You Need
Ashish Vaswani, Noam Shazeer, Niki Parmar, +5 more
The foundational Transformer paper. Introduces multi-head self-attention and dispenses with recurrence and convolutions — the blueprint behind every modern large language model.
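The paper's core operation, scaled dot-product attention, fits in a few lines of NumPy. This is a single-head sketch with toy shapes; multi-head attention runs several of these in parallel over learned linear projections of the inputs.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V, the building
    block of every Transformer layer."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # query-key similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # row-wise softmax
    return weights @ V                                # weighted sum of values

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))   # 4 query positions, d_k = 8
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(Q, K, V)
```

The `1/sqrt(d_k)` scaling keeps the dot products from saturating the softmax as the key dimension grows, one of the small details the workbook exercises examine.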
Efficient Estimation of Word Representations in Vector Space
Tomas Mikolov, Kai Chen, Greg Corrado, +1 more
Word2Vec. Skip-gram and CBOW turned words into dense vectors whose geometry encodes meaning — the bridge between symbolic text and modern deep learning.
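Skip-gram training data is just (center, context) word pairs drawn from a sliding window. The sketch below shows pair generation only; the actual model learns an embedding per word by predicting context from center, and the function name and window size here are illustrative.

```python
def skipgram_pairs(tokens, window=2):
    """Generate (center, context) pairs: for each word, every word within
    `window` positions on either side becomes a prediction target."""
    pairs = []
    for i, center in enumerate(tokens):
        for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
            if j != i:
                pairs.append((center, tokens[j]))
    return pairs

pairs = skipgram_pairs("the cat sat on the mat".split())
```

CBOW inverts the objective, predicting the center word from its averaged context; the workbook has you implement both and compare the resulting vector spaces.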
Why implement classic papers?
Reading a paper and implementing it are two very different skills. PaperNova's workbook tool bridges that gap: Gemini turns the paper into a sequence of small, self-contained exercises — from a warm-up reimplementation of the core idea up to advanced extensions — then assembles them into a Jupyter notebook you can run, edit and extend.
Prefer to work from your own paper? Upload a PDF and get the same guided workbook tailored to it.