Implement classic papers from scratch
Pick any paper below and PaperNova generates a guided workbook: an outline, exercise-by-exercise explanations from beginner to advanced, and a downloadable Jupyter notebook you can run locally.
Language Models are Few-Shot Learners
Tom B. Brown, Benjamin Mann, Nick Ryder, +3 more
GPT-3. Shows that scale alone unlocks in-context learning — a 175B-parameter LM can tackle new tasks from a handful of examples in the prompt.
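The mechanics of in-context learning are simple to sketch: worked examples are packed into the prompt itself, and the model infers the task at inference time with no gradient updates. A minimal illustration (the `Q:`/`A:` format and the translation pairs are our own, not from the paper):

```python
def few_shot_prompt(examples, query):
    """Format (input, output) demonstration pairs plus a new query
    as a single prompt string; the model completes the final 'A:'."""
    lines = [f"Q: {q}\nA: {a}" for q, a in examples]
    lines.append(f"Q: {query}\nA:")
    return "\n\n".join(lines)

# Hypothetical English-to-French demonstrations.
demos = [("cheese", "fromage"), ("house", "maison")]
print(few_shot_prompt(demos, "cat"))
```

The point of the paper is that, at sufficient scale, completing such a prompt is enough for the model to pick up the task — no fine-tuning step is involved.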
LLaMA: Open and Efficient Foundation Language Models
Hugo Touvron, Thibaut Lavril, Gautier Izacard, +1 more
LLaMA. Open-weights foundation model family that matched or beat GPT-3 at a fraction of the parameters and catalysed the open-source LLM ecosystem.
Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
Jason Wei, Xuezhi Wang, Dale Schuurmans, +3 more
Chain-of-Thought. A few worked-example prompts dramatically improve LLM reasoning on arithmetic, commonsense, and symbolic tasks — no additional training required.
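The trick is that each exemplar spells out its intermediate reasoning before the final answer, and the model imitates that pattern on the new question. A minimal sketch, using a paraphrase of the paper's well-known tennis-ball exemplar (the helper function and its format are illustrative, not from the paper):

```python
# One worked example whose answer shows its reasoning step by step.
COT_EXEMPLAR = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
    "Each can has 3 tennis balls. How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 tennis balls each is "
    "6 tennis balls. 5 + 6 = 11. The answer is 11.\n\n"
)

def cot_prompt(exemplars, question):
    """Prepend reasoning-annotated exemplars to a new question so the
    model continues in the same show-your-work style."""
    return "".join(exemplars) + f"Q: {question}\nA:"

print(cot_prompt(
    [COT_EXEMPLAR],
    "A farmer collects 12 eggs and sells 7 of them. How many are left?",
))
```

Compare this with the few-shot format above: the only change is that the exemplar answers contain the reasoning chain, which is what elicits step-by-step reasoning on the held-out question.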
Why implement classic papers?
Reading a paper and implementing it are two very different skills. PaperNova's workbook tool bridges that gap: Gemini turns the paper into a sequence of small, self-contained exercises — from a warm-up reimplementation of the core idea up to advanced extensions — then assembles them into a Jupyter notebook you can run, edit and extend.
Prefer to work from your own paper? Upload a PDF and get the same guided workbook tailored to it.