Writing a literature review that argues, not summarizes
The common failure mode is a paper-by-paper list with no thread between the papers. This workflow surfaces the argument earlier.
A literature review is an argument about how a field thinks, made out of the papers the author found. The common failure mode is to treat it as a summary instead. The draft becomes a sequence of "Smith (2019) argued X. Jones (2020) argued Y. Patel (2021) argued Z." with no thread between them. Reviewers send these back. A workflow that surfaces the argument earlier:
Step 1: Decide what question the review is answering
Before opening a single paper, write the question in one sentence. If it cannot fit in one sentence, the scope is not narrow enough yet. Compare:
"How have large language models been used to generate exam questions for medical students since 2020?"
with
"AI in education."
The first defines a tractable corpus and a defensible boundary. The second is a topic, not a question.
Step 2: Gather faster than feels comfortable
Run three searches in parallel:
- Direct term search using the exact phrases from the question.
- Citation tracking. Pick two or three seminal papers the field already trusts and follow citations forward (who cited them) and backward (who they cited).
- Reviews of reviews. Look for "systematic review" or "scoping review" of the topic. An existing review saves weeks if it is well-scoped.
Dump everything into one project. No filtering yet. No reading yet. The first pass is pure breadth.
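The merge-then-dedupe pass can be sketched as a short script. The record fields, sample papers, and DOIs below are invented for illustration; the only point is that parallel searches overlap and a stable key (DOI when present, normalized title otherwise) collapses the duplicates.

```python
# Merge candidate papers from parallel searches and deduplicate.
# All field names and example records here are hypothetical.

def normalize(title):
    """Lowercase and strip punctuation so near-identical titles match."""
    return "".join(ch for ch in title.lower() if ch.isalnum() or ch.isspace()).strip()

def merge_searches(*result_lists):
    """Combine search results, keeping the first copy of each paper.

    Prefers DOI as the dedup key; falls back to a normalized title.
    """
    seen, merged = set(), []
    for results in result_lists:
        for paper in results:
            key = paper.get("doi") or normalize(paper["title"])
            if key not in seen:
                seen.add(key)
                merged.append(paper)
    return merged

direct = [{"title": "LLMs for Exam Generation", "doi": "10.1000/x1"}]
citations = [{"title": "LLMs for exam generation!", "doi": "10.1000/x1"},
             {"title": "A Scoping Review of AI Tutors", "doi": None}]
reviews = [{"title": "A scoping review of AI tutors", "doi": None}]

pile = merge_searches(direct, citations, reviews)
# Two unique papers survive: one DOI match and one title match each collapse.
```

No filtering logic belongs here on purpose: the first pass is pure breadth, and the only automation worth having is removing exact duplicates.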
Step 3: Screen by title and abstract only
Once the pile is 60 to 200 candidates, screen them fast: title first, abstract second. Thirty seconds per paper. Three buckets:
- Include: directly answers some slice of the question.
- Exclude: wrong topic, wrong method, wrong population.
- Maybe: needs a closer look.
A common error here is reading each paper in full. Screening is pattern matching, not reading. Reading comes later, and only for the included pile.
Step 4: Code the included papers
"Coding" sounds intimidating but means extracting the same five to ten fields from every paper. Decide on fields before starting so the schema does not drift. For a typical empirical review:
- Year, authors, country of study
- Research question or hypothesis
- Method (experimental, survey, qualitative, modeling)
- Sample size and population
- Key finding
- Limitations the authors admit to
A spreadsheet or structured note works. After this step the literature is a dataset, not a pile of PDFs.
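The "dataset, not a pile of PDFs" point can be made concrete with a fixed schema. Field names follow the list above; the sample row is invented. Declaring the fields up front is what prevents schema drift.

```python
# A fixed coding schema keeps every paper's record identical.
# Field names mirror the list above; the sample row is invented.
import csv, io

FIELDS = ["year", "authors", "country", "question", "method",
          "sample", "key_finding", "limitations"]

def code_paper(**values):
    """Return one coded row, failing loudly if the schema drifts."""
    unknown = set(values) - set(FIELDS)
    if unknown:
        raise ValueError(f"fields not in schema: {unknown}")
    return {f: values.get(f, "") for f in FIELDS}

row = code_paper(year=2021, authors="Patel et al.", country="UK",
                 method="survey", sample="142 medical students",
                 key_finding="Generated items rated comparable to faculty items")

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=FIELDS)
writer.writeheader()
writer.writerow(row)
# buf now holds a two-line CSV: the header plus one coded paper.
```

The `ValueError` on unknown fields is the whole trick: a spreadsheet lets columns drift silently, while a declared schema refuses.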
Step 5: Cluster
Lay out the coded papers and look for clusters: papers that share a method, a finding, or a framing. Three to six clusters usually emerge. The clusters become the sections of the review.
This is the argumentative step. The clustering choice is itself a claim about how the field is organized. Two reviewers can cluster the same papers differently and arrive at different reviews, both defensible.
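Once the papers are coded, a first-pass clustering is a one-line groupby over any coded field. This sketch groups by method; the papers are invented, and choosing *which* field to cluster on is exactly the argumentative claim the step describes.

```python
# Clustering coded papers by a shared field (here: method).
# One defensible clustering among several; the records are invented.
from collections import defaultdict

papers = [
    {"authors": "Smith 2019", "method": "experimental"},
    {"authors": "Jones 2020", "method": "survey"},
    {"authors": "Patel 2021", "method": "survey"},
]

def cluster_by(papers, field):
    clusters = defaultdict(list)
    for p in papers:
        clusters[p[field]].append(p["authors"])
    return dict(clusters)

print(cluster_by(papers, "method"))
# {'experimental': ['Smith 2019'], 'survey': ['Jones 2020', 'Patel 2021']}
```

Running `cluster_by(papers, "year")` or `cluster_by(papers, "key_finding")` would yield a different, equally defensible review structure, which is the point.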
Step 6: Write each cluster as claim plus evidence
Every paragraph in a literature review has the same shape:
Claim about the field → two to four papers that support the claim → one or two papers that complicate or contradict it → one sentence on what remains unresolved.
A cluster that cannot be written into this shape is not load-bearing yet. Either merge it into a stronger cluster or drop it.
Step 7: Write the synthesis last
The introduction and synthesis are written last, not first. "Here is what the field knows" cannot be written before the middle sections have specified what each slice of the field knows. Drafts where the introduction is written first nearly always end with an introduction that does not match the body.
What a research workspace handles
Steps 2 through 4 are mechanical and they go faster with the right tooling. Academe imports a folder of 50 to 100 PDFs, extracts the coding fields proposed in Step 4, and answers structured questions across the paper set ("which papers use RCTs?", "which authors disagree with Smith 2019?"). The argument, the clustering choice, and the synthesis are not automated. Those are the parts of the review that distinguish it from any other.