Running a reproducible systematic review
PRISMA-compliant screening, dual-reviewer adjudication, and auto-generated flow diagrams without stitching together Covidence, EndNote, Excel, and Word.
Academe
A systematic review is not a longer literature review. It is a different instrument. Where a literature review is an argument, a systematic review is a reproducible measurement of what the evidence currently says. If a second team ran the same search string, screened by the same criteria, and extracted using the same fields, they should arrive at the same conclusion. Academe's Systematic Review module is built around that reproducibility constraint.
1. Register the protocol first
Before any search, lock the question (PICO for clinical, SPIDER for qualitative, or a custom frame), the inclusion and exclusion criteria, the databases that will be searched, and the extraction fields. The protocol document is timestamped. Reviewers can compare what was said in advance to what was actually done.
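In spirit, a registered protocol is a frozen, timestamped record that every later stage is checked against. A minimal sketch of that idea in Python; the field names and `register` helper are illustrative, not Academe's actual schema:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import hashlib
import json

# A sketch of a locked protocol record; field names are illustrative.
@dataclass(frozen=True)
class Protocol:
    question: dict            # e.g. PICO: population, intervention, comparator, outcome
    inclusion: list[str]
    exclusion: list[str]
    databases: list[str]
    extraction_fields: list[str]

def register(protocol: Protocol) -> dict:
    """Timestamp the protocol and hash its contents, so any deviation
    during the review is detectable by direct comparison."""
    body = json.dumps(asdict(protocol), sort_keys=True)
    return {
        "registered_at": datetime.now(timezone.utc).isoformat(),
        "sha256": hashlib.sha256(body.encode()).hexdigest(),
        "protocol": asdict(protocol),
    }
```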
2. Run the search
A search string is drafted once and runs across the databases registered in the protocol. The source connectors cover the major scholarly indexes and return a deduplicated hit list. Every query is logged with its exact string and timestamp.
The search string is the most common point of failure in a systematic review. Too narrow and the review misses studies. Too broad and the screening pile explodes. A pilot search that confirms five to ten known papers appear in the results is the cheapest way to catch either failure before it costs a month.
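The pilot check is mechanical enough to script. A sketch, assuming a deduplicated hit list of dicts with a `doi` key; the DOIs and record shape are hypothetical placeholders:

```python
# Hypothetical pilot check: `results` is the deduplicated hit list,
# KNOWN_DOIS are papers the review team already knows must appear.
KNOWN_DOIS = {"10.1000/known.1", "10.1000/known.2"}  # placeholders, not real DOIs

def pilot_check(results: list[dict], known_dois: set[str]) -> None:
    found = {r["doi"] for r in results if r.get("doi")}
    missing = known_dois - found
    if missing:
        raise ValueError(f"Search string misses known papers: {sorted(missing)}")
    print(f"Pilot passed: {len(known_dois)} known papers present in {len(results)} hits")
```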
3. Title and abstract screening
Hits surface in a two-column screener: the full title and abstract on the left, the decision panel on the right. Dual-reviewer mode runs two screeners independently and flags conflicts for adjudication. Every decision, including the reason code, is stored for the PRISMA flow diagram.
The screener can pre-sort by relevance so the likely-includes appear first. Every call is still made by a human; relevance ranking changes only the order of the queue, not the decisions.
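Conflict detection between two independent screeners reduces to a set comparison. A minimal sketch; the decision labels and study-ID mapping are illustrative, not the platform's internal format:

```python
# Dual-reviewer conflict detection as a set comparison.
def find_conflicts(reviewer_a: dict[str, str], reviewer_b: dict[str, str]) -> list[str]:
    """Return study IDs where two independent screeners disagree.
    Each input maps study_id -> decision ('include' or 'exclude')."""
    shared = reviewer_a.keys() & reviewer_b.keys()
    return sorted(sid for sid in shared if reviewer_a[sid] != reviewer_b[sid])

conflicts = find_conflicts(
    {"s1": "include", "s2": "exclude", "s3": "include"},
    {"s1": "include", "s2": "include", "s3": "include"},
)
# conflicts == ["s2"] -> routed to adjudication
```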
4. Full-text screening
Same pattern, but with the PDF. Upload or auto-fetch via the Unpaywall integration. Exclusion at the full-text stage requires a reason code (wrong population, wrong outcome, wrong study design) that feeds directly into the flow diagram.
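Because every exclusion carries a reason code, the counts the flow diagram needs fall out of a simple tally. A small sketch using the reason codes named above, with invented records:

```python
from collections import Counter

# Full-text exclusions tallied by reason code; records are invented examples.
exclusions = [
    {"study": "s4", "reason": "wrong population"},
    {"study": "s7", "reason": "wrong outcome"},
    {"study": "s9", "reason": "wrong population"},
]

for reason, n in Counter(e["reason"] for e in exclusions).most_common():
    print(f"Reports excluded: {reason} (n = {n})")
```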
5. Data extraction
Define the extraction form once: study design, sample, intervention, comparator, outcome, effect size, risk-of-bias domain. The platform generates an extraction sheet from the form. For each included paper, an LLM pre-populates the fields it can find in the methods and results. Humans verify and correct. Verified values land in a structured table that can be piped into meta-analysis software (Stata, R's metafor, RevMan) or plotted inline.
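The verified table is deliberately flat so standard tools can consume it. A sketch of an export that R's metafor could read directly; the columns beyond `yi` (effect size) and `vi` (sampling variance), the pair `rma()` expects, are illustrative:

```python
import csv

# A flat extraction table exported for meta-analysis. Columns beyond
# yi (effect size) and vi (sampling variance) are illustrative.
FIELDS = ["study_id", "design", "n", "intervention", "comparator",
          "outcome", "yi", "vi"]
rows = [
    {"study_id": "s1", "design": "RCT", "n": 120, "intervention": "drug A",
     "comparator": "placebo", "outcome": "remission", "yi": 0.42, "vi": 0.031},
]

with open("extraction.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(rows)

# In R: dat <- read.csv("extraction.csv"); metafor::rma(yi, vi, data = dat)
```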
6. Risk of bias
Pick a recognized tool: Cochrane RoB 2 for trials, QUADAS-2 for diagnostic accuracy studies, or the Newcastle-Ottawa Scale for observational studies. The platform asks the right domain questions per study. An LLM proposes an initial assessment from the methods section; the reviewer verifies, edits, and confirms.
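The propose-then-verify pattern can be made explicit in the data model: an LLM suggestion never becomes a final judgement until a human fills the verified field. A minimal sketch; the record shape is illustrative, not the platform's:

```python
from dataclasses import dataclass

# Propose-then-verify as a data model. Domain names follow RoB 2;
# the record shape is illustrative.
@dataclass
class DomainJudgement:
    domain: str                   # e.g. "bias arising from the randomization process"
    proposed: str                 # LLM suggestion: "low", "some concerns", "high"
    verified: str | None = None   # set only by the human reviewer

    @property
    def final(self) -> str:
        if self.verified is None:
            raise ValueError(f"{self.domain!r} has not been human-verified")
        return self.verified
```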
7. Synthesis
Narrative synthesis, meta-analysis, or meta-narrative. The methods section is generated from the audit trail of decisions actually made, and results tables follow PRISMA 2020 formatting. The PRISMA flow diagram is rendered from the screening logs, not transcribed by hand.
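Rendering the diagram from logs rather than transcription is the difference between computing the counts and remembering them. A sketch, assuming a hypothetical log of per-stage decisions:

```python
# PRISMA 2020 box counts computed from a hypothetical screening log.
log = [
    {"study": "s1", "stage": "title_abstract", "decision": "exclude"},
    {"study": "s2", "stage": "title_abstract", "decision": "include"},
    {"study": "s2", "stage": "full_text", "decision": "include"},
]

def prisma_counts(log: list[dict]) -> dict[str, int]:
    def ids(stage, decision=None):
        return {e["study"] for e in log
                if e["stage"] == stage and decision in (None, e["decision"])}
    return {
        "records screened": len(ids("title_abstract")),
        "records excluded": len(ids("title_abstract", "exclude")),
        "reports assessed for eligibility": len(ids("full_text")),
        "studies included in review": len(ids("full_text", "include")),
    }
# prisma_counts(log) -> {'records screened': 2, 'records excluded': 1, ...}
```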
8. Report
Export the final review as DOCX, PDF, or LaTeX. Every claim in the results section links back to the extraction row and the source paper. A reader can click a claim and reach the supporting evidence in two hops.
What this consolidates
Traditional systematic-review tooling spreads across Covidence, EndNote, Excel, R, and Word. Each tool has separate state, and synchronizing them is the work that eats the most time. Bringing protocol, screening, extraction, risk-of-bias, and manuscript into one workspace means changes propagate instead of being copied across five applications. The reviewing judgment is unchanged. The synchronization overhead is removed.