
Systematic reviews

Scope a review question, gather the corpus, dedupe and screen against eligibility criteria, extract structured data, and produce an audit-ready synthesis in the same workspace.

The workflow

Academe breaks a systematic review into five stages, each with a dedicated panel. Progress through them in order, or jump back and revise. State is saved per project, and every decision is attributed to a reviewer.

  1. Protocol. Define the research question, PICO/SPIDER framework, inclusion and exclusion criteria, and registration details (PROSPERO, OSF).
  2. Gather. Run searches across the scholarly index; import ad-hoc references by paper identifier or .ris/.bib upload.
  3. Deduplicate. Automatic dedup by DOI, title-similarity, and author/year fingerprint. Review flagged duplicates before they are merged.
  4. Screen. Two-pass screening: title/abstract first, then full-text. Every decision is logged with the reviewer and the rule that triggered it.
  5. Extract. Structured data extraction into a table you define. Extraction runs against the full text and shows the source passage for every cell.

Protocol panel

Open Systematic Review from the project sidebar. The Protocol panel captures the fields a PRISMA-compliant review needs: population, intervention, comparator, outcomes, study types, language filters, date range, and any pre-registered criteria. The protocol is version-pinned. Changing it after screening begins creates a new revision so your audit trail stays intact.

Gathering the corpus

The Gather panel supports three input modes:

  • Search queries against Academe's unified scholarly index. Queries are saved so you can rerun them and diff the results.
  • DOI / PMID import for individual records, pasted one-per-line.
  • File upload for .ris, .bib, .nbib, and CSV exports from other tools.

Duplicates are flagged but not removed in Gather. That happens in the dedup step so you can review the merges.
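
For reference, a single record in a .ris upload follows the tagged RIS convention; the values below are invented for illustration:

```
TY  - JOUR
TI  - Effects of Exercise on Depression in Adults
AU  - Smith, J.
AU  - Lee, K.
PY  - 2021
DO  - 10.1000/example.2021.001
JO  - Journal of Example Studies
ER  -
```

Each record opens with a `TY` (type) tag and closes with `ER`; repeated `AU` tags carry one author each.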

Dedupe & screen

The dedup pass compares records on DOI, normalized title, and author-year fingerprints. High-confidence duplicates are merged by the system; borderline cases are queued for reviewer confirmation.
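
To make the fingerprint idea concrete, here is a minimal sketch of a normalized-title plus author-year key. This is an illustration of the technique, not Academe's actual implementation, and the record field names (`title`, `authors`, `year`) are assumptions:

```python
import re

def fingerprint(record: dict) -> tuple:
    """Build a coarse duplicate-detection key from a bibliographic record.

    Assumes 'title' (str), 'authors' (list of 'Last, First'), and
    'year' fields; the field names are illustrative.
    """
    # Normalize title: lowercase, strip punctuation, collapse whitespace
    title = re.sub(r"[^a-z0-9 ]", "", record["title"].lower())
    title = " ".join(title.split())
    # First author's surname plus publication year
    first_author = record["authors"][0].split(",")[0].strip().lower()
    return (title, first_author, record["year"])

a = {"title": "Exercise and Depression: A Review.",
     "authors": ["Smith, J.", "Lee, K."], "year": 2021}
b = {"title": "exercise and depression - a review",
     "authors": ["Smith, Jane"], "year": 2021}
print(fingerprint(a) == fingerprint(b))  # → True: same key despite formatting
```

Two records with differing punctuation and author formatting still collapse to the same key, which is why borderline cases (shared surname and year, near-identical titles) are queued for human confirmation rather than merged blindly.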

Screening runs title/abstract first, then full-text. Each decision records the reviewer, the timestamp, the rule invoked, and a free-form note. Dual-review mode lets two reviewers screen independently and surfaces disagreements for reconciliation.
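
Conceptually, dual-review reconciliation is a set comparison over the two reviewers' decisions. This sketch uses hypothetical record IDs and decision labels to show which records get queued for the tie-breaker pass:

```python
# Each reviewer's screening decisions, keyed by record ID (illustrative data)
reviewer_a = {"rec1": "include", "rec2": "exclude", "rec3": "include"}
reviewer_b = {"rec1": "include", "rec2": "include", "rec3": "exclude"}

# Records screened by both reviewers where the decisions differ
disagreements = {
    rid: (reviewer_a[rid], reviewer_b[rid])
    for rid in reviewer_a
    if rid in reviewer_b and reviewer_a[rid] != reviewer_b[rid]
}
print(sorted(disagreements))  # → ['rec2', 'rec3']
```

Agreements pass through untouched; only the conflicting records need a third reviewer's attention.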

Structured extraction

In the Extract panel, define the columns you need: study characteristics, outcomes, effect sizes, and risk-of-bias items. Academe fills the table against the included papers' full text. Every cell shows the source passage on click, so you can verify the extracted value.

Reviewer transparency

Every extracted value carries a provenance link back to the passage it came from. If a reviewer edits a cell, the row records who changed it and when.
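
An extraction table definition amounts to a list of typed columns. The JSON below is a hypothetical sketch of such a schema, not Academe's actual format; the field names and types are invented for illustration:

```json
{
  "columns": [
    {"name": "sample_size", "type": "integer"},
    {"name": "intervention", "type": "text"},
    {"name": "effect_size", "type": "number", "unit": "Cohen's d"},
    {"name": "risk_of_bias", "type": "enum",
     "values": ["low", "some concerns", "high"]}
  ]
}
```

Typed columns are what make extracted values verifiable: a cell holds one value of a declared type, plus its provenance link to the source passage.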

Export & reporting

When the review is complete, export the full dataset as CSV or Excel, the PRISMA flow as SVG or PNG, and the reviewer log as a PDF audit trail. A markdown synthesis draft is generated with citation anchors you can drop straight into a manuscript.
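
The PRISMA flow diagram is, at bottom, a chain of counts that must reconcile at every stage. A quick sketch with invented figures shows the arithmetic:

```python
# Invented counts for a hypothetical review
identified = 1240              # records from all searches and imports
duplicates_removed = 310
title_abstract_excluded = 780
fulltext_excluded = 95

screened = identified - duplicates_removed               # records screened
fulltext_assessed = screened - title_abstract_excluded   # advanced to full text
included = fulltext_assessed - fulltext_excluded         # final corpus
print(screened, fulltext_assessed, included)  # → 930 150 55
```

Because every screening decision is logged, these counts fall out of the decision log automatically rather than being tallied by hand.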

Reproducibility by default

Every systematic review in Academe is replayable: the protocol, queries, decision log, and extraction schema are all captured in the project record. A second reviewer can re-run the same pipeline and see exactly where decisions differed.

At a glance

Less manual screening work

Assistant support is aimed at the screening and data-extraction steps, where the bulk of reviewer hours are normally spent.

High screening recall

Assisted title/abstract screening targets the same recall as a careful human reviewer on validation sets, with an audit trail for include/exclude decisions.

Accurate extraction

Structured extraction runs against the full text and attaches the source passage to every cell, so accuracy is verifiable rather than asserted.

Millions of papers indexed

Gather searches run across Academe's unified scholarly index covering academic papers, conference proceedings, preprints, and clinical trial registries.

Data sources

The Gather panel queries a single unified index so you don't have to assemble queries across five different databases by hand.

Academic papers

Source coverage spanning 400 million+ scholarly works. Refreshed weekly.

Clinical trials

Over 500,000 registered trials from clinical trial registries. Refreshed on a regular schedule.

Your documents

Upload institutional subscriptions, grey literature, or prior project PDFs to supplement the index.

Why reviewers trust the output

  • Sentence-level citations. Every generated claim points back to the exact sentence from the source paper. No more hunting through a PDF to figure out where a summary came from.
  • Dynamic screening & extraction. You often don't know which papers you want or what to extract from them until you see what's out there. Academe lets you refine criteria mid-flight and re-apply them without restarting the review.
  • Criteria you can audit. Per-criterion pass/fail on every screening decision, with a rationale quote and the option to override. Reviewer overrides are logged alongside the original recommendation.
  • Dual-reviewer reconciliation. Two reviewers screen independently; Academe surfaces disagreements for a third tie-breaker pass.

Beyond the guidelines: what automation makes practical

PRISMA, Cochrane, and HTA guidelines were shaped around the constraints of manual work. Academe keeps you compliant with those guidelines while enabling workflows that used to be impractical:

  • Living reviews. Keep a review current by combining the Gather step with Research Alerts. New papers matching your protocol are surfaced the week they publish and can be incorporated without re-running the whole pipeline.
  • Transparency by default. Share a read-only link to the review record (protocol, queries, decision log, and extraction schema) with a colleague, editor, or advisor who does not have an Academe account.
  • Iterative protocol design. Test different inclusion criteria on a pilot subset before committing. The pipeline can be rerun quickly, so iterating on the protocol is practical.
  • Reusable data. The papers and extracted data from a completed review stay in your library. Reuse them in the next project without re-screening.

FAQ

  • Can I collaborate with a teammate on a review? Yes. Team workspaces allow multiple reviewers to screen records and work from the same extraction table. Dual-review reconciliation is built for screening decisions.
  • Can I customize screening and extraction criteria? Yes. Start with suggested criteria, then edit or replace them with your protocol-defined rules. Extraction schemas are customizable.
  • Can I update my review as new papers are published? Yes. Pair the review with Research Alerts for living reviews. New matches are queued for a focused rescreen pass.
  • Can you extract data from tables and figures? Tables are supported. Figures are supported for captions and inline text.
  • Do you have access to paywalled papers? We prioritize open-access versions and surface them automatically. For paywalled papers, connect your institutional access and Academe will route through it where permitted by your institution's license.
  • Is the workflow PRISMA / HTA / MDR compliant? The audit trail produced by Academe (protocol version, queries with timestamps, decision log, extraction provenance) is designed to satisfy PRISMA and related regulatory guidance. Export formats include CSV, Excel, SVG/PNG flow diagrams, and a PDF reviewer log.