
Assistant Reliability

What Academe does to keep answers faithful to your sources, and where you should still verify before trusting.

Answers are grounded, not generated

Chat answers carry citations back to the passages they draw from. Clicking a citation opens the source document at the relevant paragraph. If the assistant cannot ground an answer in your corpus, it should say so instead of inventing one.

This is a deliberate trade: the assistant is more useful when it has read the paper, and less useful when it has not. It leans hard on direct extraction rather than paraphrase, and citations are part of the answer path, not decorative afterthoughts attached at the end.

What we do to keep it faithful

  • Extraction, not generation. The assistant is biased toward quoting or tightly paraphrasing source passages rather than composing free text.
  • Citation-first answering. Responses are generated alongside the passages that justify them; if the supporting passage is weak, the claim is softened or dropped.
  • Cross-checking. High-stakes claims are routed through a second pass that re-reads the cited passage and asks whether the claim actually follows.
  • Structured tasks stay structured. Extracted tables show the source quote next to every cell, so disagreements between the cell value and its evidence are visible rather than hidden.
  • Edits remain reviewable. When the assistant rewrites your prose, the change arrives as a tracked suggestion you can accept or reject.

Why you may get different answers to the same question

Asking the same question twice will not always produce the same answer. Three reasons, in rough order of how often they matter:

  • Your context has changed. New documents in the project, different filters, and a different open tab can all shift what the assistant has to work from.
  • Academe has changed. Retrieval, ranking, and the underlying models are updated frequently. A run from last week and a run from today are not the same engine.
  • Language models have intrinsic variability. Even holding the inputs fixed, two runs can produce differently worded answers. This is expected and not a bug. Treat the assistant as a well-read collaborator, not a deterministic database.
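The third point can feel abstract, so here is a toy illustration of why identical inputs can still yield differently worded outputs. This is not Academe's actual decoder; it is a minimal sketch in which a made-up four-word vocabulary stands in for a language model's next-token distribution, and two unseeded runs stand in for two chat requests.

```python
import random

def sample_answer(distribution, rng):
    """Sample a short 'answer' token by token from a fixed
    probability distribution -- a toy stand-in for an LLM decoder."""
    words, weights = zip(*distribution.items())
    return " ".join(rng.choices(words, weights=weights, k=4))

# Identical inputs on both runs: same vocabulary, same probabilities.
dist = {"the": 0.3, "study": 0.25, "shows": 0.25, "results": 0.2}

# Two runs with fresh random state: the wording can differ,
# even though nothing about the input changed.
run_a = sample_answer(dist, random.Random())
run_b = sample_answer(dist, random.Random())

# Fixing the seed makes the output reproducible -- the analogue of
# locking a saved value instead of re-running the model.
fixed_a = sample_answer(dist, random.Random(42))
fixed_b = sample_answer(dist, random.Random(42))
assert fixed_a == fixed_b
```

The seeded runs always agree, which is the same idea as the lock-and-freeze workflow described below: determinism comes from pinning the result, not from asking the model again.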

When determinism matters

For extractions or reviews where you need the exact same answer across runs, lock the result in the table or note. Once saved, the value is frozen, and future re-runs will compare against it rather than overwrite it.

Language coverage

Academe is optimized for English-language research. Searches, extractions, and chat all work best against English corpora. The assistant will attempt queries and summaries in other languages, but result quality varies, and it is not intended as a translation tool.

If you are working primarily in another language, treat non-English output as a draft that needs a native review, the same way you would treat any generated text in a language you do not control.

Where the assistant is not a substitute

A grounded assistant is still an assistant. It will not catch every mistake a reviewer would, it cannot weigh the relative credibility of two sources the way a domain expert can, and it will not notice that a cited paper has since been retracted unless the retraction is in the record it has.

Use it the way you would use a fast Research Agent: quick to surface evidence and assemble structure, but not a stand-in for your own judgment on the claims that matter.
