Agentic Search

Academe compares your project against the global scholarly corpus and surfaces ideas, findings, and methods that may be relevant to your work.

Why traditional search isn’t enough

Keyword search mostly returns what you already know to ask for. Important papers for your work are often the ones you don’t know exist: a neighbouring field that solved a related problem, a preprint that complicates an assumption, or a methods paper that changes how you would run an analysis.

Agentic search reverses the direction. Instead of only asking you to search the corpus, Academe searches from the context of the project you are currently working on.

What it surfaces

As you write, import papers, and build out your project, Academe builds an evolving picture of what your research is about: the claims you’re making, the methods you’re using, and the gaps you haven’t filled in yet. It compares that picture against the scholarly corpus.

Adjacent work you haven’t cited

Papers that look relevant even when they don’t share your keywords.

Contradictory findings

Results that challenge central claims while there is still time to revise.

Method transfers

Techniques from other fields that may apply to your problem.

Unexplored gaps

Questions that sit naturally next to your work and may need a closer look.

Recent developments

New papers, preprints, and datasets relevant to your active line of thought, as they appear.

Reviewer-style objections

Angles a critical reader is likely to raise, with citations to support or defeat them.

Where you see it

Suggestions appear where you’re already working, not on a separate screen you have to remember to visit.

Inline while writing

When a paragraph asserts a claim the corpus disputes, Academe flags it as you type.

In chat

Ask a question and the agent can pull in papers you haven’t imported yet, with their relevance made explicit.

Weekly digest

A quiet Monday-morning email summarising what’s new and why it matters for your project.

On-demand review

Ask Academe to “red-team” a section and it actively searches for the strongest counter-evidence.

Why it works

  • Deep project context: Academe uses what you’ve cited, what you’ve written, what methods you’re using, and what’s still missing. Rankings are shaped by your project, not only by a generic popularity score.
  • Parallel exploration: The research agent can run several focused searches at once, chasing angles, traversing the citation graph, and reading candidate papers where full text is available. You see the distilled result.
  • Continuous updating: The corpus is refreshed regularly. When a paper appears that could change an argument you wrote last week, Academe can surface it.
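
To make the parallel-exploration idea concrete, here is a minimal sketch of how several focused searches can run concurrently and be merged into one candidate list. This is illustrative only: the agent’s internals are not public, and the function names (`search_corpus`, `explore`) are hypothetical.

```python
import asyncio


async def search_corpus(query: str) -> list[str]:
    """Stand-in for one focused search against the scholarly corpus."""
    await asyncio.sleep(0)  # placeholder for real network I/O
    return [f"paper matching {query!r}"]


async def explore(angles: list[str]) -> list[str]:
    # Launch one search per angle concurrently, then flatten the batches
    # into a single list of candidate papers.
    batches = await asyncio.gather(*(search_corpus(a) for a in angles))
    return [paper for batch in batches for paper in batch]


candidates = asyncio.run(explore(["method transfer", "contradictory findings"]))
```

The point of the concurrency is latency, not ranking: each angle is independent, so the slowest search bounds the total wait rather than the sum of all of them.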

What it won’t do

Agentic search is a research partner, not a replacement for judgement. It surfaces candidates and explains why they’re relevant. Deciding whether a finding changes your argument, whether a method fits your data, and what to cite is still your call. Suggestions include sources, passages, and reasoning so you can evaluate them on the merits.

Turning it on and off

Agentic search is enabled by default. You can scope it per project: turn off inline suggestions during focused writing, or restrict the corpus to specific years, fields, or open-access only from your project settings.
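
A per-project scope of this kind might look like the following sketch. The field names are hypothetical, chosen only to mirror the options described above; they are not Academe’s actual settings schema.

```python
# Hypothetical shape of per-project agentic-search settings.
# Every key name here is illustrative, not a documented API.
project_settings = {
    "agentic_search": {
        "enabled": True,              # on by default
        "inline_suggestions": False,  # silence flags during focused writing
        "corpus_scope": {
            "years": (2018, 2025),    # restrict to a publication-year range
            "fields": ["neuroscience"],
            "open_access_only": True,
        },
    },
}
```

The scope lives on the project, not the account, so one project can run wide-open discovery while another stays restricted to recent open-access work.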

Background research runs on the same AI credit budget as the rest of the assistant, and each suggestion shows the credits it used so you stay in control.
