On the importance of stupidity in research
Schwartz's 2008 essay made the case that feeling lost is the texture of the work, not a sign of having chosen the wrong career. Some practical follow-ups.
Here is a thing that is not always said clearly in the first year of graduate school: the researchers most worth admiring are confused most of the time. They spend their days in territory where they do not know the answer, do not know the right question, and, most uncomfortably, do not know whether the problem they are staring at is hard or simply ill-posed. That feeling of being out of one's depth is not a defect in the experience. It is what the work consists of.
The classic argument
In 2008, the cell biologist Martin Schwartz published a short essay in the Journal of Cell Science titled "The importance of stupidity in scientific research." His claim was almost rude: if research feels smart and competent, the work is probably not research. Feeling smart usually means working on problems whose answers are already in the literature. Real research lives at the edge of what anyone knows, and the honest experience of that edge is confusion.
Schwartz's anecdote was about a graduate-school colleague who left after a few years. Brilliant, hard-working, publishing. But every problem had a familiar shape, and when familiar shapes ran out, the experience of not knowing what to do next became unbearable. Schwartz, who always felt stupid, stayed. He was not smarter. He was better trained at sitting with confusion.
Why this keeps tripping people up
A specific trap catches students moving from coursework into research:
- In coursework, feeling smart correlates with doing well.
- The same reward signal is carried into the research lab.
- Research throws the student into confusion.
- The confusion is read as "I must not be smart enough for this."
- The student starts optimizing for felt-smartness: safe problems, known methods, incremental extensions.
- The work becomes less interesting. Satisfaction drops. The advisor sighs.
The original move, importing the classroom reward signal into research, is the whole mistake. Classroom success rewards answering well-defined questions with known tools. Research rewards noticing that a question is ill-defined and slowly figuring out how to define it better. Those two activities feel different. One feels like competence. The other feels like being lost.
What the texture of useful confusion looks like
Recognizable patterns worth learning to read, so they are not misattributed to incompetence:
- A paper where every individual sentence is comprehensible but the overall point is not. This usually means the author's argumentative landscape is not yet visible to the reader. Keep reading. Come back in a week.
- An experimental result that resists interpretation. This is the productive kind of stuck. The interpretation gap is the research problem.
- An introduction that collapses on a follow-up question. The collapse is information; the writer now knows more than they did an hour ago.
- Three weeks of sitting with a question before it can be stated clearly. Normal. The ability to state it clearly is often 80 percent of the contribution.
None of these feel good. All of them mean the work is working.
Habits that make it sustainable
Comfort with confusion is not a feeling one can will into existence. What can be built is the set of habits that allow consistent work while confused.
Keep a running confusion note
A single document, in a lab notebook or research workspace. At the top: "What I do not yet understand." Below: a bulleted list of the questions currently unanswered. Add freely. Cross items out when they resolve. Naming confusion turns it from a diffuse feeling into a workable artifact.
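As one illustration only (the essay does not prescribe a format, and the entries below are hypothetical), such a note might look like:

```text
What I do not yet understand
----------------------------
- Why does the control condition drift after day 3? (open)
- Is the disagreement between the two review papers about method
  or about definition? (open)
- What "identifiable" means in the estimation paper. (resolved 14 May;
  see reading notes)
```

The exact layout matters less than the habit: one place, questions stated in full sentences, resolved items marked rather than deleted so progress stays visible.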
Separate "I do not understand" from "I am not capable"
These blur together internally, but they are not the same. Not understanding a particular paper on a first read is a function of exposure and time, not of capability. The trick is to narrate more carefully. "I did not understand that sentence" is accurate. "I did not understand that sentence because I am not smart enough" is a story being told, and it does no useful work.
Ask the obvious question earlier
The researchers who move fastest are the ones who ask, out loud, in a room of experts: "Wait, why do we do it that way?" Three-quarters of the time there is a good answer and the asker learns something. One-quarter of the time nobody has a good answer and a real problem has been identified.
Short feedback loops with someone more experienced
A 20-minute weekly conversation with an advisor or a senior postdoc does more than a week of solo reading. Not because the senior figure produces the answer (they often do not) but because they normalize the texture of being stuck. Watching an experienced researcher shrug and say "yes, that is a hard question, I do not know either" is itself a calibration moment.
A note on AI tools
LLM-assisted research tools reduce one flavor of stupidity: the kind where the researcher cannot find a paper, recall a definition, or trace a citation. They do not reduce the kind that matters: not knowing what question to ask, how to interpret a result, or why two papers disagree. That second kind is the craft. The first kind is trivia recall, and getting trivia recall out of the way faster is straightforwardly useful, because it leaves more time for the work that still requires confusion.
So what
If the smart people around a student look less lost than the student feels, they probably are not. They have simply had longer to learn that lost is fine. The skill to build is not becoming less confused; it is becoming less alarmed by confusion. Those are two different things, and only one of them is a prerequisite for doing real work.