On curiosity, hypothesis, and the thinking we didn’t know we were giving away
A while back, I was running an exploratory research project. We had interview transcripts — hours of them. Good material, real people, real conversations.
Then the timeline collapsed. A deadline moved up. The synthesis I’d planned to spend a week on suddenly needed to be done by Thursday.
So I fed the transcripts into an AI. Asked it the questions I needed answered. Gave it the brief.
What came back was clean. Well-organized. Useful-looking. Themes, patterns, business opportunities, supporting quotes. Everything a stakeholder deck needs.
I used it.
Two weeks later, I was in a meeting where someone pushed back on the direction. And I realized I couldn’t defend it. Not because I had checked the work and found flaws — but because I had never really understood it. I had the output. I had never done the thinking.
It felt embarrassing in retrospect. Not because I’d done something wrong — because I’d done nothing at all.
The AI had answered my questions. But I hadn’t earned the right to ask them yet.
The Hypothesis You Walk In With
There’s a step in qualitative research that goes by different names — affinity mapping, thematic analysis, synthesis. The labels make it sound more structured than it feels. What it actually involves is sitting with a mess. Transcripts, notes, half-formed observations. You move things around. Patterns start to suggest themselves. You resist the obvious groupings. Something unexpected surfaces.
It’s slow. It’s uncomfortable. And it’s where hypotheses are tested and refined.
But before any of that — before the transcripts, before the synthesis — there’s something more fundamental. The hypothesis you walk in with. The one built not from data but from accumulated knowledge, pattern recognition, genuine curiosity about what might be true. This is your compass before the map exists.
Without it, affinity mapping becomes furniture arrangement. You’re grouping things that look similar without knowing what you’re actually looking for. You’ll produce clusters. You won’t produce insight.
This applies to desk research too — and I’ll be honest, I skip it there more than anywhere else. Without a question you actually have, you’re not researching. You’re browsing.
“Let’s just test it” is what people say when they’ve skipped this part. Testing without a hypothesis isn’t research — it’s asking a question you don’t actually have. You’ll get an answer. You won’t know what to do with it. And if you’re the person accountable for the direction, the honest move is to stop, think for two seconds, and decide — rather than convening a room full of people to validate something you haven’t yet bothered to understand yourself.
Outsourced Curiosity
There’s a specific kind of loss that happens when you outsource curiosity.
AI is extraordinarily good at producing answers. That’s not the problem. The problem is what happens before the answer — the moment you decide what to ask. When you open a chat window instead of sitting with the mess, you skip the part where the real question forms. The genuine not-knowing. The itch that sends you back to the transcripts at 11pm because something didn’t add up.
When you prompt an LLM, you’re not forming a hypothesis. You’re describing the shape of the answer you already expect. The prompt is the hypothesis — half-baked, unearned, delivered as a request. What comes back confirms the shape you drew. It feels like discovery. It isn’t.
This isn’t unique to AI. A respected professor, a senior SME, a confident colleague — they all shape your hypotheses before you’ve tested them. The difference is that you can usually trace that influence. You know whose thinking you borrowed. With LLMs, the bias is invisible. The model’s gravitational pull — toward consensus, toward the most-represented patterns in its training data — shapes your next prompt without announcing itself. And because it looks like your own thinking, you don’t push back on it.
The risk isn’t that AI gives you wrong answers. It’s something slower and harder to notice.
Every time you skip the hypothesis — every time you open a chat window instead of sitting with the discomfort — you’re not just saving time. You’re practicing a different skill. The skill of receiving. And like any muscle you stop using, the other one atrophies. The one that generates questions. The one that notices what’s missing. The one that feels the itch at 11pm.
Outsourced curiosity doesn’t fail loudly. It degrades quietly. Your outputs stay clean. Your decks look fine. But the questions get shallower. The hypotheses get safer. And at some point you’re no longer steering — you’re just prompting, and calling it thinking.
That’s what I didn’t notice in that meeting. It wasn’t one bad research project. It was a habit forming.
Who Owns the Curiosity
There’s one thing AI cannot do.
It cannot want to know something.
Which means the question is not whether AI is useful in research. It is. The question is who owns the curiosity.
I think about a question I was asked once over dinner, by someone whose mind I’ve always respected. The setup was simple: imagine a human being, newborn in the sense of having no memory, no language, no context — suddenly placed on an unknown planet. What is the first thing they feel?
Not think. Feel.
We talked about it for a long time and didn’t arrive anywhere conclusive. But the question stayed. Because underneath it is something harder: how does consciousness build itself? What is a human being, at the origin point, before the accumulation begins?
I don’t know the answer. I’m not sure anyone does. But I’m fairly certain it doesn’t start with a prompt.
Curiosity might be the most human thing there is — not because we decided it should be, but because it appears to be what we’re made of before anything else layers on top. The question that forms before you know what you’re looking for. The itch that has no brief.
If that’s what we’re outsourcing, we should at least know we’re doing it.
FAQ
Q: What is “outsourced curiosity” and why does it matter? A: Outsourced curiosity is the practice of delegating to AI tools not just research tasks but the act of wondering itself. When you prompt an LLM before forming a hypothesis, you skip the generative discomfort that produces genuine insight. The risk isn’t bad output — it’s the gradual atrophy of the capacity to ask interesting questions in the first place.
Q: What is the difference between a research question and a hypothesis? A: A research question defines what you want to learn. A hypothesis is a specific, testable claim about what you expect to find — and why. In qualitative research, the hypothesis you walk in with (built from experience and pattern recognition) shapes what you notice. Without it, affinity mapping and thematic analysis become furniture arrangement: you group things that look similar without knowing what you’re actually looking for.
Q: What is affinity mapping? A: Affinity mapping is a synthesis method used in qualitative research and UX, where raw data — interview notes, observations, quotes — is organized into clusters based on relationships and patterns. It’s typically done with sticky notes (physical or digital) and requires the researcher to sit with messy, contradictory material until structure emerges. The process itself is where hypotheses are refined.
Q: Can AI tools be useful in qualitative research? A: Yes — for transcription, initial clustering, and pattern recognition across large datasets, AI is genuinely useful. The risk is using it to skip the hypothesis formation stage entirely: feeding transcripts in before you’ve developed a point of view, and accepting the synthesis as your own understanding. The tool works best as a complement to human judgment, not a replacement for it.
Q: How does this relate to the broader argument about AI and human cognition? A: The concern isn’t that AI produces wrong answers. It’s that repeated reliance on AI for the generative stages of thinking — forming questions, building hypotheses, sitting with uncertainty — may quietly degrade the cognitive capacity those stages develop. Like any skill, curiosity requires practice. Outsource it consistently enough, and the questions get shallower without you noticing.
Further Reading & Resources
engawa: In Praise of Friction — The argument that resistance isn’t inefficiency — it’s the mechanism through which meaning is made. The piece that first articulated the tension between optimization and depth that runs through this one.
engawa: The Silence Between Notes — Ma — On the Japanese concept of negative space, gut feeling, and what gets lost when we fill every gap with output. The direct predecessor to this piece.
engawa: I Still Remember Their Names — Over 100 conversations about water, and what no algorithm could have told us. The practical counterpart to this essay — what it looks like when you don’t outsource the curiosity.
Taishi Okano writes about the intersection of technology, craft, and culture from New York and Tokyo. engawa is where he works things out.