Case study № 02

A research repository that researchers actually open

Fieldnote's users loved the tool in week one and stopped opening it by month three. We rebuilt browse and recall around how researchers actually search. Monthly active usage tripled.

Measured outcomes

Monthly active researchers
×3.1

Six weeks post-launch, cohort matched

Time to first relevant clip
12s → 4s

Task-based timing, five recruited researchers

Expansion revenue
+24%

Seats added per paying account, Q3 → Q4

Median tags per study
4 → 17

The problem we were handed

Fieldnote sells a research repository to B2B product teams. The product had strong first-month engagement: new teams would pile studies in, tag things, share clips. Then usage would fall off a cliff. Accounts stayed subscribed because the research inside was valuable, but the tool itself was increasingly ignored. The churn risk was three quarters out, not immediate, but the revenue retention math did not work unless active usage recovered.

The ask was broad. “Why are researchers not coming back, and what should we build to fix it?” I had nine weeks and a team of four.

What research showed

The repository was full. Researchers were storing studies in it. But nobody was searching it.

Instead, researchers were asking each other in Slack. “Did anyone run something with enterprise admins recently?” would get answered by the one teammate who remembered. If that teammate was on leave, the research was, functionally, gone.

The repository was indexed by study. Studies were tagged, searchable, and browsable. But a researcher does not want a study. A researcher wants the thirty seconds in the study where the participant said something specific about enterprise admins. That thirty seconds lived inside a two-hour video, inside a session, inside a study, inside the repository. The tool made you dig through four layers to find it.

The core decision

We changed what the repository returns. Instead of returning studies and letting the researcher drill in, it returns clips: thirty-second annotated moments with the study, session, and participant attached as metadata. The clip is the first thing a researcher sees, and it is the thing they can paste into a Slack thread or a deck.

This sounds small. It was not. It required re-indexing every existing study, rewriting the search backend, rebuilding the browse UI, and changing the mental model every existing user already had. We tested it in a private beta with eighteen accounts before rolling it out.
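The re-index described above can be sketched as a flattening pass: walk the old study → session → clip hierarchy and emit one searchable record per clip, with the former top-level units demoted to metadata. This is a minimal illustration, not Fieldnote's actual backend; every field and function name here is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Clip:
    """One searchable record per clip; study/session/participant are facets."""
    clip_id: str
    transcript_snippet: str  # the exact sentence, not a summary
    start_s: int             # offset into the session video, in seconds
    duration_s: int          # the ~30-second annotated moment
    study: str               # former top-level unit, now metadata
    session: str
    participant: dict        # e.g. {"industry": ..., "role": ..., "seniority": ...}

def flatten(studies: list[dict]) -> list[Clip]:
    """Walk study -> session -> clip and emit clip-first records."""
    clips = []
    for study in studies:
        for session in study["sessions"]:
            for clip in session["clips"]:
                clips.append(Clip(
                    clip_id=clip["id"],
                    transcript_snippet=clip["snippet"],
                    start_s=clip["start_s"],
                    duration_s=clip["duration_s"],
                    study=study["title"],
                    session=session["id"],
                    participant=session["participant"],
                ))
    return clips
```

The design choice worth noting: nothing is discarded. The study and session survive as facets on every clip, so the old drill-down navigation remains possible; it just stops being the default.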

Design detail that mattered

Three details moved usability more than I expected.

The hover preview. Twelve seconds of video, muted by default, auto-play on hover. That single affordance cut the time researchers spent opening dead-end clips by more than half.

The participant demographic chips. Every clip shows three small chips beneath the transcript snippet: industry, role, and seniority. Researchers could filter on those without ever opening a clip. We had not planned to add the chips until post-launch; researchers asked for them unprompted during the private beta, and we added them in week seven.

The transcript snippet. Showing the exact sentence, not a generic preview, turned out to be the single most requested feature during testing. Researchers treat transcripts as evidence. They want to see the exact words.

What did not work

An early prototype tried to generate automatic summaries of each clip using an LLM. Researchers hated it. The summaries were usually accurate but lost the specificity that made the clip useful. We cut the feature entirely. It turns out that a summary is a different thing from a quote, and researchers were searching for quotes.

We also tried a “related clips” panel on the clip detail view. It was ignored. Researchers who found a good clip were done: they did not want adjacent clips, they wanted to get out of the tool and into their deck. We shipped without it.

Outcomes

Monthly active researchers tripled within six weeks. Time-to-first-relevant-clip dropped from twelve seconds to four on the standard task-based benchmark. Expansion revenue, seats added to existing paying accounts, grew 24% in the quarter following launch, mostly because teams who had mothballed the tool reactivated it and added their new hires.

Reflection

The retrieval model was doing a different thing than the research workflow. Fixing the mismatch was worth more than any new feature we could have added. Research tools are the clearest case I know of where the model in the product must match the model in the researcher’s head. Get it wrong and no amount of polish will bring them back.

Approach

Twelve recall interviews, one pattern

We asked twelve active researchers to walk us through the last time they tried to find something in Fieldnote. Every single one described the same move: give up on the tool, search Slack instead. The repository was not the problem. The retrieval model was. The tool indexed studies; researchers searched for moments.

Rewrote the information architecture around the clip

The old model had studies at the top, sessions nested inside, and clips buried three levels deep. We flipped it: clips became the primary unit, with study, session, and participant as facets. A clip is what a researcher pastes into a Slack thread or a strategy deck. Making it the unit the tool returns matched the actual workflow.
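With clips as the primary unit, faceted filtering, such as narrowing by the demographic chips, becomes a flat match over clip metadata rather than a drill through nested levels. A minimal sketch, with illustrative field names that are assumptions, not Fieldnote's schema:

```python
def filter_clips(clips: list[dict], **facets) -> list[dict]:
    """Keep clips whose participant metadata matches every requested facet chip."""
    return [
        c for c in clips
        if all(c["participant"].get(key) == value for key, value in facets.items())
    ]

# Two flattened clip records with participant metadata attached.
clips = [
    {"snippet": "Admins need audit logs before anything else.",
     "participant": {"industry": "SaaS", "role": "admin", "seniority": "senior"}},
    {"snippet": "Onboarding took our team two weeks.",
     "participant": {"industry": "Fintech", "role": "end user", "seniority": "junior"}},
]

admin_clips = filter_clips(clips, role="admin")  # only the enterprise-admin clip
```

Because every facet lives on the clip record itself, a question like “did anyone talk to enterprise admins recently?” is one filter, not a dig through studies and sessions.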

Designed for the skim before the read

Researchers do not read search results; they skim them. The list view shows the clip's transcript snippet, the participant's demographic tags, the study it came from, and a 12-second preview on hover. Every piece of context a researcher needs to decide in under a second whether the clip is worth opening.

Tested recall on the researchers who had churned

We ran unmoderated recall tests with eight researchers whose accounts had gone cold. Seven of the eight found a specific clip in under ten seconds on their first attempt. Six of the eight asked, unprompted, when they could have this. Three of them resumed active usage the week we shipped it.

Our researchers stopped asking us to add features. They just opened the tool and started using it. That is what a tool you trust feels like.

Harper Wu, Head of Research, Contour Design

Credits

Lead Product Designer
Max Mustermann
Staff Engineer
Jana Mészáros
Product Manager
Aram Kazazian
Data
Benji Oduya

Toolchain

  • Figma
  • Linear
  • Notion
  • Amplitude
  • Loom

Contact

Similar problem?

If this is close to the shape of your own problem, a short note is the fastest route. We reply within two business days.