AI-Assisted Survey Response Analysis Tool
An AI-assisted research tool that helps teams analyze and categorize open-ended survey responses—while keeping human judgment firmly in control.
Role
Product Designer (0→1 Feature)
Industry
Healthcare Market Research
Duration
3 months (Discovery → Launch)
Key Design Decision
Why Separate Single Edit and Bulk Edit
Through user testing sessions with the Report and Access teams, I observed that researchers consistently operated in two very different mental modes.
At times, they needed to carefully review AI suggestions one response at a time—checking nuance, context, and intent. At other times, they wanted to apply the same change across dozens or even hundreds of responses after identifying a pattern.
Early explorations attempted to support both behaviors within a single interface. However, this quickly introduced confusion, increased cognitive load, and raised the risk of accidental bulk changes.
The Decision
I intentionally separated the experience into Single Edit Mode and Bulk Edit Mode, rather than combining both into an overloaded workflow.
Single Edit Mode supports precision and confidence, allowing researchers to validate and fine-tune AI suggestions at the individual response level.
Bulk Edit Mode supports speed and scale, enabling users to apply consistent changes across multiple responses efficiently and safely.
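To communicate this rule to engineering, the two modes can be thought of as mutually exclusive states, so the scope of any change is always explicit. The TypeScript below is only a simplified sketch of that idea under assumed names (ResponseId, Category, EditMode, applyCategory); it is illustrative, not the shipped implementation.

// Minimal sketch: the two modes as mutually exclusive states, so a bulk
// change can only be issued while Bulk Edit Mode is active.
// Type and function names are illustrative, not the product's actual API.

type ResponseId = string;
type Category = string;

type EditMode =
  | { kind: 'single'; activeResponse: ResponseId }
  | { kind: 'bulk'; selectedResponses: ResponseId[] };

function applyCategory(
  mode: EditMode,
  category: Category,
  save: (ids: ResponseId[], category: Category) => void
): void {
  if (mode.kind === 'single') {
    // Precision path: one response, reviewed in context.
    save([mode.activeResponse], category);
  } else {
    // Scale path: the scope is explicit because the user chose Bulk Edit Mode.
    save(mode.selectedResponses, category);
  }
}

Because the mode is part of the state rather than an implicit selection, a bulk change can never be triggered while a researcher believes they are editing a single response.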
Why This Matters
This separation aligns the interface with users’ mental models, rather than forcing them to adapt to the system.
It reduces error risk by making bulk actions an explicit, intentional mode.
It improves efficiency by optimizing each mode for a specific task.
It builds trust in AI-assisted workflows by ensuring users always understand the scope and impact of their actions.
By designing around distinct user intentions instead of a one-size-fits-all interaction, the experience feels both powerful and safe—especially critical in AI-driven environments where mistakes can scale quickly.


Why Re-Categorization Requires Strong Warnings
Re-categorization is one of the most powerful—and risky—actions in the AI-Assisted Survey Response Analysis Tool.
During usability testing, multiple researchers hesitated before re-categorizing, unsure what would happen to their previous work.
From a researcher’s perspective, re-running categorization may feel like a simple reset. In reality, it removes all existing manual adjustments that researchers have carefully applied to the data—a single action that can overwrite hours of thoughtful review.
AI-assisted analysis also introduces uncertainty. Re-processing the same set of responses may produce different results, and once manual changes are cleared, they cannot be easily recovered.
The Design Question: How might we let researchers benefit from AI assistance without allowing irreversible actions to undo their work silently?
The Decision
I designed explicit, high-visibility warning states before re-categorization (see the sketch after this list):
Clear language explaining exactly what will be removed
A deliberate confirmation step to prevent accidental actions
A visible progress state during re-processing
A clear success message once the process completes
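One way to express this flow is as a small set of explicit UI states, where re-processing can only begin from the confirmation step. The sketch below is a simplified illustration under assumed names (RecategorizeState, confirmAndRecategorize); it is not the product's actual code.

// Illustrative sketch: re-categorization as explicit states, so the
// destructive step can only run after a deliberate confirmation.
// State and function names are hypothetical.

type RecategorizeState =
  | { step: 'idle' }
  | { step: 'confirming'; manualEditCount: number } // warning copy lists what will be removed
  | { step: 'processing' }                          // visible progress state
  | { step: 'success' };                            // clear completion message

async function confirmAndRecategorize(
  state: RecategorizeState,
  setState: (next: RecategorizeState) => void,
  recategorize: () => Promise<void>
): Promise<void> {
  if (state.step !== 'confirming') {
    // Re-processing never starts without the explicit confirmation step.
    return;
  }
  setState({ step: 'processing' }); // progress stays visible while the AI re-runs
  await recategorize();             // the irreversible part: clears manual adjustments
  setState({ step: 'success' });    // clear success message once complete
}

Modeling the warning, progress, and success states explicitly means the interface can never jump straight from a click to an irreversible re-run.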
Why This Matters
These warnings act as guardrails, not friction.
They protect researchers from unintended data loss
They make AI system behavior transparent and predictable
They build confidence to experiment, knowing the system will not surprise them
By treating re-categorization as a high-stakes decision, rather than a casual action, the experience establishes long-term trust. Researchers remain in control, even when underlying AI behavior is complex or irreversible.
📸 Re-categorization warning modal, loading state, and success confirmation

By pairing strong warnings with transparent system feedback, researchers remain confident and in control—even when working with irreversible AI-driven operations.
Why Results Preview Is Critical Before Export
During testing, I noticed that researchers rarely felt confident immediately after finishing categorization.
What they really wanted to know wasn’t “Did I finish tagging?” but “Do these results actually make sense?”
Without a way to validate outcomes, researchers had to export data blindly and only discover issues later, often after the data had already been shared.
The Challenge
AI-assisted analysis can produce results quickly, but speed alone doesn’t equal confidence.
Researchers needed a way to sanity-check patterns, spot gaps, and assess coverage before committing to an export.
The Decision
I introduced a Results Preview that summarizes categorization outcomes before export (see the sketch after this list):
High-level metrics showing categorized vs. uncategorized responses
A bar chart visualizing category distribution and relative weight
A lightweight preview that supports quick validation without interrupting the workflow
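Conceptually, the preview is a lightweight aggregation over the working data set: coverage counts plus a per-category distribution that feeds the bar chart. The sketch below illustrates that logic under an assumed response shape (SurveyResponse) and a hypothetical helper name (buildResultsPreview); it is not the production code.

// Illustrative sketch: the preview metrics are a simple aggregation over
// the working data set, computed before anything is exported.
// The SurveyResponse shape is an assumption for this example.

interface SurveyResponse {
  id: string;
  categories: string[]; // empty while a response is still uncategorized
}

function buildResultsPreview(responses: SurveyResponse[]) {
  const distribution = new Map<string, number>();
  let categorized = 0;

  for (const response of responses) {
    if (response.categories.length > 0) {
      categorized += 1;
      for (const category of response.categories) {
        distribution.set(category, (distribution.get(category) ?? 0) + 1);
      }
    }
  }

  return {
    total: responses.length,
    categorized,
    uncategorized: responses.length - categorized,
    distribution, // drives the bar chart of category distribution and relative weight
  };
}

Because the summary is computed from the same data that would be exported, what researchers validate in the preview is exactly what ends up in the export.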
Why This Matters
Results Preview shifts the experience from editing data to evaluating insights.
By surfacing patterns before export, researchers can validate quality, catch issues early, and make informed decisions—reducing rework and increasing confidence in AI-assisted analysis.
📸 Results preview with metrics and category distribution

Other projects
Respondent Portal Redesign (Web + Mobile)
Mobile-First Panelist Experience
Feasibility & Pricing — Internal Decision Platform
Flagship Internal Product · B2B · Healthcare Research. Designing a core internal platform that enables teams to evaluate feasibility and generate accurate pricing — supporting confident decisions at scale.
Beauty Spa Startup — Marketing Website
Designed and launched a marketing website for a beauty spa startup, creating a calm, trustworthy digital experience that guides users from service discovery to booking.
Education Platform — University Application Support Website
Designed and launched a content-driven education platform to support students navigating the university application process. The website focuses on clarity, structure, and trust—helping users quickly understand services and take next steps with confidence.