Great design isn’t just polished visuals or stats—it’s about asking the tough, often unspoken questions that make the user experience truly meaningful.
What was the hardest research question, and how did I solve it?
One of the toughest research questions I’ve faced was deceptively simple:
“Do users actually trust our product, or are they just going through the motions?”
It came up while designing onboarding for a financial app. Standard usability tests told us users could click through the process, but the business wanted something deeper: real, emotional trust. How could I measure something so intangible?
I realized no single method would work. So I combined approaches:
I added short, in-context surveys after key onboarding steps, asking users how confident and safe they felt. During interviews, I watched for hesitation: moments when someone paused, reread a line, or asked for clarification. I also tracked where users dropped off and looked for patterns in the questions they asked support.
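To make this concrete, here is a minimal sketch of how drop-off and hesitation signals like these can be pulled from onboarding event data. The step names, data shape, and 30-second pause threshold are hypothetical, not the app’s actual instrumentation.

```python
from collections import Counter

# Hypothetical onboarding steps and hesitation cutoff for this sketch.
ONBOARDING_STEPS = ["create_account", "verify_identity", "link_bank", "confirm"]
HESITATION_THRESHOLD_S = 30  # assumed cutoff for a "long pause", in seconds

def funnel_report(sessions):
    """sessions: list of dicts like
    {"user": "u1", "steps": {"create_account": 12.4, "verify_identity": 41.0}}
    where each value is the seconds spent on a completed step."""
    reached = Counter()
    hesitated = Counter()
    for session in sessions:
        for step in ONBOARDING_STEPS:
            if step not in session["steps"]:
                break  # the user dropped off before reaching this step
            reached[step] += 1
            if session["steps"][step] > HESITATION_THRESHOLD_S:
                hesitated[step] += 1
    total = len(sessions)
    for step in ONBOARDING_STEPS:
        pct = 100 * reached[step] / total if total else 0
        print(f"{step:>16}: reached {pct:5.1f}%, long pauses: {hesitated[step]}")

# Toy data: u1 pauses a long time on identity verification and then drops off.
funnel_report([
    {"user": "u1", "steps": {"create_account": 12.4, "verify_identity": 41.0}},
    {"user": "u2", "steps": {"create_account": 9.1, "verify_identity": 18.3,
                             "link_bank": 55.0, "confirm": 6.2}},
])
```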
One surprising insight: trust wasn’t built by extra explainer text, but by immediately visible license badges and social proof at just the right moment.
Armed with this, we changed the onboarding flow to surface those signals earlier. Completion rates went up, and, more importantly, the tone of user feedback shifted from “hope this works” to “I feel safe here.”
When did research give me conflicting results, and what did I do?
When qualitative and quantitative studies seem to contradict each other, I treat the mismatch as valuable data, not a mistake, and follow five steps:
1. Method audit
First, I double-check the basics: participant samples, task scenarios, success metrics, and analytics logs. Many “conflicts” disappear once recruitment or measurement errors are fixed.
2. Triangulation
If both sources still look solid, I introduce a third lens, such as a brief in-context survey, customer-support logs, or an A/B experiment, to fill the information gap.
3. Data segmentation
I break results down by key dimensions (device type, user segment, usage context). Often each method is simply capturing a different slice of the audience.
4. Hypothesis validation
Based on the newly identified cause of the discrepancy, I run a quick test (an A/B experiment or a prototype) to confirm what really drives user behavior.
5. Decision documentation
Finally, I produce a concise summary mapping every source to the final conclusion. This transparent trail shows the team that a data conflict is a cue for deeper analysis, not panic.
The cycle (audit → triangulate → segment → test → document) turns seemingly conflicting evidence into a clear picture and leads to fact-based decisions; the sketch below shows what steps 3 and 4 can look like on toy numbers.
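Here is that minimal sketch with made-up numbers: it segments a conversion metric by device type and runs a quick two-proportion z-test to check whether the gap between segments is real rather than noise. None of the figures come from an actual project.

```python
from math import sqrt

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical segment data: (conversions, sessions) per device type.
segments = {
    "desktop": (540, 4200),
    "mobile": (310, 3900),
}

(c_d, n_d), (c_m, n_m) = segments["desktop"], segments["mobile"]
z = two_proportion_z(c_d, n_d, c_m, n_m)
print(f"desktop {c_d / n_d:.1%} vs mobile {c_m / n_m:.1%}, z = {z:.2f}")
# |z| > 1.96 is roughly significant at the 5% level; a large gap suggests the
# "conflict" may simply be two methods sampling different device mixes.
```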
How did I persuade a stakeholder to rethink their assumptions?
I began with curiosity, not confrontation: paraphrasing the stakeholder’s belief to show I’d heard them, and linking our discussion to the shared goals (conversion and customer trust) we had already agreed on. Establishing that common yardstick kept the conversation cooperative.
Next, I introduced concise, vivid evidence that challenged the assumption: a single analytics snapshot and a 20-second user-test clip. Rather than pushing for an immediate reversal, I suggested a low-risk experiment (an A/B toggle limited to 10% of traffic) so we could “learn together” without jeopardizing the roadmap.
When the test delivered a 7% lift, I closed the loop in the stakeholder’s language, highlighting the measurable impact and crediting their initial push for driving faster validation. Framed as a win for the team, the result turned a rigid assumption into a shared, data-backed decision.
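As an aside on mechanics, a low-risk toggle like the one above is commonly implemented by bucketing users deterministically, so each user sees a stable experience across sessions. The sketch below is illustrative: the experiment name and hashing scheme are assumptions, while the 10% split mirrors the experiment described above.

```python
import hashlib

EXPERIMENT_SALT = "onboarding-trust-signals"  # hypothetical experiment name
VARIANT_TRAFFIC_PCT = 10  # expose roughly 10% of traffic to the variant

def in_variant(user_id: str) -> bool:
    """Assign the same user to the same bucket on every visit."""
    digest = hashlib.sha256(f"{EXPERIMENT_SALT}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % 100  # stable 0-99 bucket per user
    return bucket < VARIANT_TRAFFIC_PCT

print(in_variant("user-123"))  # always the same answer for this user
```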
How do I choose the right metrics for each project?
First, I clarify the project’s north-star goal, the single outcome that matters most to both users and the business (e.g., task success, revenue, retention). From that goal, I frame a user-centric problem statement (“make checkout feel instant on mobile”) so every metric we pick can be traced back to solving it.
Next, I break metrics into tiers: a primary outcome metric that signals success (e.g., conversion rate), a handful of diagnostic metrics that explain why the outcome moves (time on task, error rate), and guardrail metrics that ensure we don’t harm something else (customer satisfaction, latency). I choose each one by testing for actionability, sensitivity, and the team’s ability to instrument it reliably.
Finally, I define baselines and target deltas, set up automated tracking, and schedule regular metric reviews so we can pivot if signals get noisy or the product evolves. By keeping the list lean and tied to decisions, not vanity numbers, the team focuses on learning and improving, not just reporting.
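To show how lean such a plan can stay, here is an illustrative sketch of the three tiers written down as a reviewable config with a simple guardrail check. The metric names mirror the examples above; the baselines, targets, and bounds are made up.

```python
# Illustrative metric plan: primary outcome, diagnostics, and guardrails.
METRIC_PLAN = {
    "north_star": "mobile checkout conversion",
    "primary": {
        "conversion_rate": {"baseline": 0.031, "target_delta": 0.003},
    },
    "diagnostic": {
        "time_on_task_s": {"baseline": 95, "target_delta": -15},
        "error_rate": {"baseline": 0.08, "target_delta": -0.02},
    },
    "guardrail": {
        "csat": {"baseline": 4.3, "min_allowed": 4.2},
        "p95_latency_ms": {"baseline": 420, "max_allowed": 450},
    },
}

def review_guardrails(current: dict) -> None:
    """Compare current guardrail readings against their allowed bounds."""
    for name, spec in METRIC_PLAN["guardrail"].items():
        value = current.get(name)
        if value is None:
            continue  # metric not instrumented yet; nothing to check
        if "min_allowed" in spec and value < spec["min_allowed"]:
            print(f"guardrail breached: {name} = {value}")
        if "max_allowed" in spec and value > spec["max_allowed"]:
            print(f"guardrail breached: {name} = {value}")

review_guardrails({"csat": 4.1, "p95_latency_ms": 430})  # flags csat only
```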