Why Writing Good Survey Questions Is Not Enough: EFA and CFA in Scale Development
In UX research, psychology, human factors, and AI evaluation, we often want to measure things we cannot directly observe. Trust in AI, cognitive load, frustration, confidence, perceived control, and motivation are all examples of constructs. They matter a lot, but they cannot be measured with a ruler or directly observed like task time or click count. That is why researchers build questionnaires and scales. The problem is that writing a list of good-sounding questions does not…

Bahareh Jozranjbar
16 hours ago · 5 min read


Evaluating AI-Powered Systems in the Real World
AI is now embedded in the products people use every day. It recommends what to watch, helps write and summarize, supports medical and financial decisions, assists employees at work, and increasingly shapes how people search, learn, and choose. That is why evaluating AI can no longer be treated as a narrow technical exercise. A model may be accurate, fast, and impressive on paper, yet still create confusion, reduce user control, or fail when placed inside a real human workflow…

Bahareh Jozranjbar
5 days ago · 6 min read


Game Theory in UX Research
UX research is often framed around a simple question: what does the user want, feel, or struggle with? That question is important, but it is not always enough. In many real products, users are not making decisions in isolation. They are responding to interfaces, incentives, AI systems, platform rules, competitors, social norms, and sometimes even regulation. At the same time, those systems are also reacting to users. This means many UX problems are not just about usability…

Bahareh Jozranjbar
7 days ago · 5 min read


Anomaly Detection in UX Research and Product Analytics
A product can look healthy on the surface while important problems remain hidden underneath. Conversion may appear stable overall, even though one device group is failing badly. Survey scores may seem fine, even though some responses are clearly fraudulent or low quality. A release may go live successfully, yet subtle changes in user journeys, latency, or error patterns may already be signaling friction. In many cases, the problem is not the total absence of data. The problem…

Bahareh Jozranjbar
Mar 17 · 8 min read
Quasi-Experimental Methods in UX Research
For a long time, UX teams treated A/B testing as the only legitimate way to make causal claims. If you could not randomize users, you reported trends, correlations, or anecdotes and hoped stakeholders understood the limits. That mental model no longer fits how products actually evolve. Most meaningful UX changes cannot be cleanly randomized. They are rolled out gradually, constrained by infrastructure, shaped by policy, or deployed universally before research even begins…

Bahareh Jozranjbar
Jan 22 · 3 min read


Signal, Noise, and the Real Problem with Behavioral Data
Imagine you are listening to a crowded room where several conversations are happening at once. You are trying to follow just one voice. At first, everything blends together: laughter from one corner, music in the background, fragments of unrelated sentences drifting past. To make sense of anything, your brain starts doing something remarkable. You focus on the voice you care about, the words that matter to you, the rhythm and tone that stay consistent, and you mentally tune out…
Mohsen Rafiei
Jan 18 · 5 min read


Hierarchical Bayesian Mixture Models in UX Studies
Trying to do serious UX research with the wrong statistical tools is like trying to eat a bowl of soup with a fork. You can work very hard, take dozens of careful scoops, and still walk away hungry. The problem is not that UX data is unusable or too noisy. The problem is that we often analyze it with methods that were never designed for how human behavior actually unfolds. UX studies can be surprisingly efficient and informative when the analysis respects the structure of the…
Mohsen Rafiei
Jan 14 · 4 min read


UX Focus Group Interviews: What We’re Really Doing When We Bring Users Into a Room Together
Most of us have been there: we run usability tests and see where people struggle, but we still do not quite understand how users think about the product. Surveys give us numbers, but the answers feel thin or oddly constrained. At some point, someone suggests talking to users together, letting them react to each other, and seeing what emerges. That moment is usually when focus group interviews enter the picture, and depending on how they are used, they either bring clarity to…
Mohsen Rafiei
Jan 5 · 4 min read
Rigorous Qualitative UX and Market Research
Organizations now have endless behavioral data: clickstreams, funnels, retention curves, support tickets, reviews, session replays. Yet teams still struggle to answer the question behind every metric: why. At the same time, decisions have become more expensive and harder to reverse. When the ask is a multi-million-dollar pivot, a major redesign, or a policy change, inspirational insights are not enough. Teams need interpretive work that can stand up to scrutiny…

Bahareh Jozranjbar
Dec 31, 2025 · 5 min read
A Practical Framework for Evaluating AI Alignment Capabilities
We have crossed a threshold in AI. In the older era, evaluation meant performance verification. Can the system do the task? Does it get the right answer? Does the benchmark score go up? In the frontier model era, that is no longer the question that matters. Now the question is alignment validation. Does the system do the task for the right reasons, within the right constraints, without hidden failure modes that only appear under pressure, under attack, or over time? This shift…

Bahareh Jozranjbar
Dec 23, 2025 · 7 min read


When More Questions Do Not Mean Better Insight
I remember one specific survey that finally forced me to confront this problem head-on. We were evaluating an e-commerce redesign and the survey looked perfectly reasonable on paper. Among many items, we asked users to rate Visual Appeal and Attractiveness as two separate questions. Different words, different intentions, different stakeholders pushing for each. When the data came back, the correlation between those two items was almost perfect. Not theoretically similar. Practically…
Mohsen Rafiei
Dec 22, 2025 · 5 min read


Topic Modeling for Behavioral Science and UX
When we are faced with a large amount of text, most of us do the same thing instinctively. We try to get the big picture before understanding every detail. Imagine scrolling through hundreds of customer reviews for a product you are thinking of buying. You do not read every review carefully. You skim, you scan, and very quickly you get a sense that people are mostly complaining about battery life, praising the design, and arguing about the price. That rough summary forms almost…
Mohsen Rafiei
Dec 19, 2025 · 5 min read


Why We Should Be Careful With What Users Say: Understanding the Limits of Self-Reported Judgments in Research
One of the most common instincts in research is to ask people exactly what they think. We ask users why they chose a specific product, what they liked about an interface, what confused them during a task, what mattered most to their workflow, and what ultimately influenced their final decision. This approach feels intuitive, respectful, and efficient; after all, who knows the user better than the user themselves? Yet, decades of psychological and behavioral research point to…
Mohsen Rafiei
Dec 18, 2025 · 6 min read


A UX Framework for Measuring Feature Awareness
Most product teams still assume that if a feature ships, users will naturally discover it. In practice that assumption fails all the time. New capabilities launch, engineering and design celebrate, dashboards show a small bump in traffic, and then adoption plateaus at a level that cannot possibly justify the investment. The real problem is usually not usability in the narrow sense. It is feature awareness: do people even notice that the thing exists, and do they understand…

Bahareh Jozranjbar
Dec 11, 2025 · 11 min read


Interview Analysis as the Real Substance of UX Research
Interviews are like a gold mine, but only if you actually know how to extract the ore. Most teams stop at "we talked to users" and walk away with a handful of quotes and a gut feeling. Raw transcripts alone are not insight. They are messy, biased, emotionally charged human data that only become valuable through rigorous analysis. As a cognitive scientist and UX researcher who cares deeply about methodological quality, I see this mistake constantly…
Mohsen Rafiei
Dec 10, 2025 · 7 min read


Embracing the Gray: Why Fuzzy Logic Membership Functions Are UX’s Next Big Thing
We do not live in a binary world, yet much of traditional UX research still treats user decisions as if we do. Think about your last product decision. Did you love it or hate it? Of course not. Your actual internal dialogue probably sounded more like this: the onboarding flow was mostly smooth, but the payment screen felt cluttered, the subscription terms made you uneasy, and while the product solved your core problem, you still felt unsure whether you should commit long term…
Mohsen Rafiei
Dec 8, 2025 · 6 min read
The Shape of User Experience: A Practical Guide to Probability Distributions in UX Research
Most UX research teams now live in a world of metrics. Conversion rates, task success, time on task, churn, NPS, feature adoption, "rage clicks", scroll depth. We A/B test them, segment them, and present them in slide decks every week. But under the surface, most of us still treat all of these metrics as if they came from the same simple shape: a nice symmetric bell curve. In the textbook world, variables are continuous…

Bahareh Jozranjbar
Dec 8, 2025 · 12 min read


How to Decide Your UX Interview Sample Size
You’re walking through a big clothing store, flipping through rack after rack trying to find something interesting. At first everything feels new. Different colors, styles, cuts. After a few minutes, though, you realize you’re seeing almost the same things again and again. Sure, the store is huge, but the variety isn’t unlimited. No matter which aisle you turn into, it’s basically more of what you’ve already seen. At some point you stop searching and think, okay, I get it…
Mohsen Rafiei
Nov 29, 2025 · 7 min read


Choosing the Right Regression for Real Human Data
UX research has gone through a quiet revolution. What used to be mostly interviews, usability testing, and expert reviews has expanded into a field deeply driven by data. Today we are not just listening to what users say. We analyze behavioral logs, interaction timing, psychometrics, conversion funnels, eye-tracking traces, error counts, retention curves, and long-term engagement metrics. Our questions have also become sharper. We no longer ask simply which design people like…

Bahareh Jozranjbar
Nov 29, 2025 · 7 min read
How to Analyze Changes in User Attitudes Over Time: A Practical Guide for UX Research
Most UX research still treats user experience as a series of snapshots. We run a usability test, collect a System Usability Scale score, send a Net Promoter Score wave, or run an intercept survey at the end of a flow. These are useful for diagnosing immediate friction and capturing static sentiment, but they miss something fundamental about how humans interact with products over time. User attitudes are not fixed states. They are trajectories. Trust builds, then cracks…

Bahareh Jozranjbar
Nov 26, 2025 · 9 min read