Why your experiment's impact is probably greater than you think

When we experiment, we try to address hypotheses about what we believe blocks (or enables) conversion success. For example, if your signup form for a service converts at 65%, one can imagine many blockers for the 35% of users who don't convert:

  • They don't understand the offer;
  • They don't trust you;
  • It's too long to fill out;
  • It requests details they don't feel comfortable providing;
  • They get stuck unable to answer one of the questions (there's an answer you hadn't considered);
  • It's too slow;
  • It's buggy;
  • They don't feel it's valuable for them to complete this stage;
  • ...

When we experiment, we pick one such blocker and try to remove it. Let's imagine we believe (or learned from qualitative research) that some people find the form too long. You experiment with removing a few fields, but find it only grows the conversion rate by 0.5%. Does that mean form length was an issue for only 0.5% of your users? No.
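A back-of-the-envelope sketch makes the gap concrete. The numbers below are hypothetical (not from the example above): suppose 20% of non-converters find the form too long, but only 10% of those face no other blocker.

```python
# Hypothetical numbers: why a widespread blocker can register as a tiny lift.
non_converters = 0.35   # 35% of users don't convert
too_long = 0.20         # assume: 20% of non-converters find the form too long
only_too_long = 0.10    # assume: only 10% of those have no *other* blocker

# Removing fields can only convert users whose sole blocker was form length.
measured_lift = non_converters * too_long * only_too_long
print(f"measured lift: {measured_lift:.1%}")   # 0.7%
print(f"true prevalence: {non_converters * too_long:.1%}")  # 7.0% of all users
```

Under these made-up numbers, the blocker affects 7% of your users, yet the experiment measures only a 0.7% lift, because the other 90% of affected users are still stopped by something else.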

When multiple blockers overlap, the first few fixes register less impact, with impact increasing as we keep working on the same blockers, until it plateaus. The reason is that most people are not held back by just one blocker in a process/funnel. They might both feel it's too long and not trust the process, or not trust the process and not understand what we're asking. As a result, when you start fixing the 'trust' issues, if you haven't touched anything else yet, you can only affect the people whose sole blocker is trust. Even if you "fixed" trust completely, blockers remain for the rest of the population. When you remove the next blocker, say "understanding", you reap the rewards both of having fixed the trust issue beforehand and of removing the understanding blocker. The diagram illustrates the dynamic at play here, with hypotheses called "Trust", "Time" (I don't have time to fill this out), "Price" and "Understand" (I don't understand why I need it).

Many dynamics are possible, including something you thought you'd solved (it stopped being a bottleneck) becoming a bottleneck again after other things are solved. Users/clients also change preferences, or the audience shifts and exposes you to people with a different mix of preferences, moving the blockers around. All of this is to say two things: it's important to revisit hypotheses you think you've solved before, and it's important to acknowledge that you won't see all the fruits of every effort immediately. Sometimes a successful 'clearing' of the path to user success actually relies on many earlier blocker removals whose impact was impossible to measure at the time. Essentially, every experiment's impact is tempered by all the other issues, so you see only a part of its effect.
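The sequential dynamic can be simulated. In this minimal sketch (blocker names from the diagram, probabilities made up), each user independently hits a random subset of blockers and converts only once every blocker they face is removed; fixing blockers one by one shows early fixes under-reporting their true prevalence, while the last fix finally registers its full effect.

```python
import random

random.seed(0)
BLOCKERS = ["trust", "time", "price", "understand"]
# Hypothetical prevalence of each blocker in the population.
PROBS = {"trust": 0.25, "time": 0.20, "price": 0.15, "understand": 0.20}

# Each user hits each blocker independently with its probability.
users = [{b for b in BLOCKERS if random.random() < PROBS[b]}
         for _ in range(100_000)]

def conversion(removed):
    """A user converts only when every blocker they face has been removed."""
    return sum(1 for u in users if u <= removed) / len(users)

removed = set()
baseline = conversion(removed)
print(f"baseline conversion: {baseline:.1%}")
for b in BLOCKERS:
    removed.add(b)
    rate = conversion(removed)
    print(f"fix {b:10s}: conversion {rate:.1%} "
          f"(lift {rate - baseline:+.1%}, true prevalence {PROBS[b]:.0%})")
    baseline = rate
```

Running this, every fix's measured lift comes in below the blocker's true prevalence, except the final one: once the path is fully cleared, the last fix captures its whole effect plus the deferred payoff of the earlier removals.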

This is covering just 1 slide of a 20+ slide deck I have about tips & tricks of experimentation. I'm now giving FREE office hours to consult on any data, growth or product question you may have. I'm also giving this talk for FREE to companies as a live Zoom webinar for 45m + 15m Q&A with me. Get in touch with me here if your data science or product team is interested in this. (There are no strings attached, I'm simply trying to learn and connect with as many analytics and product practitioners as I can for fun & networking.)

Nimrod Priell

Advisor to CXOs at $1B+-valued startups in martech, fintech and cyber, and other startups ranging from B2C to SaaS. Ex-Facebook PM. Interested in growth, data and product management.
