Most people learn the scientific method in school. It’s a straightforward and rigorous way of testing things. Wikipedia does a good job of explaining it:
The scientific method is an empirical method for acquiring knowledge that has characterized the development of science since at least the 17th century. It involves careful observation, applying rigorous skepticism about what is observed, given that cognitive assumptions can distort how one interprets the observation. It involves formulating hypotheses, via induction, based on such observations; the testability of hypotheses, experimental and the measurement-based statistical testing of deductions drawn from the hypotheses; and refinement (or elimination) of the hypotheses based on the experimental findings… A scientific hypothesis must be falsifiable, implying that it is possible to identify a possible outcome of an experiment or observation that conflicts with predictions deduced from the hypothesis; otherwise, the hypothesis cannot be meaningfully tested.
Incidentally, this looks quite a bit like Lean Startup’s iterative Build → Measure → Learn cycle, doesn’t it?
While we shouldn’t pretend that a methodology (be it Lean Startup, Design Thinking or something else) will solve all of our problems, applying the scientific method, through an assumption framework, is critical when validating corporate innovations.
This is because, in our experience, a few real snags can derail our best efforts to build new corporate ventures:
An assumption is something you believe to be true but haven’t yet proven. Working from assumptions lets you figure out not only what you need to validate but in what order, so that you focus on learning the most important things first.
The next time you’re sitting in a meeting, see how many times someone pronounces “a fact” that’s really an assumption.
You get the point.
Perhaps in the next meeting, when people are making bold statements without sufficient evidence (read: assumptions), you can step up and say, “Bob, I really think that’s an assumption. How do we know?” Bob might lose it, but you’re asking the right question.
Here’s the thing: pretty much everything is an assumption.
This is especially true early on when you’re trying to validate the problem.
But it’s also very true when you’re building the solution. And marketing your product. And developing your business model. Etc.
So the question is this: How many decisions are you going to make purely on your own beliefs? Or will you recognize that what you “know to be true” may in fact not be, and requires actual testing?
Here’s how you can use assumptions to test the right things, in the right order:
Now that you know “everything is an assumption,” you’ll probably have a long list.
Here are a few tips:
Desirability, Viability, Feasibility (or DVF) is a concept from Design Thinking. We’ve found it to be very effective at getting people to focus on what actually matters.
In simple terms:
- Desirability: Do people actually want this? Does it solve a real problem for them?
- Viability: Can it make money and sustain a business?
- Feasibility: Can we actually build it?
You almost always want to start with Desirability, because if you get that wrong, it doesn’t matter if you can build it or you think it’ll make money.
Too many corporate innovators spend an enormous amount of time in Excel mapping out a complex P&L and business model, demonstrating how much money a new initiative will make. Here’s a secret: You can make Excel do almost anything you want—at least with numbers. Total revenue in 5 years is too small? Just change the market share from 1% to 2%! If you want a $100M business “on paper” I can 100% guarantee you that you can make that happen.
Going through this exercise encourages a conversation around what really matters and helps us ask better questions. “Do we really know if people have that problem? Do we really know if they’ll pay $5/month? Do we really know what the MVP should include?” Etc.
Something as simple as DVF can change the tone of product and strategy meetings. We’ve seen it work. It gives people a shared language for naming issues and digging into them.
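To make the labeling concrete, here’s a minimal sketch of what tagging assumptions with DVF might look like in code. The three dimensions come from the framework above; the specific assumptions are hypothetical examples:

```python
from enum import Enum

# The three DVF dimensions from Design Thinking
class DVF(Enum):
    DESIRABILITY = "Do people actually want this?"
    VIABILITY = "Can it make money?"
    FEASIBILITY = "Can we build it?"

# Hypothetical assumptions, each tagged with the dimension it tests
assumptions = [
    ("Small businesses struggle to track expenses", DVF.DESIRABILITY),
    ("Owners will pay $5/month for a fix", DVF.VIABILITY),
    ("We can sync with their bank in an MVP", DVF.FEASIBILITY),
]

for text, dimension in assumptions:
    print(f"[{dimension.name}] {text}")
```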
Now that you have a list of assumptions and they’re all labeled with DVF, you need to figure out which ones are the most important.
Which of your assumptions—if proven wrong—will put an end to your new venture or innovation project?
Ultimately, that’s the question you have to ask. And you have to answer it honestly.
That’s how you define criticality. If something is insanely critical to the success of what you’re doing, that’s the assumption you want to go after first.
Certainty is how sure (or unsure) you are about something. If you’ve already done the research and proven something with a degree of confidence, then you can define an assumption as “certain.” If you have no bloody clue, it’s an uncertain assumption.
When we run these types of collaborative exercises with teams, we often use a simple 2x2 matrix of criticality vs. certainty (put it on a whiteboard, or a big sheet of paper with sticky notes). Then everyone writes out their assumptions, groups them around common themes, and ranks them on the matrix.
Ideally only a few assumptions stand out as super critical and uncertain.
Sometimes we see teams go through this and EVERYTHING is a top priority. That’s a recipe for disaster. It’ll lead to analysis paralysis. When doing this you have to be pretty strict about it, and remember the question I posed earlier: What assumptions, if proven wrong, end this whole process?
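If you want to make the ranking explicit, here’s a minimal sketch of the same exercise in code. The article’s version is just a whiteboard; the 1-5 scoring scale and the example assumptions below are our own illustration:

```python
from dataclasses import dataclass

@dataclass
class Assumption:
    text: str
    criticality: int  # 1 (nice to know) .. 5 (kills the venture if wrong)
    certainty: int    # 1 (no clue) .. 5 (already proven)

# Hypothetical examples; scores would come from the team's matrix exercise
backlog = [
    Assumption("Customers have this problem at all", criticality=5, certainty=1),
    Assumption("They'll pay $5/month", criticality=4, certainty=2),
    Assumption("We can build the MVP in one quarter", criticality=3, certainty=4),
]

# Test first whatever is most critical AND most uncertain
def risk(a: Assumption) -> int:
    return a.criticality * (6 - a.certainty)

for a in sorted(backlog, key=risk, reverse=True):
    print(f"risk={risk(a):2d}  {a.text}")
```

The multiplication is just one way to combine the two axes; the point is that the top of the sorted list should match the “critical and uncertain” quadrant of your matrix.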
Once you’ve identified the riskiest and most critical assumptions, you need to test them. The ideal test (as per the scientific method) is one that is controlled and falsifiable. If you have too many variables, it’s hard to know what’s affecting the outcome. And if there’s no way to invalidate the assumption, because everything is designed only to prove it, you’re cheating.
To be clear: It can be very difficult to run super controlled experiments. Don’t kill yourself trying to create the perfect experiment, especially in the early stages of your startup. It’ll lead to a lot of frustration.
When in doubt, do something. Try something. And see what happens.
It might mean you have to run multiple experiments to learn enough, but that’s better than trying to design something perfect.
Some experiments may be qualitative in nature. This is particularly true when you’re trying to validate a problem. For example: customer interviews are an experiment. They’re not perfect, and they’re not going to give you statistically significant data, but that’s OK.
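One lightweight way to keep even a qualitative experiment falsifiable is to write down the pass/fail threshold before you run it. A minimal sketch, assuming a made-up criterion (the “6 of 10 interviews” number is ours, not the article’s):

```python
# Define the falsifiable criterion *before* running the experiment,
# so the result can actually invalidate the assumption.
HYPOTHESIS = "Target customers describe this problem unprompted"
PASS_THRESHOLD = 6    # hypothetical: at least 6 of 10 interviews
INTERVIEWS_RUN = 10

unprompted_mentions = 3  # filled in after the interviews

validated = unprompted_mentions >= PASS_THRESHOLD
print(f"{HYPOTHESIS}: {'validated' if validated else 'invalidated'} "
      f"({unprompted_mentions}/{INTERVIEWS_RUN})")
```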
Here’s a list of some common tests & experiments that you might run (in order of effort/fidelity):
There are many more types of experiments, tools and techniques you can use. The key is to figure out the best experiment for the assumption you’re testing. And again, if you’re not sure, try something, learn & iterate. Learn more about how Highline Beta provides enterprise innovation services to companies looking to experiment, test and validate new growth opportunities.
You’ll need something to track all of your assumptions & experiments. We’ve included a basic spreadsheet example here: https://bit.ly/3XtyU5Z
The template has a simple example in it with a problem statement, a list of a few assumptions, and rankings for criticality and certainty. Feel free to copy the template and use it.
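If you’d rather keep the tracker in code than in a spreadsheet, the same structure is easy to mock up. The first four columns mirror what the template describes (problem statement, assumption, criticality, certainty); the experiment and status columns, and all of the row data, are hypothetical additions:

```python
import csv

# Rows mirror the template's structure; the contents are made up
rows = [
    {"problem": "SMBs lose hours on expense tracking",
     "assumption": "Owners do this manually today",
     "criticality": "high", "certainty": "low",
     "experiment": "10 customer interviews", "status": "planned"},
    {"problem": "SMBs lose hours on expense tracking",
     "assumption": "They'll pay $5/month to automate it",
     "criticality": "high", "certainty": "low",
     "experiment": "pre-order landing page", "status": "not started"},
]

with open("assumption_tracker.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)
```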
In a corporate setting (and it’s true in startups as well), it’s often the person with the loudest voice or strongest opinions who “wins” (i.e., gets their way). That isn’t a sustainable or scalable way to build new ventures or succeed at corporate innovation.
We don’t believe in being insanely rigorous or trying to enforce a methodology that sucks the magic out of things. But a little more rigor would be nice.
So here are our suggestions: treat everything as an assumption, label your assumptions with DVF, rank them by criticality and certainty, and run honest, falsifiable experiments against the riskiest ones first.
Good luck!