Validating Corporate Innovations Must Be Rooted in Assumptions
Most people learn the scientific method in school. It’s a straightforward and rigorous way of testing things. Wikipedia does a good job of explaining it:
The scientific method is an empirical method for acquiring knowledge that has characterized the development of science since at least the 17th century. It involves careful observation, applying rigorous skepticism about what is observed, given that cognitive assumptions can distort how one interprets the observation. It involves formulating hypotheses, via induction, based on such observations; the testability of hypotheses; experimental and measurement-based testing of deductions drawn from the hypotheses; and refinement (or elimination) of the hypotheses based on the experimental findings… A scientific hypothesis must be falsifiable, implying that it is possible to identify a possible outcome of an experiment or observation that conflicts with predictions deduced from the hypothesis; otherwise, the hypothesis cannot be meaningfully tested.
Incidentally, this looks quite a bit like Lean Startup’s iterative Build → Measure → Learn cycle, doesn’t it?
While we shouldn’t pretend that any methodology (be it Lean Startup, Design Thinking or something else) will solve all of our problems, applying the scientific method through an assumption framework is critical when validating corporate innovations.
This is because in our experience, there are a few real snags that can derail our best efforts to build new corporate ventures:
- Relying exclusively on your gut (or the gut of a senior leader) will most likely lead to failure
- Trying to build the “perfect solution” right off the bat will lead to failure
- Failing to understand the riskiest assumptions your business (or corporate innovation) faces will lead to failure
Understanding Assumptions
An assumption is something you believe to be true but haven’t yet proven. Assumptions let you figure out not only what you need to validate, but in what order, so that you focus on learning the most important things first.
The next time you’re sitting in a meeting, see how many times someone pronounces “a fact” that’s really an assumption.
- “Customers want this new feature.” — Oh yeah? How do we know? Why do they want it?
- “People will pay for this new widget.” — Oh yeah? How do we know? Why will they pay? Who are these people, anyway? What are they going to do with that widget? Why?
- “It takes people too long to [do that task]. We could make it faster by [doing something].” — Oh yeah? How do we know it takes too long? What does too long even mean? How do we know the solution will speed it up? Will it speed it up enough?
You get the point.
Perhaps in the next meeting when people are making bold statements without sufficient evidence (read: assumptions) you can actually step up and say, “Bob, I really think that’s an assumption. How do we know?” Bob might lose it, but you’re asking the right question.
Here’s the thing: pretty much everything is an assumption.
This is especially true early on when you’re trying to validate the problem.
But it’s also very true when you’re building the solution. And marketing your product. And developing your business model. Etc.
So the question is this: How many decisions are you going to make purely on your own beliefs? Or are you going to recognize that what you “know to be true” may in fact not be true, and requires actual testing?
How to Use Assumptions
Here’s how you can use assumptions to test the right things, in the right order:
- Write down every assumption you have
- Identify whether they’re Desirability, Viability, or Feasibility related
- Rank the assumptions based on Criticality and Certainty
- Describe how you’d test the assumptions (focusing on the most critical and least certain); include success criteria/metrics for each experiment and a time period for the test
1. Write down every assumption
Now that you know “everything is an assumption,” you’ll probably have a long list.
Here are a few tips:
- Start every assumption with, “We believe…” and then write out what you believe to be true. Even if you don’t totally believe it, frame assumptions this way; it makes testing easier because the statement is falsifiable. And even if you’re confident you already have proof of an assumption, write it down anyway. Teammates may disagree.
- Aim for the most precise assumptions you can, though some will inevitably be high level. When you first do this exercise, go for quantity over quality: the more assumptions the better. Just know that if an assumption is too high level, it becomes hard to run a good experiment against it.
- Ask “why” a lot. Have you ever heard of “5 Whys”? It’s a problem-solving methodology that suggests you dig into something by asking why 5 times, with the goal of getting to a root cause. If you ask “why?” a lot, you’ll generate a lot of assumptions. Does “5 Whys” get annoying? Yes. Which is why you need to get savvy about asking questions in different ways.
2. Categorize assumptions as desirability, viability or feasibility
Desirability, Viability, Feasibility (or DVF) is a concept from Design Thinking. We’ve found it to be very effective at getting people to focus on what actually matters.
In simple terms:
- Desirability is whether users want “it” or not.
- Viability is whether “it” is good for business / you can make the business model work.
- Feasibility is whether “it” can be delivered. (This includes technical questions, but also compliance, regulatory, legal, etc.)
You almost always want to start with Desirability, because if you get that wrong, it doesn’t matter if you can build it or you think it’ll make money.
Too many corporate innovators spend an enormous amount of time in Excel mapping out a complex P&L and business model, demonstrating how much money a new initiative will make. Here’s a secret: You can make Excel do almost anything you want—at least with numbers. Total revenue in 5 years is too small? Just change the market share from 1% to 2%! If you want a $100M business “on paper” I can 100% guarantee you that you can make that happen.
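Here’s a toy version of that spreadsheet as Python (all numbers invented for illustration, not from any real model) to show how a single unproven assumption swings the projection:

```python
# Toy top-down revenue model with made-up numbers, showing how one
# unproven assumption (market share) swings the "on paper" outcome.

def projected_revenue(market_size, market_share, price_per_user_per_year):
    """Classic top-down projection: market size x share x price."""
    return market_size * market_share * price_per_user_per_year

TAM = 50_000_000   # hypothetical addressable users
PRICE = 60         # hypothetical $/user/year (i.e. $5/month)

for share in (0.01, 0.02):
    revenue = projected_revenue(TAM, share, PRICE)
    print(f"{share:.0%} market share -> ${revenue:,.0f}/year")

# 1% market share -> $30,000,000/year
# 2% market share -> $60,000,000/year
# One keystroke doubled the projection; neither share number is evidence.
```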
Going through the DVF exercise encourages a conversation around what really matters and helps us ask better questions. “Do we really know if people have that problem? Do we really know if they’ll pay $5/month? Do we really know what the MVP should include?” Etc.
Something as simple as DVF can change the tenor of product and strategy meetings. We’ve seen it work. It gives people a shared language for describing issues and digging in.
3. Rank the assumptions based on Criticality and Certainty
Now that you have a list of assumptions and they’re all labeled with DVF, you need to figure out which ones are the most important.
Which of your assumptions—if proven wrong—will put an end to your new venture or innovation project?
Ultimately that’s the question you have to ask. And you have to be honest too.
That’s how you define criticality. If something is insanely critical to the success of what you’re doing, that’s the assumption you want to go after first.
Certainty is how sure (or unsure) you are about something. If you’ve already done the research and proven something with a degree of confidence, then you can define an assumption as “certain.” If you have no bloody clue, it’s an uncertain assumption.
When we run these types of collaborative exercises with teams, we often use a simple 2x2 matrix: Criticality on one axis, Certainty on the other (put it on a whiteboard, or a big sheet of paper with sticky notes).
Then you can get everyone to write out their assumptions, group them around common themes, and rank them on the matrix.
Ideally only a few assumptions stand out as super critical and uncertain.
Sometimes we see teams go through this and EVERYTHING is a top priority. That’s a recipe for disaster. It’ll lead to analysis paralysis. When doing this you have to be pretty strict about it, and remember the question I posed earlier: What assumptions, if proven wrong, end this whole process?
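If you want to move the matrix off the whiteboard, here’s a minimal sketch of the ranking step, assuming you score criticality and certainty on a 1–5 scale in the workshop. The assumptions and scores below are hypothetical:

```python
# A sketch of the ranking step, assuming criticality and certainty
# are scored 1-5. All assumptions and scores here are hypothetical.
from dataclasses import dataclass

@dataclass
class Assumption:
    statement: str    # framed as "We believe..."
    category: str     # "Desirability", "Viability", or "Feasibility"
    criticality: int  # 1 = nice to know, 5 = kills the venture if wrong
    certainty: int    # 1 = no evidence, 5 = already proven

assumptions = [
    Assumption("We believe SMBs struggle to reconcile invoices", "Desirability", 5, 2),
    Assumption("We believe they'll pay $5/month to fix it", "Viability", 4, 1),
    Assumption("We believe we can build it on our existing stack", "Feasibility", 3, 4),
]

# Test first: high criticality, low certainty.
test_first = sorted(assumptions, key=lambda a: (-a.criticality, a.certainty))
for a in test_first:
    print(f"[{a.category}] {a.statement} "
          f"(criticality={a.criticality}, certainty={a.certainty})")
```

Sorting by high criticality and low certainty surfaces the “test first” quadrant automatically.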
4. Define tests for each of the most critical and uncertain assumptions
Once you’ve identified the riskiest and most critical assumptions, you need to test them. The ideal test (as per the scientific method) is one that is controlled and falsifiable. If you have too many variables, it’s hard to know what’s actually driving the results. And if you have no way of invalidating an assumption, because everything is designed only to prove it, you’re cheating.
To be clear: It can be very difficult to run super controlled experiments. Don’t kill yourself trying to create the perfect experiment, especially in the early stages of your startup. It’ll lead to a lot of frustration.
When in doubt, do something. Try something. And see what happens.
It might mean you have to run multiple experiments to learn enough, but that’s better than trying to design something perfect.
Some experiments may be qualitative in nature. This is particularly true when you’re trying to validate a problem. For example: customer interviews are an experiment. They’re not perfect, and they’re not going to give you statistically significant data, but that’s OK.
Here’s a list of some common tests & experiments that you might run (in order of effort/fidelity):
- Shop Along: Accompany a user on their journey (through whatever it is you’re interested in exploring.)
- Concept Statements: A simple 1-page document that describes a value proposition; share with users and see how they react.
- Brochure or Sales Sheet: A good B2B tool. Create a brochure or sales sheet (physical or digital) to share with prospective customers. See how they react. May drive conversion to something like a Letter of Intent (which is a strong desirability signal.)
- Paper Prototype: Very simple way of exploring interaction design and early solution ideation with users. Get users to co-create solutions with you. Anyone can do this—you don’t need to be a designer.
- Landing Page: Very common tool to test value propositions, conversion rates, CTAs, etc. Can help start to collect quantitative data (a sketch of scoring a test like this against success criteria follows this list.) Tip: Get emails initially, and follow up to conduct customer interviews.
- Digital Ad Campaign: Great for testing value propositions and target audiences, to see what resonates. Can lead to a landing page. Helps get a sense of what’s engaging (through conversion metrics.)
- Surveys: Create surveys that you can push online to users, or to user panels. Warning: Don’t rush into surveys. Lots of people rely on them for answers, but you can easily bias a survey to get the answers you want.
- Clickable Prototype: A more robust “solution” but can still be very basic. I still love using Balsamiq for quick prototyping because it’s so easy, and its deliberately low-fidelity style takes visual design out of the equation.
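One note on success criteria (from the step list earlier): decide them before you run the test, then score the result against them. Here’s a minimal sketch of doing that for a landing-page test; the 3% conversion target, traffic threshold, and numbers are all hypothetical, not benchmarks:

```python
# A sketch of checking a landing-page test against success criteria
# that were written down *before* the test ran. The 3% target, the
# 500-visitor minimum, and the sample numbers are hypothetical.

def evaluate_landing_page(visitors: int, signups: int,
                          min_visitors: int = 500,
                          target_conversion: float = 0.03) -> str:
    if visitors < min_visitors:
        return "Inconclusive: not enough traffic yet; keep the test running."
    conversion = signups / visitors
    if conversion >= target_conversion:
        return f"Validated: {conversion:.1%} >= {target_conversion:.0%} target."
    return f"Invalidated: {conversion:.1%} < {target_conversion:.0%} target."

# After a (hypothetical) two-week test window:
print(evaluate_landing_page(visitors=1200, signups=30))  # 2.5% -> Invalidated
```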
There are many more types of experiments, tools and techniques you can use. The key is to figure out the best experiment for the assumption you’re testing. And again, if you’re not sure, try something, learn & iterate. Learn more about how Highline Beta provides enterprise innovation services to companies looking to experiment, test and validate new growth opportunities.
You’ll need something to track all of your assumptions & experiments. We’ve included a basic spreadsheet example here: https://bit.ly/3XtyU5Z
The template has a simple example in it with a problem statement, a list of a few assumptions, and rankings for criticality and certainty. Feel free to copy the template and use it.
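If you’d rather keep the tracker in code than in a spreadsheet, a minimal version is sketched below; the column names are inferred from the template’s description, not copied from it, and the example row is invented:

```python
# A minimal assumption-and-experiment tracker written out as CSV.
# Column names are inferred from the template description; the row
# contents are hypothetical examples.
import csv

COLUMNS = ["assumption", "dvf_category", "criticality", "certainty",
           "experiment", "success_criteria", "status"]

rows = [
    {"assumption": "We believe SMBs struggle to reconcile invoices",
     "dvf_category": "Desirability", "criticality": 5, "certainty": 2,
     "experiment": "10 customer interviews",
     "success_criteria": "7+ describe the problem unprompted",
     "status": "in progress"},
]

with open("assumption_tracker.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=COLUMNS)
    writer.writeheader()
    writer.writerows(rows)
```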
The loudest voice in the room shouldn’t win by default
In a corporate setting (and in startups as well), it’s often the person with the loudest voice or strongest opinions who “wins” (i.e. gets their way.) This isn’t a sustainable or scalable way to build new ventures or succeed at corporate innovation.
We don’t believe in being insanely rigorous or trying to enforce a methodology that sucks the magic out of things. But a little more rigor would be nice.
So here are our suggestions:
- Keep an eye out for assumptions and call them out. Just put your hand up a bit more often and ask questions.
- Use DVF. I’ve seen this concept work by generating interesting questions and conversations. Start sneaking DVF into how you do things at work and it will help.
- Test more. This again requires that you put your hand up and say something. “Hey, why don’t we run a quick experiment?” Stop arguing (I mean “debating”) in meetings about the “right way” to do something since none of you probably know. Instead just go out and test. Even small tests, repeated frequently enough, can improve the odds of building better stuff and create cultural change.
Good luck!