Reduce costs and risks while innovating with MVPs (part 1).

By Jessica Massinelli
January 21, 2022 · May 2, 2023

For those who have never heard the term “MVP”, it stands for Minimum Viable Product and it’s simply the first workable version of a business idea.

“What should we do next?”, “How do we know it’s going to work?”, “What if the users don’t like it?”, “Should we spend our money on something that’s not validated?”

These are just some of the questions businesses address daily while attempting to survive in an ever-evolving market, with distracted users who are difficult to acquire and very easy to lose.

Companies, startups and entrepreneurs know the vital importance of innovation as one of the key pillars of success and the most efficient way to meet users’ fluid expectations.

They know that, through user-centric design approaches and agile methodologies, they are now able to identify user needs and deliver solutions to specific requirements.

The best way to validate potential product solutions, whether aimed at answering implicit user needs, exploring new markets, or simply expanding the company’s services portfolio, is something that still scares most CEOs and can easily turn the business perspective upside down.

In the first part of our series, we take a deep dive into what an MVP is, explore the culture behind it to better understand when one is needed, and discover how to start working on an MVP for your business or startup.

Getting confident with the MVP approach: Build-Measure-Learn


You may have already heard about the ‘Lean Startup’ methodology. As the name suggests, this culture was first adopted by startups and then quickly spread to bigger companies during their digital transformations. Basically, it’s “lean thinking” applied to any process of innovation. To create a sustainable business model, companies and entrepreneurs are required to evolve their ideas by basing activities and decisions on validated learning in every phase of the project, starting with raising some degree of approval and interest even before the product has been developed (and avoiding pouring 100% of the budget into a product that may never see the light of day).

Lean startup methodology, by Eric Ries


But how do you validate something that is not yet in the market? That’s where experiments and MVPs start to become a priority and a benefit to any business environment. One of the main principles of the Lean Startup methodology is the “Build, Measure, Learn” (BML) feedback loop.

Too much planning makes a project costly and risky at the same time; on the other hand, launching a product without any preliminary research can easily lead to unpredictable consequences. That’s where the BML feedback loop becomes fundamental. This method generates small, iterative cycles in which businesses can take advantage of constant feedback from the market (including target audiences, users of the product and the broader public) to pivot their decision-making in case of negative reactions, or to persevere if users demonstrate interest in and appreciation of the product. Continuing to apply this methodology ensures constant, consistent releases and improvements that ultimately lead to a stable, validated feature list that will shape the final product.

The ultimate purpose of the BML feedback loop is to identify, in the most efficient way and as quickly as possible, the “right thing to build”: the product that users are willing to use and buy.

Visual representation of the BML loop, adapted from Eric Ries (2011)


Every innovation starts with learning of some kind. Even though the name of the model, “Build, Measure, Learn”, suggests the order of the loop phases, in practice the work usually flows in the opposite direction: you start by learning enough to state a hypothesis, decide what you will measure, and only then build.

Hypotheses such as “Users would want to access the sauna by unlocking the door with a card because they may not bring their smartphone with them” or “Customers want to receive weekly offers based on their previous purchases because they prefer a personalised experience” often work as the initial fuel for starting a project and building the BML model into the process.

Use the formula below as the basis for your hypotheses and ascertain whether your project is worth pursuing:

I/WE BELIEVE [the subject/target] WILL [predicted action] BECAUSE [reason]

Stating one or several hypotheses is what will initially feed into the BML feedback loop to validate or reject your product, using experiments run with actual customers or prospects throughout the method cycle.

Let’s break down the Build-Measure-Learn process a bit more

Build.

Let’s start by saying that, to run experiments, you may not need to build anything at all. What you need to do, especially in the first BML loops, is measure interest in your product and validate your business hypotheses, not UI designs or technical solutions. Define from the beginning the questions your MVP will answer and, before running the experiment, identify which actionable metrics will be relevant to validating your initial hypothesis. Here are some examples of experiments you may want to run without the risk of leaving your bank account empty:

  1. User interviews, shadowing, focus groups: select a small audience of 5-10 users matching specific criteria (reflecting your final target users) to explore a specific topic with them and understand their motivations and needs in a particular field, or their specific requirements when defining a service. Keep in mind that users may not yet be aware of what they need.
  2. Landing pages, blogs, social media accounts: target your marketing campaigns through these channels while spreading the word about your ideas. Measure how many users subscribe to the waiting list via your landing page, or observe reactions to the topics you raise on the new Instagram page. Build your audience prior to defining a high-fidelity MVP to measure the actual interest in the problem you intend to solve.
  3. Split testing: very similar to A/B testing, split testing consists of dividing your web traffic 50%-50% and studying the analytics derived from the two live propositions of the service. The two versions may vary completely in terms of service, technology and experience, or they may be almost identical except for one specific innovation you wish to test (price, brand). The version that produces the most positive results is the right solution to pursue; split testing ensures revenue keeps flowing from a known proposition that already works, while giving the business the chance to understand how a new version of the service performs and to decide later which one to keep, based on the results of the experiment. Split-testing experiments could also include “fake doors”, where, for example, you let users express their interest through a registration form for a service that is not yet available (later showing a message that the service is ‘coming soon’) and measure the number of clicks on the CTA to judge whether the idea is worth a shot (see the sketch after this list).
  4. MVPs: MVPs include a wide range of solutions, from low-fidelity designs (such as paper sketches) to high-fidelity outputs (such as digital prototypes, single-feature MVPs or a developed product), that include just the minimum amount of functionality needed to deliver the value proposition of the idea. Keep in mind that building an MVP should require far less time than developing the actual product; some of the solutions adopted by strategists include the so-called “Wizard of Oz” approach. In this case, what is shown to users is a very detailed frontend that mimics a real working environment, with someone manually operating it behind the scenes by following a script that replicates the procedures of the proposed technology. A good example could be a human conversing with a real website user who thinks they are texting with a chatbot. This experiment requires a script on which the operator bases their answers and a “fake-automated” chat interface to start the conversation. It can be very useful for validating whether the chatbot script reflects user intentions and how efficiently it answers user queries. Ultimately, an MVP should be delivered to an audience (either real users for live MVPs or testers for prototypes) to effectively measure their appreciation of the solution.
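
As a rough illustration of the split-testing idea from point 3, here is a minimal sketch of how traffic could be divided deterministically between two variants and how clicks on a “fake door” CTA might be counted. The variant names, user IDs and click log are purely hypothetical, and real experiments would normally rely on an analytics or feature-flag tool rather than hand-rolled code.

```python
import hashlib
from collections import Counter

def assign_variant(user_id: str, split: float = 0.5) -> str:
    """Deterministically assign a visitor to variant 'A' or 'B'.

    Hashing the user ID keeps the assignment stable, so a returning
    visitor always sees the same version of the service.
    """
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % 10_000 / 10_000  # pseudo-random value in [0, 1)
    return "A" if bucket < split else "B"

# Hypothetical experiment log: (user ID, did they click the fake-door CTA?).
visits = [("u1", True), ("u2", False), ("u3", True), ("u4", False), ("u5", True)]

exposures, clicks = Counter(), Counter()
for user_id, clicked_cta in visits:
    variant = assign_variant(user_id)
    exposures[variant] += 1
    if clicked_cta:
        clicks[variant] += 1

for variant in ("A", "B"):
    rate = clicks[variant] / exposures[variant] if exposures[variant] else 0.0
    print(f"Variant {variant}: {clicks[variant]}/{exposures[variant]} CTA clicks ({rate:.0%})")
```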

Of course, many factors influence the decision-making process when planning the best way to run an experiment. In every case, the suggestion is to build something that requires very little effort when a high degree of uncertainty lies ahead, just to get the BML loop started and learn from users with the minimum amount of effort. Pivot or persevere: make data-based decisions to build up your product and improve your experiments at the end of each cycle.

Measure. 

When an experiment is completed, it is time to collect the data and translate it into insights. Depending on the experiment, your team could have both qualitative and quantitative data in hand to analyze and compare with previous reports (if any). Some quantitative KPIs to focus on, if the experiment is run on a live version of a website or app, are Customer Acquisition Cost (CAC), conversion rates, the percentage of active/paying users, Customer Lifetime Value (CLV), which is how much users spend before abandoning your app or website, the total value of transactions, and Average Revenue Per User (ARPU).
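
To make these metrics concrete, below is a minimal sketch, with made-up numbers, of how conversion rate, CAC, ARPU and a naive CLV estimate could be computed from a handful of aggregated figures. The variable names and the simple lifetime-based CLV formula are illustrative assumptions, not a standard analytics definition.

```python
# Hypothetical aggregated figures for one experiment cycle.
visitors = 12_000                # unique visitors during the experiment
signups = 900                    # users who created an account
paying_users = 240               # users who completed at least one purchase
total_revenue = 7_200.0          # total value of transactions in the period
marketing_spend = 3_000.0        # budget spent acquiring these users
assumed_lifetime_months = 8      # assumed average customer lifetime (for the naive CLV)

conversion_rate = signups / visitors          # visitor -> signup
paying_rate = paying_users / signups          # signup -> paying user
arpu = total_revenue / signups                # Average Revenue Per User for the period
cac = marketing_spend / signups               # Customer Acquisition Cost
naive_clv = arpu * assumed_lifetime_months    # rough Customer Lifetime Value estimate

print(f"Conversion rate: {conversion_rate:.1%}")
print(f"Paying users:    {paying_rate:.1%}")
print(f"ARPU:            {arpu:.2f}")
print(f"CAC:             {cac:.2f}")
print(f"Naive CLV:       {naive_clv:.2f}")
```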

Even though quantitative insights will help your business formulate data-driven decisions, it is also very important to understand user sentiment toward your product to help you refine the value proposition, and that’s where qualitative insights become fundamental.

The first step when working with qualitative data is to transform it into data points. Depending on the nature of the experiment and its outcomes, it is fundamental to use frameworks to make sense of the vast amount of information users express.

Raw data points can be clustered to highlight common themes (Affinity Clustering), placed on a linear scale to visualize the extreme sentiments and all the shades in between (Spectrums), or placed into a matrix that weighs relevance to users against development effort (Importance/Difficulty Matrix).


Affinity Clustering, Spectrum, Importance Difficulty matrix
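
As a toy example of how raw qualitative notes can become clustered data points, the snippet below groups interview quotes that a researcher has manually tagged with a theme, which is essentially what Affinity Clustering does on a whiteboard. The quotes and themes are invented for illustration; in practice, the tagging and grouping are done by researchers, not code.

```python
from collections import defaultdict

# Hypothetical interview notes, each manually tagged with a theme by a researcher.
notes = [
    ("I never bring my phone to the sauna", "access without a smartphone"),
    ("A card would be easier than an app", "access without a smartphone"),
    ("I only open offers that match what I buy", "personalised offers"),
    ("Generic newsletters go straight to spam", "personalised offers"),
]

# Affinity clustering: group notes that share the same theme.
clusters = defaultdict(list)
for quote, theme in notes:
    clusters[theme].append(quote)

for theme, quotes in clusters.items():
    print(f"{theme} ({len(quotes)} notes)")
    for quote in quotes:
        print(f"  - {quote}")
```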

Through the work of researchers, quantitative and qualitative insights become meaningful and interdependent, and will start to significantly address the hypotheses defined at the beginning of the process.

Learn.

During this last step of the loop, as a result of the building and measurement phases, the initial hypotheses are reframed based on the most recent learnings. Data-driven decisions are made to decide whether to pivot or persevere. Pivoting is necessary when the latest learnings demonstrate that user feedback wasn’t positive or that the interest raised was very low. Pivoting could mean that your idea needs a partial change of direction or a complete shutdown.

Businesses should persevere when, instead, users demonstrate interest in and appreciation of the idea. In this case, further plans should include experimentation aimed at refining the solution by adding detail to the initial hypothesis.

It is very important at this stage to share information with every project stakeholder and keep an open mind by not getting attached to ideas or solutions that didn’t find favor with users.

In conclusion

A Minimum Viable Product, commonly referred to as an MVP, is the first workable version of a business idea. It is built around one or more hypotheses, which are validated through the Build-Measure-Learn (BML) methodology, ideally with actual users of the product.

Start off your MVP project with the formula below to build one or more hypotheses of your product:

I/WE BELIEVE [the subject/target] WILL [predicted action] BECAUSE [reason]

Next, use the BML model to build and test your product with intended users, incorporating the learnings and feedback into the design and development cycle of your product. 

In the next chapter, we’ll explore when it is necessary to opt for an MVP, what makes a good MVP, and take a look at some important building blocks and exercises when building your MVP. 

References

Ries, E. (2011). The Lean Startup: How Today’s Entrepreneurs Use Continuous Innovation to Create Radically Successful Businesses. Crown Business.