26 Mar

Building Smarter Data & AI Products with Lean Experimentation

Maryam Aidini

The global AI market is booming, with businesses investing billions in machine learning models, predictive analytics, and intelligent automation. Data has become the foundation of modern product development, shaping how businesses deliver value and make strategic decisions. Yet, despite its power, data alone does not guarantee success. 

Every product with data at its core is built on a set of assumptions – about data quality, data availability, user behaviour, and business impact – that, if proven false, can result in failure.

Without a structured approach to testing assumptions early, teams risk building solutions that are useless, biased, or impossible to scale. As product managers, data scientists, and business leaders, we've all experienced that moment when an exciting new idea emerges, but uncertainty lingers. Is this the right path? Will it deliver real value? 

Lean experimentation provides a systematic way to validate ideas before making costly commitments. By testing assumptions early, teams can avoid expensive missteps, accelerate learning, and build data and AI products that truly deliver value in the real world. This article provides an overview of the common assumptions we make and shows how lean experimentation can minimise risk and maximise return on investment – which, in turn, strengthens the case for further investment in data and AI products.

Dangerous Assumptions when Building Products with Data as a Core Component

When building products that rely on data, especially those that consume and use data produced externally to the product itself (whether AI-powered personal assistants or analytical platforms), we rely on numerous assumptions about the data we need to use. If any of these assumptions turn out to be false, the product is at risk of failure – leading to wasted time, effort, and money. Such failures can erode leadership’s trust, making it harder to secure future investments.

These assumptions often relate to how data is collected, processed, interpreted, and used to drive decisions or generate insights. However, if they prove incorrect, the consequences can be severe, including poor adoption, missed business opportunities, and an overall lack of impact.

Common High-Risk Assumptions in Data-at-Core Products

  1. Data Availability – Assuming that the required data will be readily available, clean, and accessible when needed.
  2. Data Quality – Assuming that the data is accurate, complete, and free from bias.
  3. User Behaviour – Assuming that users will engage with and act upon data-driven insights as expected.
  4. Business Value – Assuming that leveraging data in a specific way will generate measurable business impact.
  5. Model Performance – Where AI or machine learning is involved, assuming that models trained on historical data will generalise well to production and future data.
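The first two assumptions (availability and quality) can often be probed in minutes, before any product work begins. As an illustrative sketch – the sample data and column names below are invented stand-ins, not taken from any real project – a quick profiling check in pandas might look like:

```python
import pandas as pd

# Hypothetical sample standing in for the real data feed -- substitute your own source.
df = pd.DataFrame({
    "customer_id": [1, 2, 2, None],
    "amount": [120.0, 80.0, 80.0, -5.0],
    "timestamp": pd.to_datetime(["2024-01-01", "2024-01-02", "2024-01-02", "2024-01-03"]),
})

# Assumption 1 (availability): the fields we plan to build on are actually present.
expected_cols = {"customer_id", "amount", "timestamp"}
missing_cols = expected_cols - set(df.columns)

# Assumption 2 (quality): completeness, duplicates, and obvious range violations.
null_ratio = df.isna().mean()                     # share of missing values per column
duplicate_rows = int(df.duplicated().sum())       # fully identical rows
negative_amounts = int((df["amount"] < 0).sum())  # values outside the plausible range

print(f"missing columns: {missing_cols or 'none'}")
print(f"null ratio per column:\n{null_ratio}")
print(f"duplicates: {duplicate_rows}, negative amounts: {negative_amounts}")
```

A few lines like this will not prove the data is fit for purpose, but they can disprove the assumption cheaply – which is exactly the point of testing it first.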

Validating assumptions early in a project is crucial for reducing risk, saving time and resources, and increasing the chances of success. Lean experimentation is one of the most effective ways to validate assumptions in these projects, ensuring we are on the right path.

We applied this approach while working with a global asset management firm that had invested heavily in a data platform and in solutions built on top of it. They initially assumed that their data was of high enough quality and that their technology choices were well suited to their needs.

However, instead of committing to a full-scale solution upfront, we used lean experimentation to systematically test these assumptions through small, focused experiments. This approach not only prevented months of wasted effort but also ensured that the final product was effective and aligned with their business goals.

Lean Experimentation in Action – A Framework for Success

Lean experimentation minimises waste and maximises learning by testing assumptions quickly and efficiently. It is a way to learn, with the least possible investment of resources, whether an idea or initiative will deliver its anticipated value.

The Three Key Steps of Lean Experimentation

Step 1: Uncover Assumptions

Assumptions are the beliefs or hypotheses that underpin product decisions, such as “Users will find this feature valuable”, “Our data's quality attributes are suitable for what we are trying to achieve”, or “Our AI model will accurately predict outcomes.” If these assumptions are incorrect, the product will fail, so it is essential to identify them early in the process.

Tools for Uncovering Assumptions:

  • Interviews with stakeholders and users to validate beliefs.
  • Frameworks such as the Business Model Canvas to uncover hidden assumptions about the customer, value proposition, and key activities.
  • Workshops with cross-functional teams (e.g. bringing data scientists, marketers, and business managers together) to surface assumptions across different perspectives.
  • Working with the technical team to uncover hidden technical assumptions.
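One lightweight way to make the output of these activities actionable is a simple assumption register that scores each belief by impact and uncertainty, so the riskiest assumptions are tested first. The sketch below is purely illustrative – the 1–5 scoring scale and the example assumptions are this example's own conventions, not part of any fixed framework:

```python
from dataclasses import dataclass

@dataclass
class Assumption:
    statement: str
    impact: int       # 1-5: how badly the product fails if this is wrong
    uncertainty: int  # 1-5: how little evidence we currently have

    @property
    def risk(self) -> int:
        # Simple prioritisation heuristic: impact x uncertainty.
        return self.impact * self.uncertainty

# Example register -- the statements and scores are hypothetical.
register = [
    Assumption("Required data is available and accessible when needed", 5, 4),
    Assumption("Users will engage with and act on the insights", 4, 4),
    Assumption("The model will generalise to production data", 5, 3),
]

# Test the riskiest assumptions first.
for a in sorted(register, key=lambda a: a.risk, reverse=True):
    print(f"risk={a.risk:2d}  {a.statement}")
```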

Step 2: Design Experiments

Once assumptions are identified, the next step is to design experiments to validate or invalidate these assumptions. The goal is to test the most critical assumptions that will have the largest impact on the future product.

Experiment Design Principles

  • Hypothesis-Driven Development: Define what needs to be tested clearly. For example, "We believe that adding feature X will improve user engagement by 20%."
  • Small and Focused: Keep experiments small to quickly gather insights without committing unnecessary resources.
  • A/B Testing: Test different versions of a feature or AI model to compare which performs better.
  • Rapid Prototyping: In the case of AI products, this can mean creating a simple version of an ML model – the fastest way to validate whether there is signal in the data before committing to full-scale development.
  • Technical Spikes: In the case of data-related products, this means building quick proofs of concept to establish whether data accuracy, quality, availability, and timeliness are suitable for our needs.
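To make the rapid-prototyping principle concrete: before committing to full-scale development, a cheap baseline model can reveal whether the assumed signal exists in the data at all. The sketch below uses scikit-learn on synthetic stand-in data – the features and target are invented for illustration, not drawn from the case study:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in for the real feature matrix and target.
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

# Baseline model: if even this cannot beat chance, the assumed signal is
# probably not in the data, and a bigger model will not rescue it.
scores = cross_val_score(LogisticRegression(), X, y, cv=5, scoring="roc_auc")
print(f"mean ROC AUC: {scores.mean():.2f}")  # ~0.5 would mean no detectable signal
```

The prototype answers one question only – "is there signal?" – which is all the experiment needs at this stage.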

Step 3: Analyse and Iterate

This is the critical stage where we analyse the results of our experiment and determine the next steps—whether to move forward, pivot, or stop altogether.

Now is the time to combine data with real user insights. We look at quantitative metrics like user engagement, conversion rates, data quality, or AI model accuracy, alongside qualitative feedback from users, to assess whether our assumption holds true. Based on this analysis, we decide whether to refine our approach, rethink our assumptions, or proceed with confidence.
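On the quantitative side, this analysis often reduces to asking whether the difference between experiment variants is larger than noise. A minimal sketch with hypothetical conversion counts and a chi-squared test – the numbers and the 0.05 threshold are illustrative conventions, not results from the case study:

```python
from scipy.stats import chi2_contingency

# Hypothetical A/B result: [converted, not converted] per variant.
control = [120, 880]    # 12.0% conversion
treatment = [156, 844]  # 15.6% conversion

# Chi-squared test of independence on the 2x2 contingency table.
chi2, p_value, _, _ = chi2_contingency([control, treatment])
print(f"p-value: {p_value:.4f}")

# Decision rule (the 0.05 threshold is a convention, not a law).
if p_value < 0.05:
    print("Difference is unlikely to be noise -- assumption supported, proceed.")
else:
    print("No reliable difference -- rethink the assumption or refine the test.")
```

The statistical result is only half the picture; as noted above, it should be read alongside qualitative user feedback before deciding to proceed, pivot, or stop.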

Why Does Lean Experimentation Matter?

Working with data introduces additional uncertainties into product development, beyond those around user adoption and value. Data quality, data availability, and model performance are just as unpredictable. We tend to assume these factors will align perfectly—but in reality, they rarely do. Lean experimentation helps mitigate these risks by enabling teams to test critical assumptions early, ensuring that resources are invested in the right solutions.

Key Benefits:

  • Reducing Waste – AI initiatives require significant resources to build and train models. Lean experimentation validates ideas before costly investments are made.
  • Managing Complexity – When a product integrates data produced elsewhere, we depend heavily on that data and on how it is made available to us. Experimentation lets teams evaluate their data early, anticipate challenges, and make informed decisions.
  • Increasing Confidence – When the team gains visibility into the challenges and limitations of the data, they can make more informed decisions and better prepare for future development.
  • Continuous Improvement – Lean experimentation fosters a culture of learning in which teams constantly test and iterate on their ideas. This ongoing refinement leads to long-term growth and sustained business impact.
  • Data-Driven Decision Making – Lean experimentation lets teams base decisions on real evidence rather than assumptions. Small, controlled experiments reveal which strategies and solutions will yield the most value, minimising risk and improving decision-making.

By embracing an iterative, hypothesis-driven approach, companies can build resilient, user-centric AI and data products that effectively solve real-world problems.

Building products with data at their core is inherently uncertain. By proactively identifying and testing assumptions, teams can reduce the risk of failure, avoid wasted resources, and create solutions that genuinely meet user needs while delivering business value.

Lean experimentation enhances confidence in decision-making, saves time and money, and ensures solutions are fit for purpose. The three-step process—uncovering assumptions, designing experiments, and analysing results—is essential for success. Additionally, recognising and addressing high-risk assumptions early in the development cycle is crucial to building resilient, effective data-driven products.
