There has been renewed interest in investing in data, driven by the desire to adopt AI. Data strategies are evolving to align with this goal. However, many still take a technology-centred approach, focusing on acquiring new capabilities for AI.
While this is understandable, the main obstacle to effective data strategies is the unmanageable complexity that large organisations face. No matter how clear the vision, success depends on navigating both existing and newly added complexity.
Where does this complexity come from in the first place? Put simply, the increased complexity of data mirrors the increased complexity of any business in today's environment. Data cannot be detached from the business; as business operations become more intricate and their goals more sophisticated, the data they generate follows suit.
Adding new capabilities to the data estate without addressing complexity directly is unlikely to succeed. Many organisations are still grappling with traditional data management approaches that struggle to keep up, alongside existing issues like siloing, fragmentation, legacy systems, and poor data quality.
Any forward-looking data strategy must address this complexity directly to make a tangible impact on the business's strategic goals and outcomes. How do we formulate a data strategy to focus on these issues?
The world of software has long recognised two sources of complexity:
1) Essential complexity, inherent in adding new capabilities or evolving existing ones, such as introducing a new AI tool; and
2) Accidental complexity, which does not stem from the problem itself but from how systems are built and operated, and which tends to accumulate over time, such as data quality issues introduced at the source.
A data strategy should address both sides of complexity by identifying which capabilities will bring value to the organisation, what complexities are introduced by these capabilities, and how to handle them. It should also diagnose accidental sources of complexity that may hinder value delivery.
To understand how this complexity manifests itself, consider regulatory compliance.
Enterprises face a broad and ever-changing regulatory landscape. Achieving compliance, evidencing it, responding to risks, and reporting on your position are all data-heavy endeavours.
New capabilities, such as LLMs, knowledge graphs and RAG, present an opportunity to solve these problems in a more radical, valuable and durable way. As such, they are not "required", but they are an opportunity that a data strategy can explore. Implementing them is no trivial task, and doing so gives us a lens on the complexities that must be addressed to achieve compliance across customer data, operations, and beyond.
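To make the RAG idea concrete, the sketch below shows the core retrieve-then-prompt pattern in miniature. The corpus of policy snippets, the word-overlap scoring, and the prompt layout are all illustrative stand-ins, not a recommended implementation; a real pipeline would use embeddings and a vector store.

```python
# Conceptual sketch of retrieval-augmented generation (RAG) for
# compliance questions. Everything here is a toy stand-in: real
# systems use embedding-based retrieval, not word overlap.

def score(query, document):
    """Toy relevance score: count of shared lowercase words."""
    return len(set(query.lower().split()) & set(document.lower().split()))

def retrieve(query, corpus, k=2):
    """Return the k documents most relevant to the query."""
    ranked = sorted(corpus, key=lambda d: score(query, d), reverse=True)
    return ranked[:k]

def build_prompt(query, corpus):
    """Assemble an LLM prompt grounded in retrieved policy text."""
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}"

# Hypothetical policy snippets standing in for systems of record.
corpus = [
    "Customer records must be retained for seven years.",
    "Marketing emails require explicit opt-in consent.",
    "Incident reports are filed within 72 hours.",
]

prompt = build_prompt("How long must customer records be retained?", corpus)
```

The point of the pattern, rather than this particular code, is that answers are grounded in the organisation's own policy data; the quality of that retrieval step is exactly where the accidental complexities discussed below (data quality, silos, inaccessible systems of record) show up.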
Accidental complexities could include data quality issues, organisational boundaries and silos, and the poor accessibility of existing systems of record.
Essential complexities would include introducing the new capabilities themselves, upskilling the workforce to appropriate levels of data literacy, and upgrading the tooling needed to consume data.
For a data strategy to move the needle on this topic, there needs to be a clear articulation of the above complexities, the approach to address them and the expected value outcome.
This approach will elevate a data strategy while grounding it in the organisation's realities, making it more likely to be seen as realistic and to generate momentum and buy-in.
In the end, the key to a successful data strategy is to embrace complexity rather than ignore it. By addressing both essential and accidental complexities explicitly, organisations can build data strategies that are practical, aligned with their AI ambitions, and effective in driving strategic business outcomes.