The launch of ChatGPT in the closing months of 2022, and its subsequent domination of the AI conversation, posed significant questions for enterprises. The game-changing opportunities being touted needed validating, concerns were raised over compliance, and the shortage of talent became even more obvious – all as organisations tried to move quickly and keep pace with the competition.
But the year wasn’t solely about Generative AI. Our technology leaders work with highly regulated enterprises across multiple industries, all at different stages of their transformation journeys. They gave us their insights on the most impactful trends and advances of the last 12 months and what they mean for large organisations.
Data mesh highlighted that, in order to generate value from the diverse and decentralised data they have access to, organisations need to apply a value-based approach that is aligned with the business.
Governance tooling began to emerge, but compliance hasn’t necessarily been the driver – the bigger driver has been data quality.
The relationship between operational and analytical data is far better understood than it was two years ago: operational data feeds into analytical data, so data quality issues tend to surface first at the operational level (e.g. in how data is captured, integrated and aligned).
Quality of data has always been hugely important, but the old ways of improving it aren’t fit for purpose for the opportunities of Generative AI: traditionally, data has been viewed as static, resulting in ad hoc cleansing exercises. Treating data as an organic product that needs continual management has been a real shift this year.
Partnerships continued to be a key source of innovation for enterprises on their digital transformation journeys, but two areas stood out to me this year: ecosystem partnerships, and regulatory and compliance partnerships. The former can drive innovation, scalability, and customer value through collaborative networks. The latter are critical for ensuring legal adherence, managing risk, and gaining market access in an ever-changing regulatory environment.
We saw ecosystem partnerships used predominantly by enterprises looking to achieve scale. When multiple partners collaborate to deliver integrated solutions, it becomes simpler to expand services or enter new markets. But they’re also useful for mitigating risk, given their shared nature.
Regulatory and compliance partnerships emerged as a popular option given their ability to streamline compliance processes, reducing the administrative burden on the enterprise. While efficiency is one key driver, access to legal expertise has helped enterprises navigate an increasingly complex landscape and mitigate the risk of regulatory fines.
It sounds obvious, but the speed of adoption for AI, and particularly Generative AI, in 2023 has been staggering. We’ve seen how the vast majority of enterprises deploying AI have seen positive business outcomes. But we’re still in a nascent phase, with many lacking the maturity, ethical frameworks and in-house talent to truly launch a full-scale organisational transformation.
We’ve also seen big steps in regulation, with the EU AI Act and both the UK and the US launching institutes to oversee the development of AI. The good news is our research shows regulation is yet to be a hurdle for AI adoption, but this may also point to a lack of regulatory guidelines for organisations to adhere to. However, with the recent emergence of the EU AI Act, I believe there will be a lot more focus in 2024 on how to demonstrate compliance with existing legislation, while also uplifting approaches to rely more on automation and modern engineering practices to demonstrate the three lines of defence (3LOD).
The boom in Generative AI over the last year has shown everyone how important the underlying data is. Hallucinations and inaccuracies have appeared in Generative AI output even when the size and scale of the model were “state of the art”. Today’s leading models are trained on massive amounts of “general” data, and they can perform a wide range of tasks amazingly well. But it was apparent early on that specialist tasks were harder to get right. To capitalise on the opportunities beyond content creation, organisations have started to apply Generative AI to their own data.
We’ve seen an explosion in the platforms and services that allow customers to fine-tune models at the drop of a hat – such as AWS’s Bedrock. Alongside this, a growing number of businesses have made the large investment to train their own models from scratch, like Bloomberg. We’ve also seen debates around privacy and usage rights for the underlying data, which brings us back to the importance of data and data ownership. Where this trend takes us in 2024 is yet to be seen, but we can look at examples like Adobe, who are making all the right moves and keeping it safe by training hugely performant large models on data they own.
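One lightweight way organisations apply Generative AI to their own data – short of full fine-tuning – is to ground a hosted model’s prompt in proprietary context. A minimal sketch, assuming a Bedrock-hosted Anthropic model; the model ID, field values and helper name here are illustrative assumptions, not a specific API contract:

```python
import json

# Hypothetical example model ID for a Bedrock-hosted model (an assumption
# for illustration; check your account's available model IDs).
MODEL_ID = "anthropic.claude-3-sonnet-20240229-v1:0"

def build_request(question: str, context: str, max_tokens: int = 512) -> str:
    """Ground a general-purpose model in the organisation's own data by
    prepending proprietary context to the user's question."""
    body = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [
            {
                "role": "user",
                "content": (
                    "Using only the context below, answer the question.\n"
                    f"Context: {context}\n"
                    f"Question: {question}"
                ),
            }
        ],
    }
    return json.dumps(body)

# Sending the request would typically use boto3's bedrock-runtime client, e.g.:
#   client = boto3.client("bedrock-runtime")
#   response = client.invoke_model(modelId=MODEL_ID, body=build_request(...))
```

The point is that the differentiation sits in the context the enterprise supplies, not the general-purpose model – which is why data quality and ownership matter so much.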
Organisations have historically had separate data/AI and digital strategies. However, as they realise that digital and data/AI are two sides of the same coin – one the channel, the other the engine that fuels it – they will increasingly bring these together. Over the past 12 months, we’ve seen more and more organisations do this with great success.
Furthermore, in the current economic climate, where companies are strapped for cash and relentlessly focused on value, bringing these strategies together will allow them to deliver on the market opportunity sooner.
Companies have started moving from Proof of Concept Graveyards (PoCs that never make it to production) to Production Graveyards (AI-powered solutions in production but not being used or returning value). Many are still experimenting with PoCs, as new technologies and services from companies such as OpenAI and the cloud service providers are evolving rapidly.
Many have struggled to understand what these new AI technologies mean to them. What are the opportunities and what are the risks? What do they need to do to get started? Where should they start?
There are a number of PoC graveyards where ideas are tested but the necessary capabilities – such as effective Governance, Risk and Compliance (GRC) controls and legal expertise that can understand and mitigate the risks – are limited, meaning there is no route into production.
Without these capabilities, much of the value won’t be realised and insights won’t be actioned by the users of these services. Users, beneficiaries and stakeholders are generally not engaged early enough in the development cycle to understand how they need to adapt their ways of working and their operating model to take advantage of these new tools and insights.