As organisations experiment with Generative AI, we’re finding that not every business has all of the required guardrails in place. This introduces risk and potential exposure to the leakage or loss of sensitive data.
For highly regulated enterprise organisations, the risk is even greater. A lack of verification and controls not only risks attracting the attention of regulators, it also exposes the business to reputational damage. A dynamic Chain of Verification is what’s needed to scale AI adoption in a responsible way.
The "Chain of Verification" in the context of large language models (LLMs) refers to the comprehensive process that ensures the accuracy, reliability, and appropriateness of the data and algorithms used in these models, particularly in regulated industries where transparency and explainability is increasingly key.
In this blog, I’ll briefly outline five steps to build a risk framework for Generative AI to give you the right controls over the input, process and output of your Generative AI. In essence establishing a Chain of Verification across your end to end Generative AI lifecycle.
Firstly, you’ll need to establish a capability model with clear AI leadership, a strategy that is in line with overarching business objectives and an education and literacy programme that will offer support to impacted individuals across your organisation.
The model should take into account both the ideation and delivery of your project. It also requires cross-functional teams who can ensure the project is delivered in a way that is trustworthy and without a detrimental impact to the business. These teams should be made up of 1st, 2nd and 3rd line risk management professionals working alongside your product, data and AI engineering teams, so that your control environment can be embedded into the very fabric of your ML and AI development and training procedures.
Like well-established risk mitigation practice, we need to make generative AI risks measurable. This is where Key Risk Objectives and Key Risk Indicators can help. Borrowing heavily from Google’s site reliability engineering concepts, they are a great way to bridge the gap between data and AI engineering teams and their Three Lines of Defence contemporaries who operate across risk, compliance and audit teams.
Key Risk Objectives (KROs) in generative AI are strategic goals focused on minimising the risks associated with the development and deployment of AI models. These objectives aim to ensure the AI operates safely, ethically, and effectively within its intended scope.
They align with broader organisational goals, ensuring that AI technologies contribute positively without causing unintended harm or ethical concerns.
Key Risk Indicators (KRIs) in the context of generative AI are measurable metrics that help identify and quantify risks. They serve as early warning signals that surface potential issues before they become problematic.
Effective KRIs are specific, measurable, and aligned with the Key Risk Objectives. Ideally, they can be captured and visualised in a codified manner, as in the sketch below.
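To illustrate what "codified" can mean in practice, here is a minimal Python sketch of KRIs expressed as data with thresholds. The metric names, objectives, thresholds and values are hypothetical assumptions for illustration, not a standard schema.

```python
from dataclasses import dataclass

# Minimal sketch of codified Key Risk Indicators.
# All names, thresholds and values below are illustrative assumptions.

@dataclass
class KeyRiskIndicator:
    name: str             # what is being measured
    objective: str        # the Key Risk Objective this indicator supports
    threshold: float      # breach level that should trigger escalation
    current_value: float  # latest observed value from monitoring

    def is_breached(self) -> bool:
        return self.current_value > self.threshold


kris = [
    KeyRiskIndicator("pii_leakage_rate", "Prevent sensitive data loss", 0.00, 0.01),
    KeyRiskIndicator("hallucination_rate", "Ensure output reliability", 0.05, 0.03),
    KeyRiskIndicator("prompt_injection_attempts_per_day", "Operate safely", 10, 4),
]

for kri in kris:
    status = "BREACHED" if kri.is_breached() else "within tolerance"
    print(f"{kri.name}: {status} (value={kri.current_value}, threshold={kri.threshold})")
```

Because the indicators live in code rather than in a slide deck, they can be version-controlled, reviewed alongside your pipelines and fed straight into dashboards and alerting.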
You’ll need to ensure your data is of high quality and that you have stringent data governance in place. This is where we advocate for a modern distributed data architecture founded on data mesh principles. In doing so, we apply a data product mindset that ensures we are able to identify, curate, own and govern the primary data sets that fuel the LLM powering your Generative AI.
In addition, by putting governance controls in place using metadata management practices and data contracts, we can be sure of the origin of the data, its purpose and how it has been applied to feed into our generative AI inputs. This is critical for explainability: being able to determine what data has been used, where, when and how to establish your generative AI insights.
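As a rough illustration of a data contract in this setting, the sketch below captures ownership, purpose and lineage for a dataset feeding an LLM, with a simple validation check. The dataset, owner, field names and rules are hypothetical assumptions.

```python
# Illustrative sketch of a lightweight data contract for a dataset feeding an LLM.
# Dataset name, owner, fields and rules are hypothetical assumptions.

data_contract = {
    "dataset": "customer_support_transcripts",
    "owner": "data-products-customer-care",      # accountable data product team
    "purpose": "fine-tuning a support assistant",  # why this data may be used
    "pii_allowed": False,                          # sensitive-data policy
    "retention_days": 90,
    "lineage": {
        "source_system": "crm_export",
        "last_refreshed": "2024-01-15",
    },
}

def validate_record(record: dict, contract: dict) -> list:
    """Return a list of contract violations for a single record."""
    violations = []
    if not contract["pii_allowed"] and record.get("contains_pii", False):
        violations.append("record contains PII but the contract forbids it")
    if record.get("source") != contract["lineage"]["source_system"]:
        violations.append("record did not originate from the agreed source system")
    return violations

print(validate_record({"contains_pii": True, "source": "crm_export"}, data_contract))
```

Contracts like this give you an auditable answer to "what data went in, from where, and under what terms" when you later need to explain a model’s behaviour.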
Businesses should look to establish a Chain of Verification that offers an end-to-end view of the input feeding into the LLM as well as the output coming out the other end. This is especially important when fine-tuning models on your own data. This is where a Language Model Operations (LLMOps) approach can help.
LLMOps refers to the various operations, techniques, and methods employed in the functioning and utilisation of LLMs. These include a range of processes such as training, fine-tuning, inference, and deployment of language models. At a high level, LLMOps breaks down into a series of steps, as illustrated in the image below:
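To make the input and output verification concrete, here is a minimal Python sketch of a Chain of Verification wrapped around an inference call. The guardrail functions and the `call_llm` client are placeholders for whichever redaction, moderation and model-serving tooling you actually use.

```python
# Minimal sketch of a Chain of Verification around an LLM call.
# All functions are placeholders for your chosen tooling.

def redact_pii(prompt: str) -> str:
    # placeholder: apply your PII detection / redaction of choice
    return prompt

def violates_policy(text: str) -> bool:
    # placeholder: moderation / policy classifier
    return False

def call_llm(prompt: str) -> str:
    # placeholder: your model endpoint (hosted or self-managed)
    return "model response"

def verified_completion(user_prompt: str) -> str:
    prompt = redact_pii(user_prompt)      # input verification
    if violates_policy(prompt):
        return "Request blocked by input controls."
    response = call_llm(prompt)           # inference
    if violates_policy(response):
        return "Response withheld by output controls."
    return response                       # verified output

print(verified_completion("Summarise this customer complaint..."))
```

The same wrapper pattern applies during fine-tuning and evaluation: every stage of the lifecycle gets a checkpoint where evidence of the control can be logged.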
Consider using Generative AI to govern your Generative AI.
For example, you could use Generative AI to validate your control environment by getting your LLM to analyse AI regulations and industry best practices from around the world and compare them with your control environment policies. Your results will determine which legislation is relevant to which control, and how it is supported by your organisation.
This enables you to apply the controls across your technology estate, leveraging the technology capabilities we outlined in Step 3 above in order to (i) identify the regulation, (ii) document how you will comply with it, (iii) use tools, people and processes to evidence the control and (iv) demonstrate, through automated controls where possible, your ability to audit the enforcement of those controls. A simple sketch of this kind of regulation-to-control mapping follows.
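As a rough sketch of that idea, the Python below builds a prompt that asks an LLM to compare a regulatory requirement against an internal control policy. The regulation extract, policy text and `call_llm` function are hypothetical placeholders, not real legal text or a specific API.

```python
# Illustrative sketch: using an LLM to map a regulatory requirement to an
# internal control policy. The texts and call_llm are hypothetical placeholders.

regulation_extract = "Providers of high-risk AI systems shall maintain logs of system activity..."
control_policy = "All model inference requests are logged and retained for 12 months."

prompt = (
    "You are a compliance analyst.\n"
    f"Regulatory requirement:\n{regulation_extract}\n\n"
    f"Internal control policy:\n{control_policy}\n\n"
    "State whether the control satisfies the requirement, note any gaps, "
    "and suggest what evidence would demonstrate compliance."
)

def call_llm(prompt: str) -> str:
    # placeholder for your chosen model endpoint
    return "Assessment: ..."

print(call_llm(prompt))
```

Run across your full control library, this produces a draft mapping that your 2nd and 3rd line teams can then review and evidence, rather than starting from a blank page.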
Your organisation could also use Generative AI to:
This step involves creating a system for ongoing evaluation and enhancement of the generative AI risk framework. It requires the establishment of protocols for regular review, feedback incorporation, and adaptation to new risks as the technology and its applications evolve.
This step ensures that your generative AI risk framework remains relevant and effective in the face of changing circumstances, such as advancements in AI, shifts in regulatory standards, or emerging ethical considerations.
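One lightweight way to keep that review loop honest is to track your KRIs over time and flag where the framework needs attention. The sketch below is a minimal, assumption-laden example; the metric histories and thresholds are fabricated purely for illustration.

```python
# Minimal sketch of a recurring review check: compare each KRI's recent trend
# against its threshold. The histories and thresholds are fabricated examples.

kri_history = {
    "pii_leakage_rate": [0.000, 0.001, 0.004],   # trending upwards
    "hallucination_rate": [0.040, 0.035, 0.030],  # improving
}
thresholds = {"pii_leakage_rate": 0.005, "hallucination_rate": 0.05}

for name, values in kri_history.items():
    latest, threshold = values[-1], thresholds[name]
    trending_up = len(values) >= 2 and values[-1] > values[-2]
    if latest > threshold:
        print(f"{name}: breached threshold, escalate to risk owners")
    elif trending_up:
        print(f"{name}: within tolerance but worsening, schedule a review")
    else:
        print(f"{name}: healthy")
```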
We could write full whitepapers on each of the stages outlined above. But the foundational concepts here should give you an idea of the necessary steps to adopt and develop Generative AI in a safe and secure way.
Adopting Generative AI without these controls and an understanding of risk not only leaves organisations exposed; it also means your project won’t ladder up to the goals of the overall business, and will therefore fail to make an impact.