At last week’s Data & AI Symposium, I had the pleasure of hosting a powerful discussion on one of the most talked-about topics in AI right now: Agentic AI. I was joined by Gavin Starks (CEO, Icebreaker One), Huw Jones (Head of Intelligent Automation, Lloyds Banking Group), and David Elliot (Director, AWS), who together unpacked the promise and pitfalls of agentic systems, how to separate hype from reality, and the cultural shifts required to unlock lasting business value.
While Agentic AI can offer short-term efficiency gains, the panel agreed that long-term value only comes with mature foundations and strong problem-solution alignment. David Elliot shared how AWS saved over 10,000 hours through a single technical shift, but also stressed the underlying infrastructure and cultural readiness that enabled it.
For organisations just beginning their journey, it’s critical to view agentic systems not as a silver bullet, but as a natural step in the evolution of data, AI and organisational maturity. Without high-quality data, strong governance, skilled teams, and a focus on defining the problem before selecting the tooling, ROI can remain elusive.
A key part of the conversation centred on what Agentic AI actually means. Definitions vary, but our panel described it as:
“A multi-modal, goal-driven system that can understand intent, plan and execute tasks, and adapt over time to the changing nature of a problem – ideally, explaining how and why it’s making decisions along the way.”
This ability to understand intent and take action was a consistent theme. Huw Jones emphasised that "understanding the intent of the customer and baking it into your business objectives" is foundational if agent-based systems are to meaningfully reduce burden and deliver value.
The conversation quickly moved to governance. All panellists agreed that the key to safe and effective deployment is starting with small, well-scoped use cases and putting guardrails in place early.
These guardrails must align with:
Gavin Starks said it best: “Trust is a very hard thing to build and a very easy thing to break.”
For agentic systems to gain traction, particularly in sectors like financial services, insurance and healthcare, they must be built with trust at the core. This goes beyond explainable AI – it’s about curating the entire customer journey, ensuring users feel supported and understood when interacting with autonomous systems.
David Elliot echoed the importance of baking trust into the product development process itself, describing how AWS promotes trust through multi-disciplinary teams, human oversight, and thoughtful design that considers edge cases, accessibility, and the broader ethical implications of automated decisions.
One of the most powerful messages from the session was the need to balance innovation with oversight. AI isn’t something to be locked away until it’s perfect – but it also shouldn’t be rushed into production without structure.
To foster this balance, organisations must:
The panel concluded with a clear message: Agentic AI holds enormous potential, but getting it right requires deliberate action, strong culture, and a clear-eyed view of both opportunity and risk.
Organisations looking to move beyond the hype should focus on:
In summary, I thought Huw Jones captured it best:
“If agents are to work, they need to understand intent. That starts with us understanding our own.”