The buzz around artificial intelligence has reached a fever pitch, but in the financial markets, it’s essential to cut through the hype, writes Justin Llewellyn-Jones, chief executive, Trading Technologies.
Many are applying AI without first demonstrating a good business-driven use case – a phenomenon we also saw in the early days of blockchain. I view the current wave of AI as not a revolution, but an evolution. We are witnessing the next, most potent iteration of automation, a continuous journey toward greater efficiency that has defined capital markets technology for decades.
From ‘AI 0.1’ to Generative AI
Our industry’s pursuit of automation is nothing new. Automation efforts in capital markets over the past few decades, including electronification and digitisation – embodied by capabilities including ladders, spreaders, algorithms and smart order routers – were aimed at creating efficient ways to access liquidity.
These earlier, rules-based engines, using data inputs to optimise execution quality, could colloquially be considered ‘AI 0.1’. While they’re certainly not what we would consider AI today, the underlying principle is the same.
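The deterministic character of these 'AI 0.1' engines can be illustrated with a toy smart order router: fixed, predefined rules map market data inputs to a venue choice. All venue names, prices and thresholds below are illustrative assumptions, not any vendor's implementation.

```python
# Hypothetical sketch of a deterministic, rules-based smart order router:
# predefined rules map market data inputs to a venue choice.

def route_order(quantity, venues):
    """Pick the venue with the best displayed price that can fill the order.

    venues: list of dicts with 'name', 'price' (ask) and 'size' keys.
    Returns the chosen venue name, or None if no single venue can fill it.
    """
    # Rule 1: only consider venues with enough displayed size.
    eligible = [v for v in venues if v["size"] >= quantity]
    if not eligible:
        return None
    # Rule 2: of those, take the lowest ask (buy-side example).
    best = min(eligible, key=lambda v: v["price"])
    return best["name"]

venues = [
    {"name": "VENUE_A", "price": 100.02, "size": 500},
    {"name": "VENUE_B", "price": 100.01, "size": 300},
    {"name": "VENUE_C", "price": 100.03, "size": 1000},
]
print(route_order(400, venues))  # VENUE_A: best price among venues that can fill 400
```

Given the same inputs, the engine always produces the same output, which is exactly the deterministic property the article contrasts with later, probabilistic approaches.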
Large language models (LLMs) and natural language processing (NLP) have been crucial evolutionary steps, enabling more efficient access to the data that informs the automation paradigms we already have, while moving from a purely deterministic approach toward a more probabilistic one.
This technology has supercharged existing systems. With Gen AI, we’ve fundamentally enhanced our execution automation modelling capabilities. Where models once took days or weeks to train on a year’s worth of data, they can now be trained in seconds or minutes on decades of historical data, incorporating hundreds of millions of data elements.
Gen AI hasn’t fundamentally changed our approach to execution, but it has dramatically ‘goosed’ the existing execution paradigms. It enables the rapid transmission of terabytes of data, making systems faster and more robust.
Crucially, it has smoothed the ingestion of unstructured data – from satellite images of oil containers, to shipping data, to central bank reports – efficiently surfacing insights to the trader. I would posit that this application of Gen AI is simply making our established automation and execution paradigms more intelligent and quicker.
The agentic AI shift and regulatory challenge
The true, fundamental shift in market automation, however, will be the move from Gen AI to agentic AI. This is where we ‘release the hounds’ – transitioning from models that merely suggest a course of action to autonomous agents that reason, adapt and execute independently in pursuit of a high-level objective.
This challenges the sanctity of the human-in-the-loop, a contentious shift in responsibility that will need to be addressed. An AI summary of agentic AI is as follows: “Agentic AI refers to autonomous AI systems that can perceive their environment, reason, plan, and take actions to achieve complex, multi-step goals with minimal human intervention, moving beyond simple commands to proactively manage tasks by interacting with external tools and data sources…”
Agentic AI moves beyond ‘goosing’ pre-existing rule-based engines with more data. Instead, the agent autonomously determines which data to use in real time. The rules are no longer predefined; they are determined and made in real time by the agent – a profound change.
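That difference can be shown with a toy agent loop: rather than following predefined rules, the agent chooses at runtime which data source to consult next, and stops when it judges it has enough evidence to act. The data sources, relevance scores and confidence model below are simplified assumptions for illustration only.

```python
# Toy sketch of an agentic decision loop: the agent is not handed fixed
# rules, but decides in real time which data source to consult next,
# accumulating confidence until it can act or must escalate to a human.

def agent_decide(sources, threshold=0.8):
    """Consult sources in order of self-assessed relevance until confident.

    sources: dict of {name: {"relevance": float, "signal": float}}.
    Returns (action, list of sources consulted, in order).
    """
    confidence, evidence = 0.0, []
    remaining = dict(sources)
    while confidence < threshold and remaining:
        # The agent picks the source it currently expects to be most informative.
        name = max(remaining, key=lambda s: remaining[s]["relevance"])
        src = remaining.pop(name)
        evidence.append(name)
        confidence += src["relevance"] * src["signal"]
    action = "act" if confidence >= threshold else "escalate_to_human"
    return action, evidence

sources = {
    "order_book": {"relevance": 0.9, "signal": 0.7},
    "news_feed":  {"relevance": 0.6, "signal": 0.5},
    "satellite":  {"relevance": 0.3, "signal": 0.9},
}
action, consulted = agent_decide(sources)
```

In this sketch the sequence of sources consulted is itself an output of the loop, not a predefined rule, which is the shift the article describes; a real agent would replace the toy confidence arithmetic with model-driven reasoning.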
How far away are we?
The technology for agentic AI is already in use today in capital markets. We see it in:
- Regtech, where agents replace the “level one check” in surveillance models, deciding whether to surface an alert for suspicious activity to a compliance officer.
- Analysts’ papers, where agents decide which content to include.
- Collateral management, where optimisation decisions are being made by agents.
- Payments processing, where agents are determining what behaviour to surface for additional oversight.
But in the trading world, there’s a significant, understandable reluctance to let an agent loose on making decisions, stemming from the single biggest barrier to agentic adoption: the regulatory mindset.
Regulators around the world are taking a similar approach to AI as they have to automation, insisting that ultimate responsibility, no matter the underlying technology, resides with the human overseeing the computer.
This is a fundamental barrier, as humans are incapable of reviewing terabytes of data in nanoseconds. This reasonable regulatory expectation creates a systemic friction point. To maintain control, agentic systems are currently hobbled, permitted to make decisions only within a predefined box, fundamentally reducing the agency of the agent itself.
Our industry will have to find ways to make regulators comfortable with the application of agentic AI in trading to fully realise the benefits.
The future is hybrid: Oversight and intuition
Meeting this challenge necessitates establishing robust audit trails, activity logs and action determination to prove to regulators that agents make systematic, reviewable and overseen decisions. We must implement checks and balances to prevent ‘agent collaboration’ – inadvertent price manipulation – and ‘black swan’ events arising from multiple agents acting identically.
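One way such an audit trail could work is a tamper-evident log: each agent decision is recorded with the inputs it saw, the action taken and a hash chaining it to the previous record, so a reviewer can verify nothing was altered after the fact. This is a minimal sketch under assumed field names, not a description of any regulator's requirements.

```python
# Hypothetical sketch of a tamper-evident audit trail for agent decisions:
# each record is hash-chained to its predecessor, so any later alteration
# breaks the chain and is detectable on review.

import hashlib
import json

class AuditTrail:
    def __init__(self):
        self.records = []
        self._last_hash = "0" * 64  # genesis value for the chain

    def log(self, agent_id, inputs, action, rationale):
        """Append one decision record and return its hash."""
        record = {
            "agent_id": agent_id,
            "inputs": inputs,
            "action": action,
            "rationale": rationale,
            "prev_hash": self._last_hash,
        }
        payload = json.dumps(record, sort_keys=True).encode()
        record["hash"] = hashlib.sha256(payload).hexdigest()
        self._last_hash = record["hash"]
        self.records.append(record)
        return record["hash"]

    def verify(self):
        """Recompute the chain; returns False if any record was altered."""
        prev = "0" * 64
        for r in self.records:
            body = {k: v for k, v in r.items() if k != "hash"}
            if body["prev_hash"] != prev:
                return False
            payload = json.dumps(body, sort_keys=True).encode()
            if hashlib.sha256(payload).hexdigest() != r["hash"]:
                return False
            prev = r["hash"]
        return True

trail = AuditTrail()
trail.log("agent-1", {"ask": 100.01}, "route_to_VENUE_A", "best displayed price")
trail.log("agent-1", {"ask": 100.05}, "hold", "spread widened beyond limit")
```

Changing any logged field after the fact invalidates `verify()`, giving compliance teams the systematic, reviewable record the article calls for.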
The technology needs to evolve to incorporate these safeguards, and regulators could also benefit from adopting agents for oversight.
Even with technological advances, the human element remains vital. A global macro trading head noted that while 95% of a trade decision relies on available information, a critical 5% is based on the trader’s market knowledge and intuition – emotional intelligence that a machine cannot yet replicate.
Forward-thinking companies will embrace this ground-breaking technology as the next frontier of innovation, applying creative uses to benefit clients and markets. This must be done while ensuring regulators have the necessary tools for investor protection and fraud prevention.