What AI in financial services will look like in 2026

Yichuan Zhang, chief executive and co-founder of Boltzbit, explores how the rise of AI is shaping financial markets and looks ahead to 2026 and beyond to see how live-learning systems may make AI more useful and sustainable for the industry.
A year ago, predictions from financial-services experts about generative AI followed a familiar pattern. There was genuine excitement about its transformative potential, tempered by caution: concerns about hype, return on investment, and the robustness of real-world implementations. Some expected slow adoption, others forecast rapid uptake across functions such as risk assessment and portfolio management.

Fast-forward to today, and the picture is far clearer.

Financial services are adopting AI at pace, and the use cases are multiplying. That shift is evident not just in headlines but in day-to-day conversations across the industry. What began this year as largely theoretical discussions quickly evolved into concrete, production-level deployments. Today, the return on AI adoption is no longer speculative: faster processing, more informed decision-making, and significantly greater value extracted from proprietary data are already being realised.

One topic, however, has loomed large throughout the year: the so-called AI bubble.

Whether we are in a bubble is no longer the question. The more important question is whether it will burst, and what happens when it does. My view may not satisfy everyone, but history offers a useful parallel. When the dot-com bubble burst, it stripped away noise and hype, leaving space for the companies with durable models and real value to emerge stronger. I expect the same outcome for AI.

In fact, I would argue that what we are witnessing today is not an AI bubble, but an OpenAI bubble.

OpenAI has become the gravitational centre of the AI universe. Its valuation, models, and partnerships move markets. Its brand has become shorthand for the entire field, and that is precisely the problem. Attention has become over-concentrated on one company, one narrative, and one style of intelligence. Meanwhile, quieter innovators are building domain-specific models, live-learning architectures, and capital-efficient alternatives to the compute-heavy status quo. When the OpenAI bubble eventually deflates, as all bubbles do, AI itself will not collapse. It will mature.

That brings us to the more important question: where are we headed next?

The next phase of AI will not be defined by who can train the biggest model. It will be defined by who can make generative AI useful, sustainable, and scalable in real enterprise environments. The battleground for that shift lies in model training, and specifically in live learning.

Today’s generic large language models cost millions to train. On top of that, enterprises must customise those models to extract insight from their own data, an exercise that is costly in time and resources and likely out of reach for many firms. Worse still, those customised models are static: frozen at a moment in time and entirely dependent on the performance and continued existence of the underlying foundation model.

In other words, most companies building “AI solutions” are not actually building AI. They are renting it via an API, on someone else’s servers, running on someone else’s compute budget. And if that provider disappears or fundamentally changes its economics, much of that investment disappears with it.

Live learning changes this dynamic entirely.

By enabling continuous, real-time model training, live learning is cheaper, faster, and inherently adaptive. Models evolve with every interaction, improving continuously rather than degrading over time. Live learning also fundamentally lowers the barrier to adoption: by streamlining data collection and model training, it removes the need for heavy upfront data preparation, allowing enterprise AI systems to deliver value from day one.
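To make the idea concrete, here is a minimal sketch of the pattern in Python, using scikit-learn’s partial_fit interface for incremental updates. It illustrates online learning in general terms only; the function name on_new_observation and the simulated data stream are placeholders for illustration, not any vendor’s production design.

```python
# Minimal illustration of live (online) learning: the model is updated
# incrementally as each new observation arrives, instead of being
# retrained from scratch on a frozen snapshot of the data.
# This is a generic sketch, not any vendor's production architecture.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(loss="log_loss")  # supports incremental fitting
classes = np.array([0, 1])              # label space declared up front

def on_new_observation(features: np.ndarray, label: int) -> None:
    """Fold a single new (features, label) pair into the live model."""
    model.partial_fit(features.reshape(1, -1), [label], classes=classes)

# Simulated stream: the model improves with every interaction,
# with no offline retraining step and no bulk data preparation.
rng = np.random.default_rng(0)
for _ in range(1000):
    x = rng.normal(size=4)
    y = int(x.sum() > 0)  # stand-in for a real market signal
    on_new_observation(x, y)
```

The design point is that the model is never taken offline for a retraining cycle: each observation is folded in as it arrives, so the system tracks the data distribution it actually sees.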

Think of it as an intelligence layer, a brain placed on top of any model, reducing the importance of which underlying foundation model sits beneath it. It ensures systems never become outdated and that performance keeps pace with constantly changing environments. In fast-moving domains such as capital markets, this capability is not a “nice to have”. It is essential.
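That layering can be sketched in the same spirit. The hypothetical class below (IntelligenceLayer, answer, learn, and swap_base are all illustrative names) keeps the adaptive state above the foundation model, so the model underneath can be replaced without discarding what has been learned.

```python
# Hypothetical sketch of an "intelligence layer" that sits above an
# interchangeable foundation model. All names here are illustrative;
# the point is that the adaptive state lives in the layer, so the
# underlying model can be swapped without losing it.
from typing import Callable, Dict

FoundationModel = Callable[[str], str]  # any provider behind one interface

class IntelligenceLayer:
    def __init__(self, base: FoundationModel) -> None:
        self.base = base
        self.corrections: Dict[str, str] = {}  # continuously learned overrides

    def answer(self, query: str) -> str:
        # Learned, domain-specific knowledge takes precedence over the
        # generic model underneath.
        return self.corrections.get(query) or self.base(query)

    def learn(self, query: str, corrected: str) -> None:
        # Each interaction updates the layer immediately: no retraining run.
        self.corrections[query] = corrected

    def swap_base(self, new_base: FoundationModel) -> None:
        # The foundation model changes; the accumulated learning does not.
        self.base = new_base
```

A production version would replace the dictionary with learned parameters or a retrieval index, but the ownership boundary is the same: the layer, not the rented model, holds the accumulated intelligence.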

As we move into 2026, the debate will shift accordingly. The next phase of differentiation and return on investment will come from sustainable, scalable model training, and from a move beyond artificial intelligence toward something more powerful: general learning intelligence (GLI).

AI’s future will not belong to the biggest models. It will belong to the systems that can keep learning.
