Yichuan Zhang, chief executive and co-founder of Boltzbit, explores how the rise of AI is shaping financial markets, and looks ahead to 2026 and beyond, to see how live learning systems may make AI more useful and sustainable for the industry.
A year ago, predictions from financial-services experts about generative AI followed a familiar pattern. There was genuine excitement about its transformative potential, tempered by caution: concerns about hype, return on investment, and the robustness of real-world implementations. Some expected slow adoption, others forecast rapid uptake across functions such as risk assessment and portfolio management.
Fast-forward to today, and the picture is far clearer.
Financial services are adopting AI at pace, and the use cases are multiplying. That shift is evident not just in headlines but in day-to-day conversations across the industry. What began this year as largely theoretical discussions quickly evolved into concrete, production-level deployments. Today, the return on AI adoption is no longer speculative: faster processing, more informed decision-making, and significantly greater value extracted from proprietary data are already being realised.
One topic, however, has loomed large throughout the year: the so-called AI bubble.
Whether we are in a bubble is no longer the question. The more important question is whether it will burst, and what happens when it does. My view may not satisfy everyone, but history offers a useful parallel. When the dot-com bubble burst, it stripped away noise and hype, leaving space for the companies with durable models and real value to emerge stronger. I expect the same outcome for AI.
In fact, I would argue that what we are witnessing today is not an AI bubble, but an OpenAI bubble.
OpenAI has become the gravitational centre of the AI universe. Its valuation, models, and partnerships move markets. Its brand has become shorthand for the entire field, and that is precisely the problem. Attention has become over-concentrated on one company, one narrative, and one style of intelligence. Meanwhile, quieter innovators are building domain-specific models, live-learning architectures, and capital-efficient alternatives to the compute-heavy status quo. When the OpenAI bubble eventually deflates, as all bubbles do, AI itself will not collapse. It will mature.
That brings us to the more important question: where are we headed next?
The next phase of AI will not be defined by who can train the biggest model. It will be defined by who can make generative AI useful, sustainable, and scalable in real enterprise environments. The battleground for that shift lies in model training, and specifically in live learning.
Today’s generic large language models cost millions to train. On top of that, enterprises must customise those models to extract insight from their own data, an exercise that is costly in time and resources and likely out of reach for many firms. Worse still, those customised models are static: frozen at a moment in time and entirely dependent on the performance and continued existence of the underlying foundation model.
In other words, most companies building “AI solutions” are not actually building AI. They are renting it via an API, on someone else’s servers, running on someone else’s compute budget. And if that provider disappears or fundamentally changes its economics, much of that investment disappears with it.
Live learning changes this dynamic entirely.
By enabling continuous, real-time model training, live learning is cheaper, faster, and inherently adaptive. Models evolve with every interaction, improving continuously rather than degrading over time. Live learning also fundamentally lowers the barrier to adoption: by streamlining data collection and model training, it removes the need for heavy upfront data preparation, allowing enterprise AI systems to deliver value from day one.
Think of it as an intelligence layer, a brain placed on top of any model, reducing the importance of which underlying foundation model sits beneath it. It ensures systems never become outdated and that performance keeps pace with constantly changing environments. In fast-moving domains such as capital markets, this capability is not a “nice to have”. It is essential.
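To make the contrast with static models concrete, here is a deliberately minimal sketch of the underlying idea: a model that takes one small gradient step on every new observation, so it tracks a changing environment instead of staying frozen at training time. The class name, learning rate, and simulated "regime shift" are illustrative assumptions for this example, not a description of any particular vendor's live-learning architecture.

```python
# Illustrative sketch only: "live learning" is modelled here as simple
# online gradient descent on a one-feature linear model. All names and
# parameters are hypothetical, chosen to show the adaptive behaviour.

class LiveLinearModel:
    """A model that updates its weights on every new observation."""

    def __init__(self, lr=0.1):
        self.w = 0.0   # slope
        self.b = 0.0   # intercept
        self.lr = lr   # learning rate

    def predict(self, x):
        return self.w * x + self.b

    def update(self, x, y):
        # One gradient step on the squared error: the model adapts
        # continuously rather than being frozen at a moment in time.
        err = self.predict(x) - y
        self.w -= self.lr * err * x
        self.b -= self.lr * err


# A drifting environment: the true relationship changes halfway through,
# standing in for a fast-moving domain such as capital markets.
model = LiveLinearModel()
for t in range(2000):
    x = (t % 10) / 10
    true_w = 1.0 if t < 1000 else 3.0   # regime shift at t = 1000
    y = true_w * x
    model.update(x, y)
```

A model trained once and frozen before the shift would keep predicting with the old slope indefinitely; the continuously updated model ends up tracking the new regime, which is the property the passage above describes.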
As we move into 2026, the debate will shift accordingly. The next phase of differentiation and return on investment will come from sustainable, scalable model training, and from a move beyond artificial intelligence toward something more powerful: general learning intelligence (GLI).
AI’s future will not belong to the biggest models. It will belong to the systems that can keep learning.