We’re hearing a lot about technology priorities shifting in 2026. From your perspective, what are the biggest capital markets trends that will have the greatest impact on clients this year?
From a client technology point of view, one of the biggest trends we’re seeing is this desire to improve cross-system integration.
A lot of clients have loads of older systems, and the reality is that some workflows still require logging into two or three different tools just to complete one process. Clients come to us wanting to make this more seamless – ideally a single product, a single login, or at least much better interconnectivity.
Even if they keep the same systems, they want to log into one place and have data propagate through the workflow.
Also, data is a huge driver. A big trend we’re seeing is firms wanting to bring their data into a single central location. That helps with interoperability, because different systems can access data from the same place – the same source of truth.
It makes the work easier operationally, but it also makes reporting much easier. Clients want to be able to actually see what’s going on: where orders are, what state something is in, and how workflows are progressing.
Sometimes data falls through the cracks. We’ve had clients come to us who have been living with all kinds of operational risks due to their old systems. They ask: would your systems help us sort these out?
So transparency, seamlessness, and centralised data are all tied together.
Where does AI fit into that conversation? Is it something clients are actively asking for, or is it more something technology providers need to bring to them?
At the moment, clients know they’ve got problems. They know they want to fix aspects of the business, but they don’t always know how to integrate a new solution.
There’s also the hype around AI. Firms want to integrate it – they like the idea of getting all this efficiency, and maybe even some really valuable insights – but they don’t necessarily know how to achieve it.
We’ve seen forms of AI in execution for decades, in algorithmic trading and that side of the market. But what the new LLMs are doing, and what we think the future is going to be, is really on the reporting side.
Once you’ve got access to centralised data, you can start building reports, doing workflows, and getting insights you didn’t previously have.
I think that’s where we’ll see AI tools integrated first, as add-ons that interact with existing workflows. It’s not realistic to expect firms to rip out their entire systems and build from scratch.
So it becomes an iterative process: carefully bolting on new tools, creating interface layers between old and new systems, and making sure models can work with current workflows.
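To make the bolt-on idea above concrete, here is a minimal sketch (all names hypothetical, not Sinara’s actual architecture) of an interface layer: a thin adapter normalises a legacy system’s records into a central store, and a reporting add-on reads only from that store, so neither side needs to be ripped out or rebuilt.

```python
# Hypothetical sketch of the "bolt-on" pattern: legacy system -> adapter ->
# central store -> reporting add-on. Nothing here modifies the legacy system.

from dataclasses import dataclass


@dataclass
class Order:
    order_id: str
    status: str      # e.g. "new", "filled", "rejected"
    quantity: int


class CentralStore:
    """Single source of truth that downstream tools query."""

    def __init__(self):
        self._orders = {}

    def upsert(self, order: Order):
        self._orders[order.order_id] = order

    def all_orders(self):
        return list(self._orders.values())


class LegacyOmsAdapter:
    """Interface layer: translates a legacy system's raw records
    into the central format (field names are invented for illustration)."""

    def __init__(self, store: CentralStore):
        self.store = store

    def ingest(self, raw: dict):
        # The legacy system's field names differ; map them here.
        self.store.upsert(Order(
            order_id=raw["OrdID"],
            status=raw["Stat"].lower(),
            quantity=int(raw["Qty"]),
        ))


def status_report(store: CentralStore) -> dict:
    """Reporting add-on: counts orders by state, using only central data."""
    counts = {}
    for o in store.all_orders():
        counts[o.status] = counts.get(o.status, 0) + 1
    return counts


store = CentralStore()
adapter = LegacyOmsAdapter(store)
adapter.ingest({"OrdID": "A1", "Stat": "FILLED", "Qty": "100"})
adapter.ingest({"OrdID": "A2", "Stat": "NEW", "Qty": "50"})
print(status_report(store))  # {'filled': 1, 'new': 1}
```

The point of the design is that each new tool (an AI reporting layer, a dashboard, a reconciliation check) plugs into the central store rather than into each legacy system individually, which is what makes the iterative, bolt-on approach workable.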
So when a market participant is procuring or replacing technology, what are the primary factors that go into the decision?
There are two main aspects. The first is knowing what to build. Clients know pain points exist, they know workflows are slow, but going from business-level understanding to looking at your tech stack can be daunting.
They ask: do I need to rip it all out from the ground up? Or can I iteratively add a component somewhere to fix certain pain points?
We help with that. Often, we come in right at the requirements phase, before even talking about solutions.
Clients come to us and say: “We have a problem. How have you seen this fixed before? What do you recommend?”
The first step is essentially making sure we are delivering the right thing.
The second is the classic buy-versus-build decision, which I think is a lot more complex nowadays given there are so many choices out there. Some clients have in-house developer teams, others don’t. Even the ones that do have technical staff still face the question: do they have the knowledge, time, and cost capacity to build a new system themselves, or is it better to outsource?
Then there’s the trade-off between something off the shelf – a pre-made product that’s reliable and ready to go – versus something fully bespoke.
What we find is that clients like the reliability of something tried and tested, but they still want heavy customisation. At the end of the day, what’s the point of just using exactly the same thing as your competitors, right?
We’ve done a lot of work recently with metals and commodities brokerage clients, and they have very distinctive process flows, calendars, delivery dates, and complex instruments.
We offer our prebuilt products with a ready-to-go UI, but still offer heavy customisation around each client’s workflows and use cases as part of our project delivery.
What does good decision-making look like in that context? How do firms avoid making the wrong technology choices?
The most successful projects are the ones where firms reach out early and are open to input and consultation.
Knowing what you don’t know is key. There may be different approaches to fixing pain points that go beyond the current workflow.
It’s also important not to invest in technology just for the sake of it, but to be clear on what your business drivers are, and what you are expecting it to deliver for you.
Having an open mind and working with someone who has experience in the industry and in your niche is critical.
Are there common risks or pressures that firms underestimate when making these decisions?
Yes, and one of the biggest is long-term support.
A lot of people focus heavily on the upfront process: requirements, building, implementation.
But many clients come to us because they had systems put in place five or ten years ago, and the initial process was good, but the long-term support wasn’t there.
Issues arise. Bugs happen. Regulations change. New pain points emerge.
You need a vendor or partner where you know you can request new features, iterate over time, and continuously improve the system over many years.
That ongoing partnership is another key consideration in making the right decision.
Migration and go-live can also be huge risks if not handled correctly. It’s not just about building the system; it’s about that transition day when it switches on.
You want that transition to be smooth to avoid operational risk. Trading environments are high-intensity, high-stress, and you need traders and operations teams to be accustomed to the software beforehand.
You also need a clear backup plan in case you do need to temporarily revert.
That operational side of implementation is critical.
How is Sinara helping firms navigate these choices in a crowded technology landscape?
Sinara is uniquely placed. We’ve got more than 30 years of experience, so we understand these workflows deeply, and we can optimise systems using all the lessons we’ve learned. I think our combination of pre-existing SinaraTLC tech plus the ability to work with clients to deliver something unique for their business puts us in a great position.
We also have the operational support network in place: overnight support, 24/7 coverage, and we’ve supported some client systems for over a decade. Some of them are still running today and we continue to keep them up to date.
That long-term operational reliability is there. You can count on us to be there.
Recently we’ve been investing heavily in our new product suite, SinaraTLC.
The idea is offering the reliability and tested nature of an off-the-shelf product, but with open interfaces, centralised data, and client-specific modules where needed.
That allows us to iterate, add new components, and interact with existing systems if clients want that, as well as external systems and new exchanges or market data sources.
Markets evolve constantly, so being able to incorporate new sources and new tools is essential.
Personally, I’m also stepping into a new role as head of product, so it will be my job to work with clients and prospects from the start, understand what they need, and sometimes read between the lines to find what the real pain points are.
That’s about adding value, driving revenue opportunities, but also reducing risk and cost through better workflows and better technology decisions.
There’s a lot of exciting work ahead.