SteelEye recently integrated ChatGPT into its surveillance platform. How can AI play a role in trading compliance technology?
I think the recent evolutions in AI that we’ve seen over the last year or two are starting to have a really profound effect on the role of the compliance professional, and on the role of regulators and their expectations.
Legislation varies from region to region and from regulator to regulator, but ultimately we can only improve our capabilities by incorporating AI into our strategies. I was sitting with a large European bank recently and they’re looking to replace their current communication surveillance and oversight capabilities. They’ve got an array of different systems. The future of surveillance and compliance technology will be about how we rationalise the number of systems, but also how we make the workflow better for the compliance professional. You can do that through systematic rationalisation from a vendor and platform perspective, or you can do it by just making the day-to-day easier.
We launched ChatGPT in our system yesterday. The ability to press a button and have it break down historic communications, or even trading patterns and activity, simplify them in a way that you can digest in minutes, tell you the steps you should take next, and then generate your response to the individuals and counterparties, internal and external, automatically – it’s profound. It’s quite scary but also exciting. It makes the whole journey for that compliance professional so much faster. In the old days, surveillance was really about reducing false positives to make a surveillance professional more efficient. We can do that, but we can also massively reduce the time it takes to digest and understand information. This is what ChatGPT will do. This technology has only been around since November. I met with somebody a month ago and they asked, what’s your strategy on ChatGPT? I thought, oh my God, it’s only been around for two months. It’s fully incorporated in the product now.
It’s about simplifying the analytics of information. You can explain what’s going on in a certain chat room, what people are talking about, and the thematic context of why that chat room exists. But do it in a way where I can look at a screen with a few paragraphs in each box that explains it, and then tell me what I should do on the back of being able to break that information down. We’re enabling the users of the SteelEye platform to press a button and call out to the open API capability to do this.
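The workflow described above – flatten a chat room into a prompt, send it to an LLM, and present the summary and suggested next steps to a reviewer – can be sketched roughly as below. This is an illustrative sketch only, not SteelEye’s implementation: the function names, the instruction text, and the shape of the message records are all assumptions, and the actual LLM call is left as a pluggable parameter (e.g. a wrapper around OpenAI’s chat completions endpoint).

```python
# Hypothetical sketch of summarising a chat room for a compliance reviewer.
# All names here are illustrative assumptions, not a real vendor API.
from typing import Callable, Dict, List

REVIEW_INSTRUCTIONS = (
    "Summarise this chat room for a compliance reviewer: what is it about, "
    "what are participants discussing, and what follow-up steps are advisable?"
)

def build_review_prompt(messages: List[Dict[str, str]]) -> str:
    """Flatten raw chat messages into a single prompt for an LLM."""
    transcript = "\n".join(f"{m['sender']}: {m['text']}" for m in messages)
    return f"{REVIEW_INSTRUCTIONS}\n\n---\n{transcript}"

def summarise_chat_room(messages: List[Dict[str, str]],
                        llm_call: Callable[[str], str]) -> str:
    """llm_call is any function that sends a prompt to an LLM endpoint
    and returns its text response; injected so the sketch stays offline."""
    return llm_call(build_review_prompt(messages))
```

Keeping the LLM call behind a plain function parameter also makes the pipeline testable without network access, which matters when the surveillance data itself cannot leave a controlled environment.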
We can provide those analytics, and what that means is we can start to look at overall patterns of an individual, of an instrument, of a company. Imagine if you were able to press “analyse Vodafone” and it came back and told you everything that your company has thematically done with Vodafone, from both a trading activity and a communications perspective: who your big counterparties are, where your risk and exposure is, who you’re trading with, and the overall tone and sentiment of how you interact with those counterparties.
Do you think regulators are encouraging the use of AI in compliance?
From a regulatory view, in terms of using this type of new-age technology and the analytics that are available, they’re not vocally saying you should be using it. In some ways they’re saying you should be very cautious of it. You get technologies which in some cases go a little too far into the AI side, but I really respect the innovation that’s coming out around the effectiveness of surveillance using AI. Explainability and operational efficiency are not mutually exclusive. The world is trying to get to a place where we’re starting to understand how these technologies work together. The regulator is definitely watching and definitely sees AI as a core part of innovation. But regulators don’t want financial firms totally dependent on AI. They want a mix of computer and human, and this is where it really comes down to how you bring these things together.
What role have the recent Wall Street WhatsApp fines played in this move towards AI in compliance?
There are a few components. Firms have been swinging from needing to find ways of supervising and monitoring to outright bans on technologies like WhatsApp. Those who have tried an outright ban will lose, not from a compliance perspective, but from a business perspective, because this is how the world interacts. Whether it’s WhatsApp today or whatever it is in a year or two’s time, we’ll always have these technologies that people are using. The reason we use them is that they make our lives easier; they make them more straightforward.
We’ve seen instances where big banks have been fined a collective $2 billion or so because WhatsApp wasn’t being supervised. There is technology available to market participants today to supervise this. Regulators are saying that if firms are not going to be in control of policy, and it’s a prevalent communication mechanism in an organisation, they will suffer the consequences. The regulators are showing it is no longer OK to turn a blind eye or make excuses. This is where these fines come from. The UK regulator is different. They haven’t taken such a significant tone, and that is one of the key differentiators between the US and UK regulators. The US regulators don’t do a lot of pre-enforcement of systematic controls like the UK does, but when you do something wrong they hit you hard.
The key thing is we’re talking about huge amounts of information. Being able to pick up that an email or a Bloomberg chat took place where somebody sent a communication trigger like “ping me on WhatsApp”, and then go off and search the whole plethora of communications, is a way we can use AI to run more in-depth searches. Beyond that, you could then look at the trading patterns and see if something took place an hour or two after that trigger – a set of trades, say, or a market move in a particular direction. What did that person do or know? It could be nothing; it could be everything.
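The trigger-and-trade pattern described above can be sketched as a simple two-step search: flag communications that match a trigger phrase, then pull the same person’s trades within a window after the trigger. This is a minimal illustration under stated assumptions – the trigger phrases, the record field names, and the two-hour window are all hypothetical, and a real surveillance platform would use far richer matching than regular expressions.

```python
# Illustrative sketch of trigger-phrase detection plus trade-window correlation.
# Phrases, field names and the two-hour window are assumptions, not a vendor's logic.
import re
from datetime import datetime, timedelta

# Communication triggers suggesting a move to an unmonitored channel.
TRIGGER_PATTERNS = [re.compile(p, re.IGNORECASE) for p in (
    r"\bping me on whatsapp\b",
    r"\btake (?:this|it) offline\b",
)]

def find_triggers(comms):
    """Return communications whose text matches a known trigger phrase.
    Each comm is a dict with 'sender', 'timestamp' (datetime) and 'text'."""
    return [c for c in comms
            if any(p.search(c["text"]) for p in TRIGGER_PATTERNS)]

def trades_after_trigger(trigger, trades, window=timedelta(hours=2)):
    """Trades by the trigger's sender within `window` after the trigger.
    These are candidates for deeper review, not proof of wrongdoing."""
    start = trigger["timestamp"]
    return [t for t in trades
            if t["trader"] == trigger["sender"]
            and start < t["timestamp"] <= start + window]
```

As the interviewee notes, a hit here could be nothing or everything; the point of the automation is to surface the candidates quickly so a human can make that judgement.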