The growing regulatory risk around AI

Firms can expect to add proof of AI usage to the ever-growing list of governance and reporting items that need to be maintained just in case a regulator or two come knocking, writes Firebrand Research founder and chief executive, Virginie O’Shea.

The Securities and Exchange Commission (SEC) this month fined two investment advisers for “artificial intelligence (AI) washing,” otherwise known as making false and misleading statements about their AI usage. The ongoing popularity of AI within the capital markets hasn’t escaped any regulator’s notice over the last couple of years, and many have kicked off consultations and investigations into AI usage across various functions. However, this is the first time AI washing has resulted in an enforcement action, and it is big news due to its longer-term implications.

The fines weren’t huge – $400,000 in total for both firms combined – but the precedent set here is the important thing. Firms must now expect their technology usage to come under even greater regulatory scrutiny, especially in the investment and trading arenas. In other words, if your firm claims to use AI to support a function, it had better live up to those promises.

The European Securities and Markets Authority (ESMA) has already scanned fund documentation for references to AI, and therefore has data to hand to begin investigations into AI washing should it choose to go down the same path as the SEC. It’s fairly clear from that research that many funds (and other types of industry participant) have made reference to AI usage but may not have implemented the technology sufficiently to merit the label.

One of Firebrand Research’s predictions for this year was that AI would be big news in the regulatory arena, from both an oversight and a usage standpoint, and this news proves the validity of that trend. Numerous governments have prioritised regulations around the ethical use of AI, especially in fields and functions where AI might supplement and, potentially, eventually replace human decision-making. However, the practical use of AI, and the marketing of that usage by firms, hasn’t been much discussed in the wider industry arena until now.

It’s interesting to note that regulators are talking a lot more about their own plans to make use of AI in their supervisory efforts, and that not every regulator is on the same page when it comes to regulating firms’ technology usage. The Financial Conduct Authority’s (FCA) Nikhil Rathi has repeatedly reassured the industry in his last few public speeches that the regulator will not “jump in immediately and seek to regulate in detail”. But that doesn’t rule it out completely in the longer term.

Firms can expect to add proof of AI usage to the ever-growing list of governance and reporting items that need to be maintained just in case a regulator or two comes knocking. Reputational and regulatory risk are also two of the topics I’ll be covering with a panel of front office experts in an upcoming webinar with Wall Street Horizon. We’ll be looking at how firms can manage the plethora of risks coming their way in 2024 and arm themselves with the right data sets to stay ahead. You can register for said webinar here: