Regulator warns AI could create ‘black boxes’ in decision-making

The Financial Stability Board publishes report highlighting potential risks of using artificial intelligence technology.

The use of artificial intelligence (AI) for functions such as trade execution and portfolio management could create issues around accountability and decision-making, according to new research.

The Financial Stability Board’s (FSB) recently published report on the use of AI and machine learning technology highlighted the issue and described the potential holes in decision-making as ‘black boxes’.

“[The] use of AI and machine learning risks creating ‘black boxes’ in decision-making that could create complicated issues, especially during tail events.

“In particular, it may be difficult for human users at financial institutions – and for regulators – to grasp how decisions, such as those for trading and investment, have been formulated,” the report said.

Furthermore, the report stated that the communications used by AI are often incomprehensible to human traders and investment professionals, potentially creating further challenges around monitoring.

The FSB then questioned who would be responsible if decisions based on AI and machine learning caused losses to financial institutions.

“If a specific AI and machine learning application developed by a third party resulted in large losses, is the institution that conducted the trading solely responsible for the losses? Or would regulators or other parties be able to pursue potential claims against the application developer?”

The report warned that the lack of transparency around the technology could complicate regulators' task of deciphering how 'undesired events' occurred in the future.