The locus of responsibility

Regulators want to see tighter controls over algorithms, but debate remains over how these could be effectively implemented and who the checks should apply to.

What do regulators hope to achieve by vetting algos before they hit the market?

The goal is to limit the chances of an algorithm going haywire in financial markets, a risk that became a reality after the 6 May 'flash crash'. Put simply, regulators feel they need to prove they have a grip on electronic trading to ensure market confidence and investor safety.

The root of the flash crash was a sell algorithm initiated by a mutual fund that had just one parameter – a 9% volume participation rate. The lack of any other constraints on the algorithm, combined with nervy market sentiment caused by the Greek debt crisis, lit the fuse on a chain of events that sent markets sharply down before they rebounded just as rapidly.
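The dynamic can be illustrated with a short sketch. The function name, parameters and logic below are hypothetical, not the actual flash-crash algorithm: with a participation rate as its only input, the strategy sells faster whenever market volume rises, while the optional price floor and child-order cap show the kind of brakes the order reportedly lacked.

```python
from typing import Optional

def participation_slice(recent_market_volume: int,
                        remaining_shares: int,
                        participation_rate: float = 0.09,
                        price: Optional[float] = None,
                        price_floor: Optional[float] = None,
                        max_child_order: Optional[int] = None) -> int:
    """Shares to sell in the next interval under a volume-participation cap.

    With only a participation rate, selling accelerates whenever market
    volume spikes -- even when that volume is partly the algorithm's own
    activity echoing through the market. The optional constraints act as
    brakes on that feedback loop.
    """
    qty = min(int(recent_market_volume * participation_rate), remaining_shares)
    if price is not None and price_floor is not None and price < price_floor:
        return 0  # pause selling below the floor instead of chasing prices down
    if max_child_order is not None:
        qty = min(qty, max_child_order)
    return qty
```

With no floor or size cap, a spike in volume simply produces a bigger slice; adding either constraint decouples the selling rate from a falling, high-volume market.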

Regulators in the US, Europe and Asia are now looking at various ways to ensure such incidents do not recur, either by analysing the algos themselves, or by introducing tighter controls before they hit the market.

There are, of course, already obligations on authorised trading firms to have adequate systems and processes in place and order entry controls, based on the well-established market principle that the broker should be responsible for its client's orders, regardless of the method or technology used. But it seems regulators are keen to update rules to explicitly demonstrate that they are keeping up with the evolution of trading styles and market structure.

Who would be the main focus of the regulators' attentions: brokers that supply the algorithms, or buy-side users?

While a buy-side trader initiated the flash crash, much of the regulators' attention is still centred on brokers and suppliers of algorithms.

One regulation pushed through after the flash crash outlawed naked sponsored access in the US. The ban, which was enacted by the Securities and Exchange Commission (SEC) on 3 November 2010, prevents buy-side firms from using a broker's trading ID to execute orders directly on a market without pre-trade risk controls.
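A minimal sketch of the kind of pre-trade controls such a rule contemplates is below; the class names, thresholds and specific checks are illustrative assumptions, not the SEC's specification.

```python
from dataclasses import dataclass

@dataclass
class RiskLimits:
    max_order_value: float     # per-order notional cap
    max_gross_exposure: float  # pre-set credit/capital threshold
    price_band_pct: float      # reject orders too far from the last trade

def pre_trade_check(qty: int, price: float, last_trade_price: float,
                    current_exposure: float, limits: RiskLimits):
    """Return (accepted, reason) for an order before it reaches the market.

    Illustrates checks a broker might apply to client order flow:
    erroneous-order filtering and credit/capital thresholds.
    """
    notional = qty * price
    if notional > limits.max_order_value:
        return False, "order exceeds per-order notional cap"
    if current_exposure + notional > limits.max_gross_exposure:
        return False, "order would breach credit/capital threshold"
    if abs(price - last_trade_price) / last_trade_price > limits.price_band_pct:
        return False, "price outside erroneous-order band"
    return True, "accepted"
```

Under naked sponsored access, orders bypassed checks like these entirely; the ban requires that they run in the broker's systems before any order carrying its trading ID reaches the venue.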

According to the SEC, the ban will help “prevent erroneous orders, ensure compliance with regulatory requirements, and enforce pre-set credit or capital thresholds”.

In Europe, trading venues already prohibit naked sponsored access, but under the European Commission's MiFID II proposals, firms engaging in automated trading may have to “notify their competent authority of the computer algorithm(s) they employ, including an explanation of its design, purpose and functioning.”

Asian markets are also starting to take action, with Australia proposing in a recent consultation paper that sell-side firms test algorithms before they are deployed and document the logic used.

In India, brokers must explain the design and function of their algorithms to exchanges and obtain approval before the algorithms can be used. Brokers have expressed concern at the length of time this process takes, noting that by the time approval is received, algorithmic strategies can be out of date.

In a recent report by consultancy Celent titled 'Electronic trading in India', senior consultant Anshuman Jaswal noted that the Indian regulator, the Securities and Exchange Board of India (SEBI), should invest in technology that would allow it to rationalise rules relating to electronic trading.

“Instead of having to study every algorithm over a period of weeks, SEBI would be able to run quick tests to check for the possible harmful effects of the algorithm in the capital markets and be able to approve, modify, or deny them accordingly,” he wrote.

But do regulators even have the capabilities to monitor algorithms?

Not really, and by all accounts they do not want the job either. At this year's TradeTech conference in London, Tim Rowe, manager, trading platforms and settlement policy, markets division at UK regulator the Financial Services Authority, told delegates on a panel, “There is no way that any regulator will be able to sit down and look at lines of code of an algorithm and work out where the mistakes are.

“Problems with algos generally stem from an error rather than someone doing something malicious,” he added.

Furthermore, regulators would face a tough test competing with banks and proprietary trading firms to attract the talent required to assess algorithms.

Passing liability for algo monitoring to regulators could also create a “moral hazard”, Rowe argued, by leading suppliers of algorithms to believe they are absolved of their own responsibilities relating to algo risk.
