Getting algos in order

The sheer number of different algorithms in circulation makes selection, performance measurement, and broker feedback difficult. So how is the buy-side choosing the right algos? asks Sarfraz Thind.

Equity execution can be a tricky business. Countering predatory traders has been a concern for the buy-side ever since the market became electronic some 15 years ago. Participants have devised many ways of stopping the threat, but the problem of information leakage and trade disruption has not disappeared. With MiFID II putting greater emphasis on best execution, algorithms need to be in better shape than ever before.

So what can the industry do to improve this side of the business? Buy-siders have certainly ramped up their algorithmic sophistication since rudimentary execution algorithms came into the market in the early 2000s. And there is no shortage of them out there—indeed, according to estimates, there are some 1,600 algorithms currently available to buy-siders globally, incorporating anything from volume prediction analytics and market impact models to liquidity heat maps and venue analytics. The sheer number of these different algorithms makes selection, performance measurement, and broker feedback difficult. And this has led to some dissatisfaction. In a report published by Greenwich Associates in January, just 7% of US buy-side institutions said they were happy with standard broker algorithms. Furthermore, participants say that the variation in performance between different algorithms remains small.

At present the industry continues to rely heavily on traditional algorithms to handle its liquid equity execution. The likes of VWAP (volume-weighted average price) or participation algorithms are particularly prevalent and have, generally speaking, proven adequate for handling large orders in liquid equities. Michel Kurek, head of quantitative cash execution at Société Générale, says that VWAP accounts for one-third of all global algorithm orders currently undertaken by the bank, with participation algorithms second at around 10% of volume; both belong to the roughly 10 standard algorithms the bank runs. These perform standard execution functions well but may still need to be managed according to conditions.
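
To make the mechanics concrete, here is a minimal sketch of VWAP-style slicing: a parent order is split across time buckets in proportion to a historical intraday volume curve. The profile and figures below are hypothetical, not Société Générale's.

```python
# Minimal VWAP-style slicing sketch: split a parent order across
# time buckets in proportion to a (hypothetical) historical volume curve.

def vwap_schedule(parent_qty, volume_profile):
    """Return the child-order quantity for each time bucket.

    volume_profile: expected share of daily volume per bucket (sums to 1).
    """
    total = sum(volume_profile)
    schedule = [round(parent_qty * v / total) for v in volume_profile]
    # Push any rounding residue into the final bucket so sizes sum exactly.
    schedule[-1] += parent_qty - sum(schedule)
    return schedule

# Hypothetical U-shaped intraday profile: heavy at the open and close.
profile = [0.12, 0.08, 0.06, 0.05, 0.05, 0.05, 0.06, 0.08, 0.10, 0.35]
print(vwap_schedule(100_000, profile))
```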

“You need to randomise your algos—you don’t want to give participants the score of your music,” says Kurek. “If you do some counterparty will be able to game your note.”
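
A hedged illustration of the randomisation Kurek describes: perturb a deterministic schedule's slice sizes and timings so the footprint no longer repeats a fixed pattern. The jitter parameters below are illustrative assumptions, not any bank's settings.

```python
import random

def randomise_slices(schedule, size_jitter=0.2, max_delay_s=30):
    """Perturb a deterministic child-order schedule.

    Each slice size is scaled by a random factor and given a random
    start delay, so the footprint no longer repeats a fixed 'score'.
    """
    randomised = []
    for qty in schedule:
        scaled = max(1, round(qty * random.uniform(1 - size_jitter, 1 + size_jitter)))
        delay = random.uniform(0, max_delay_s)  # seconds into the bucket
        randomised.append((scaled, delay))
    return randomised
```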

Forensic analysis

Algorithms are likely to come under greater pressure when tasked with executing trades in less liquid names.

“We rigorously benchmark different algos and analyse them quite forensically—we don’t have any complaints from the ones we use,” says Michael Horan, head of trading services at BNY Mellon Pershing. “You get potential dumb algos more on the less liquid stocks—but these will be difficult for any algo. It is hard for any algo to adjust where liquidity follows an episodic or erratic pattern.”

Algorithm providers have been working on different techniques to frustrate predators. One is algorithmic switching. So-called intelligent switching engines have been used for a decade now to move between algorithms and choose the best one for prevailing market conditions. The next generation of switching algorithms is being developed to incorporate artificial intelligence, harnessing the vast stores of data in historical trading logs for better execution.

Portware’s algo switching engine (ASE) uses machine learning to determine which of the dozens of trading algorithms that traders have at their fingertips is likely to have the best performance for a particular order under particular market conditions.

“Our ASEs are based on an effort to understand algo trading efficiency as a function of the order flow,” says Henri Waelbroeck, director of research at trading platform provider Portware. “Traders will be looking for signs of algo activity to see if there is a persistent buyer or seller out there. Algo switching engines try and frustrate HFTs by switching between axes of order flow, in effect disappearing from where they might have become detectable.”
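
Conceptually, a switching engine of this kind scores each candidate algorithm against the live order and market conditions using a model fitted on historical execution logs, then routes to the best-scoring one. The sketch below is a stand-in under assumptions, not Portware's actual engine; the feature set, algo names and weights are invented for illustration.

```python
# Sketch of an algo-switching decision, assuming a model trained offline
# on historical execution logs (names, features and weights are illustrative).

from dataclasses import dataclass

@dataclass
class OrderContext:
    spread_bps: float   # current quoted spread
    pct_adv: float      # order size as a fraction of average daily volume
    volatility: float   # short-horizon realised volatility
    momentum: float     # signed short-term price drift

def predicted_slippage_bps(algo, ctx):
    """Stand-in for a fitted ML model: predicted cost of running `algo`
    on this order under current conditions (lower is better)."""
    weights = {
        "vwap":           (0.3, 40.0, 5.0, 2.0),
        "participation":  (0.5, 25.0, 8.0, 1.0),
        "liquidity_seek": (0.8, 10.0, 3.0, 4.0),
    }[algo]
    w_spread, w_size, w_vol, w_mom = weights
    return (w_spread * ctx.spread_bps + w_size * ctx.pct_adv
            + w_vol * ctx.volatility + w_mom * abs(ctx.momentum))

def switch(ctx, algos=("vwap", "participation", "liquidity_seek")):
    # Re-evaluated periodically: the engine can move the order between
    # algos as conditions change, which also varies its footprint.
    return min(algos, key=lambda a: predicted_slippage_bps(a, ctx))

print(switch(OrderContext(spread_bps=4.0, pct_adv=0.08, volatility=1.2, momentum=0.3)))
```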

New algo regs

The company has spent the last year developing a deep learning architecture to drive the assignment of new order strategies. The technology, which is still in beta testing, will replace the previous version of the machine learning platform, offering more models and focusing on interpreting data “through the lens of prediction,” says Waelbroeck.

“Rather than collecting data and making a prediction when asked, the Portware brain is designed to predict everything all the time,” he says.

Others are doing similar things:

“You can infer other portfolio managers might be doing similar size trades and get information to help others,” says one bank participant who asked to remain anonymous. “We are seeing improved performance equivalent to 20bps of expected returns in situations where you need to switch algos using machine learning.”

The focus on algorithmic execution comes in the glare of MiFID II, which is imposing significant new regulatory edicts on the market. Regulatory Technical Standard 6 of MiFID II (RTS 6) imposes governance and testing obligations on all investment firms running algorithms. While this is likely to increase the time and cost burden on managers, who must test each new broker algorithm suite, the general feeling is that it should help to manage the risk associated with algorithmic execution.

“RTS 6 is trying to prevent rogue algos, which have led to events like the flash crashes,” says Horan. “Regulators are trying to bring in the proper regulations to register algos and get in proper testing, which is good. It will also help clients with algo risk management.”
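
As a hedged illustration of the kind of control such testing regimes are meant to verify (not the text of RTS 6 itself), a pre-trade check might reject child orders that breach static limits; the thresholds and field names below are invented.

```python
# Illustrative pre-trade risk control of the sort algo testing is meant
# to exercise: reject child orders breaching static limits.
# Thresholds are hypothetical assumptions, not regulatory values.

MAX_ORDER_QTY = 50_000      # per-order size cap
MAX_NOTIONAL = 1_000_000.0  # per-order value cap
PRICE_BAND = 0.05           # max 5% deviation from a reference price

def pre_trade_check(qty, price, ref_price):
    """Return (ok, reason); a failing check should pull the algo's orders."""
    if qty > MAX_ORDER_QTY:
        return False, "quantity limit breached"
    if qty * price > MAX_NOTIONAL:
        return False, "notional limit breached"
    if abs(price - ref_price) / ref_price > PRICE_BAND:
        return False, "price outside permitted band"
    return True, "ok"

assert pre_trade_check(1_000, 100.0, 100.2)[0]
assert not pre_trade_check(80_000, 100.0, 100.0)[0]
```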

The positive aspects of MiFID II for algorithmic development are countered by continued liquidity fragmentation and greater venue complexity in Europe, as new venues enter a market already crowded with exchanges, alternative trading systems (ATSs) and dark pools. Navigating this is a key consideration for future algorithmic development. Kurek says that clients have been asking for algorithms to be perfected for new venue trading.

Unstoppable development

“The microstructure in Europe is changing,” says Kurek. “The [MiFID II] rules will lead to diminished dark pool share, to be replaced by the large-in-scale waiver, although the time to declare trades will be shorter under MiFID II. While the large-in-scale waiver will improve the way we manage information leakage and gaming, the shorter delay in trade reporting is going the other way.”

But while there appears to be a definite move to develop new technology in the execution world, with providers increasing the range of algorithms on offer, there are potential negatives. BNY Mellon’s Pershing uses six or seven different algorithm providers right now and has not jettisoned any of them because of weak algorithms. On the contrary, says Horan, he is happy with the algorithms provided to the company and, indeed, sees a danger of things becoming unnecessarily complicated these days.

“There is only so much performance you can extract,” says Horan. “There does seem to be an arms race on in algos. Some firms will give you an algo rack with 25 algos on it, which is way too confusing.”

Some have, indeed, criticised intelligent switching techniques, which can potentially deviate from a broker’s original execution objective as well as make it harder to measure algorithms against one another.

With new market complexities fuelling the arms race further, algorithmic development is unlikely to stop any time soon. It is a fast-developing environment, and one that demands every available measure to stay on top.
