The mirage of error-free trading

The rise in algo errors, which have proved they can cripple companies in minutes or even seconds, joins fraudulent trading and rogue traders as a central driver in the push for stricter reporting and compliance within trading firms.

With high-profile trading errors forcing trading desks to rethink algo strategies, how will firms know when an algo is ready?

While fat-finger errors may be less frequent for sales traders than a decade ago, thanks to increased monitoring, the rise in algo errors has dramatically raised the stakes for traders dealing in large volumes.

The Knight Capital trading glitch in August, when the firm incurred a US$440 million loss from a rogue algo in less than an hour, shocked the market. The episode laid bare the dangers of increasingly complex algorithmic trading strategies, which seek to automate processes previously managed by people, and ushered in a new era of trading worries. Firms can never be completely assured their algo will perform as expected, as testing for every market eventuality is impossible.

A well-engineered, back-tested algo must be released to the market alongside similarly robust pre-trade analytics, to ensure adequate checks are in place before an order reaches the exchange.

While these monitoring systems can be warrens of complexity in themselves, the general premise is simple: to ensure an algo is working within its limits. The checks need not be overly complicated, and the data these systems feed on should come from the order itself, set against simple market data.
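That premise can be made concrete in a few lines of code. Below is a minimal sketch in Python, assuming hypothetical Order and RiskLimits structures and thresholds chosen purely for illustration: each check reads only the order's own fields against a single piece of market data, the last traded price.

```python
# A minimal sketch of pre-trade checks of the kind described above.
# All names (Order, RiskLimits) and thresholds are illustrative
# assumptions, not any particular vendor's API.
from dataclasses import dataclass

@dataclass
class Order:
    symbol: str
    side: str          # "buy" or "sell"
    quantity: int
    limit_price: float

@dataclass
class RiskLimits:
    max_quantity: int    # largest single order allowed through
    max_notional: float  # ceiling on quantity * price
    price_collar: float  # max deviation from last trade, e.g. 0.05 = 5%

def pre_trade_check(order: Order, limits: RiskLimits, last_price: float) -> list[str]:
    """Return a list of breached checks; an empty list means the order may pass."""
    breaches = []
    if order.quantity > limits.max_quantity:
        breaches.append("quantity exceeds per-order limit")
    if order.quantity * order.limit_price > limits.max_notional:
        breaches.append("notional exceeds per-order limit")
    # Fat-finger guard: reject prices far from the last traded price.
    if abs(order.limit_price - last_price) / last_price > limits.price_collar:
        breaches.append("price outside collar around last trade")
    return breaches

# Usage: block the order before it reaches the exchange if any check fails.
limits = RiskLimits(max_quantity=50_000, max_notional=1_000_000.0, price_collar=0.05)
order = Order(symbol="VOD.L", side="buy", quantity=10_000, limit_price=210.0)
if breaches := pre_trade_check(order, limits, last_price=160.0):
    print("REJECTED:", "; ".join(breaches))
```

The point of the sketch is the shape, not the numbers: every check is a cheap comparison that can sit in the order path without adding meaningful latency.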

But how susceptible are market participants to rogue trading? 

While high-profile algo glitches may be on the rise, the celebrity and number of rogue traders appear not to have diminished in line with the escalation of trade monitoring.

This year, the trial of former UBS trader Kweku Adoboli, who is accused of losing US$2.3 billion, follows similarly high-profile traders who have gambled with their firm’s money. It appears the exploits of Nick Leeson – whose bets on behalf of Barings forced the bank into a takeover under the weight of £827 million in losses – and Jérôme Kerviel, who allegedly cost Société Générale €4.9 billion, have not had a sufficient effect. Quite simply, rogue trading still occurs, and on larger scales than ever before.

Rogue trades often occur in sequences of events, forming patterns which real-time processing can be engineered to pick up. Their time horizons can differ from those of algo errors, as a rogue trader does not operate in milliseconds. Mapping a rogue trader’s events in time-sequence order gives firms a decent tool for monitoring possible rogue activity. The fundamental problems in detecting both are not dissimilar, and involve establishing patterns in event-based, streaming market data.
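As a rough illustration of what mapping events in time-sequence order might look like, the sketch below scans a sorted event stream for one hypothetical pattern – a trade booked and later cancelled – and raises an alert when a trader accumulates several such pairs inside a rolling window. The event schema, the pattern and the thresholds are assumptions for the example, not a definition of rogue behaviour.

```python
# A minimal sketch of time-sequence pattern detection over a trade event
# stream. The pattern (book-then-cancel) and thresholds are illustrative.
from collections import defaultdict, deque
from datetime import timedelta

WINDOW = timedelta(days=30)   # look-back window per trader
THRESHOLD = 3                 # suspicious count of book-then-cancel pairs

def detect_book_then_cancel(events):
    """events: iterable of (timestamp, trader, event_type, trade_id),
    assumed sorted by timestamp. Yields (trader, timestamp) alerts."""
    open_trades = {}                 # trade_id -> (trader, booked_at)
    suspicious = defaultdict(deque)  # trader -> timestamps of recent hits
    for ts, trader, event_type, trade_id in events:
        if event_type == "BOOK":
            open_trades[trade_id] = (trader, ts)
        elif event_type == "CANCEL" and trade_id in open_trades:
            _, booked_at = open_trades.pop(trade_id)
            hits = suspicious[trader]
            hits.append(ts)
            # Drop hits that have aged out of the rolling window.
            while hits and ts - hits[0] > WINDOW:
                hits.popleft()
            if len(hits) >= THRESHOLD:
                yield trader, ts
```

Because the detector holds only per-trader state, the same structure scales from a desk to a firm; the hard part in practice is choosing which sequences are worth encoding.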

Do the characteristics of fraud alter trade monitoring? 

The true effect of fraud in electronic trading can be harder to gauge than either of the above issues. It is something organisations keep close to their chests, whereas errors suffered by the likes of Knight Capital hit the news as events unfold.

Some types of fraud manifest themselves over long periods of time and always relate directly to human beings – whether internal to the organisation or external. The solutions for detecting such issues are much aligned with those for rogue traders: the automated crunching of vast amounts of high-velocity data.
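One common concrete form of that crunching, for patterns that unfold over months rather than milliseconds, is baseline-and-deviation monitoring: aggregate activity per person per day and flag days that sit far outside that person’s own history. The sketch below assumes gross traded notional as the metric, with arbitrary thresholds; a real system would track many such metrics side by side.

```python
# A minimal sketch of long-horizon anomaly monitoring: per-trader daily
# aggregates checked against that trader's own recent baseline.
# The metric (gross notional) and thresholds are illustrative assumptions.
import statistics
from collections import defaultdict, deque

HISTORY = 60    # days of baseline kept per trader
Z_LIMIT = 4.0   # flag days this many standard deviations above baseline

baselines = defaultdict(lambda: deque(maxlen=HISTORY))

def daily_review(trader: str, gross_notional: float) -> bool:
    """Call once per trader per day; returns True if the day looks anomalous."""
    history = baselines[trader]
    anomalous = False
    if len(history) >= 20:  # require some history before judging
        mean = statistics.fmean(history)
        stdev = statistics.pstdev(history) or 1.0  # avoid divide-by-zero
        anomalous = (gross_notional - mean) / stdev > Z_LIMIT
    history.append(gross_notional)
    return anomalous
```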

Responses to this month’s poll – which seeks to establish the principal driver behind the trend towards improved quality and timeliness of risk and compliance monitoring – show a large group of respondents voting for rogue, fraudulent or erroneous trading as the main force behind this movement. The weight of regulatory change sweeping the US and Europe has also received strong backing as the central reason for the push.

Cast your vote here.
