Did anyone else get a funny sense of déjà vu when the New York Stock Exchange (NYSE) suddenly cancelled trades after “technical glitches” with its opening auction triggered wild price swings in the likes of McDonald’s and ExxonMobil? From Knight Capital in 2012 to Euronext in 2021, the past decade has seen “glitches” of every kind, and in every single case investors have been able to respond only on the basis of what the exchanges were willing to communicate.
Last month’s NYSE incident was no different – but what can investors do with such limited insight? Far from being a first, the incident has been put down to “human mistake”. Reports claim that the system connecting to the Chicago-based backup data centre, which should be manually turned on and off when the market opens and closes, was left running overnight. As a consequence, at 9:30am the next morning the system skipped the day’s opening auctions that set prices, causing a brief meltdown.
It is clear that glitches are a recurring nightmare for exchanges – creating challenges with regulators and members, and ultimately eroding overall trust in the market. When an NYSE-type incident occurs, there is the inevitable knee-jerk reaction from global regulators to better monitor future situations, to the extent that it affects trade reporting obligations and issues of wider market integrity. Then comes the “reviewing existing processes” period to understand in greater detail what went wrong, before measures are enforced to reduce the risk of similar events happening in the future. The issue is that rules are already in place, alongside a willingness on the part of the exchange community to abide by them, so it is hard to see what the subsequent “review” processes will achieve. The truth is that no matter how many reviews take place, they are not preventing technical glitches from occurring. The question is, why do they keep happening? After all, these glitches should, in theory, be occurring far less frequently given the significant industry-wide technological advancements of recent years.
The answer may be that humans cannot keep up with the rate at which technology is advancing. The infrastructure underpinning the global capital markets is so complex that it is becoming unmanageable for the human brain: when one component fails, or one piece of information is wrong, it can affect hundreds, thousands or millions of other pieces of equipment. As we have seen in the case of the NYSE, it is not as easy as flipping a switch to get things up and operational again. Our best hope may be that computers eventually become smart enough to maintain themselves.
The focus must be on upgrading existing technology to be better prepared for the next time something like this happens. Technology to assist in these circumstances does exist, and it should be deployed alongside market practices driven by rules defined by the regulator. While we must accept that technology will break from time to time, it can be upgraded and improved to reduce the number of exchange outages. Backups, automated alerts and other safeguards are standard across other sectors to keep system administrators apprised. So why is this not the case for exchanges with an equity market capitalisation upwards of 20 trillion dollars?