Prints, orders, prices, axes, levels, direction, colour… market data is the most valuable asset in the world.
Participants directing trillions of dollars in trades use data to form and hone their ideas, and then to inform how those positions are expressed in the market. Those expressions are crafted through a subtle mix of art and science.
Good traders have a feel for where to be and how to play the trade. Great traders have additional conviction, supported by data. But the business model for data exchange in capital markets is broken.
When traders gather information and execute trades, data is created, shared, and consumed. When this happens on digital exchanges, ownership (and more importantly, economic control) passes to the exchange operator. Not only does the sharing party suffer the exposure of potentially sensitive information, but the data is immediately commoditised, and all commercial benefit accrues to the exchange operator.
Traders see only downsides, so they withhold as much as possible or turn to the phones, a process that is slow and still leaks data. The self-fulfilling cycle continues: less data exposed means less data on the exchange, which means less information available to support the conviction required to get trades done. The result is a more fragile, less liquid market.
Until now, technology has not been available to facilitate pre-trade and conditional data sharing in a way that avoids unnecessarily compromising sensitive information or its economic control. The industry needs to find a way to incentivise data transparency and more data-backed trading.
An architectural problem
Market data today is organised, managed, and distributed through a central hub architecture. The central database collects orders, organises that information, and distributes it according to the rules of the market. All users get the same version of the truth from a central order book. This hub architecture, although effective at concentrating liquidity in one place, is poor at restricting data dissemination and at protecting economic control of the data. Once the data has been seen by the market, its value decreases, so the exchange operator becomes its greatest beneficiary.
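To make the hub concrete, here is a minimal sketch of a central limit order book in Python. The names (CentralOrderBook, submit, snapshot) are hypothetical, and price-time priority is assumed as the ordering rule; the point is simply that every order flows into one operator-controlled structure, which then decides what the market sees.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Order:
    sort_key: tuple                     # bids: (-price, time); asks: (price, time)
    trader: str = field(compare=False)
    side: str = field(compare=False)    # "buy" or "sell"
    price: float = field(compare=False)
    size: int = field(compare=False)

class CentralOrderBook:
    """One shared book: submitting an order surrenders it to the operator,
    who controls both matching priority and how the data is disseminated."""

    def __init__(self):
        self.bids: list[Order] = []
        self.asks: list[Order] = []
        self.clock = 0                  # arrival sequence for time priority

    def submit(self, trader: str, side: str, price: float, size: int) -> None:
        self.clock += 1
        key = (-price, self.clock) if side == "buy" else (price, self.clock)
        book = self.bids if side == "buy" else self.asks
        heapq.heappush(book, Order(key, trader, side, price, size))

    def snapshot(self) -> dict:
        # The operator publishes one version of the truth to everyone;
        # the producer of an order has no say in who sees it.
        return {"bids": [(o.price, o.size) for o in sorted(self.bids)],
                "asks": [(o.price, o.size) for o in sorted(self.asks)]}
```

Note what is absent: nothing in submit lets the trader restrict dissemination or retain economic control once the order is in the book. Commoditisation is built into the architecture.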
A central database architecture also restricts the expression of trades to a few blunt-force strategies. You have more control over how your LinkedIn profile appears to the world than over how your order is shown to the market.
Who owns the right version of the truth?
A central order book is much like a shared Excel spreadsheet: everyone sees the same table because it is distributed from a single database. The database manager, a legal entity, creates a natural point of risk and economic control. In the worst-case scenario, a breach or mistake could divulge all market data. In the best-case scenario (which is what we have on most days), the central database operator has the opportunity to benefit economically from the information flow.
Meanwhile, the producers of market data pay the same fees as its consumers, and all data is priced as a uniform commodity even though it is valuable to different people in different ways in different contexts. That variation is becoming easier to measure with the rise of ever more sophisticated, computer-based trading strategies across market participants.
And users are under more pressure not just to get trades completed to fulfil their positions, but to prove they are getting each trade done at the best price. MiFID II obliges participants both to report accurately on transactions and to demonstrate, under Best Execution rules, that they obtain the best possible result for clients when executing orders.
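As a toy illustration of that proof burden (not a statement of the MiFID II rules themselves, which weigh price, costs, speed, and likelihood of execution among other factors), a post-trade check might compare the fill against the best contemporaneous quote. The function name and fields below are hypothetical.

```python
def best_execution_check(side: str, fill_price: float,
                         contemporaneous_quotes: list[float]) -> dict:
    """Hypothetical post-trade check: was the client's fill at least as
    good as the best quote observable at the time of execution?"""
    best = min(contemporaneous_quotes) if side == "buy" else max(contemporaneous_quotes)
    # Positive slippage means the client did worse than the best quote.
    slippage = fill_price - best if side == "buy" else best - fill_price
    return {"best_quote": best, "slippage": slippage, "passed": slippage <= 0}

# Example: a buy filled at 100.02 when 100.01 was available fails the check.
print(best_execution_check("buy", 100.02, [100.05, 100.01, 100.03]))
```

Running such checks, of course, requires exactly the market data whose economics this article is about.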
A more logical market for data
We need a new evolution in the market for data, one that aligns incentives and operates more effectively and logically. To get there, we need three changes.
First, ownership and control of data must be returned to those who create it. We think it's quite reasonable that those who create and own data should see the rewards. The underlying technology structure needs to change for that to happen.
Second, the economics of the data marketplace must change so that consumers and producers of data operate under a more equitable model. Data is not a commodity: its value differs with the market and the trading situation. Right now, firms derive value from their data only by trading on it, using increasingly precious balance sheet to do so. Data should generate returns that correspond to its value in its context, not to its capacity to occupy balance sheet. Quite simply, we foresee a model where data becomes an asset to its producers.
The final necessity for a more logical use of data is a well-functioning venue where participants can trade with conviction, addressing the core liquidity problem in certain asset classes. Achieving that requires the pre-trade and conditional data sharing described above: a way to signal and qualify interest without unnecessarily exposing material information to a central hub or to other market participants.
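To sketch what conditional sharing could mean in practice, consider a confidential matching step: orders stay with their owners, and terms are disclosed only if a trade condition is met. The following is a toy Python illustration under our own assumptions (midpoint pricing, and a plain function standing in for secure hardware or cryptographic techniques), not a description of any venue's actual protocol.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PrivateOrder:
    owner: str
    side: str      # "buy" or "sell"
    limit: float   # reservation price, never published to a central book
    size: int

def conditional_match(a: PrivateOrder, b: PrivateOrder) -> Optional[dict]:
    """Disclose information only when the trade condition is met.
    In a real deployment this comparison would run under encryption or
    inside a secure enclave, so no central operator sees unmatched limits;
    here an ordinary function stands in for that confidential step."""
    buy, sell = (a, b) if a.side == "buy" else (b, a)
    if buy.side != "buy" or sell.side != "sell":
        return None
    if buy.limit < sell.limit:
        return None  # no cross: neither side learns the other's price
    return {"buyer": buy.owner, "seller": sell.owner,
            "size": min(buy.size, sell.size),
            "price": (buy.limit + sell.limit) / 2}  # assumed midpoint pricing
```

The contract is the point: a failed condition reveals nothing, and a successful one reveals only the fill terms, so producers keep economic control of everything that did not trade.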
Until these three changes are in place, markets in many asset classes, in particular the corporate bond markets, will continue to experience fragility and serious liquidity challenges.
David Nicol
CEO, LedgerEdge