Artificial intelligence has dominated headlines recently, highlighting the best and worst of its capabilities and suggesting there is still work to be done and improvements to be made.
News of Microsoft’s Tay, an artificially intelligent bot created to mimic the personality of a 19-year-old woman, quickly turned sour as it appeared to transform into a ‘bitter racist’ on the social media website Twitter.
When Microsoft was asked to confirm whether the bot had been shut down, it responded: “The AI chatbot Tay is a machine learning project, designed for human engagement.
“As it learns, some of its responses are inappropriate and indicative of the types of interactions some people are having with it. We’re making some adjustments to Tay.”
A more successful venture into AI came from Google DeepMind’s AlphaGo, which defeated Go world champion Lee Se-dol twice.
Se-dol said after the second defeat: “I am quite speechless… I feel like AlphaGo played a nearly perfect game.”
Both scenarios show that machines are rapidly becoming more intelligent and, in some cases, outsmarting their human creators.
AI has already been deployed in parts of the financial sector, and investment from major players like Goldman Sachs and JP Morgan is pouring into the technology as each races to get ahead of the other.
AI machines can evolve, adapt and search for patterns, so asset managers can use them to enhance their investment and trading strategies.
Algorithmic trading, for example, is the most widely used form of AI: it uses complex mathematical models to make transaction decisions on behalf of humans.
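To make that idea concrete, one of the simplest mathematical models of this kind is a moving-average crossover rule, which issues buy and sell decisions from price data alone. The sketch below is an illustrative toy, not any firm’s production strategy; the function names and window sizes are assumptions for the example.

```python
def moving_average(prices, window):
    """Average of the last `window` prices at each point (None until enough data)."""
    return [
        sum(prices[i - window + 1:i + 1]) / window if i >= window - 1 else None
        for i in range(len(prices))
    ]

def crossover_signals(prices, short=3, long=5):
    """Emit 'BUY' when the short-term average crosses above the long-term
    average, 'SELL' when it crosses below, and 'HOLD' otherwise."""
    short_ma = moving_average(prices, short)
    long_ma = moving_average(prices, long)
    signals = []
    for i in range(1, len(prices)):
        window = (short_ma[i - 1], long_ma[i - 1], short_ma[i], long_ma[i])
        if None in window:
            signals.append("HOLD")  # not enough history yet
        elif short_ma[i - 1] <= long_ma[i - 1] and short_ma[i] > long_ma[i]:
            signals.append("BUY")   # short average crossed above the long one
        elif short_ma[i - 1] >= long_ma[i - 1] and short_ma[i] < long_ma[i]:
            signals.append("SELL")  # short average crossed below the long one
        else:
            signals.append("HOLD")
    return signals

# A flat price series followed by a rally triggers a single buy signal.
signals = crossover_signals([10, 10, 10, 10, 10, 11, 12, 13, 14, 15])
```

Real algorithmic trading systems layer far more sophisticated models, risk limits and execution logic on top, but the core pattern — a model transforming market data into transaction decisions without human intervention — is the same.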
Microsoft’s Tay, however, has proved there is still work to be done and that AI has the potential to go ‘AWOL’. So how do we control the use of AI? How do we ensure it cannot be manipulated, as Tay was?
With AI at the forefront of discussions, questions of its uses and how it will be regulated in the financial world have been raised.
“There are likely to be layers of regulation around artificial intelligence, some sandboxed for low risk and low value trading,” said Jet Lali, head of digital at consultancy Alpha FMC.
A regulatory sandbox allows FinTech firms to test new products without “incurring the normal regulatory consequences”, according to the FCA.
The FCA says its sandbox will provide better services for users, further innovation and an increased range of products and services to market.
The scheme is part of the FCA’s plans to expand Project Innovate, with proposals on how it can work with the government and the industry “to further support businesses.”
Industry participants agree that AI will be regulated, and human “oversight” will be imperative to being compliant.
Chief executive officer at financial services firm, OTAS Technologies, Tom Doris, told The Trade: “What we see emerging is sophisticated behaviours but with human oversight and the ability to override the machine at all times.”
Alpha FMC’s Lali echoed Doris’ thoughts: “Some will require human co-pilots to sign off, where more scrutiny or risk is required. Organisations will still need to indemnify retail customers for losses due to bad advice (rather than bad decision making).”
Aside from being regulated itself, AI could help regulators implement and enforce rules across the financial market, as Josh Sutton, global head of AI practice at Sapient, stressed.
Sutton said: "From a compliance and monitoring standpoint, AI is a game-changer. It can be deployed for policing markets and ensuring illegal activity is flagged quickly to regulators – creating a more level playing field."
Doris at OTAS Technologies echoed Sutton’s view and explained that AI could be used to ensure a safer and less volatile marketplace.
He said: “AI systems can help with exceptional market conditions by automatically recognising when the market isn’t operating normally and alerting traders proactively and removing orders from the market while the traders assess the situation.”
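A minimal version of the kind of automatic check Doris describes is a statistical test that flags when the latest market move deviates sharply from recent behaviour. The sketch below is an assumption-laden illustration, not OTAS’s actual system; the function name and the three-standard-deviation threshold are hypothetical choices for the example.

```python
import statistics

def check_market(recent_returns, threshold=3.0):
    """Flag abnormal conditions when the latest return deviates from the
    recent mean by more than `threshold` standard deviations.

    In a real system an 'ALERT' would notify traders and pull resting
    orders from the market while the situation is assessed.
    """
    *history, latest = recent_returns       # all but the last value form the baseline
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        # No variation in the baseline: any move at all is abnormal.
        return "ALERT" if latest != mean else "NORMAL"
    return "ALERT" if abs(latest - mean) > threshold * stdev else "NORMAL"

# Small oscillating returns followed by a sudden 5% move trips the alert.
status = check_market([0.1, -0.1, 0.05, -0.05, 0.1, -0.1, 5.0])
```

Production surveillance systems use far richer models (regime detection, order-book features, learned baselines), but the shape of the logic — compare live behaviour to a statistical norm and escalate to a human on deviation — matches what Doris describes.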
Regulating AI is, however, a complex task as Henri Waelbroeck, director of research at EMS provider Portware, told The Trade.
Waelbroeck agreed with Doris and Sutton, and explained it is useful for monitoring markets: “Regulating AI itself is really an unrealistic concept.
“It may, however, have a place in reducing the risk of manipulation of markets by not opening doors to practices which are misleading or incorrectly price stocks.”
The complexity of AI leads some to suggest that regulators need to learn more about its processes before setting rules on its uses.
Doris explained the concept of AI can often be confused with popular culture, and regulators need to be fully aware of its capabilities.
He said: “Regulators should know more about the route of developing autonomous entities, with clear specifications that describe the behaviour of the machine.
“People can be confused about the capabilities of AI on both sides: people think AI can do things it can’t, and can’t do things it can. The general awareness isn’t well correlated with reality.”
Human or machine?
What does the future hold for AI? Will human traders still have a place on the trading floor?
The Trade asked industry participants whether they thought AI, with its mass of capabilities, could replace the human aspect of trading altogether.
The consensus was clear – it’s unlikely.
Henri Waelbroeck at Portware explained this would depend on the size of the company, but said human traders would be supported, rather than supplanted, by AI on the trading floor.
He said: “It realistically depends on the firm, as some may outsource their trading to machines internally.
“Larger firms will always want to have people on the floor to watch over things, but AI will enable traders to be productive.”
Josh Sutton at Sapient believes AI will instead shift the role of those in the industry, possibly leading to fewer human traders on the trading floor.
He explained: “AI could possibly replace traders, but portfolio managers will be empowered.
“The role of a portfolio manager or analyst will shift dramatically, as they understand the recommendations of AI systems.”
So it’s good news for portfolio managers, but traders may face the chop?
Jet Lali at Alpha FMC explained that even though AI’s capabilities are game-changing, particularly in the financial markets, humans will always have a role in trading.
He said: “Despite its huge potential, AI can only take us so far; when transaction costs, large data sets and speed are not the most important factors for decision making, there will still be a role for a human trader.”
Sutton at Sapient made an interesting point when asked this question, drawing similarities from the rise of computers in the financial sector.
He explained: "The financial world will indeed become more AI driven, but computers, for example, didn’t replace traders, instead they shifted the way the financial sector operates. It’s exciting and terrifying at the same time.
“The impact of computers happened over decades, but the early stages of AI suggest the impact will be over years rather than decades."
A combination of human and AI capabilities has the potential to shift the financial landscape spectacularly, just as the rise of computers once did.
AI is still in its early stages, as Microsoft’s Tay has exposed, but, as Josh Sutton at Sapient explained, it is being developed and implemented rapidly.
For now, it seems the job of a human trader is safe, but who knows where AI could take the trading world in the near future.