IT’S NOT MAGIC: WEIGHING THE RISKS OF AI IN FINANCIAL SERVICES

OVERVIEW

Artificial intelligence has enormous potential for financial services – but ethical challenges, a skills gap and market vulnerabilities pose risks that the industry must confront. These include biases leading to discrimination against some customers and increased danger of ‘flash crashes’, which could be amplified by inter-connections to pose a systemic threat. These judgements form part of a new report from the Centre for the Study of Financial Innovation, an independent London-based think-tank.

The authors, Keyur Patel, research associate at the CSFI and co-author of its ‘Banana Skins’ risk reports, and Marshall Lincoln, a Silicon Valley AI expert, interviewed a wide range of AI and ML specialists, financial practitioners, risk managers and regulators for the report. With AI and machine learning (ML) set to become ubiquitous, they found that some risks are inherent in the new technology, while others stem from a lack of human understanding and preparedness. The full report is available from the CSFI.

KEY MESSAGES

AI is fundamentally different from traditional forms of automation.

The report identifies three principal ‘risk drivers’:

  • Opacity and complexity: A trade-off at the heart of many AI models is that the more effective the algorithms, the more difficult they are to scrutinise.
  • Distancing of humans from decision making: AI is different from previous ‘rule-based’ forms of automation because it enables many actions to be taken without explicit instruction.
  • Changing incentive structures: The benefits to successful firms and the risks of getting left behind create powerful incentives to implement AI solutions faster than may be warranted.

ML models are just as fallible as rule-based ones.

  • New ethical challenges include algorithmic biases that could lead to discriminatory practices. These biases can be extremely difficult to root out because ML excels at finding complex ‘hidden’ relationships in data.
  • A purported benefit of AI is that it dispassionately draws conclusions from data, without prejudice. In practice, however, the beliefs and values of the people who build the models affect the outcomes.
  • AI systems can perform poorly in previously unencountered situations – potentially amplifying the impact of ‘black swan’ events.
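The proxy problem described above can be made concrete with a minimal, purely illustrative sketch. All names and numbers below are invented: a model is never shown the protected attribute (‘group’), but a correlated feature (‘postcode’) lets it reproduce the bias baked into historical decisions.

```python
import random

random.seed(0)

# Invented synthetic data: 'group' is the protected attribute and is
# never given to the model; 'postcode' is strongly correlated with it.
def make_applicant():
    group = random.choice(["A", "B"])
    # Postcode acts as a proxy: each group mostly lives in one postcode.
    if group == "B":
        postcode = 2 if random.random() < 0.9 else 1
    else:
        postcode = 1 if random.random() < 0.9 else 2
    # Historical approvals were biased against group B.
    approved = random.random() < (0.8 if group == "A" else 0.3)
    return group, postcode, approved

history = [make_applicant() for _ in range(10_000)]

# A naive 'model': approve at the historical approval rate of the
# applicant's postcode. No protected attribute is ever used.
rate = {}
for pc in (1, 2):
    subset = [ok for g, p, ok in history if p == pc]
    rate[pc] = sum(subset) / len(subset)

def model_approves(postcode):
    return rate[postcode]

# Average approval odds the model assigns to each group, via the proxy:
avg = {}
for grp in ("A", "B"):
    ps = [model_approves(p) for g, p, ok in history if g == grp]
    avg[grp] = sum(ps) / len(ps)

print(avg)  # group B receives markedly lower approval odds
```

Even though the protected attribute is excluded, the historical discrimination re-emerges through the proxy – which is why such biases can be so hard to root out of ML systems trained on real-world data.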

ML-driven solutions may undermine social benefits.

  • In insurance, greater risk differentiation could lead to high-risk individuals being priced out of the market, even though they may be the ones most in need of insurance.
  • ML’s ability to combine data on individuals from diverse sources might challenge our concept of fairness and raise privacy concerns.
  • More personalised financial products could come at the expense of price transparency.

AI could contribute to a future financial crisis.

  • One trigger might be a particularly sharp ‘flash crash’, where many interconnected AI trading programs react in the same way to some market event.
  • A second might be an event that undermines public faith in the financial system, such as a coordinated cyber-attack crippling critical IT infrastructure.
  • A third relates to financial institutions using AI for risk management. How will ML-powered models trained on data when market volatility was low react to extremely rare ‘black swan’ events?
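The flash-crash mechanism in the first bullet is essentially a feedback loop. The hypothetical sketch below (all parameters invented) shows how a small shock cascades when many agents share the same stop-loss rule, but is absorbed when their rules differ.

```python
# Stylised 'flash crash': each forced sale pushes the price lower,
# which can trip further stop-losses in a self-reinforcing cascade.

def simulate(n_agents, stop_levels, impact_per_sale, shock):
    """Return the price path after an initial downward shock."""
    price = 100.0 - shock
    path = [price]
    sold = [False] * n_agents
    changed = True
    while changed:
        changed = False
        for i in range(n_agents):
            if not sold[i] and price <= stop_levels[i]:
                sold[i] = True
                price -= impact_per_sale  # price impact of a forced sale
                changed = True
        path.append(price)
    return path

# Homogeneous agents: identical stop at 99 -> a 1.5-point shock cascades.
homog = simulate(50, [99.0] * 50, 0.2, shock=1.5)

# Heterogeneous agents: stops spread from 80 to ~98.6 -> shock is absorbed.
hetero = simulate(50, [80 + 0.38 * i for i in range(50)], 0.2, shock=1.5)

print(homog[-1], hetero[-1])  # homogeneous rules end far lower
```

The point of the sketch is the homogeneity, not the numbers: if many AI trading programs learn similar strategies from similar data, they can react identically and amplify rather than dampen a shock.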

A pronounced skills gap ratchets up the risks of AI implementation.

  • Financial institutions might become dangerously over-reliant on specialists with highly technical skill sets that decision-makers do not sufficiently understand. There are parallels here with the industry’s uncritical trust in quantitative analysts in the lead-up to the global financial crisis.
  • There is a global shortage of people who can design, deploy and maintain AI systems. Hiring expert programmers who lack financial services knowledge increases the risk of poor outcomes.
  • Decision-makers at financial institutions typically do not know how AI works and fail to grasp its limitations. This could lead to inflated expectations and a failure to make effective use of the models’ output, or to boards signing off on decisions without understanding the implications.
  • Other managerial weaknesses might lead to a lack of accountability, the implementation of individual solutions that do not work together and expensive duplication of effort. Institutions may take longer to accomplish less at greater cost, and expose themselves to security and compliance risks.

The proliferation of AI could fundamentally change market dynamics.

  • ‘Fintech’ challengers that use AI most effectively could take advantage of data network effects to dominate markets. Even without explicit anti-competitive behaviour, this might make it difficult for others to compete effectively.
  • AI could lead to new forms of interconnectedness in financial markets at the IT systems level, increasing the probability of flash crashes. Financial institutions could become over-dependent on a few third-party tech providers, making them vulnerable to single points of failure.
  • Regulators will face new challenges in determining which institutions fall under the scope of financial services regulation, as more non-traditional firms challenge incumbents and lines between sectors become blurred. They must also protect competition in financial markets, while acknowledging that AI needs scale to be effective.

Outcomes depend upon humans, not machines.

It is becoming increasingly common for financial practitioners to work with AI and ML. This means that they – and particularly decision-makers – must be able to critically evaluate these technologies. Their ubiquitous deployment will have consequences for consumers, institutions and the stability of the financial system. A decade after the global financial crisis, the world is still grappling with the ramifications of the industry’s embrace of complex financial instruments. Any comparisons to be made with the impact of AI are speculative, but the parallels should not be dismissed out of hand.

The report also discusses the potential benefits of AI in financial services, which include facilitating the ‘democratisation’ of the industry and offering major improvements in security, compliance and risk management. The authors argue that these benefits are compelling but focus their analysis on risks because of the hype around new technologies. The report was produced with support from Swiss Re and Endava.