We humans are a sceptical bunch. When new technologies emerge, we view them with apprehension, worrying about their potential negative impact on our lives and future. However, as they reveal their wonders, we often embrace them without question, placing our trust in their ‘capabilities’ without fully considering the consequences. Perceptions of artificial intelligence (AI) vary greatly. Some view AI as a threat to the future of humanity, while others see it as a transformative force with the capacity to resolve pressing human problems. While there is no single notion of what AI is, it is useful to think of it as a set of computer algorithms that can perform tasks otherwise done by humans.
AI attracts human trust through its adept execution of tasks suited to its capabilities, particularly those characterized by clear-cut rules and abundant data inputs. However, this trust can sometimes lead to its deployment, driven by cost-saving motives, in critical functions ill-suited to its strengths, a problem often compounded by inadequate, outdated or irrelevant data. This is why we fear AI ‘hallucinations’: bots are programmed to provide guidance even when the accuracy of their answers should inspire minimal confidence. They may even fabricate facts or present arguments that, while plausible, are deemed flawed or incorrect by experts. The danger lies in AI tools making false or harmful recommendations.
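To illustrate, here is a minimal sketch in Python of the kind of confidence gate that would make a bot abstain rather than guess; the class, threshold and confidence scores are all hypothetical, not any real product’s design:

```python
# A minimal sketch of a confidence gate: the bot abstains instead of
# guessing when its confidence in an answer is low. The model interface,
# threshold and confidence scores here are hypothetical.

from dataclasses import dataclass

@dataclass
class Answer:
    text: str
    confidence: float  # model's self-assessed probability, 0.0 to 1.0

def respond(answer: Answer, threshold: float = 0.8) -> str:
    """Return the answer only if its confidence clears the threshold."""
    if answer.confidence >= threshold:
        return answer.text
    # Abstaining is safer than a plausible-sounding fabrication.
    return "I am not confident enough to answer; please consult an expert."

print(respond(Answer("The rule applies from April 2024.", 0.95)))
print(respond(Answer("The penalty is 2% of turnover.", 0.40)))
```

The point of the sketch is that abstention has to be designed in; it is not the default behaviour of these systems.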
The widespread adoption of AI has sparked concerns over transparency, accountability and ethics. These worries include patchy compliance with data protection and privacy rules when such AI tools source and process data, as well as the representativeness of their data samples, which can introduce biases in their output. Challenges also arise over the accuracy and interpretability of what they generate, exacerbated by the opacity of many algorithms.
The deliberate misuse of AI presents a significant threat, especially within the financial sector, which has a large presence of profit-driven entities, many of which operate without much concern for the societal ramifications of their actions. Some of them also have the capability to circumvent controls and manipulate the system to their advantage, making detection by competitors and regulators a challenge. In some cases, they may deploy AI engines to exploit regulatory loopholes, capitalizing on the inherent complexity of the financial system.
AI relies on computing power, human expertise and data. A financial company that commands a significant quantum of each resource could establish dominance in the use of AI for business ends. And if regulatory authorities also rely on the same AI tools for analytical purposes, they may overlook potential vulnerabilities until it’s too late. Such failures could occur because regulators and regulated entities using the same tools would share the same understanding of the stochastic processes underlying the financial system, making it less likely that a potential fragility gets identified.
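A toy calculation, using Python’s scipy library and purely illustrative numbers, shows how a shared modelling assumption can hide a fragility from everyone at once:

```python
# A toy illustration of model monoculture, not a market model: if the
# regulator and the regulated firm both estimate tail risk with the same
# thin-tailed (normal) assumption, both understate the chance of a crash
# that a heavier-tailed model would flag. All numbers are illustrative.

from scipy import stats

loss_threshold = 5.0  # a 'crash', in units of daily standard deviations

# Shared tool: both parties assume normally distributed returns.
p_normal = stats.norm.sf(loss_threshold)

# A diverse second opinion: a Student-t distribution with fat tails.
p_fat_tailed = stats.t.sf(loss_threshold, df=3)

print(f"Normal model:    P(loss > 5 sigma) = {p_normal:.2e}")
print(f"Student-t model: P(loss > 5 sigma) = {p_fat_tailed:.2e}")
```

The fat-tailed estimate comes out orders of magnitude larger; a monoculture of tools built on the thin-tailed assumption would leave that fragility unflagged on both sides of the regulatory fence.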
Financial regulators emphasize the importance of exercising caution in regulating AI, acknowledging that its implications are yet to be comprehensively understood. However, they must also acknowledge the dual nature of AI regulations: they have the potential to mitigate market risks but can also inadvertently contribute to them. Relying on AI without human clarity on what’s going on can allow hidden risks to escalate. Financial markets have a record of being hit by data-driven algorithms that made extrapolations which humans might have called untenable. AI may make these risks even more complex.
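The extrapolation problem is easy to demonstrate with a sketch, built on fabricated data, of a trend line fitted to a calm period and projected far beyond it:

```python
# A toy example of the extrapolation problem: a line fitted to calm-period
# data confidently projects its trend into a regime it has never seen.
# The price series is fabricated purely for illustration.

import numpy as np

# 'Training' data: a gently rising price series from a calm period.
days = np.arange(10)
prices = 100 + 0.5 * days + np.random.default_rng(0).normal(0, 0.2, 10)

# np.polyfit returns the fitted coefficients, highest degree first.
slope, intercept = np.polyfit(days, prices, 1)

# The model extrapolates far ahead with undiminished confidence, unaware
# that the relationship only held in the regime it was fitted on.
print(f"Predicted price on day 1,000: {intercept + slope * 1000:.1f}")
```

A human looking at the chart might call a straight-line forecast three years out untenable; the fitted model has no such scruples.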
Financial regulators are understandably torn between recognizing their limitations in regulating AI and maintaining confidence in the financial sector. Admitting to constraints would risk undermining trust in their oversight, while banning AI usage would stifle innovation and disadvantage financial players. Yet, the intricacies of AI systems make it nearly impossible for regulators to keep abreast of every development. Regulators are right to worry about what AI adoption implies for financial stability. As AI tools are increasingly integrated with financial operations, the challenge will only multiply.
Financial regulators must confront the reality that their traditional methods of supervision are falling short. While they possess invaluable expertise, the rapid evolution of technology has created a widening gap between regulatory capabilities and the pace of innovation. As a result, supervision often lags behind the regulatory responses we ideally need. To address this challenge, regulators must embrace real-time digital supervision techniques, leveraging activity-based supervision and algorithmic data analytics proactively (rather than reactively). Just as one cannot bring a bow-and-arrow to a gun battle, regulators must equip themselves with the tools necessary to effectively monitor and regulate financial activities in the digital age. This is the only way to ensure the stability and integrity of the financial system in the face of rapid technological changes.
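What might real-time digital supervision look like in practice? One hedged sketch, with a simulated feed and hypothetical window and threshold settings, is a rolling statistical monitor that flags abnormal activity for human review as the data streams in:

```python
# A minimal sketch of real-time algorithmic supervision: a rolling z-score
# flags transaction volumes that deviate sharply from recent history.
# The feed, window size and alert threshold are hypothetical placeholders.

from collections import deque
from statistics import mean, stdev

def monitor(stream, window=30, z_limit=4.0):
    """Yield (value, flagged) pairs, judging each value against a rolling window."""
    history = deque(maxlen=window)
    for value in stream:
        flagged = False
        if len(history) >= 10:  # need some history before judging
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(value - mu) / sigma > z_limit:
                flagged = True  # escalate to a human supervisor
        history.append(value)
        yield value, flagged

# Simulated feed: steady activity followed by one abnormal spike.
feed = [100.0 + i % 5 for i in range(50)] + [900.0]
for value, flagged in monitor(feed):
    if flagged:
        print(f"ALERT: volume {value} deviates sharply from recent history")
```

Real supervisory analytics would be vastly more sophisticated, but the design point stands: detection happens as the data arrives, not in a quarterly filing reviewed months later.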
The challenge has always been steep. Despite our ability to instruct AI to mimic human behaviour, there’s no guarantee it will adhere to our desired standards. Given that financial regulators still grapple with the task of regulating human behaviour for ethical conduct, achieving similar control over AI is a distant prospect.