The United States (US) Securities and Exchange Commission (SEC) Chair Gary Gensler delved into the realms of artificial intelligence (AI) and the crypto market in a speech just yesterday at the National Press Club.
According to him, AI stands tall as "the most transformative technology of our time," comparable to the revolutionary impact of the internet and the mass production of automobiles. However, the praise comes with a caveat: the same stringent regulator who has been overseeing the recent "crypto crackdown" also holds deep reservations about AI's implications for markets.
With rapid advancements in AI, concerns about the associated developments and risks are growing. Notably, the speech comes just ahead of Gary's appearance before the Senate Appropriations Committee on 19 July 2023, where he will address the committee's review of the SEC's fiscal year 2024 budget.
If Going After Crypto Does Not Work, Go After AI Next
Gary, renowned for his firm stance against cryptocurrencies and his unwavering commitment to the SEC's "come forward and comply" strategy, has garnered significant attention. However, Judge Analisa Torres' recent ruling in the Ripple vs SEC lawsuit, a significant setback for the agency, appears to have stalled that momentum, and he now seems to have set his sights on another industry: AI.
He sheds light on the looming "significant" transformations within the labour market and the intensifying competition between the US and China as they vie for supremacy in AI system development. However, these concerns merely scratch the surface of the potential challenges ahead.
In light of the increasing demand for data and computing power, Gary also raised a crucial concern regarding the potential dominance of a few tech platforms in the field. A scenario in which only a limited number of platforms prevail would pose a significant challenge for companies seeking diverse AI models. Should an AI model provide inaccurate or irrelevant information, financial institutions run the risk of adopting flawed data and consequently making poor decisions. According to him, this is reminiscent of the 2008 financial crisis, when banks blindly followed the lead of credit rating agencies, or the Twitter-driven frenzy that targeted Silicon Valley Bank.
“AI may heighten financial fragility as it could promote herding with individual actors making similar decisions because they are getting the same signal from a base model or data aggregator. This could encourage monocultures.”
Gary's insights resonate strongly when considering the impact of herding behaviours on stock market crashes. A notable 2001 study conducted by Markus Konrad Brunnermeier, an economics professor at Princeton University, established that such herding behaviours play a significant role in explaining these market downturns.
Additionally, a more recent 2022 study by Asad Ayoub and Ayman Balawi from the University of Pécs in Hungary further supported this notion, confirming that herding behaviour influences stock prices in both bear and bull markets. In light of these findings, Gary warned that such herding could be exacerbated if a small number of dominant tech firms come to rule the AI landscape.
Generative AI in Finance
AI systems have long been integrated into various facets of the financial sector. Notably, insurance companies and creditors have harnessed the power of algorithms and natural language processing to analyse financial data, aiding them in making informed decisions about loan amounts. Additionally, trading firms have leveraged AI technology to swiftly identify fraud and detect market signals, surpassing the capabilities of human observers staring at computer screens.
In this context, Gary directed his attention towards large language models (LLMs), calling them the "most transformative technology of our time." However, it is important to note that his remarks sometimes conflate this specific category of AI technology with the broader realm of AI, even though these systems do not necessarily pose identical risks or prompt identical questions. Moreover, he acknowledges that generative AI, the broader category to which LLMs belong, is not yet widely adopted in the finance industry.
“The possibility of one or even a small number of AI platforms dominating raises issues with regard to financial stability…While at MIT, Lily Bailey and I wrote a paper about some of these issues called ‘Deep Learning and Financial Stability’. The recent advances in generative AI models make these challenges more likely.”
Does AI Have the Potential to Replace People?
Gary went on to highlight other key issues, including the significant data privacy and intellectual property challenges that AI raises. Notably, he drew attention to the ongoing Hollywood writers' and actors' strike, which seeks to address these very issues. Screenwriters involved in the strike are raising concerns over compensation and the use of AI in entertainment productions, arguing that the technology could replace them by employing their work as training material.
He acknowledged that, as individuals, we all contribute to training the parameters of AI models, which leads to a critical question: "Whose data is it?" This contentious debate is unfolding now, the SEC chair said, underscoring his commitment to monitoring its progress closely.
“For the SEC, the challenge here is to promote competitive, efficient markets in the face of what could be dominant base layers at the centre of the capital markets. I believe we closely have to assess this so that we can continue to promote competition, transparency, and fair access to markets.”
He Provided an Ironic Solution
The SEC chair asserts that relying solely on risk management tools is insufficient to mitigate the risks posed by advanced AI tools to the US and global financial systems. Moreover, the existing guardrails have become outdated in the face of groundbreaking advancements in "a new wave of data analytics."
“Many of the challenges to financial stability that AI may pose in the future…will require new thinking on system-wide or macro-prudential policy interventions.”
Ironically, Gary's intriguing proposition for revamping regulations for the current era relies on an unexpected tool: AI itself.
“While recognising the challenges, we at the SEC also could benefit from staff making greater use of AI in their market surveillance, disclosure review, exams, enforcement, and economic analysis.”