Wall Street Giants Raise Concerns Over AI Hallucinations and Illicit Use
Major Wall Street firms, including Goldman Sachs, Citigroup, and JPMorgan Chase, are raising concerns about emerging risks tied to the increasing adoption of artificial intelligence (AI).
In their 2024 annual reports, banks highlight threats such as flawed AI models, regulatory uncertainties, and potential misuse by cybercriminals.
JPMorgan warns that AI-driven workforce displacement could impact employee morale, retention, and competition for top tech talent.
Meanwhile, Citigroup acknowledges the risks of generative AI producing inaccurate, biased, or incomplete data, which could harm its reputation and financial performance.
Ben Shorten, Accenture Plc’s lead for finance, risk and compliance for banking and capital markets in North America, said in an interview:
"Having those right governing mechanisms in place to ensure that AI is being deployed in a way that’s safe, fair and secure - that simply cannot be overlooked. This is not a plug-and-play technology.”
While banks have been flagging AI-related risks in recent years, the growing reliance on both proprietary and third-party AI solutions is amplifying these concerns.
Firms face mounting pressure to keep pace with AI advancements or risk losing customers and business.
However, increased AI adoption also exposes them to cybersecurity threats and the potential use of outdated or biased financial data.
As the financial sector deepens its AI integration, navigating these risks will be critical to maintaining stability and trust.
The Integration of AI
Goldman Sachs acknowledges that while it has ramped up investments in digital assets, blockchain, and AI, integrating these technologies quickly enough to enhance productivity, reduce costs, and improve client services remains a challenge.
Increased competition in AI adoption could impact customer acquisition and retention, the firm noted in its latest annual report.
Financial institutions also face mounting data privacy and regulatory compliance risks in an evolving landscape.
The EU Artificial Intelligence Act, which took effect in 2024, introduces new rules for AI use in Europe, where many US banks operate.
Shorten noted:
"This act establishes rules for placing on the market, putting into service and using a lot of artificial intelligence systems in the EU. The outlook for the US and the US market is less clear.”
As a result, banks are deploying both proprietary AI tools and third-party solutions.
Citigroup is leveraging AI to synthesise key data from public filings, Morgan Stanley’s AI Debrief automates routine tasks through a ChatGPT-like interface, and Goldman Sachs’ private wealth division uses AI for portfolio evaluation and analysis, according to the firm’s chief information officer, Marco Argenti.
Speaking last week at the Bloomberg Invest conference in New York, Argenti said:
"It’s so important to take a responsible approach and really be applying controls so that you protect yourself from potential inaccuracies and hallucinations.”
JPMorgan CEO Jamie Dimon has described AI as the most significant issue facing the bank, comparing its transformative potential to that of the steam engine.
He believes AI could augment nearly every job.
However, as banks accelerate AI adoption, cybercriminals are doing the same.
A recent Accenture survey of 600 banking cybersecurity executives found that 80% believe generative AI is advancing criminal tactics faster than financial institutions can respond.
Morgan Stanley warns that generative AI, remote work, and third-party integrations pose heightened data privacy risks.
With AI being used in decentralised work environments, firms must establish safeguards to mitigate emerging vulnerabilities.
Shorten added:
“These steps are only going to increase in criticality as attackers are being enabled by this technology faster than the banks are able to respond.”
As AI reshapes the financial sector, banks must balance innovation with security, compliance, and trust.