June 30th, 2024

Financial services shun AI over job and regulatory fears

Financial services firms are cautious about adopting AI due to fears of job losses, regulatory hurdles, and institutional resistance. Only 6% of retail banks are prepared for AI at scale, despite its potential benefits. Banks face challenges transitioning to digital processes and ensuring AI accuracy and security. Compliance and ethical considerations are crucial for successful AI integration in the financial sector.

Read original article

Financial services firms are hesitant to fully embrace artificial intelligence due to concerns over job losses, regulatory issues, and institutional resistance. Only 6% of retail banks are ready to implement AI at scale, despite potential productivity gains and cost reductions. While AI could add significant value to the global banking sector, fears of job displacement persist. Banks struggle to transition from analog to digital processes and remain reluctant to adopt AI models that could streamline operations and cut costs. Concerns also arise over the accuracy and security of AI-generated information, especially in sensitive areas such as money-laundering checks. Despite the potential gains in customer-service efficiency, AI deployment must be balanced against ethical considerations and compliance with strict industry regulations. Some banks have already faced legal challenges related to AI usage, underscoring the importance of working closely with regulators to ensure compliance and mitigate risks. Overall, the financial industry remains cautious about AI adoption, weighing the technology's potential against the need for responsible implementation.

11 comments
By @hn_throwaway_99 - 5 months
It's not surprising that a technology where "we don't really understand how it all works under the covers" is anathema to an industry where nearly everything must be auditable.

In finance the reasoning behind a decision (e.g. to extend a loan, to do a deal, to fund a business, etc.) is nearly as important as the decision itself, and "because the black box machine told us so" is not a sufficient explanation.
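
A minimal sketch of the kind of auditable decision record this implies; the names, fields, and reason codes are hypothetical, not taken from the comment or the article:

    # Minimal sketch, assuming a Python back office: every decision carries
    # explicit, reviewable reasons, not just the outcome.
    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class LoanDecision:
        applicant_id: str
        approved: bool
        reasons: list[str]      # e.g. "debt-to-income ratio above policy threshold"
        model_version: str      # which scorecard or model produced the decision
        decided_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    decision = LoanDecision(
        applicant_id="A-1042",
        approved=False,
        reasons=["debt-to-income ratio above policy threshold",
                 "fewer than 12 months of credit history"],
        model_version="scorecard-v7",
    )
    # "The black box said no" would leave reasons empty, and an auditor unhappy.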

By @jl6 - 5 months
There are really only two use cases for LLMs that have gained any traction in the enterprise: productivity and triage.

Productivity is largely being done to them, with devs using LLMs every day of their own accord, and most orgs leaving Microsoft to do the heavy lifting of making Copilot work over all their unstructured docs and emails.

Triage is the immediate prize. So many of these mega-corporations are doing mega-scale things (millions of customers, billions of transactions) that there is huge opportunity to put an AI layer in front of staff to guide and prioritize their work. Not to do their work, but to increase the chances that they are focusing on the most valuable work. The ideal AI here works like a secretary: “Good morning, I’ve reviewed all the recent calls/cases/leads/transactions and these are the top 20 that seem worth looking into.”

I don’t think anybody trusts AI to do the actual looking-into.
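
A rough sketch of that "secretary" triage layer, assuming a Python stack (the function and field names are hypothetical): the model only ranks items and surfaces the top of the queue, while humans still do the actual looking-into.

    # Sketch of a triage layer: score each item, hand staff only the top N.
    # The scoring call stands in for an LLM or ML model; nothing is decided
    # automatically.
    from dataclasses import dataclass

    @dataclass
    class CaseItem:
        case_id: str
        summary: str

    def score_case(item: CaseItem) -> float:
        """Placeholder for a model call estimating how much the item
        deserves a human's attention (0.0 to 1.0)."""
        return min(1.0, len(item.summary) / 500)  # stand-in heuristic

    def morning_briefing(cases: list[CaseItem], top_n: int = 20) -> list[CaseItem]:
        # Rank everything, surface only the top of the queue for human review.
        return sorted(cases, key=score_case, reverse=True)[:top_n]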

By @jsemrau - 5 months
I have implemented AI/ML solutions in Financial Services in 15 countries and know for a fact that this is not true. If they are only referring to generative AI, I'd argue that the space is moving extremely fast right now. That makes it hard to implement anything, as one might not know whether next week another company will have a better model or an AI safety alignment change will completely bork an existing one. On top of this comes the regulatory burden, which is in place for a good reason.

By @andreagrandi - 5 months
If "implementing AI" means implementing yet another chatbot, I'm happy they are not using it. One of the banks I use, just added a chatbot as default option when you contact support: I'm planning to move to another bank. When I have a real problem I want to speak with a human, not with a bot. I do use AI stuff for other tasks, but I don't want it to replace real customer support.
By @datahack - 5 months
I think taking a slow path with AI around financial services is probably wise.

Unfortunately I don’t think other countries and multinationals will take the same approach.

So how do we avoid another arms race? This seems like a good public position, but ignoring AI isn’t the right private position if you care about your financial system.

By @surfingdino - 5 months
After 2008, regulators have taken a dim view of "magic" in financial services. It was a painful lesson and they don't want a repeat.

By @greenyoda - 5 months

By @choeger - 5 months
Regulatory fears, not fears of getting it wrong. That's just trying to pass the buck to agencies or lawmakers. As soon as someone in this chain gives in, e.g., an official shortly before retirement or in need of a consulting gig in the "AI" industry, the "best practices" will be applied everywhere just out of FOMO. The results will be ... interesting.

That being said, I still think LLMs will make for novel user interfaces.

By @bpodgursky - 5 months
It doesn't really matter whether the firms formally "shun" AI, their employees are all going to be using AI in their own customer communications, documents, decision-making, programming, etc. The productivity gradient is just too strong to keep leak-free.

By @davedx - 5 months
I can’t read the paywalled article but this just isn’t true (I work in this area).

Financial services is taking a thoughtful approach to where and how to apply AI, yes. “Shunning” it? Not at all.