December 10, 2025 — Press Release

AI sandbox bill would lead to lending discrimination, consumer harm, and financial instability

WASHINGTON – A coalition of 64 civil rights, consumer, labor, technology, and other advocacy groups delivered a letter to the House Financial Services Committee (HFSC) opposing the Unleashing AI Innovation in Financial Services Act (H.R. 4801). The legislation would allow financial firms that use AI to request waivers from civil rights, consumer, and investor protection laws, exposing consumers, investors, workers saving for retirement, the financial industry, and the economy to substantial risks with few, if any, regulatory safeguards or accountability.

The urgency of the message was underscored on Monday, when President Trump announced he would issue an executive order aimed at blocking state legislation that protects people from AI’s potential harms.

“AI systems can improperly deny people credit, jobs, and housing, freeze access to your money, violate your privacy, and enable imposters to take money out of your account. Congress must reject bills that allow companies that use AI to skirt federal consumer protection laws or to ignore state laws that protect people from problems caused by AI,” said Lauren Saunders, associate director and director of federal advocacy at the National Consumer Law Center.  

The letter noted: “The use of AI in the financial sector has the potential to reduce costs, improve efficiency, detect and prevent fraud, and increase the access, quality, and choice of financial services and products. But these potential benefits for consumers, customers, investors, markets, and the financial system can only materialize if people are protected from the many risks of AI in financial services through the consistent application and enforcement of federal civil rights, consumer protection, investor protection, market integrity, and financial supervision statutes and regulations.”

The letter was delivered ahead of the Committee’s hearing promoting the benefits of artificial intelligence in the financial services industry. Among the bills to be discussed at the hearing was the Unleashing AI Innovation in Financial Services Act.

The AI sandbox legislation is woefully lacking in provisions that address the risks to individuals, families, and communities who may interact with AI tools, and it completely ignores the well-documented civil rights and fair lending problems created by AI use in financial services.

“Allowing AI to be deployed by banks and other financial firms without any regulatory oversight poses significant risks to consumers,” said Demetria McCain, Director of Policy at the Legal Defense Fund. “From higher cost loans to unfair denials, we know that AI and algorithmic technologies have inherited many of the human biases the civil rights community has long sought to overcome. Congress must remain vigilant in ensuring these technologies aren’t preventing economic prosperity or interfering with the livelihood of Black communities and countless other groups across the country.”

“This bill gives financial institutions a chance to bypass critical protections just as AI use is booming in banking and lending,” said Santiago Sueiro, senior policy analyst at UnidosUS. “Without meaningful safeguards, transparency or accountability, sandboxes risk exposing Latino and working-class consumers to biased decisions and discriminatory credit access. As a result, many could lose access to safe and affordable financial products at a time when high costs are hurting the pocketbooks of so many Latinos and working class people across the country.”

Opaque AI systems that perform critical functions can undermine the safety and soundness of financial firms, market integrity, and financial stability. The black-box nature of AI models may make it difficult or impossible for financial firms to fully understand how these systems evaluate or manage institutional trading or risk management functions.

“Banks may rely on AI systems that overestimate safety and soundness or underestimate risk exposures, much as the industry was oblivious to the mounting risks of subprime mortgages before the financial crisis,” said Patrick Woodall, managing director for policy at Americans for Financial Reform. “AI-powered trading systems will self-correlate trading strategies that can amplify bubbles, volatility, and crashes that can exacerbate market fragility that can put the whole economy at risk.”

The letter also identifies many other risks to consumers from AI, including the exposure of personal financial information, fraud, aggressive debt collection, account freezes and closures, and more. 

“Consumers, not companies, need protections right now. With constantly growing use of AI in finance, consumers face increased privacy, security, and fraud risk. Sandboxes and moratoria prevent critical protections and endanger consumers – the absolute opposite of what we need right now,” said Ben Winters, Director of AI and Privacy at the Consumer Federation of America.

The letter concludes: “H.R. 4801 would allow financial firms to profit by capturing most of the benefits of AI but force their customers and the economy to bear the burdens from the risks and the harms of AI deployment in the financial sector. We urge the Committee and the Congress to oppose this legislation that provides broad immunity for unlawful practices or outcomes that can substantially harm people, communities, and the economy.” 
