Technological leaps, shifting federal priorities, and state-level developments are driving compliance, legal, risk, and operations teams to rethink how they structure AI governance, disclosures, and monitoring. The challenges lie in building a clear inventory of AI tools, updating disclosures to reflect when AI is in use, centralizing opt-out and complaint processes, and piloting risk assessments and audits to ensure readiness for regulatory action.
Artificial intelligence is no longer confined to “pilot” status within many card issuer environments. In contact centers, for example, financial institutions are increasingly abandoning the age-old practice of “call sampling,” in which a small percentage of randomly selected agent interactions are reviewed manually by QA teams. Instead, compliance teams review the output of AI systems that transcribe and analyze all agent interactions. Natural language models can detect regulatory-risk language, flag potential compliance failures, and identify customer-experience (CX) issues. In addition to QA, this type of output increasingly feeds into complaint management and root-cause analytics pipelines, according to Auriemma Roundtables’ Card Compliance Roundtable members.
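For illustration, the sketch below shows the kind of rule-based screening such a pipeline might layer beneath trained language models. The phrase lists and categories are invented for this example, not drawn from any issuer’s program:

```python
import re

# Hypothetical phrase lists for illustration only; real programs would
# tune these with compliance counsel and pair them with trained NLP models.
RISK_PATTERNS = {
    "UDAAP": [r"\bguarantee(?:d)?\s+approval\b", r"\bno\s+fees?\s+ever\b"],
    "Collections": [r"\bwe\s+will\s+sue\b", r"\bgarnish\s+your\s+wages\b"],
}

def flag_transcript(transcript: str) -> list[dict]:
    """Scan a full call transcript and return potential risk-language hits."""
    hits = []
    for category, patterns in RISK_PATTERNS.items():
        for pattern in patterns:
            for match in re.finditer(pattern, transcript, re.IGNORECASE):
                hits.append({
                    "category": category,       # which risk bucket fired
                    "phrase": match.group(0),   # the exact language used
                    "offset": match.start(),    # where it occurred in the call
                })
    return hits

sample = "Good news: you have guaranteed approval if you enroll today."
print(flag_transcript(sample))
# [{'category': 'UDAAP', 'phrase': 'guaranteed approval', 'offset': 20}]
```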
The use of AI adds a new dimension to privacy and consent risk. Previously, recording calls for later review was the most common trigger of a privacy disclosure. A system that also transcribes, analyzes, or otherwise processes call content for patterns, sentiment, or compliance purposes may cross into territory governed by stricter disclosure or wiretap statutes (especially in states like California). Indeed, in the Galanter v. Cresta class action, plaintiffs assert that an AI vendor captured and analyzed calls without proper notice, forcing a test of whether generic “this call may be recorded” disclosures suffice when AI is in the mix.
California is one of the first states to enact robust rules for automated decision-making technology (ADMT). Approved in July 2025, these regulations require pre-use notices, give consumers the right to opt out of ADMT, mandate risk assessments for high-risk processing, and impose annual cybersecurity audits on larger entities. The framework takes effect January 1, 2026, with ADMT obligations (notice, opt-out, etc.) beginning in 2027.
For card issuers, these rules will touch fraud scoring, credit eligibility, collections, and dynamic pricing—any system whose output materially influences decisions about consumers. Key questions include how opt-outs will be handled and what form risk assessments and audits will take. In short, California is ushering in a new era in which AI in its many forms, including in credit operations, requires deeper governance, new consumer rights, and stronger integration across compliance, privacy, and risk teams.
Automated Approval and Underwriting
Automated approval and underwriting sit at the center of credit card origination, allowing issuers to process applications and set credit limits in real time. But as AI and advanced analytics play a larger role in these decisions, regulators are paying closer attention. The CFPB’s stance under ECOA and Regulation B affirms that automated models must still produce clear, specific adverse-action reasons, while California’s ADMT framework and similar initiatives are setting new expectations for transparency, fairness, and audit readiness.
“Regulators are signaling that automation doesn’t reduce accountability—it raises the bar,” said Sheldon Stewart, Director at Auriemma Roundtables. “Issuers that can explain how their models work and demonstrate effective oversight will be better positioned for what’s ahead.”
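As a toy illustration of how a model’s decision can be translated into the specific adverse-action reasons Regulation B expects, the sketch below ranks feature contributions in a simple linear scorecard. The weights, population baselines, and reason-code text are all invented for this example:

```python
# Invented weights, baselines, and reason codes; real scorecards and
# adverse-action mappings are issuer- and model-specific.
WEIGHTS = {"utilization": -40.0, "delinquencies": -55.0, "file_age_months": 0.5}
BASELINES = {"utilization": 0.30, "delinquencies": 0.2, "file_age_months": 90}
REASON_CODES = {
    "utilization": "Proportion of balances to credit limits is too high",
    "delinquencies": "Delinquency on accounts",
    "file_age_months": "Length of credit history is too short",
}

def adverse_action_reasons(applicant: dict, top_n: int = 2) -> list[str]:
    """Return reason codes for the features that most depressed the score."""
    contributions = {
        f: WEIGHTS[f] * (applicant[f] - BASELINES[f]) for f in WEIGHTS
    }
    # The most negative contributions are the principal reasons for decline.
    worst = sorted(contributions, key=contributions.get)[:top_n]
    return [REASON_CODES[f] for f in worst if contributions[f] < 0]

applicant = {"utilization": 0.85, "delinquencies": 2, "file_age_months": 18}
print(adverse_action_reasons(applicant))
# ['Delinquency on accounts', 'Length of credit history is too short']
```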
Utah is also looking at AI, albeit through a narrower lens. For the financial services sector, a company that uses AI to interact with a person is required to “clearly and conspicuously” disclose such use if asked or prompted by that person. A safe harbor protects firms that proactively disclose AI use. Penalties range from $2,500 to $5,000 per violation, with potential for broader enforcement. The law sunsets in 2027.
For card issuers, this means blanket disclosures are required in many interactions, while in less sensitive ones systems must respond accurately when a consumer asks whether AI is in use. Utah’s approach reflects a balance between transparency and practicality, offering a test case for how states may regulate AI in consumer finance.
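A minimal sketch of disclosure-on-request logic, assuming a keyword-triggered check (a production system would more likely use an intent classifier):

```python
import re

# Hypothetical trigger patterns; intent classification would be more robust.
AI_QUERY_PATTERNS = [
    r"\bare\s+you\s+(a\s+|an\s+)?(bot|robot|ai|human|real\s+person)\b",
    r"\bam\s+i\s+(talking|speaking|chatting)\s+(to|with)\s+(a\s+|an\s+)?(bot|ai|human)\b",
]
DISCLOSURE = ("You are interacting with an automated AI assistant, "
              "not a human representative.")

def maybe_disclose(user_message: str) -> str | None:
    """Return the required disclosure if the consumer asks whether AI is in use."""
    for pattern in AI_QUERY_PATTERNS:
        if re.search(pattern, user_message, re.IGNORECASE):
            return DISCLOSURE
    return None

print(maybe_disclose("Wait, am I talking to a bot?"))  # prints the disclosure
print(maybe_disclose("What's my balance?"))            # None; no disclosure owed
```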
As AI becomes embedded across card operations, issuers face a patchwork of evolving state rules. In Auriemma’s Card Compliance Roundtable, members compare playbooks on how they harmonize governance, manage disclosures, and prepare for audits under differing state frameworks. While requirements vary, success depends on centralized oversight, clear documentation, and cross-functional alignment between compliance, legal, and data teams.
Best practices shared by Card Compliance Roundtable members include:
Harmonize Governance First, Then Localize
AI/ADMT laws differ in nuance. California gives consumers veto power and demands thorough and ongoing compliance documentation from companies, while Utah emphasizes transparency through disclosure. In response, many compliance teams are adopting a “strictest-first” approach by defining a baseline compliance posture that satisfies the most stringent rules, then layering in regional exceptions. For example, central AI disclosure governance, notice templates, and audit paths can be designed to satisfy California regulations and then adapted for states with lighter requirements.
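One way to make the strictest-first approach concrete is a policy baseline with per-state overrides. In the sketch below, the baseline is loosely modeled on California’s ADMT obligations and then relaxed for a lighter regime; the field names and state entries are illustrative, not legal guidance:

```python
# Baseline satisfies the most demanding regime (loosely modeled on
# California's ADMT rules); lighter state requirements override downward.
BASELINE = {
    "pre_use_notice": True,       # notify consumers before ADMT is applied
    "opt_out_supported": True,    # honor opt-outs of automated decision-making
    "risk_assessment": True,      # document high-risk processing
    "disclose_on_request": True,  # always answer "is this AI?" truthfully
}

# Illustrative only: Utah's regime centers on disclosure, not notice/opt-out.
STATE_OVERRIDES = {
    "UT": {"pre_use_notice": False, "opt_out_supported": False},
}

def policy_for(state: str) -> dict:
    """Start from the strictest baseline, then relax per state."""
    policy = dict(BASELINE)
    policy.update(STATE_OVERRIDES.get(state, {}))
    return policy

print(policy_for("CA"))  # full baseline applies
print(policy_for("UT"))  # notice/opt-out relaxed; disclosure-on-request remains
```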
AI Model Inventory, Risk Scoring & Escalation
Compliance teams must treat AI models (especially those influencing credit, collections, dispute resolution, and agent scripting) more like regulated functions to satisfy both examiners and state privacy/AI enforcers. That means maintaining a centralized inventory of every model in production, scoring each model’s risk according to its potential consumer impact, and defining escalation paths for when a model drifts, fails validation, or triggers a compliance flag, as sketched below.
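A sketch of what a single inventory record with risk tiering and an escalation check might look like; the fields, tiers, and thresholds are assumptions for illustration, not an examiner-mandated schema:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIModelRecord:
    name: str
    owner: str                 # accountable business unit
    use_case: str              # e.g., "collections prioritization"
    consumer_impacting: bool   # does the output influence consumer outcomes?
    last_validated: date
    risk_tier: str = field(init=False)

    def __post_init__(self):
        # Simple illustrative tiering: consumer-impacting models are high risk.
        self.risk_tier = "high" if self.consumer_impacting else "standard"

    def needs_escalation(self, today: date, max_age_days: int = 365) -> bool:
        """Flag high-risk models whose last validation has gone stale."""
        stale = (today - self.last_validated).days > max_age_days
        return self.risk_tier == "high" and stale

record = AIModelRecord("collections-scorer-v3", "Collections Ops",
                       "collections prioritization", True, date(2024, 6, 1))
print(record.risk_tier, record.needs_escalation(date(2025, 11, 1)))  # high True
```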
Consent, Opt-Out, and Suppression Logic
Upcoming federal telecom and privacy reforms (e.g., the FCC’s unified “STOP” consent revocation across channels, effective April 2026) mean firms must already be planning for a centralized revocation/suppression service. That engine must tie into AI-driven dialers, messaging tools, call systems, and contact-center orchestration logic. Add to that the need for state-aware AI disclosures and opt-out logic (as in California’s ADMT and Utah’s disclosure-on-request regimes), and the operational complexity becomes evident.
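A minimal sketch of such a centralized gate, assuming a single revocation store that every outbound channel consults before contact; the function names and channel list are hypothetical:

```python
# A "STOP" received on one channel suppresses all channels, mirroring the
# cross-channel revocation direction described above. The store is in-memory
# here; a real service would persist and audit these events.
REVOCATIONS: dict[str, set[str]] = {}   # consumer_id -> revoked channels

ALL_CHANNELS = {"voice", "sms", "email"}

def record_revocation(consumer_id: str, channel: str, all_channels: bool = True):
    revoked = REVOCATIONS.setdefault(consumer_id, set())
    revoked.update(ALL_CHANNELS if all_channels else {channel})

def may_contact(consumer_id: str, channel: str) -> bool:
    """Dialers, messaging tools, and orchestration logic all call this gate."""
    return channel not in REVOCATIONS.get(consumer_id, set())

record_revocation("cust-123", "sms")     # consumer texts STOP
print(may_contact("cust-123", "voice"))  # False: revocation spans channels
print(may_contact("cust-456", "voice"))  # True: no revocation on file
```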
Complaints, QA, and Audit Trails
As more compliance and customer advocacy operations move toward AI summarization and prioritization in complaint logging, root-cause analytics, and trend detection, teams must bake in auditability. Examiners and privacy regulators alike will expect traceable internal audit paths. Best practices include retaining the source interaction alongside every AI-generated summary, versioning the models and prompts that produced it, and logging human review decisions so each automated conclusion can be traced back to its inputs, as illustrated below.
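One way to make that traceability concrete is an append-only audit record tying each AI summary to its source text, model version, and human reviewer. The schema below is illustrative, not a prescribed format:

```python
import hashlib
import json
from datetime import datetime, timezone

AUDIT_LOG: list[dict] = []  # append-only; a real system would persist this

def log_ai_summary(complaint_id: str, source_text: str, summary: str,
                   model_version: str, reviewer: str | None = None):
    """Record an AI-generated summary with everything needed to trace it."""
    AUDIT_LOG.append({
        "complaint_id": complaint_id,
        "source_sha256": hashlib.sha256(source_text.encode()).hexdigest(),
        "summary": summary,
        "model_version": model_version,   # which model/prompt produced it
        "human_reviewer": reviewer,       # None until a human signs off
        "logged_at": datetime.now(timezone.utc).isoformat(),
    })

log_ai_summary("CMP-0042", "full call transcript ...",
               "Customer disputes a late fee; agent promised a reversal.",
               model_version="summarizer-2025-10", reviewer="qa-analyst-7")
print(json.dumps(AUDIT_LOG[-1], indent=2))
```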
Modern card compliance is being shaped by a convergence of AI adoption and proactive regulation. California’s ADMT regime will bring new obligations around notice, opt-out, risk assessment, and audit readiness, while Utah’s iterative AI legislation shows how states are experimenting with targeted disclosure regimes that balance enforcement with operational practicality.
These shifts have been front and center in Auriemma Roundtables’ Card Compliance Roundtable, where issuers benchmark their programs, compare audit and monitoring frameworks, and discuss how to align AI-driven processes with evolving regulatory expectations. Members benefit from peer-only collaboration, real-time benchmarking, and direct dialogue with compliance leaders facing the same scrutiny. The group’s off-the-record format allows institutions to share policy language, vendor oversight strategies, and lessons learned from examinations—all in a confidential setting that accelerates learning and preparedness.
For membership information or to learn more about how your compliance team can participate, contact Barry Lynch.