Executive summary
Michigan’s new AI guidance establishes detailed supervisory expectations for financial service providers, even as federal standards continue to evolve. It outlines requirements for governance, risk management, and oversight of AI systems. Organizations should prepare for increased regulatory scrutiny by developing formal AI system programs, strengthening controls, and aligning practices with emerging state and federal frameworks.
New state guidance raises expectations for AI oversight
On Jan. 14, 2026, the Michigan Department of Insurance and Financial Services released Bulletin 2026-03-BT/CF/CU – Use of Artificial Intelligence Systems by Financial Service Providers (the bulletin). It applies to depository institutions; mortgage brokers, lenders or servicers; money transmission service providers; licensed lenders; installment sellers and sales finance companies; credit card companies; debt management entities; deferred presentment service providers (payday loans); and Class I or II licensees under the state’s Consumer Financial Services Act.
Financial service providers should understand the foundational expectations for artificial intelligence systems (AIS) program examinations to prepare for internal and external reviews. AIS programs will require enhanced risk management practices beyond traditional compliance, incorporating IT and security, model risk and broader operational and strategic considerations.
The U.S. Department of the Treasury observed in a December 2024 report summarizing feedback from an AI request for information that there is currently no comprehensive framework of federal AI laws. The report noted that state legislators are continuing to propose a wide array of laws related to the uses of AI within the states. It also stated that conflicting state laws may lead to uneven requirements for AI developers, users and financial firms of different sizes, as well as varied product functionalities for consumers.
In February 2026, Treasury released six resources developed in collaboration with industry and regulatory partners through the Artificial Intelligence Executive Oversight Group (AIEOG), providing a foundation for the use of AI in financial services, addressing governance, data practices, transparency, fraud and digital identity in an integrated way.
The White House also released a March 2026 briefing, A National Policy Framework for Artificial Intelligence. However, it remains a set of legislative recommendations that must move through the congressional process.
Against this evolving and fragmented federal landscape, the Michigan bulletin provides one of the most detailed and actionable state-level supervisory frameworks to date, translating broad principles into specific expectations for financial service providers. The bulletin emphasizes principles outlined in Treasury’s 2024 report, including fairness, ethical use, accountability, regulatory compliance, transparency, and system security. Much as the California Privacy Rights Act reshaped privacy practices nationwide, state-level AI regulation could drive broader compliance expectations well beyond Michigan.
Even for organizations that do not operate in Michigan, the bulletin may serve as a model for other states. For organizations operating in Michigan, compliance expectations are immediate.
What this means for financial service providers
The bulletin’s regulatory guidance states that compliance with the standards is required regardless of the tools used to make decisions. It notes that, without proper controls, AI may increase the risk of inaccurate, arbitrary or discriminatory outcomes, making it critical to implement controls that mitigate adverse consumer outcomes.
The bulletin further states: “All financial service providers are expected to develop, implement and maintain a written AI Systems Program (AIS Program) for the responsible use of AI systems.” Organizations not formally using AI should, at a minimum, establish policies governing employee use.
The bulletin outlines four key pillars: general guidelines; governance; risk management and internal controls; and third-party AI systems and data. These pillars provide an overview of how institutions can develop and test an AIS program:
General guidelines
- Design the AIS program to mitigate risks of adverse consumer outcomes, addressing governance, risk management controls, and internal audit functions.
- Hold senior management accountable to the board or relevant committee for AIS program oversight.
- Tailor controls and procedures to the organization’s use of AI systems.
- Align the AIS program, where possible, with recognized frameworks such as the NIST AI Risk Management Framework.
- Address AI use across all business lines and lifecycle phases, including third-party systems.
- Establish processes to notify consumers of AI use and provide appropriate transparency.
- Maintain clear policies for employee use and retain responsibility for risk management, including outsourced activities.
Governance
- Establish a governance framework covering policies, procedures, risk management and controls across the AI lifecycle.
- Document AIS program standards and requirements for consistency and compliance.
- Define accountability structures, including committees, escalation protocols and training responsibilities.
- Document predictive model practices across design, development and monitoring, including error detection and bias mitigation.
Risk management and internal controls
- Implement oversight and approval processes for AI system development, adoption and acquisition.
- Establish data governance practices, including security, testing, quality and bias analysis.
- Maintain model inventories and documentation, including purpose, development, validation and performance monitoring.
- Conduct validation testing and ongoing reassessment of data and models.
- Protect nonpublic consumer information from unauthorized access.
- Maintain appropriate data and record retention practices.
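The model inventory and validation expectations above can be sketched as a simple record structure. This is a hypothetical illustration only: the field names, review cycle, and flagging logic are assumptions, not requirements prescribed by the bulletin.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical AI model inventory entry. Field names are illustrative;
# the bulletin asks for purpose, development, validation and performance
# monitoring documentation but does not mandate a specific schema.
@dataclass
class ModelInventoryEntry:
    model_id: str
    purpose: str                        # business use, e.g. "loan underwriting"
    owner: str                          # accountable business or model owner
    last_validated: date                # most recent validation testing date
    uses_nonpublic_consumer_data: bool  # drives data-protection controls
    validation_notes: list = field(default_factory=list)

    def is_validation_overdue(self, as_of: date, max_age_days: int = 365) -> bool:
        """Flag entries whose last validation is older than the review cycle."""
        return (as_of - self.last_validated).days > max_age_days

entry = ModelInventoryEntry(
    model_id="PD-2025-01",
    purpose="consumer loan underwriting",
    owner="Credit Risk",
    last_validated=date(2025, 1, 15),
    uses_nonpublic_consumer_data=True,
)
print(entry.is_validation_overdue(as_of=date(2026, 6, 1)))  # → True
```

A structured inventory like this makes the "ongoing reassessment" expectation operational: overdue entries can be surfaced automatically rather than discovered during an examination.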
Third-party AI systems and data
- Conduct due diligence to ensure third parties meet regulatory requirements.
- Include contractual provisions for audit rights and regulatory cooperation.
- Require prompt notification of unauthorized access events.
- Define responsibilities for data protection and risk mitigation.
Responsible AI governance is key to examination preparation
Financial service providers should be prepared to produce a written AIS program and supporting documentation during examinations. However, documentation alone is not sufficient. The bulletin states that examiners may request detailed information on specific models, AI systems, and their applications. Organizations should treat their AIS program as a structured framework that connects policies, procedures, and operational practices across the enterprise.
Responsible AI principles provide a foundation for developing and deploying AI systems that are reliable, fair, transparent and accountable. These principles align with Treasury guidance and AIEOG resources released in 2026. Additional tools developed by the Financial Services Sector Coordinating Council support implementation, including guidance on terminology, NIST framework alignment, identity and authentication, explainability, data quality and fraud risk management.
“Modern AI doesn't just carry risk at the point of deployment — it generates risk continuously as it operates. That’s why runtime AI governance is essential: real-time monitoring, dynamic guardrails, and live auditability give organizations the confidence to scale AI rapidly without sacrificing control,” said Vikrant Rai, Grant Thornton Cyber & Risk Advisory Managing Director. “The reality is that policies, reviews, and approval workflows alone are not enough. Effective AI governance must embed clear accountability structures, ongoing operational monitoring and enforceable oversight principles into the fabric of how AI systems actually run.”
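The runtime monitoring, guardrails, and auditability described above can be sketched minimally: wrap each model invocation in a check and an append-only audit record. Everything here is an assumption for illustration; the blocking rule, confidence threshold, and function names are invented, not part of any product or the bulletin.

```python
import json
from datetime import datetime, timezone

# Illustrative runtime guardrail: every model invocation is checked against
# a blocking rule and recorded in an append-only audit log. The rule and the
# stand-in model below are placeholders, not a real governance API.
AUDIT_LOG = []

def blocked(output: dict) -> bool:
    # Example guardrail: refer low-confidence decisions to human review.
    return output.get("confidence", 0.0) < 0.7

def governed_invoke(model, request: dict) -> dict:
    output = model(request)
    decision = "blocked" if blocked(output) else "allowed"
    AUDIT_LOG.append(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "request": request,
        "output": output,
        "decision": decision,
    }))
    if decision == "blocked":
        return {"decision": "referred_to_human_review"}
    return output

# Stand-in model for demonstration only.
def toy_model(request):
    return {"approve": True, "confidence": 0.55}

result = governed_invoke(toy_model, {"applicant_id": "A-1"})
print(result["decision"])  # → referred_to_human_review
```

The design point is that the guardrail and the audit record live at the invocation boundary, so every decision, allowed or blocked, leaves an examinable trail regardless of which model produced it.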
How to strengthen your AI governance and compliance
Financial services organizations can take practical steps to align with emerging supervisory expectations and strengthen AI governance, risk management and compliance frameworks.
- Implement enterprise policies and procedures: Design and enhance tailored policies, standards and procedures that reflect organizational operations and clearly define roles and responsibilities.
- Design compliance reporting and monitoring frameworks: Develop frameworks and AI-enabled monitoring approaches to support performance tracking and alignment with applicable laws, regulations and internal policies.
- Assess current-state AI compliance: Evaluate existing environments to identify gaps, prioritize improvements and define next steps.
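The current-state assessment step above amounts to comparing documented program elements against the bulletin's four pillars. A minimal sketch, assuming a simple set-difference scoring approach (the pillar names come from the bulletin; the assessment logic is invented for illustration):

```python
# Hypothetical current-state gap assessment: compare documented AIS program
# elements against the bulletin's four pillars. Pillar names follow the
# bulletin; the set-based scoring is illustrative only.
REQUIRED_PILLARS = {
    "general_guidelines",
    "governance",
    "risk_management_and_internal_controls",
    "third_party_ai_systems_and_data",
}

def assess_gaps(documented_elements: set) -> set:
    """Return the pillars with no documented coverage."""
    return REQUIRED_PILLARS - documented_elements

gaps = assess_gaps({"governance", "general_guidelines"})
print(sorted(gaps))
# → ['risk_management_and_internal_controls', 'third_party_ai_systems_and_data']
```

In practice each pillar would decompose into the individual bullet-level expectations listed earlier, but the mechanic is the same: enumerate what the bulletin expects, subtract what is documented, and prioritize what remains.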
“Organizations should be establishing clear policies that define accountability for AI decision-making, building monitoring frameworks that track compliance in real time and assessing their current state to close gaps before examiners find them,” said Leslie Watson-Stracener, Grant Thornton Regulatory Compliance Solutions Partner. “The institutions doing this work now are the ones that will scale AI with confidence.”
Contacts:
Partner, Regulatory Compliance Solutions, Risk Advisory Services
Grant Thornton Advisors LLC
Leslie Watson-Stracener is a Partner and Regulatory Compliance Capability Leader at Grant Thornton Advisors LLC.
Dallas, Texas
Managing Director, Cyber & Risk Advisory
Grant Thornton Advisors LLC
Vikrant Rai is an Advisory leader in Grant Thornton’s Cyber and Risk practice, delivering IT, cybersecurity and AI risk management solutions.
Edison, New Jersey
Content disclaimer
This Grant Thornton Advisors LLC content provides information and comments on current issues and developments. It is not a comprehensive analysis of the subject matter covered. It is not, and should not be construed as, accounting, legal, tax, or professional advice provided by Grant Thornton Advisors LLC. All relevant facts and circumstances, including the pertinent authoritative literature, need to be considered to arrive at conclusions that comply with matters addressed in this content.
Grant Thornton Advisors LLC and its subsidiary entities are not licensed CPA firms.
For additional information on topics covered in this content, contact a Grant Thornton Advisors LLC professional.