LANSING, Mich. (Michigan News Source) – Michigan regulators have officially entered the discussion on artificial intelligence (AI), and the message is simple: if you’re using AI in financial services, the rules still apply. The Michigan Department of Insurance and Financial Services (DIFS) issued a bulletin this week laying out expectations for how banks, insurers, mortgage companies, and other regulated entities use AI when making decisions that affect consumers. In other words: fancy algorithms don’t get a free pass.
AI is fast, but the law is still the law.
DIFS Director Anita Fox said AI is rapidly reshaping the financial services industry – but speed and efficiency don’t outweigh consumer protections.
“Artificial Intelligence is changing the financial services industry,” Fox said, adding that companies must ensure AI systems comply with all federal and state laws while prioritizing consumer protection.
What could go wrong? Plenty.
The bulletin flags several risks tied to AI-driven decision-making, including:
- Inaccurate outcomes
- Unfair or discriminatory results
- Data security vulnerabilities
- A lack of transparency when consumers try to understand how decisions were made
- Third-party risks
DIFS makes clear that existing laws already apply to these issues – even if the decision was made by a machine instead of a human.
Expectations.
Rather than creating brand-new AI laws, the department is emphasizing that current regulations still govern how financial products and services are developed and delivered. Companies are expected to understand how their AI tools work, monitor outcomes, and ensure consumers aren’t harmed by opaque or biased systems. AI may be the new brain behind financial decisions – but Michigan regulators want to make sure it applies common sense, treats consumers fairly, and follows the law.
Under the new guidance, insurers and other financial service providers are expected to have their AI house in order – on paper and in practice. That means maintaining a written AI governance program that spells out exactly how artificial intelligence tools are selected, deployed, monitored, and corrected. Companies can’t simply plug in an algorithm and hope for the best. AI systems must be tested before use, re-tested after deployment, and continuously monitored to ensure they remain accurate, fair, and legally compliant over time. If an AI model starts producing flawed or biased results due to changing data or conditions – a problem known as “model drift” – regulators expect companies to catch it and fix it.
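For readers wondering what “catching model drift” looks like in practice, the sketch below is one illustrative way a compliance team might compare a model’s recent approval rate against its pre-deployment baseline. The function names, sample data, and the 5% tolerance are assumptions made for this example; the DIFS bulletin does not prescribe any particular method or threshold.

```python
# A minimal, hypothetical sketch of a post-deployment drift check: compare the
# approval rate a model showed during pre-deployment testing to the rate seen
# in recent production decisions, and raise an alert when the gap exceeds a
# tolerance. Names, data, and the 5% tolerance are illustrative assumptions.

from dataclasses import dataclass
from statistics import mean


@dataclass
class DriftReport:
    baseline_rate: float   # approval rate measured during pre-deployment testing
    current_rate: float    # approval rate observed in recent production decisions
    drifted: bool          # True when the gap exceeds the tolerance


def check_approval_drift(baseline: list[int], recent: list[int],
                         tolerance: float = 0.05) -> DriftReport:
    """Flag possible model drift when the recent approval rate moves more
    than `tolerance` away from the pre-deployment baseline."""
    baseline_rate = mean(baseline)   # decisions encoded as 1 = approved, 0 = denied
    current_rate = mean(recent)
    drifted = abs(current_rate - baseline_rate) > tolerance
    return DriftReport(baseline_rate, current_rate, drifted)


if __name__ == "__main__":
    # Hypothetical decisions: testing showed ~70% approvals; production has slipped.
    baseline_decisions = [1] * 70 + [0] * 30
    recent_decisions = [1] * 58 + [0] * 42
    report = check_approval_drift(baseline_decisions, recent_decisions)
    if report.drifted:
        print(f"Drift alert: approval rate moved from {report.baseline_rate:.0%} "
              f"to {report.current_rate:.0%}; review the model before it keeps "
              "making consumer decisions.")
```

In practice, companies would track far more than a single approval rate, but the idea the bulletin points to is the same: measure outcomes continuously, compare them against what was validated before launch, and act when the numbers move.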
The bulletin also makes clear that AI does not replace human judgment, nor does it shift responsibility elsewhere. Financial companies are expected to keep humans meaningfully involved in decisions that affect consumers and to be able to explain how AI-assisted outcomes were reached. Outsourcing AI to a third-party vendor doesn’t change that obligation. Even when outside companies build or supply the technology, insurers and lenders remain fully accountable for the results. If an AI-driven decision harms a consumer or violates the law, regulators won’t accept “the software did it” as a defense.
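To make the human-oversight and explainability expectations concrete, here is a small, purely hypothetical sketch of the kind of record a lender or insurer might keep for each AI-assisted decision: the model’s output, plain-language reasons, the vendor that supplied the model, and the human reviewer who signed off. The field names are invented for illustration and are not taken from the DIFS guidance.

```python
# Hypothetical record-keeping sketch: every AI-assisted decision carries the
# model's output, the reasons behind it, the vendor that supplied the model,
# and the human reviewer who made the final call. Illustrative only.

from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class AIAssistedDecision:
    consumer_id: str
    model_vendor: str          # accountability stays with the insurer/lender, not the vendor
    model_output: str          # e.g. "approve", "deny", "refer"
    reasons: list[str]         # plain-language factors a consumer could be given
    human_reviewer: str | None = None
    final_decision: str | None = None
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def sign_off(self, reviewer: str, final_decision: str) -> None:
        """Keep a human meaningfully involved: no decision is final until a
        named reviewer confirms or overrides the model's output."""
        self.human_reviewer = reviewer
        self.final_decision = final_decision


if __name__ == "__main__":
    record = AIAssistedDecision(
        consumer_id="C-1042",
        model_vendor="Example Analytics LLC",
        model_output="deny",
        reasons=["debt-to-income ratio above underwriting threshold"],
    )
    record.sign_off(reviewer="J. Smith, underwriting", final_decision="deny")
    print(record)
```

The point of a record like this is exactly what regulators describe: when a consumer asks why a decision was made, the company can answer with reasons and a named human, not with “the software did it.”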
