Model bias rules target insurance practices

 

Colorado requirements may be first of many

 

As regulatory focus on consumer protection continues to intensify, insurance providers are under growing pressure to guard against unfair discrimination and algorithmic bias.

 

New legislation in Colorado offers a preview of the compliance requirements insurers doing business there may face, and other states are expected to follow suit with similar rules designed to promote fairness in insurance rates and coverage decisions.

 

Colorado Senate Bill (SB) 21-169 took effect in July 2021. The law was passed in an effort to hold insurers accountable for “using any external consumer data and information source (ECDIS), algorithm, or predictive model (external data source) with regard to any insurance practice that unfairly discriminates against an individual based on an individual's race, color, national or ethnic origin, religion, sex, sexual orientation, disability, gender identity, or gender expression”. As lawmakers have sharpened their focus on consumer protection in insurance practices, pressure has mounted on insurance carriers to advance their testing of big data systems, including predictive models.

 

Stemming from the 2021 law, on Feb. 1, 2023, the Colorado Department of Insurance (DOI) issued a proposed draft regulation, 3 CCR 702-4, on the Governance and Risk Management Framework Requirements for Life Insurance Carriers’ Use of External Consumer Data and Information Sources, Algorithms, and Predictive Models.

 

This draft regulation establishes requirements for the internal governance and risk management framework that life insurers must implement to safeguard their use of external consumer data sources in modeling and to prevent discriminatory insurance practices. Examples given in 3 CCR 702-4 of external consumer data sources that life insurers may use to supplement traditional underwriting factors include: credit scores; social media habits; purchasing habits; home ownership; educational attainment; licensures; civil judgments; court records; occupations that do not have a direct relationship to mortality, morbidity, or longevity risk; and any insurance risk scores derived by the insurer or a third party. The regulation affects life insurance providers, but the trend toward this type of regulation has the potential to affect the insurance industry as a whole, as regulators are paying heightened attention to companies’ internal governance and risk management for monitoring and testing algorithms and predictive models so that their use of ECDIS does not result in unfair discrimination.

 

The regulation applies to all life insurers authorized to do business in Colorado; consequently, these insurers are to be held accountable at the board level for their use of ECDIS, predictive models, and algorithms.

 

 

 

Downstream effects

 

While this regulation applies to life insurance, careful consideration should be given to the effects of similar regulation on accident, health, and property and casualty insurance. Although the regulation in question applies only in Colorado, other states are likely to follow suit in the coming months and years as consumer protection rises to the forefront of the industry.

 

 

 

Model Risk Management across industries

 

While the 3 CCR 702-4 rule is groundbreaking for the life insurance industry, the concepts pertaining specifically to Model Risk Management are nothing new to the financial services industry. The Federal Reserve and the Office of the Comptroller of the Currency (OCC) laid the foundation for Model Risk Management framework requirements in the banking industry with the joint release of the Supervisory Guidance on Model Risk Management (SR 11-7 / OCC 2011-12) on April 4, 2011. Over the past 12 years, SR 11-7 has evolved into the gold standard for Model Risk Management beyond banking. The requirements it puts forth are often leveraged as criteria for success by a broader range of institutions, including insurance companies, finance companies, fintech companies, money service businesses, broker-dealers, and more.

 

As such, it is no surprise that 3 CCR 702-4 incorporates many of the key elements of SR 11-7, such as the requirement to maintain a model inventory and to periodically validate models. However, it is important to note the more nuanced requirements that 3 CCR 702-4 includes specific to the life insurance industry:

  • An increased focus on preventing unfair discrimination against protected classes under the rule, including newly protected classes such as gender identity, gender expression, disability status and sexual orientation.
  • Heightened concerns around the source of data used in modeling.
  • Development and implementation of training programs for key personnel to address concerns around unfair discrimination.
  • Processes and procedures to enhance transparency for consumers around the use of ECDIS, algorithms and predictive models.
  • Promoting clarity for the options available to consumers in the event of adverse decisions.
  • Action plans to respond to unintentional unfair discrimination outcomes.
  • Documentation of how consumers may be adversely affected by the use of ECDIS, algorithms, and predictive models.
  • Documentation surrounding key decisions on the use of ECDIS, algorithms, and predictive models to verify that rationales are sound and accountability is clear.

 

 

 

Effects on AI and machine learning modeling

 

The growing use of artificial intelligence and machine learning (AI/ML) across the business landscape has escalated the scrutiny of decisions derived from these types of models. In particular, there are elevated concerns about the conceptual soundness of these models because their black-box nature prevents users from truly understanding the underlying logic that transforms model inputs into model outputs.

 

Because ECDIS, algorithms, and predictive models often take the form of AI/ML models, the life insurance industry can look to the banking industry and leverage updated guidance from the OCC on how to better substantiate the decisions driven by these models. In 2021, the OCC added a Model Risk Management booklet to its Comptroller’s Handbook, which builds on SR 11-7 by providing more prescriptive details on the requirements for Model Risk Management. The handbook highlights the following actions for building support around AI/ML model outcomes:

  • Evaluate the model’s complexity, how it is used, and its degree of risk to gauge an appropriate level of reliance on the model for its intended use.
  • Establish a culture of effective challenge to the model’s inputs and outputs.
  • Thoroughly document how the model was built and how it works, including support for the relevance and quality of the development data.
  • Conduct ongoing model testing (a minimal monitoring sketch follows this list).
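
As an illustration of what ongoing model testing can look like in practice, the sketch below compares a model’s score distribution at development time against recent production scores using a population stability index (PSI). This is a minimal sketch under assumptions of our own: the metric choice, the 0.10 review threshold, and all function and variable names are illustrative and are not prescribed by the OCC handbook or the Colorado regulation.

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare two score distributions; a larger PSI signals more drift.

    `expected` is typically the development or validation sample and
    `actual` the recent production sample. Bin edges are taken from the
    expected distribution so both samples are bucketed consistently.
    """
    edges = np.percentile(expected, np.linspace(0, 100, bins + 1))
    exp_pct = np.histogram(np.clip(expected, edges[0], edges[-1]), bins=edges)[0] / len(expected)
    act_pct = np.histogram(np.clip(actual, edges[0], edges[-1]), bins=edges)[0] / len(actual)
    # Guard against empty buckets before taking logs
    exp_pct = np.clip(exp_pct, 1e-6, None)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

# Hypothetical scores: the production distribution has drifted slightly
rng = np.random.default_rng(0)
dev_scores = rng.normal(600, 50, 10_000)    # development-sample scores
prod_scores = rng.normal(585, 55, 10_000)   # recent production scores

psi = population_stability_index(dev_scores, prod_scores)
print(f"PSI = {psi:.3f}", "-> investigate" if psi > 0.10 else "-> stable")
```

In a governance program, a check like this would run on a schedule, with results reported to model owners and escalation paths defined for models that breach agreed thresholds.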

 

 

 

Proper governance and documentation

 

Proper governance at life insurance companies is key to enabling visibility into the underwriting processes that occur behind the scenes, and documentation of those processes is just as critical. With the introduction of 3 CCR 702-4, a growing theme in the insurance industry is that frameworks from traditional banking are bleeding into the models used for insurance underwriting. So, what can life insurance companies do now that the use of AI/ML is on the rise?

  • Board oversight
    • Educating the board and senior leadership on these issues and potential risks through proper monitoring and reporting processes is essential for mitigating risks and complying with new regulations.
  • Principles, policies and training
    • Verify that current processes involving ECDIS are documented and that any necessary training is developed. Communicate to your company, colleagues and subsidiaries that such laws are being discussed and are not far from reality.
  • AI security response plan
    • Monitor data outputs and develop incident response plans for events in which relying on AI would not be prudent due to data availability, ad hoc processing or other factors that may hinder its effective use.
  • Model inventory
    • Regularly update and maintain a complete population of ECDIS, AI models, statistical models and rule-based models. Continuously track changes to the models throughout their lifecycle, along with any testing completed. Review model results to verify that all output data appropriately reflects insurance industry standards.

Governance over the customer data used as model inputs, to verify that customers are receiving the appropriate coverage, is a top priority for life insurance companies. Establish a proper governance footprint over the use of models that determine customer coverage. With the enactment of SB 21-169, it is time to start thinking about this state law becoming an industry-wide standard.

 

 

 

Steps for getting started

 

1) Examine current models

 

Analyzing existing models and the tools used to create them is a key first step toward compliance with the draft regulation. Insurers can begin by reviewing which of the models currently in operation fall under the purview of ECDIS and predictive algorithms. This includes developing a model inventory to establish the complete population of ECDIS tools and predictive models. Additionally, discussions about classifying models as in-scope or out-of-scope ECDIS should include representatives from key functional areas, including legal, compliance, risk management, product development, underwriting, actuarial, data science, marketing and customer service.
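
A model inventory does not need to be elaborate to be useful; a structured record per model or data source capturing ownership, purpose, ECDIS reliance, scope classification and validation status is enough to support the review. The sketch below is a minimal, hypothetical schema; the field names, classifications and review cadence are assumptions rather than requirements taken from the draft regulation.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class InventoryEntry:
    """One row in a model / ECDIS inventory (illustrative fields only)."""
    name: str
    owner: str                                # accountable business owner
    purpose: str                              # e.g., underwriting, pricing, marketing
    uses_ecdis: bool                          # relies on external consumer data?
    ecdis_sources: list[str] = field(default_factory=list)
    in_scope: bool = False                    # in scope for unfair-discrimination review
    last_validated: date | None = None

inventory = [
    InventoryEntry(
        name="accelerated-underwriting-score",
        owner="Underwriting Analytics",
        purpose="underwriting",
        uses_ecdis=True,
        ecdis_sources=["credit score", "public records"],
        in_scope=True,
        last_validated=date(2022, 6, 30),
    ),
    InventoryEntry(
        name="mortality-table-lookup",
        owner="Actuarial",
        purpose="pricing",
        uses_ecdis=False,
    ),
]

# Simple reporting: which in-scope entries are overdue for validation
# (hypothetical one-year review cycle)?
for entry in inventory:
    overdue = entry.last_validated is None or (date.today() - entry.last_validated).days > 365
    if entry.in_scope and overdue:
        print(f"Review needed: {entry.name} (owner: {entry.owner})")
```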

 

2) Develop an ongoing testing plan

 

Life insurers can look to the banking industry for a framework for detecting unfair discrimination during the underwriting process. Banks conduct disparate impact testing as part of maintaining compliance with fair lending laws and regulations. Disparate impact testing includes measuring the impact of protected class variables on underwriting decisions. The framework laid out by the OCC in the Comptroller’s Handbook on Fair Lending can serve as a starting point for detecting unfair discrimination in underwriting processes.
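
To make this concrete, the sketch below computes an adverse impact ratio, one common disparate impact statistic: each group’s approval rate divided by the approval rate of a reference group. The column names, the reference group, the illustrative data and the 0.80 flag threshold (the familiar four-fifths rule of thumb) are all assumptions for illustration; an insurer’s actual testing framework would be defined with legal, compliance and actuarial input.

```python
import pandas as pd

def adverse_impact_ratios(df, group_col, outcome_col, reference_group):
    """Approval rate of each group relative to the reference group.

    `outcome_col` is assumed to be 1 for an approval and 0 for a denial.
    Ratios well below 1.0 (commonly below ~0.80) warrant further review.
    """
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates / rates[reference_group]

# Hypothetical underwriting decisions (illustrative data only)
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B", "B"],
    "approved": [1,   1,   1,   0,   1,   0,   1,   0,   0],
})

ratios = adverse_impact_ratios(decisions, "group", "approved", reference_group="A")
flagged = ratios[ratios < 0.80]
print(ratios)
print("Flagged for review:", list(flagged.index))
```

A flagged ratio is a signal for investigation, not a conclusion; differences can reflect legitimate, actuarially justified factors, which is why statistical testing is paired with documentation of the rationale behind each underwriting variable.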

 

3) Use external tools

 

While insurance companies collect data on some protected class categories, there may be constraints on collecting all of these attributes. As such, proxy methodologies may be required to derive these attributes in order to conduct ongoing tests for unfair discrimination. Insurers can utilize the Bayesian Improved Surname Geocoding (BISG) methodology, for which the CFPB has published open-source code; BISG uses geography- and surname-based information to predict race and ethnicity. Moreover, the Federal Reserve has developed an open-source mapping tool to predict gender based on an individual’s first name.
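
At its core, BISG is a Bayesian update: a race/ethnicity distribution implied by a surname is combined with the composition of the individual’s geography and renormalized. The toy probability tables below are invented for illustration, and the update is simplified (the CFPB’s published methodology uses P(geography | race) as the likelihood, with probabilities drawn from Census surname and block-group data).

```python
# Minimal BISG-style update with made-up probability tables.
CATEGORIES = ["white", "black", "hispanic", "asian", "other"]

# P(race | surname): illustrative priors only
SURNAME_PRIORS = {
    "garcia": [0.05, 0.01, 0.90, 0.01, 0.03],
    "smith":  [0.70, 0.23, 0.02, 0.01, 0.04],
}

# Race/ethnicity composition of each geography: illustrative only.
# Simplification: a full BISG update uses P(geography | race) as the
# likelihood; here the local composition stands in for it.
GEO_COMPOSITION = {
    "blockgroup_1": [0.60, 0.10, 0.20, 0.05, 0.05],
    "blockgroup_2": [0.20, 0.05, 0.65, 0.05, 0.05],
}

def bisg_posterior(surname, blockgroup):
    """Combine the surname prior with the geography composition and renormalize."""
    prior = SURNAME_PRIORS[surname.lower()]
    geo = GEO_COMPOSITION[blockgroup]
    joint = [p * g for p, g in zip(prior, geo)]
    total = sum(joint)
    return {cat: round(j / total, 3) for cat, j in zip(CATEGORIES, joint)}

print(bisg_posterior("Garcia", "blockgroup_1"))
# The surname prior still dominates, but it is tempered by the
# predominantly non-Hispanic composition of blockgroup_1.
```

Derived proxies like these carry their own uncertainty, so results are typically reported as probabilities and used in aggregate testing rather than in any individual underwriting decision.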

 
