A little while back we published an article about the circular letter issued in January to insurers by the New York State Department of Financial Services concerning the use of external consumer data and information sources in life insurance underwriting. On the face of it, the circular constitutes a stark warning for insurers and places severe restrictions on how they use data for risk assessment and pricing purposes. And in a way it is and it does, but its stipulations are not exactly unreasonable. Indeed, by and large it makes good sense.

Indirect use of outlawed discrimination factors is inexcusable and needs to be prevented through due diligence by the insurer and, indeed, by the firm supplying the data. Any decent carrier would not want to cross those red lines anyway – even if the data company involved is not itself subject to regulatory oversight and/or consumer protection laws. Moral: act with integrity and choose your business partners carefully.

The circular provides that carriers should be able to justify their underwriting decisions to the consumer. This is a fair point, although perhaps primarily in the interests of transparency and good customer relations. The other side of the coin is that robust data may support the underwriting philosophy while the underlying mechanism is poorly understood – data analysis can throw up some puzzling surprises. Suppose you review the data repeatedly in an endeavour to discover what is going on, and you fail? Does that mean the data is invalid and should be ignored?

Health risks can create the same situation. In 2012 researchers at Harvard, having followed 120,000 people over the course of more than 20 years, calculated that eating an extra daily portion of unprocessed red meat was associated with an overall 13% increase in the risk of death (and the figure for processed meat was even higher). Can one say there is cause and effect here? After all, the researchers statistically controlled for a wide range of other potential risk factors, such as alcohol consumption, calorie intake, activity levels, and family history of cancer. In reality, all that can be said is that there appears to be a strong correlation between eating this kind of meat and extra mortality due to cancer and cardiovascular disease. If an insurer wished to underwrite using red meat in the diet as a risk factor, would that be OK by the legislators?

This brings to mind the fact that, historically, US underwriters have used solid driving record data in risk assessment. Is it good enough to accept what the data says at face value, or do insurers have a moral duty to look further and try to stratify risk more accurately? And if they do delve deeper into the data, can they really be sure when their moral duty has been discharged – that is, when it is OK to stop?

One is reminded of the European Union (EU) Court of Justice ruling, which took effect in 2012, that charging men and women different insurance premiums purely on the grounds of sex is incompatible with the principle of equal treatment in the access to goods and services, notwithstanding the abundant data showing that, on average, women outlive men. Insurers are obliged to look for the risk factors that account for the difference if they want to avoid pricing male and female risks the same. Of course, the legislators are forcing insurers to underwrite the individual and not the group – which is fair enough – but whether insurers were truly wanting in this regard is questionable.

Actually the EU gender issue is perhaps an ‘inverse parallel’, in that there the force is towards individual underwriting, whereas the New York State directive is as much about avoiding generalisation – ‘lazy underwriting’ if you will – as about unfair discrimination. But does the use of predictive modelling based on external data amount to laziness? Not necessarily. If the external data is layered on other risk information, the end result could be a rather sophisticated underwriting approach.

And anyway, in a given market there is always scope for more than one product proposition. ‘Accelerated underwriting’ has the major virtues of simplicity, speed and convenience. That is appealing to many – even if they are unsure how their premium has been calculated. It is not as though there is no alternative.

Arguably it would be a pity if well-intentioned but unnecessarily tough legislation prohibited the mass marketing of useful products to target groups or, more particularly, personalised offers to individuals of a product proposition ‘especially designed and priced just for you’. With the right data, such precise targeting and the accurate pricing to go with it are quite feasible. As always, the consumer has the ability to decline the offer, but for some people it could be an appealing way to buy insurance.

In its circular the New York State Department of Financial Services threw down a challenge to ‘selfie underwriting’. One can’t help being intrigued by the concept, but can it really work satisfactorily? Some commentators are vehemently against it, but then the firms that have created the algorithms appear to have compelling data correlating facial appearance with age and mortality. Again, is that a fair basis for risk stratification, or do insurers have a duty to look for the factors driving the correlation? Recently Gen Re announced that it is piloting selfie underwriting, in conjunction with Lapetus Solutions, in various European and Asian countries. Is this mainly a concept for emerging markets with relaxed legislation, or can it and should it be allowed to succeed in sophisticated markets like the US? It will be interesting to see how things play out.

It is useful to reflect that consumer credit databases are used to determine eligibility for loans, credit cards and so on, as well as the interest rates charged, and there appears to be little disquiet over that. But could the criticism of ‘lazy’ risk stratification be levelled at that part of the financial world? It would be wrong for life insurers to be subject to tougher rules than other financial institutions. And it will be interesting to see whether other states follow New York’s lead or take a more lenient view.

But maybe the tide is turning against the data holders, data analysts and data users. California has just passed a tough new data privacy law, coming into force in 2020, that gives consumers the right to know what information companies are collecting about them, why they are collecting it and whom they are sharing it with. It also gives consumers the right to tell companies to delete their information, as well as not to sell or share their data. Vermont passed a law regulating data brokers in May 2018, and it went into effect in January 2019. Other states are tightening up too.

This is a hot topic in the US – and elsewhere – and it looks like it could get hotter still.