The Internet has clearly changed the insurance business. Customers have more choices and increased sensitivity to price, leading to price pressure. Insurers can gather competitor quotes online and analyse them to understand the entire competitive market and their position within that market. However, acquiring competitor quotes and analysing them in an efficient manner is a challenge that is new to most insurers.
Traditionally, pricing actuaries spend a major part of their time analysing historical data. A precise estimate of the cost of a policy remains the foundation for profitability, which is why risk modelling is a core task of any pricing team. During this process, actuaries produce a variety of graphs and statistics, prepare their data in different ways, evaluate interactions between variables, analyse the added value of third-party data and examine trends in the data.
Once an actuarial best estimate for the price of a policy has been derived (and loadings for expenses and margins are added), the question arises whether this is the price the company wants to quote. The answer depends on many factors, such as whether it is an existing customer or a new customer, the distribution channel through which the product is offered, the strength of the brand of the insurer, the price sensitivity of the customer and the number of competitors on the market and their prices.
For insurance companies to be able to compete, it is crucial to have an understanding of their competitors’ tariffs. Machine learning techniques are an excellent tool set for this problem.
On price comparison websites (also known as “aggregators”), all offers may be compared on a single web page. But competitor prices are not only of interest for online business but also for business written by tied agents. It is not uncommon for customers to walk into an agency with a competitor quote. Agents are better prepared for discussions with clients if they already know whether the main competitor is likely to have a lower quote for the given customer.
From the perspective of a customer, especially a price-sensitive one, aggregators provide an ideal platform. Customers can compare all competing offerings by providing their details only once. Of course, not all market players might actually be present on this platform (or at least not all product offerings), and certainly the coverages of the different offerings will not be identical. But these two aspects are either negligible to many consumers or simply unknown to them.
Many insurers analyse their competitive positions for certain customer segments (e.g., young female drivers with inexpensive cars in a certain region) and use this information when updating the tariffs of their own products. Some insurers still pursue ways of obtaining competitor prices beyond aggregators, such as web scraping, outsourcing or mystery shopping, a cumbersome and expensive market research tactic where people are sent “undercover” to retrieve consumer information. Others, however, go a step further and try to reverse engineer competitor tariffs. This means that they try to find out the pricing formula and the different loadings their competitors are using.
However, reverse engineering a competitor tariff without complete knowledge about the tariff structure is a very difficult task in general. Consider a European motor insurance tariff. Often, a multiplicative tariff structure based on a generalised linear model (GLM) is used. Thus, one approach to reverse engineer a tariff is simply to assume a certain tariff structure and to derive the relativities using the competitor data gathered. What sounds reasonably simple is not so straightforward because:
In many instances, it is virtually impossible to find the exact formula used by a competitor because there are simply too many possibilities to be evaluated and compared, even in cases where only a selection of the above issues can occur.
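To make the approach concrete, here is a minimal sketch of the relativity-fitting step under the strong simplifying assumption that the competitor really does use a multiplicative GLM with known variable groupings. All variable names, bands and values below are made up for illustration; taking logarithms turns the multiplicative tariff into a linear model that can be solved by least squares.

```python
import numpy as np

# Assumed (illustrative) multiplicative tariff:
#   premium = base * rel_age[age_band] * rel_region[region]
ages = ["young", "mid", "senior"]
true_base = 500.0
true_age = {"young": 1.4, "mid": 1.0, "senior": 1.1}
true_region = {"urban": 1.2, "rural": 1.0}

# Pretend these are competitor quotes gathered for a grid of profiles.
profiles = [(a, r) for a in ages for r in ("urban", "rural")]
premiums = [true_base * true_age[a] * true_region[r] for a, r in profiles]

# One-hot design matrix, dropping the reference levels "mid" and "rural".
def encode(a, r):
    return [1.0,
            1.0 if a == "young" else 0.0,
            1.0 if a == "senior" else 0.0,
            1.0 if r == "urban" else 0.0]

X = np.array([encode(a, r) for a, r in profiles])
y = np.log(premiums)                      # log turns products into sums
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
base, rel_young, rel_senior, rel_urban = np.exp(coef)
```

With noise-free quotes and the correct structure the relativities are recovered exactly; in practice, unknown groupings, rounding and discounts make the fit only approximate, which is precisely the difficulty described above.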
Machine learning is a computer science discipline focused on algorithms that learn patterns directly from data. Such algorithms make far fewer assumptions about the data structure than other modelling concepts and can, therefore, be applied to a variety of problems. They are also more automated because fewer input parameters are needed, which speeds up the model-building process. In the insurance sector, machine learning algorithms are used for various tasks such as detecting fraud, pricing and processing telematics data.
We examined the results of an analysis in which we used machine learning to derive an estimate for a motor insurance premium. We gathered 20,000 quotes from a website to build this model. The premium we modelled was the sum of three elements: liability, own damage (partial casco) and collision. It is very likely that each of these three premium elements has its own underlying tariff structure, as is market practice. Still, for this case study, we decided to model the three premium elements as a whole. We had no knowledge of the underlying tariff structure; for instance, we did not know which rating variables the tariff used, and therefore simply fed our model all the variables available to us. These were the same variables an insurer would use to price a policy: policyholder data (age, gender, place of residence, licence date), information on the insured car (make, model, price) and so forth. We used a combination of multiple decision tree models, a so-called ensemble, to model the premium. The algorithm is driven by the data: it uncovers patterns and follows them to model the premium as best it can. This illustrates the flexibility which machine learning algorithms can provide.
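The ensemble idea can be illustrated with a deliberately tiny sketch: bootstrap-aggregated decision stumps (single-split trees) in plain Python. A production model would use a library such as scikit-learn with deeper trees and many more features; the single `age`-like feature here is purely illustrative.

```python
import random

def fit_stump(xs, ys):
    """Find the single-feature threshold split minimising squared error."""
    best = None
    for j in range(len(xs[0])):                      # each candidate feature
        for t in sorted({x[j] for x in xs}):         # each candidate threshold
            left = [y for x, y in zip(xs, ys) if x[j] <= t]
            right = [y for x, y in zip(xs, ys) if x[j] > t]
            if not left or not right:
                continue
            ml, mr = sum(left) / len(left), sum(right) / len(right)
            err = (sum((y - ml) ** 2 for y in left)
                   + sum((y - mr) ** 2 for y in right))
            if best is None or err < best[0]:
                best = (err, j, t, ml, mr)
    return best[1:]          # (feature, threshold, left mean, right mean)

def fit_ensemble(xs, ys, n_trees=25, seed=0):
    """Bagging: fit each stump on a bootstrap resample of the quotes."""
    rng = random.Random(seed)
    n = len(xs)
    stumps = []
    for _ in range(n_trees):
        idx = [rng.randrange(n) for _ in range(n)]
        stumps.append(fit_stump([xs[i] for i in idx], [ys[i] for i in idx]))
    return stumps

def predict(stumps, x):
    """Average the individual stump predictions."""
    return sum(ml if x[j] <= t else mr for j, t, ml, mr in stumps) / len(stumps)
```

Each stump alone is a very crude model; averaging many of them, each trained on a slightly different resample of the quotes, is what gives the ensemble its accuracy and robustness.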
Figure 1 shows the absolute value of the relative error on validation data, i.e., data which was not used to build the model. We can, for instance, see that for 50% of the validation data, our estimated premium was within 3.2% of the true competitor premium. Similarly, we can see that the relative error is less than 10% for about 95% of the data. In fact, only 0.7% of the validation data has an error larger than 20%. The average absolute relative error was 4%.
Figure 1: Competitor Premium Estimation Error on Unseen Validation Data
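The statistics behind such a chart are straightforward to compute. The sketch below derives the absolute relative errors and a few summary figures from actual and predicted premiums (the nearest-rank quantile convention is one of several reasonable choices):

```python
def error_quantiles(actual, predicted):
    """Summarise absolute relative errors of premium estimates."""
    errs = sorted(abs(p - a) / a for a, p in zip(actual, predicted))
    def q(p):                       # empirical quantile, nearest-rank style
        return errs[min(int(p * len(errs)), len(errs) - 1)]
    return {"median": q(0.5),
            "p95": q(0.95),
            "mean": sum(errs) / len(errs)}
```

Applied to the validation set, `median` corresponds to the 3.2% figure quoted above and `p95` to the roughly-10% figure.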
In a market where a competitor premium can easily be 10%, 20% or even 30% lower or higher than one’s own premium (which is not at all uncommon), an estimation error of 4% seems quite good. Still, one might say that this is not accurate enough. In this case, there are two straightforward ways to improve the accuracy of the model:
1. Build three different models for the three individual premium elements.
2. Gather more data to train the algorithm.
By producing a model for each individual coverage, we provide the algorithm with a more robust target. If we feed the model with more data, it will be able to model more granular effects, which will improve the performance of the model. Also, if we have any a priori knowledge about the tariff, we can use that, too. We may, for instance, know the grouping of certain variables (maybe the region or vehicle classes) or we may even know the exact relativities for one variable (possibly from the advertisement of a discount). Such information can be used to pre-process the data which is fed into the machine learning algorithms, which will reduce the degrees of freedom and improve the estimation error.
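As a concrete example of such pre-processing, suppose the relativity for one variable is known, say a published 10% urban loading (the variable and value here are assumed for illustration). Dividing it out of the observed premium before training removes one degree of freedom, so the algorithm only has to learn the remaining, unknown part of the tariff:

```python
# Assumed prior knowledge, purely illustrative: a published 10% urban loading.
known_region_rel = {"urban": 1.10, "rural": 1.00}

def remove_known_factor(quotes):
    """quotes: list of (region, premium) pairs.
    Divide out the known relativity so the model trains on the residual
    premium; the factor is multiplied back in at prediction time."""
    return [premium / known_region_rel[region] for region, premium in quotes]
```

The same trick applies to known groupings: mapping raw values onto the competitor's known bands before training reduces the number of distinct levels the algorithm must distinguish.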
The flexibility of machine learning algorithms also enables us to model, for instance, the premium of the third-lowest offering. Of course, one would need the appropriate data; in this case, that would be the quote which ranks third for a sizeable sample. We note that, compared with modelling the premium of a single competitor, it will be more difficult to model the premium of the third rank. Such a model effectively depends on the tariffs of all market players, as different competitors with different tariffs will rank third depending on the profile quoted. But the results of such a model can provide valuable insight. It would allow an insurer to adjust its own tariff (probably only for certain customer segments) such that it ranks third or better. Similarly, one could target a position within the top 10 for each policy.
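Preparing the training target for such a rank model is simple in principle: for each quoted profile, collect all competitor premiums returned by the aggregator and keep the k-th lowest. A minimal sketch (the function name is our own):

```python
def rank_k_premium(quotes, k=3):
    """quotes: all competitor premiums returned for one customer profile.
    Returns the k-th lowest quote, the training target for a 'rank k' model."""
    return sorted(quotes)[k - 1]
```

The modelling step itself is then unchanged: the same ensemble techniques are trained against this derived target instead of a single competitor's premium.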
Machine learning techniques provide a flexible tool set to derive accurate estimates of competitor premiums without any knowledge about the underlying tariff structure. The machine learning approach we developed as part of our research is faster and much less expensive than exhaustive web scraping or mystery shopping. It has enabled insurance executives to make better informed decisions about not only tariff changes, but also marketing campaigns and commercial discounts for certain customer segments. The impact of a tariff change on profitability and business volume can certainly be much better assessed in the presence of competitor premiums. In an ideal scenario, a company has an estimate of the competitor premiums at the point of sale. This allows adjusting one’s own quote to increase either the probability of conversion (by lowering the quote) or the profitability (by increasing the quote).