What is a catastrophe model?
Climate-related catastrophes are increasing in number and severity. While efforts to mitigate the impact of climate change continue, many of its effects are already inevitable. Society must adapt to address the financial and human costs of these events. This requires improving our ability to model and predict catastrophe risk so we can make the right decisions about insurance, investments, emergency preparedness, regulation, and land use.
One tool that can be used to understand this risk is a catastrophe model. With the availability of modern computing, catastrophe models were developed for the insurance industry to provide a way of measuring and managing low-frequency, high-severity risks.
Catastrophe models use significant computing power to analyze many potential scenarios in a specific geography to estimate risk and potential loss. Fully probabilistic catastrophe models simulate thousands of stochastic events, often simulating thousands of possible years.
They incorporate scientific understanding about risk drivers as well as detailed information about exposures. For example, in a hurricane catastrophe model, the home location, construction material, roof type, and other factors can all be included to make the results more accurate.
Each simulated event produced by a model is translated into an effect on the modeled exposures, usually in the form of damage. Calculated damage is then interpreted as financial estimates, integrating insurance policy terms such as insured limits and deductibles. By evaluating insurance losses over thousands of events over thousands of simulated years, catastrophe models can calculate both average annual loss (AAL) and exceedance probabilities1 for a given property and exposure.2
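As a rough sketch of this calculation, the snippet below computes an AAL and an aggregate exceedance probability from simulated annual losses for a single property. The loss frequency and severity parameters are invented purely for illustration, not drawn from any actual model:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical simulated annual losses for one property over 10,000
# simulated years: most years produce no loss, a few produce severe losses.
n_years = 10_000
loss_occurs = rng.random(n_years) < 0.02              # ~2% of years see a loss
severity = rng.lognormal(mean=11, sigma=1, size=n_years)
annual_loss = np.where(loss_occurs, severity, 0.0)

# Average annual loss (AAL): the mean loss across all simulated years.
aal = annual_loss.mean()

# Aggregate exceedance probability: the fraction of simulated years in
# which total losses exceed a chosen threshold.
threshold = 100_000
ep = (annual_loss > threshold).mean()
```

In practice the per-year losses would come from the model's event set after applying policy terms, but the summary statistics are computed the same way.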
One of the first instances where catastrophe models provided significant value was in predicting hurricane-related losses. In 1992, Hurricane Andrew caused $30 billion in losses and drove 11 insurers into insolvency. In 2005, Hurricane Katrina caused $87 billion in losses, yet no insurers faced insolvency.3,4,5 Between Andrew and Katrina, insurers adopted catastrophe models to better understand their hurricane risk, which allowed them to maintain more adequate pricing and reinsurance and survive Katrina.
Catastrophe models have been used across the insurance industry for rate-making, buying reinsurance, managing exposures, and meeting rating agency standards. Now they are also being used by other stakeholders for new purposes, including loss mitigation, climate change planning, real estate evaluation, and addressing other risks. As their use becomes more widespread, it is important for decision makers to understand how a catastrophe model can be used and how to evaluate one effectively.
Understanding the catastrophe model lifecycle
Broadly speaking, methods of evaluating property insurance risk can be divided into historical models and catastrophe models.
Historical models are useful for predicting losses caused by perils with relatively high frequency, such as automobile accidents. Auto insurers have sufficient historical accident data to be able to understand the full range of possible events and to estimate future losses. Even when the frequency or severity of losses is changing over time (such as with inflation or improvements in vehicle safety features), these trends are generally discernible in the historical data. In other words, historical models work well in cases where there is sufficient historical data and where past occurrences are good predictors of future results.
Certain catastrophic perils such as hurricanes, wildfires, floods, or severe convective storms do not fit this description. Extreme weather events are infrequent, which means the limited historical record does not represent the full range of possible events. This hurdle would exist even in the absence of climate change. However, climate change makes historical data even less predictive of future results, as certain perils may be getting more frequent or more severe.
Catastrophe models are useful because they simulate a wide range of potential scenarios and their likelihoods, including those that have never been observed in the history of a given peril. This is important because if a peril is getting more severe over time, then historical data will always be biased toward underestimating the range of possibilities.
However, not all catastrophe models are equally mature, and different approaches may produce different results. For some perils, such as hurricane, catastrophe models have been developed and evaluated over several decades. In this case, mature models will tend to vary less than immature models because the inputs and approaches have been validated over time. Two different mature models will typically produce results in aggregate that are in the same ballpark, although results for individual risks may still vary widely. Hurricane catastrophe models are widely used and trusted in the insurance market.
On the other hand, some perils such as flood and wildfire have not been the focus of catastrophe modeling for as long. The models are less mature and may show greater variability in results from one model to another. Figure 1 shows a possible range of flood AALs for a beach house. While they are not actual AALs from catastrophe models, they are realistic and illustrative of actual catastrophe model outputs. For individual exposures, differences of this scale are not uncommon for flood catastrophe models.
Figure 1: AAL for beach house
Note: For illustrative purposes only.
Even so, it’s important to note that less mature models are not necessarily “wrong,” even if they disagree. They each reflect an estimate of risk based on various inputs, sensitivities, and calculations. Using them effectively may require more evaluation and comparison, a deeper analysis of how they are arriving at results, and, in some cases, an adjustment to fit the circumstances. New models will be tested and improved over time, and their results are likely to converge as they become more accurate in varying scenarios.
Techniques for evaluating catastrophe models
Especially when working with less mature models, comparing different options and analyzing the results can improve the likelihood of finding a model or combination of models that predicts risk appropriately for your specific needs. Understanding why models vary and their respective strengths and weaknesses enables you to choose the best tools for the situation. The level of vetting needed depends on the intended usage of the model. New users need to pay particular attention because a model built for one purpose may not yield reasonable results if it is used for another purpose.
An important consideration for evaluating catastrophe models is what exposure data to use. It should be realistic with a sufficient volume to be comprehensive. If actual exposure data is unavailable, users may turn to a representative market basket6 of risks or use a grid-based7 approach.
Some effective strategies for evaluating models include reasonability, mapping, sensitivity, and outlier analysis.
The reasonability approach simply involves asking: do the results make sense? Does the model produce low estimates in known high-risk areas, or vice versa? For example, a storm surge model in which damage estimates increase the farther a house is from the coast warrants further investigation. There may be a good reason for the result, or the model might not be appropriate.
Zero-dollar results should also draw attention. Depending on the peril, a zero-dollar loss may be completely reasonable. For example, a home in downtown Los Angeles is extremely unlikely to be damaged by a wildfire. In other cases, zero risk is unrealistic—like a coastal home having a zero-dollar expected flood loss. If a portfolio shows many zero-dollar results, it may indicate a systemic underestimate of risk.
Models should incorporate all important drivers of risk for a particular situation. That is to say, wildfire models should consider the type of construction and flood models should consider the presence of a basement. Determine which drivers of risk are relevant to your use case and review the model for these components.
In addition, models can be examined for appropriate sub-perils. If they do not include all consequences of the given exposure, such as smoke damage for wildfire, then that exposure may need to be measured via other means.
Model results can be plotted on a map, which is often the simplest way to visualize estimates and assess their reasonability. This way, you can see how estimates change spatially over a specific area and spot both consistencies and irregularities. Evaluating models using the mapping technique works best when comparing two or more models.
Figure 2: Flood AAL in Carteret County, North Carolina
Note: For illustrative purposes only.
With data from multiple models plotted out, differences at certain locations can help you assess the reasonability of each. Figure 2 shows the actual AALs from four different flood models plotted for single family homes in Carteret County, North Carolina. Homes are represented by the colored dots, with AAL scaling in color from low (blue) to high (red). While the four models show good agreement on where the flood risk is in general, there are differences worth understanding. For example, Models A and B appear to show significantly higher flood risk on the southern barrier island than Models C and D.
Loss estimates that change sharply at a specific point in space, unrelated to the nature of the risk, are known as discontinuities. For example, if AALs are large on one side of a county border and small on the other side, this difference may or may not be reasonable. A reasonable explanation for such a difference in a wind catastrophe model could be that one county enforces significantly stricter building codes, which might justify a lower wind loss estimate. However, if no reasonable explanations exist, this discontinuity could be an artifact of the modeling approach.
Another way to evaluate catastrophe models is by applying a sensitivity analysis. This is done by changing a given input in a model and examining the change in the output.
In a storm surge scenario, two models may both produce smaller losses farther from the coast, but one model's losses may decrease faster than the other's. Both models may be reasonable, but one is more sensitive to distance to coast. Neither is necessarily more correct, but appreciating the difference will aid your understanding of the models.
Figure 3 illustrates how these example model results might look. It also includes a third, possibly unreasonable model that produces storm surge AALs that increase as you move away from the coast.
Figure 3: Storm surge AAL by distance to coast
Note: For illustrative purposes only.
Models A and B both look reasonable, with AAL decreasing farther from the coast. Model A appears more sensitive to this variable than Model B. Model C's results run counter to expectations and may be unreasonable.
Determine how relevant factors should relate to a model's output. Then consider whether the model's results behave as expected and whether the level of sensitivity is warranted.
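A minimal directional sensitivity check can be sketched as follows. The AAL curves are made up in the spirit of Figure 3 (they are not outputs of any actual model), and the check simply tests whether each model's AAL falls as distance from the coast grows:

```python
# Hypothetical storm surge AALs reported by three models for the same
# house placed at increasing distances from the coast.
distances = [0.1, 0.5, 1.0, 2.0, 5.0]                # miles from coast
model_a = [5000.0, 1500.0, 400.0, 80.0, 5.0]         # steep decline
model_b = [4000.0, 2500.0, 1400.0, 700.0, 200.0]     # gentler decline
model_c = [300.0, 500.0, 900.0, 1200.0, 2000.0]      # increases: suspect

def is_decreasing(aals):
    """True if AAL strictly falls as distance from the coast grows."""
    return all(a > b for a, b in zip(aals, aals[1:]))

# Models A and B pass the directional check; Model C warrants scrutiny.
checks = {name: is_decreasing(vals)
          for name, vals in [("A", model_a), ("B", model_b), ("C", model_c)]}
```

A fuller analysis would also compare the rate of decline between models, not just the direction.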
An outlier analysis can be performed when three or more models are compared. Determine outliers by defining a rule. For example, the difference between Model A and the maximum of the other models in Figure 3 could be determined for each exposure. Positive values represent cases when Model A has the highest estimate, and large positive values may signify unreasonably large estimates.
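This rule can be sketched in a few lines. The AALs below are invented for illustration; the logic computes, per exposure, the difference between Model A and the maximum of the other models:

```python
import numpy as np

# Hypothetical AALs (in dollars) from four models for five exposures.
aals = {
    "A": np.array([120.0, 800.0, 45.0, 2600.0, 310.0]),
    "B": np.array([100.0, 750.0, 50.0,  900.0, 290.0]),
    "C": np.array([ 95.0, 700.0, 40.0,  850.0, 330.0]),
    "D": np.array([110.0, 720.0, 55.0,  910.0, 300.0]),
}

# Outlier rule: Model A minus the maximum of the other models, per exposure.
# Large positive values flag exposures where Model A may be unreasonably high.
others_max = np.max([aals[m] for m in ("B", "C", "D")], axis=0)
diff = aals["A"] - others_max
outliers = np.where(diff > 1000)[0]   # the threshold here is an arbitrary choice
```

The threshold defining "large" is a judgment call that should reflect the scale of the exposures being compared.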
Mapping this quantity, and others defined similarly, helps you understand where the models hold different views of risk. Look for patterns indicating how the different models behave.
Making models better with blending
Employing a single catastrophe model might not always satisfy the needs of a particular situation. In such cases, blending multiple models can create a more reliable estimate of risk. Blending allows insurers to consider multiple views of risk produced by models with different strategies and strengths. While this technique can result in a more encompassing assessment of risk, it may still be limited if one model produces outliers that noticeably skew the averages.
Blending is especially useful when more than one model appears acceptable. In this case, weighting the models together may be more reasonable than selecting one model over the other. Different weights can be used for different situations. While blending can be challenging for some uses such as exceedance probabilities, determining AALs with this strategy can be quite simple. For example, your final result for each exposure could be the simple average of the AALs from each of the models.
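An AAL blend can be as simple as the average described above, or a weighted combination when one model is judged more credible. The per-exposure AALs and the 70/30 weights below are illustrative assumptions, not prescribed values:

```python
# Hypothetical per-exposure AALs from two acceptable flood models.
model_a = [150.0, 2300.0, 80.0]
model_b = [210.0, 1900.0, 95.0]

# Simple blend: equal-weight average of the two models' AALs.
equal_blend = [(a + b) / 2 for a, b in zip(model_a, model_b)]

# Weighted blend, e.g. 70/30 if Model A is judged more credible for
# this use case (weights would be set by the evaluation techniques above).
w_a, w_b = 0.7, 0.3
weighted_blend = [w_a * a + w_b * b for a, b in zip(model_a, model_b)]
```

Blending exceedance probability curves is harder because the models' simulated years do not line up, which is why the text notes that AAL blending is the simpler application.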
Into the future with open eyes
In a world with a changing climate, catastrophe models, with their ability to simulate thousands or millions of events, provide a powerful tool for predicting risk and are being adopted for a wider range of perils. The models are more mature for some perils, less so for others. The less mature models can be expected to demonstrate greater variety in the results, making it essential to understand and compare them before applying them in a commercial context such as rate-making.
The volume and richness of data that catastrophe models can simulate is a key advantage: it makes them flexible tools for investigating drivers of risk at a deep level, even when historical experience is scarce or difficult to analyze. As catastrophe models become more widely adopted, consumers, businesses, and policy makers will have better tools for making critical decisions about risks that affect all of us.
1 Exceedance probability is the probability that losses will exceed a given threshold during a set period of time, usually a year in property insurance applications. This can be calculated on an occurrence or aggregate basis. Exceedance probability is useful for insurers to understand their tail risk, the risk due to relatively unlikely events.
2 Some catastrophe models stop short of producing AALs and exceedance probabilities, and instead produce a different measure of risk, such as a hazard score. This article is directed at fully probabilistic models that do develop AALs.
3 Both figures are stated in 2020 U.S. dollars.
4 Insurance Information Institute (December 13, 2021). Spotlight on: Catastrophes – Insurance Issues. Retrieved June 15, 2022, from https://www.iii.org/article/spotlight-on-catastrophes-insurance-issues.
5 Churney, B. & Ma, N. (August 23, 2012). Twenty Years After Andrew – How Far Have We Come? Verisk. Retrieved June 15, 2022, from https://www.air-worldwide.com/publications/air-currents/2012/twenty-years-after-andrew-how-far-have-we-come/.
6 A market basket is a sample of exposures, usually blending actual and representative risk characteristics. For example, a single family home market basket may be based on actual home footprint data, allowing the location and some other characteristics such as year built to be known. Other unknown characteristics are then assigned to each risk in a systematic way, so that, in aggregate, the market basket resembles reality.
7 A grid-based approach involves placing risks, usually with base risk characteristics, in a grid pattern. The risks do not represent the locations or characteristics of actual market risks, but they can offer insight into the geographic dimensions of a catastrophe model.