Evaluation of risk through catastrophe models has improved significantly since the early 1980s. Models have become a critical tool, and have no doubt prevented many re/insurers from tipping into insolvency. However, the advance of technology and science has outpaced the ability of many cedants to provide adequate data. A report from Ed's Mr Marcus Taylor.
Modelling firms are increasingly driven by market forces to update existing models and develop new ones for developing regions but, in some cases, these markets have been unable to respond at an equal pace. Without quality inputs, model outputs are of limited use.
Similarly, the benefits and limitations of modelling are not always well understood. Brokers are now expected to deploy catastrophe modellers and analytics teams, but a significant knowledge gap remains.
The uncertainties contained within models, and the impacts of data quality and model assumptions, are often not discussed widely with cedants. Outputs are generally summarised by the time they reach board level, so only a handful of decision-makers and senior executives truly understand, for example, the difference between ALM and DLM, how the PML is derived, or the impact of a small assumption change on the overall result at a one-in-250-year return period.
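By way of illustration, a PML at a given return period is conventionally read from an exceedance probability curve: the one-in-250-year PML is the loss with a 1/250 (0.4%) annual probability of being exceeded. The sketch below is a minimal, hypothetical example of that derivation, using invented figures and an empirical curve rather than any vendor's actual methodology; the function name and distribution choice are the author's assumptions.

```python
import random

def pml_at_return_period(annual_max_losses, return_period):
    """Empirical PML: the loss exceeded with annual probability 1/return_period.

    This is an illustrative empirical estimator, not a vendor model's method.
    """
    losses = sorted(annual_max_losses, reverse=True)
    n = len(losses)
    # The k-th largest of n simulated annual maxima has an empirical
    # exceedance probability of roughly k/n, so the 1-in-RP loss sits at
    # rank k = n / return_period.
    k = max(1, round(n / return_period))
    return losses[k - 1]

# 10,000 hypothetical simulated years of maximum annual loss ($m),
# drawn from an invented heavy-tailed distribution for illustration only.
random.seed(42)
years = [random.paretovariate(1.5) * 10 for _ in range(10_000)]

pml_250 = pml_at_return_period(years, 250)
```

The point for boards is that `pml_250` shifts materially if the tail assumption (here, the Pareto shape parameter) is nudged even slightly, which is precisely the sensitivity that summarised outputs tend to hide.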
Brokers are at least partly at fault. Within Asia's important co-broking environment, it is not uncommon for three or four brokers working from the same underlying data to deliver as many very different modelled outputs. Each broker will promote its view as the best view of risk, yet may fail to highlight the true risk and uncertainty that would be revealed by changing, for comparison, even the most minor model assumption.
I fear that if we idly follow model outputs without any real understanding of the overall risk picture, our industry will place too much reliance on them. As an industry we all need to gain a greater understanding of the models and their practical applications, but also of their limitations.
One common misconception is that catastrophe models are forecasting tools. It is important to realise that they allow users to evaluate and manage catastrophe risks by showing a range of possible events that may occur, their probabilities, and their likely costs – but they do not forecast events.
A second misconception is that models provide all the answers. In practice, catastrophe modelling should be just one component of a comprehensive risk-management strategy. We increasingly hear, across the market, that insurers need to develop their ‘own view of risk’. To do so effectively, they must consider much more than simple model outputs. These should be blended with their own proprietary and public information to form a more complete and accurate view of risk.
Thirdly, modelling firms are often perceived to be only reactive. They do, of course, update models post-event as new data becomes available, but the technology and scientific capability behind models also improve continuously. All known eventualities, and even some unknowns, are incorporated into most vendor models.
New losses invariably bring new understanding that informs updates, as do new data sources and fresh scientific approaches. It is a constant learning curve, and models are always evolving to incorporate what has been learned, so much of the criticism is unjustified.
Model use and the knowledge and data gaps
Models are increasingly used to help determine the right level of natural catastrophe premium to apply to original policy pricing, but in the Asia Pacific the experience is currently less mature than in other markets. That, of course, is changing.
Some insurers do price business with a catastrophe load, but they are often at a competitive disadvantage in the current pricing environment, where typically very little, if any, natural perils premium is allocated to original risks. Greater market attention to this shortfall should come with improved knowledge.
The rise of risk-based capital (RBC) regimes around the world has brought model outputs into regular use for regulatory capital requirement analytics. Effective organisations now also incorporate model outputs into their internal reviews of financial performance and solvency capital.
As Asia works through various RBC regimes, the importance of providing accurate data to inform views of risk increases, as does the importance of knowing how to interpret the outputs. Many brokers provide value-added services in this area, but the lack of detailed data remains a concern.
Model use is increasing in the growing ILS market. Third-party investors such as pension funds have less knowledge of the risks involved, and therefore place more reliance on an analytical approach. However, the lack of quality data across Asia is one of the major brakes on ILS activity in the region. This limitation was noted by the Monetary Authority of Singapore when the Natural Catastrophe Data Analytics Exchange was formed by an alliance of insurers, brokers and information companies to address Asia’s data gaps.
Several areas of uncertainty are built into each catastrophe model, because of the limits of data and science. The uncertainty may lurk in non-modelled losses or perils, event frequency, risk vulnerability, or financial calculations.
One clear demonstration of this is the disclaimers now commonly found in model output presentations, which note ranges of error and uncertainty, and often run to more pages than the presentation and output themselves. It is incumbent upon the modeller to articulate and communicate each aspect of this uncertainty clearly to the client.
Meanwhile it is incumbent upon the client to ensure the accuracy and granularity of exposure data. The underlying data impacts each aspect of models’ innate uncertainties, and data gaps must be filled with accurate assumptions.
Even in mature markets such as Japan, models do not always perform as expected. Two major modelling agencies announced initial loss range estimates for Typhoon Jebi of $3bn to $5bn and $2.3bn to $4bn respectively, but by May 2019 the estimate had quadrupled to $15bn to $16bn. Insurers and reinsurers that had relied solely on modelled outputs may face serious challenges.
Catastrophe modelling is often seen as the single source of truth. Models will remain an important tool for our market, but to follow outputs blindly and to demand modelling as a 'value add' service without providing sufficient data makes models dilutive, rather than accretive, to risk-management decision making. It is therefore imperative that organisations develop a detailed understanding of risk as a whole. Risk-management decisions need to be taken in the knowledge of the full picture, with a clear understanding of model uncertainties and limitations.
With increasing model penetration throughout the Asia Pacific region, it is imperative that adequate data capture is addressed now, to enable fully informed decision making. Models are undoubtedly here to stay, and their accuracy and precision will only increase.
It is a responsibility of reinsurance brokers to spend time with cedants to explain the benefits and limitations of models, to encourage their correct use, and to help clients harness the powerful data which fuels their probabilistic outputs. Those that do so will enjoy significant benefits over time, and, most importantly, so will their clients.
Mr Marcus Taylor is head of reinsurance, Asia Pacific for Ed.