Takeaways from CAT Risk Management and Modelling Summit 2021


In a continuing trend of digital conferences, the annual Aventedge CAT Risk Management and Modelling Summit moved online for 2021, with three days of sessions spanning multiple time zones and a wide range of catastrophe risk subjects.

JBA’s Head of Consultancy and Singapore Managing Director, Dr Iain Willis, spoke on a panel exploring the role of catastrophe modelling in climate change risk assessment, with others in the JBA team attending sessions on topics ranging from US hurricane risk to the geocoding of locations.

We’ve rounded up the top takeaways from the conference below.

Panel: How can catastrophe modelling address the feedback loop between climate change risk and resilience?

  1. Cat model developers need to begin incorporating new metrics such as property level protection, asset resilience and transition measures into their models. Although these metrics are uncertain and difficult to capture, incorporating them (alongside factors that vary with different climate scenarios) will allow us to better understand ways to build resilience to climate change and make space for the next generation of cat models. This is something JBA plans to address in the future.
  2. Developing countries are likely to be particularly affected by climate risk. Catastrophe models have huge potential outside of re/insurance to help policymakers understand the costs and impacts of climate change, as well as to help build resilience and adaptation. These tools, alongside initiatives like the Oasis Loss Modelling Framework, play a vital role in helping to close this protection gap and build resilience (read more on this topic here).
  3. Modelling micro-scale adaptation activities can be more powerful than looking at aggregated Average Annual Losses (AAL) alone, as it demonstrates very clearly to stakeholders the cost-benefits of intervention measures (a simple illustrative sketch follows this list). Conducting consultancy projects using catastrophe models allows us to incorporate both structural and non-structural adaptation measures. An example would be combining land use management (e.g. the use of mangroves in mitigating flood risk) with local factors that might otherwise be overlooked in a macroscale approach (e.g. increased drainage capacity/reforestation). This is something we undertake within JBA alongside our sister company, JBA Consulting.
  4. Cat modelling will always have a level of uncertainty, and climate change modelling is no exception, but it can still provide us with incredibly useful insights for future risk assessment. Climate change model uncertainty shouldn’t devalue insightful results, as long as the uncertainties are communicated effectively to the end user. The industry is getting better at communicating the intricate inner workings of catastrophe models, moving away from ‘black box’ models and being more transparent with uncertainty, but there is still a way to go.
  5. The ‘baseline’ view of risk is a constantly moving target – climate change will already have impacted present-day views of risk, and data must be updated accordingly to accommodate this. It would help end users if catastrophe model developers could quantify the contribution of climate change to current loss model results, but this is difficult to do.
  6. The practice of incorporating climate change data or climate model data into cat models is not new – the influences of the AMO (Atlantic Multidecadal Oscillation) and NAO (North Atlantic Oscillation) on North American hurricanes and European windstorms have been acknowledged and incorporated into many catastrophe models already. Given that future climate change risk is largely about capturing hazard severity, frequency and exposure, catastrophe models are well placed to capture these impacts.
Climate change is a topic we’re frequently asked about, especially in relation to regulatory requirements, different time horizons/scenarios and the data available. Our specialists have developed a range of climate change flood data to help.
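To make the cost-benefit point in item 3 concrete, the sketch below compares Average Annual Loss with and without a property-level protection measure. It is a minimal, illustrative Python example: the event losses, the effect of the intervention and its annualised cost are all invented for the purposes of the illustration and are not drawn from any JBA model.

```python
# Illustrative sketch only: hypothetical event losses and mitigation effect,
# not JBA model output or methodology.
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical annual maximum flood losses for one asset over a long
# simulated period (e.g. 10,000 years of stochastic events).
years = 10_000
baseline_losses = rng.gamma(shape=0.5, scale=200_000, size=years)

# Assume a property-level protection measure (e.g. improved drainage)
# removes losses below a threshold and reduces larger losses by 20%.
threshold = 50_000
protected_losses = np.where(
    baseline_losses < threshold, 0.0, baseline_losses * 0.8
)

aal_baseline = baseline_losses.mean()
aal_protected = protected_losses.mean()
annual_benefit = aal_baseline - aal_protected

# Compare against an assumed annualised cost of the intervention.
annual_cost = 15_000
print(f"Baseline AAL:  {aal_baseline:,.0f}")
print(f"Protected AAL: {aal_protected:,.0f}")
print(f"Benefit/cost:  {annual_benefit / annual_cost:.2f}")
```

In practice the comparison would use full modelled event losses for the asset or portfolio, but the shape of the calculation – AAL with and without the measure, set against the cost of implementing it – is the same.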

“Catastrophe model uncertainty is not a bad thing. But we need to express it in a meaningful way”

A topic initially picked up by the climate change panel, uncertainty in catastrophe modelling was a common theme throughout the conference.

The session ‘Considering the limits of windstorm, flood and precipitation coverage in light of recent perils and innovations’ highlighted one of the key elements in model uncertainty: non-stationarity in data. Much of catastrophe modelling relies on stationary methods, using past events to represent future possibilities, but in reality hazard, exposure and vulnerability characteristics constantly change. This reliance on stationary methods for non-stationary factors underpins uncertainty in the catastrophe modelling process.

Non-stationary behaviours are, by their nature, difficult to model, and this difficulty increases further when modelling climate change. The methods and data used to create catastrophe models are often based on underlying historical observations and therefore contain existing trends. For example, there is research across the globe showing the presence of trends in river flow levels and rainfall totals. Trends in historical records lead us to the question: are these models providing a good representation of the hazard, or are they reflecting the hazard over the past 30-40-year period that the data represents? And can these historical trends be extrapolated to represent a future climate? 
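As a simple illustration of the kind of trend check described above, the sketch below fits a linear trend to a synthetic series of annual maximum river flows. The data, record length and trend size are invented for the example; it is not a JBA dataset or method, just a way of showing how a trend in a 30-40-year record can be detected, and why extrapolating it into a future climate is a separate, stronger assumption.

```python
# Illustrative sketch only: a simple linear trend check on synthetic
# annual-maximum river flows, standing in for the non-stationarity
# checks described above.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(0)

years = np.arange(1985, 2021)  # a ~35-year record
# Synthetic annual maximum flows (m3/s) with a small upward trend.
flows = 300 + 0.8 * (years - years[0]) + rng.normal(0, 25, size=years.size)

result = linregress(years, flows)
print(f"Trend: {result.slope:.2f} m3/s per year (p = {result.pvalue:.3f})")

# A significant trend suggests the stationarity assumption is questionable,
# but extrapolating it decades ahead is a separate modelling choice.
```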

Developing methods to better represent uncertainty around cat model results is also key. At JBA, we’re rethinking the way we develop and build catastrophe models through our modelling technology. This modelling engine enables us to bring the model into being at run-time, rather than from pre-compiled parameters. It’s an ongoing development, with the aim of bringing more and more individual steps into a single at-runtime workflow that allows us to better represent uncertainty and non-stationarity in our input data.

One of the first steps is the introduction of sensitivity analysis. We’re exploring ways to perturb input parameters by amounts that represent their uncertainty and run ‘ensembles’ of analyses in order to obtain a mean loss and standard deviation of loss, rather than using pre-defined distributions. This way, the uncertainty in the loss is informed solely by uncertainty in the inputs and does not rely on anything predefined. By using sensitivity analysis techniques, we can then find out which parameters most drive uncertainty in the loss.
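As a rough illustration of this ensemble idea, the sketch below perturbs the inputs of a deliberately simplified toy loss function, runs an ensemble to obtain a mean and standard deviation of loss, and then varies one input at a time to see which contributes most to the spread. The loss function, parameter names and uncertainty ranges are all invented for the example and do not represent JBA’s modelling engine.

```python
# Illustrative sketch only: a toy loss function with one-at-a-time
# perturbation, standing in for the ensemble approach described above.
import numpy as np

rng = np.random.default_rng(1)

def toy_loss(depth_scale, vuln_slope, exposure_value):
    """A stand-in for a full hazard-vulnerability-exposure model run."""
    damage_ratio = min(1.0, 0.1 * depth_scale * vuln_slope)
    return damage_ratio * exposure_value

# Central estimates and assumed relative uncertainties for each input.
params = {
    "depth_scale":    (1.0, 0.15),
    "vuln_slope":     (2.0, 0.25),
    "exposure_value": (5e6, 0.10),
}

def sample(name):
    mean, rel_sd = params[name]
    return rng.normal(mean, rel_sd * mean)

# Full ensemble: perturb every input at once.
ensemble = [
    toy_loss(sample("depth_scale"), sample("vuln_slope"), sample("exposure_value"))
    for _ in range(5_000)
]
print(f"Mean loss {np.mean(ensemble):,.0f}, std {np.std(ensemble):,.0f}")

# One-at-a-time: vary a single input, hold the others at their central
# values, to see which parameter drives most of the spread in the loss.
for name in params:
    centrals = {k: v[0] for k, v in params.items()}
    losses = []
    for _ in range(5_000):
        args = dict(centrals)
        args[name] = sample(name)
        losses.append(toy_loss(**args))
    print(f"{name:>15}: std of loss {np.std(losses):,.0f}")
```

The one-at-a-time comparison is the simplest possible sensitivity technique; more formal approaches (e.g. variance-based indices) follow the same pattern of perturbing inputs and measuring the response in the loss.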

JBA’s ability to carry out sensitivity analysis underpins our ability to effectively model climate change. We plan to generate events on-the-fly for a specific portfolio that represent future climate scenarios and we need to be able to perturb these inputs. By using sensitivity analysis techniques, we can investigate how uncertainty in the future climate scenarios impacts the loss.

Climate change certainly makes the uncertainty bigger, but model uncertainty is not a bad thing – it is an inherent part of any model that attempts to represent the natural world. It enables multiple views of risk and allows users to sensitivity test their model. However, for all catastrophe modelling, including climate change, uncertainty must be communicated effectively to clients, end users and the industry to improve understanding. These models are becoming more sophisticated and will increasingly require a more technical understanding of how best to use them, as well as open and transparent communication on the part of model vendors to enable this understanding.

Open platforms, open standards and open technology

This increase in transparency and communication around model uncertainty ties into another key theme of the conference: openness.

In a number of sessions, Aon Impact Forecasting’s team discussed the importance of open data standards, open platforms and open technology, not only to improve access to cutting-edge tools but also to make it easier to bring models to market and diversify the industry through increased choice. Oasis LMF, Open Data Standards, Nasdaq Risk Modelling for Catastrophes and Impact Forecasting all play a vital role in centralising and simplifying access to multiple models, including JBA’s global catastrophe modelling, in a cost-effective way.

This openness enables science and tools that have previously been restricted to re/insurance to be used more widely, including in international development, helping to build resilience in lower income economies. Read more in our blog.

Location, location, location

A final takeaway from the conference was the importance of accurate geolocation, and the struggle to achieve this effectively. Geolocation of properties and risks can vary hugely depending on the information and data available, and inaccurate geocoding can have a significant impact on risk assessment decisions. It’s a topic with many different factors involved, including the trade-off between precision and accuracy – we’ve explored this in a previous blog here.

It was another great few days of catastrophe risk discussion – if the sessions have raised questions about your own risk management needs, get in touch with the team to learn more about our global flood data and consultancy services.