In a continuing trend of digital conferences, the annual Aventedge CAT Risk Management and Modelling Summit moved online for 2021, with three days of sessions spanning multiple time zones and a wide range of catastrophe risk subjects.
JBA’s Head of Consultancy and Singapore Managing Director, Dr Iain Willis, spoke on a panel exploring the role of catastrophe modelling in climate change risk assessment, with others in the JBA team attending sessions on topics ranging from US hurricane to geocoding of locations.
We’ve rounded up the top takeaways from the conference below.
A topic initially picked up by the climate change panel, uncertainty in catastrophe modelling was a common theme throughout the conference.
The session ‘Considering the limits of windstorm, flood and precipitation coverage in light of recent perils and innovations’ highlighted one of the key elements in model uncertainty: non-stationarity in data. Much of catastrophe modelling relies on stationary methods, using past events to represent future possibilities, but in reality hazard, exposure and vulnerability characteristics are constantly changing. This reliance on stationary methods for non-stationary factors underpins uncertainty in the catastrophe modelling process.
Non-stationary behaviours are, by their nature, difficult to model, and this difficulty increases further when modelling climate change. The methods and data used to create catastrophe models are often based on underlying historical observations and therefore contain existing trends: research from across the globe has identified trends in river flow levels and rainfall totals. This raises two questions: are these models providing a good representation of the hazard, or are they simply reflecting the hazard over the 30-40 year period that the data represents? And can these historical trends be extrapolated to represent a future climate?
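As a simple illustration (with invented numbers, not real gauge data), a least-squares fit in Python shows how a trend can sit inside a 40-year record that a stationary model would otherwise treat as fixed:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical 40-year record of annual maximum river flow (m3/s):
# a modest upward trend buried in year-to-year noise.
years = np.arange(1984, 2024)
flows = 500 + 1.5 * (years - years[0]) + rng.normal(0, 25, size=years.size)

# A least-squares fit estimates the trend hiding in the 'historical' record.
slope, intercept = np.polyfit(years, flows, 1)
print(f"fitted trend: {slope:.2f} m3/s per year")

# Any model calibrated on this record absorbs the trend into statistics it
# then treats as stationary; extrapolating it assumes the trend persists.
```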
Developing methods to better represent uncertainty around cat model results is also key. At JBA, we’re rethinking the way we develop and build catastrophe models through our modelling technology. This modelling engine enables us to bring the model into being at run-time, rather than from pre-compiled parameters. It’s an ongoing development, with the aim of bringing more and more individual steps into a single at-runtime workflow that allows us to better represent uncertainty and non-stationarity in our input data.
One of the first steps is the introduction of sensitivity analysis. We’re exploring ways to perturb input parameters by amounts that represent their uncertainty and run ‘ensembles’ of analyses in order to obtain a mean loss and standard deviation of loss, rather than using pre-defined distributions. This way, the uncertainty in the loss is informed solely by uncertainty in the inputs and does not rely on anything predefined. By using sensitivity analysis techniques, we can then find out which parameters most drive uncertainty in the loss.
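A minimal Python sketch of the idea – not JBA’s engine, just a toy loss model with invented figures – perturbs two inputs by their assumed uncertainty, runs an ensemble, and asks which input drives the spread in loss:

```python
import numpy as np

rng = np.random.default_rng(0)

def portfolio_loss(flood_depth_m: float, vuln_scale: float) -> float:
    """Toy loss model: loss grows with flood depth, scaled by vulnerability."""
    exposure = 10_000_000  # total insured value, hypothetical
    damage_ratio = min(1.0, 0.12 * flood_depth_m * vuln_scale)
    return exposure * damage_ratio

# Best-estimate inputs and the uncertainty assigned to each.
depth_mu, depth_sigma = 2.0, 0.4   # hazard: modelled flood depth (m)
vuln_mu, vuln_sigma = 1.0, 0.15    # vulnerability scaling factor

# Run an ensemble of perturbed analyses rather than assuming a
# pre-defined loss distribution.
n = 5000
depths = rng.normal(depth_mu, depth_sigma, n)
vulns = rng.normal(vuln_mu, vuln_sigma, n)
losses = np.array([portfolio_loss(d, v) for d, v in zip(depths, vulns)])

print(f"mean loss: {losses.mean():,.0f}")
print(f"std of loss: {losses.std():,.0f}")

# Which input drives the spread? Correlate each perturbed input with loss.
for name, x in [("depth", depths), ("vulnerability", vulns)]:
    r = np.corrcoef(x, losses)[0, 1]
    print(f"{name}: correlation with loss = {r:.2f}")
```

Here the mean and standard deviation of loss emerge entirely from the uncertainty assigned to the inputs, with nothing predefined on the output side.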
JBA’s ability to carry out sensitivity analysis underpins our ability to effectively model climate change. We plan to generate events on-the-fly for a specific portfolio that represent future climate scenarios and we need to be able to perturb these inputs. By using sensitivity analysis techniques, we can investigate how uncertainty in the future climate scenarios impacts the loss.
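To illustrate the principle with invented numbers (the scenario multipliers below are placeholders, not projections), the same ensemble approach can scale present-day hazard by an uncertain climate factor per scenario and push each ensemble through the loss:

```python
import numpy as np

rng = np.random.default_rng(1)

def loss(depth: np.ndarray) -> np.ndarray:
    """Toy loss model: loss grows with flood depth, capped at total exposure."""
    exposure = 10_000_000  # total insured value, hypothetical
    return exposure * np.clip(0.12 * depth, 0.0, 1.0)

base_depths = rng.normal(2.0, 0.4, 5000)  # present-day modelled depths (m)

# Each scenario: (mean multiplier on hazard, uncertainty in that multiplier).
# Values are invented for illustration only.
scenarios = {
    "baseline": (1.00, 0.00),
    "2050_moderate": (1.10, 0.05),
    "2050_high": (1.25, 0.10),
}

results = {}
for name, (mu, sigma) in scenarios.items():
    factors = rng.normal(mu, sigma, base_depths.size)
    scenario_losses = loss(base_depths * factors)
    results[name] = (scenario_losses.mean(), scenario_losses.std())
    print(f"{name}: mean loss={results[name][0]:,.0f}, std={results[name][1]:,.0f}")
```

Comparing the mean and spread of loss across scenarios shows how uncertainty in the climate factor itself widens the loss distribution, not just shifts it.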
Climate change certainly increases uncertainty, but model uncertainty is not a bad thing – it is an inherent part of any model that attempts to represent the natural world. It enables multiple views of risk and allows users to sensitivity-test their models. However, for all catastrophe modelling, including climate change, uncertainty must be communicated effectively to clients, end-users and the industry to improve understanding. These models are becoming more sophisticated and will increasingly require a more technical understanding of how best to use them, as well as open and transparent communication on behalf of model vendors to enable this understanding.
This increase in transparency and communication around model uncertainty ties into another key theme of the conference: openness.
In a number of sessions, Aon Impact Forecasting’s team discussed the importance of open data standards, open platforms and open technology not only to improve access to cutting-edge tools, but also to make it easier to bring models to market and diversify the industry through increased choice. Oasis LMF, Open Data Standards, Nasdaq Risk Modelling for Catastrophes and Impact Forecasting all play a vital role in centralising and simplifying access to multiple models, including JBA’s global catastrophe modelling, in a cost-effective way.
This openness enables science and tools that have previously been restricted to re/insurance to be used more widely, including in international development, helping to build resilience in lower income economies. Read more in our blog.
A final takeaway from the conference was the importance of accurate geolocation, and the struggle to achieve it effectively. Geolocation of properties and risks can vary hugely depending on the information and data available, and inaccurate geocoding can have a significant impact on risk assessment decisions. It’s a topic with many factors involved, including the trade-off between precision and accuracy – we’ve explored this in a previous blog here.
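As a simple illustration with invented coordinates, the offset between a building-level geocode and the centroid of its postal area can be computed with the haversine formula – a sub-kilometre difference that could move a property in or out of a floodplain:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

building = (51.5007, -0.1246)           # rooftop-level geocode (invented)
postcode_centroid = (51.5074, -0.1278)  # postal-area centroid (invented)

offset = haversine_km(*building, *postcode_centroid)
print(f"geocoding offset: {offset:.2f} km")
```

A centroid geocode can look precise – it returns exact coordinates – while being inaccurate for any individual property, which is exactly the precision/accuracy distinction at play.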
It was another great few days of catastrophe risk discussion – if the sessions have raised questions about your own risk management needs, get in touch with the team to learn more about our global flood data and consultancy services.