Flood modelling without borders: Exploring global coverage and unprecedented flexibility


Part two

Making our flood science and data accessible for our clients has always been a core belief at JBA, which is why we’ve partnered with Nasdaq to release JBA’s first-of-its-kind Global Flood Model on the Nasdaq Risk Modelling for Catastrophes solution.

In a recent webinar with Nasdaq, we explored how global coverage and unprecedented user customisation options can help re/insurers to successfully manage flood exposure worldwide and, for the first time, truly own their view of risk.

In this blog series, we provide the highlights from the webinar. In part one we explored how global coverage can help fill the gaps and identify losses in previously under-modelled regions, and how users can leverage the model via Nasdaq’s system.

Part two of our blog series examines the impacts of different customisation settings on losses, and how we’re placing the user at the centre of our model development.

Unprecedented flexibility

The Global Flood Model is underpinned by JBA’s modelling technology, which builds the model on the fly: users bring the model into being at run time for the exact portfolio required, rather than relying on a model pre-built and pre-compiled with fixed parameters and data.

As a result, the model offers unprecedented flexibility. Analysis and data settings can both be varied, with user choices including:

  • Optional flood defences
  • Multiple hours clauses
  • Multiple flood types
  • Optional tropical cyclone activity
  • Multiple depth thresholds
  • Choice of flood map resolution
  • Specification of hazard exposure buffer
  • Choice of selecting, editing or uploading vulnerability functions

This flexibility and wide range of customisation options enable users to investigate which changes have the greatest impact on their losses and to fully understand the results. The sketch below illustrates what such a configuration might look like.
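To make the options above concrete, here is a minimal sketch of a run configuration expressed in Python. It is purely illustrative: the field names and values are invented for this post and are not the actual JBA or Nasdaq interface.

```python
# Purely illustrative run configuration: the keys mirror the customisation
# options listed above, but the names and structure are hypothetical and
# do not represent the real JBA/Nasdaq API.
run_settings = {
    "flood_defences": True,                # optional flood defences on/off
    "hours_clauses": [72, 168],            # multiple hours clauses (hours)
    "flood_types": ["river", "surface_water", "coastal"],
    "tropical_cyclone_activity": True,     # optional tropical cyclone flooding
    "depth_thresholds_m": [0.1, 0.3],      # multiple depth thresholds (metres)
    "map_resolution_m": 5,                 # choice of 30m or 5m flood maps
    "hazard_exposure_buffer_m": 15,        # radius used in the intensity look-up
    "vulnerability": "JBA_default",        # or edited/uploaded vulnerability functions
}
```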

Impact of customisation on losses

Using an example portfolio spanning multiple European countries, we can see how this unprecedented customisation can impact losses, and how re/insurers can start to better understand their results.

30m vs 5m maps and defences

Looking at the 200-year loss in Figure 1, the orange bar represents losses using 30m resolution maps and defences, while the blue bar shows losses using 5m resolution maps and defences. Across the portfolio, this single change produces a significant decrease in loss, although the effect varies by country.

Figure 1: 200-year losses for baseline 30m data and settings and 5m maps and defences

The impact of switching to 5m mapping is particularly evident in Germany, where the 200-year loss drops by 67%.

Figure 2: 5m resolution river and surface water extents for a 100-year return period for the city of Münster.

Increasing the resolution of the flood map and defences has many benefits, including improved definition of the river channel and terrain features to capture the movement of water in a much more realistic way. Extents are generally smaller with higher resolution mapping because smaller topographic features are picked up in the terrain data and water is routed along areas of lower elevation instead of being spread out across wider, flatter floodplains. This is especially critical in complex urban areas where exposure is concentrated.

With the option of either 30m or 5m mapping and defences, users can investigate the impact of changing the hazard map input and run the model against their own risk appetite.

Intensity look-up

One of the Global Flood Model’s analysis setting options is the ability to edit the intensity look-up. Users can set the size of the exposed area (also called a hazard exposure buffer), with the option either to use the flood depth from the single map pixel at the coordinate location, or to consider depths from a wider area to reflect flood hazard across a given site.

The buffer defines a radius around the coordinate location within which water depth is extracted for each event. To help set a suitable size, users can run multiple analyses with different radii to easily test the impact on loss for a portfolio. A minimal sketch of this look-up follows below.
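To illustrate the mechanics, here is a minimal NumPy sketch that extracts the maximum water depth within a circular buffer around a location on a gridded flood map. It is written for this post under simple assumptions (grid-aligned coordinates, maximum depth as the combination rule); the model’s actual look-up may combine depths within the buffer differently.

```python
import numpy as np

def depth_in_buffer(depth_grid, cell_size_m, x_m, y_m, radius_m):
    """Illustrative intensity look-up: maximum flood depth within a
    circular buffer of radius_m metres around the point (x_m, y_m).

    depth_grid  -- 2D array of water depths (m), one value per map pixel
    cell_size_m -- pixel size of the flood map, e.g. 5 or 30 metres
    A radius of 0 falls back to the single pixel at the coordinate.
    """
    row, col = int(y_m // cell_size_m), int(x_m // cell_size_m)
    if radius_m <= 0:
        return depth_grid[row, col]

    # Consider all pixels whose centres fall within the buffer radius.
    n = int(np.ceil(radius_m / cell_size_m))
    rows = np.arange(max(row - n, 0), min(row + n + 1, depth_grid.shape[0]))
    cols = np.arange(max(col - n, 0), min(col + n + 1, depth_grid.shape[1]))
    rr, cc = np.meshgrid(rows, cols, indexing="ij")
    dist = np.hypot((rr - row) * cell_size_m, (cc - col) * cell_size_m)
    return depth_grid[rr, cc][dist <= radius_m].max()
```

Running the same portfolio with several values of radius_m then amounts to a simple loop, comparing the losses each radius produces.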

Figure 3 shows what 15m-60m radius buffers look like on satellite imagery of a residential area in Germany.

Figure 3: A range of exposure buffers, from 15m to 60m, for a residential area in Germany.

Using a portfolio for Germany with 5m resolution maps as an example, adjusting the exposure buffer size changes the Annual Average Loss (AAL) by up to 15%. Figure 4 shows the percentage difference from the 15m default radius.

Figure 4: Percentage difference in AAL to default radius of 15m for a Germany residential portfolio using 5m maps and defences.

Across the example portfolio, AAL increases when a 0m buffer is used and decreases for the larger buffers of 30m, 45m and 60m.

For some locations, including the one in Figure 3, increasing the radius above 15m can reduce average losses at that particular site: the larger buffer starts to capture neighbouring properties and areas that are not actually at risk of flooding. At other sites the opposite may be true: an individual property might not itself be at risk, but increasing the radius captures flood water around neighbouring properties and therefore increases the average loss estimate for that site.

It’s worth considering the most suitable buffer size for the types of risk in a portfolio, as larger buffers may help to better assess risk across larger commercial and industrial sites. Because buffers can be set by line of business, users can keep the defaults of 15m for residential, 120m for commercial and 300m for industrial, or change some or all of them.
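In code terms, that per-line-of-business choice might look like the small mapping below. The default radii are those quoted above; the variable names are invented for illustration.

```python
# Default hazard exposure buffer radii by line of business (metres),
# as quoted above; names and structure are hypothetical.
default_buffers_m = {"residential": 15, "commercial": 120, "industrial": 300}

# Example override: widen the residential buffer, keep the other defaults.
buffers_m = {**default_buffers_m, "residential": 30}
```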

Uncertainty

The new flexibility this model gives us opens other doors. In traditional cat model builds, it’s common to calculate uncertainty by sampling from predefined distributions. Arguably, these don’t truly represent the uncertainties in the model parameters used in the calculations, and they don’t help the user understand what contributes most to the uncertainty or to which parameters the results are most sensitive.

We are moving away from sampling predefined distributions towards calculating uncertainty using sensitivity analysis: uncertainty in the actual data the model uses informs the measure of uncertainty in the loss.

In this example, we perturb depths and damage ratios. For Europe, if we estimate the vertical uncertainty in the depths to be +/-20% and the uncertainty in the damage ratios to be +/-30%, a simple approach is to run every combination of the default, increased and decreased values: three depth settings crossed with three damage settings, giving nine analyses that show the spread of loss.
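A minimal sketch of this ensemble, using a toy event set and a stand-in run_analysis function invented for this post, might look like the following.

```python
from itertools import product
import numpy as np

rng = np.random.default_rng(42)
base_depths = rng.gamma(2.0, 0.5, size=10_000)  # toy per-event flood depths (m)

def run_analysis(depth_scale, damage_scale):
    # Stand-in for a full model run: scale the depths, apply a toy
    # depth-damage curve, scale the damage ratios, and return per-event
    # losses sorted ready for an exceedance curve.
    depths = base_depths * depth_scale
    damage_ratios = np.clip(depths / 3.0, 0.0, 1.0) * damage_scale
    return np.sort(damage_ratios * 1_000_000)[::-1]

depth_factors = [0.8, 1.0, 1.2]   # depths -20%, default, +20%
damage_factors = [0.7, 1.0, 1.3]  # damage ratios -30%, default, +30%

# Nine analyses: every pairing of the three depth and three damage settings.
ensemble = {(d, v): run_analysis(d, v)
            for d, v in product(depth_factors, damage_factors)}
```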

Figure 5 shows the AEP curve for each of the nine analyses, run as a single ensemble. As you would expect, the highest losses are produced by increasing the depths by 20% and the vulnerabilities by 30% (dark purple line at the top), and the lowest by decreasing the depths by 20% and the vulnerabilities by 30% (brown line). The nine analyses let you see the spread of results produced by values within a reasonable uncertainty range for these two datasets.

Figure 5: AEP curves for nine combinations of depth and damage ratios.
 
From these we can calculate a mean, standard deviation and coefficient of variation. Figure 6 shows the resulting AEP.

Figure 6: Resulting mean and standard deviation AEP.
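Continuing the toy sketch above, these ensemble statistics fall out directly by stacking the nine curves and summarising across the ensemble at each point on the curve.

```python
# Stack the nine loss curves from the ensemble sketch above and summarise.
curves = np.stack(list(ensemble.values()))  # shape: (9, n_points)
mean_curve = curves.mean(axis=0)
std_curve = curves.std(axis=0)
cov_curve = std_curve / mean_curve          # coefficient of variation
```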

This is the first step towards a more sophisticated calculation of uncertainty using global sensitivity analysis. We will make further enhancements to this, for example perturbing more variables, increasing the number of combinations used in an ensemble run, and applying non-uniform distributions across the range of perturbation.

User focused updates

The flexibility and choices are there if you want them, but we’ll always provide a JBA view of risk based on expert judgement and the information currently available. The model will always provide JBA’s best estimate of flood risk to a market portfolio for each country, but users can also run the model with previous datasets and methods if they wish.

We’re working in an agile way, which means that the Global Flood Model will be continually updated, sometimes regionally or nationally, as improvements to data and science become available.

Users will be free to choose when they take updates to the model, rather than having large, infrequent model updates imposed on them. Future updates to the Global Flood Model include enhancements in the UK to incorporate 5m mapping for river, surface water and coastal flood, along with updated baseline and climate change event sets; the US will similarly be updated to include our new 5m US Flood Map for all three flood types.

Fundamentally, the Global Flood Model is a catastrophe model to be used like any other. But more excitingly, because of the unprecedented global coverage and user customisation, it offers so much more.

Interested in finding out more? Fill in the form to request a call-back from one of the JBA team.

Book a demo and access the Global Flood Model via Nasdaq here.