Climate change makes future flood losses more uncertain – but not all that uncertainty comes from the atmosphere. While catastrophe models must account for evolving hazard, they also depend on assumptions about what assets are exposed and how vulnerable they are. In this blog, we use global sensitivity analysis to explore how uncertainty in exposure and vulnerability assumptions compares to uncertainty in climate hazard itself. This builds on our latest Australia Inland Flood Model, which includes new event sets that incorporate the effects of climate change. The blog is based on a scientific study (Pianosi et al., submitted; preprint here).
Risk is more than just hazard
In flood catastrophe models, hazard refers to the depth and extent of flooding under different events. With climate change, this hazard is evolving – and JBA’s modelling reflects that, both through updates to our baseline hazard layers and through tools that allow users to simulate risk under future climate scenarios.
But hazard is only part of the picture. To estimate loss, a model must also know what assets are exposed, where they’re located, and how vulnerable they are to flood damage. These inputs – collectively known as exposure and vulnerability – are as central to catastrophe modelling as hazard, and they carry their own uncertainties.
Some portfolios provide detailed, address-level data: construction type, number of storeys, first-floor height, insured value. Others arrive aggregated to postcode, province, or state, with limited structural detail. In these cases, the model must fill gaps – disaggregating locations and inferring building characteristics. These assumptions are necessary, but they introduce uncertainty. We wanted to know how much.
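To make the disaggregation step concrete, here is a minimal sketch of one common approach: spreading an aggregated insured value across candidate locations in proportion to building counts. The locations, counts, and weighting scheme are illustrative assumptions, not JBA's actual method.

```python
# Hypothetical sketch: disaggregating a postcode-level insured value across
# candidate locations, weighted by (assumed) building counts. All names and
# numbers are illustrative only.

postcode_value = 10_000_000  # total insured value reported at postcode level

# Candidate locations within the postcode, from a notional exposure database
locations = {
    "loc_a": {"buildings": 120, "occupancy": "residential"},
    "loc_b": {"buildings": 60,  "occupancy": "residential"},
    "loc_c": {"buildings": 20,  "occupancy": "commercial"},
}

total_buildings = sum(loc["buildings"] for loc in locations.values())

# Spread the value proportionally; structural detail (storeys, first-floor
# height) would then be inferred from regional defaults - each inference
# adding its own uncertainty.
disaggregated = {
    name: postcode_value * loc["buildings"] / total_buildings
    for name, loc in locations.items()
}
# loc_a receives 60% of the value, loc_b 30%, loc_c 10%
```

The point is not the arithmetic but the chain of inferences: every attribute the model fills in, rather than reads from the portfolio, is a candidate source of loss uncertainty.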
Testing the assumptions
We applied global sensitivity analysis (GSA) to a case study of flood risk in Queensland, Australia. GSA allows us to vary all uncertain inputs simultaneously, across their plausible ranges, and to measure each one’s contribution to modelled loss uncertainty.
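The defining feature of GSA, as opposed to varying one input at a time, is that every uncertain input is sampled at once from its plausible range. A minimal sketch of such a sampling scheme might look like the following (the ranges and category labels are illustrative assumptions, not the study's actual experimental design):

```python
import random

random.seed(42)

# Hypothetical GSA sampling sketch: draw ALL uncertain inputs simultaneously,
# each from its own plausible range, so interactions between them are explored.
def sample_inputs():
    return {
        "damage_ratio_factor": random.uniform(0.5, 1.5),  # assumed range
        "flood_depth_factor":  random.uniform(0.8, 1.2),  # assumed range
        "exposure_resolution": random.choice(
            ["coordinates", "postcode", "province", "state"]),
        "asset_detail":        random.choice(["rich", "line_of_business_only"]),
        "climate_scenario":    random.choice(
            ["baseline", "rcp45_2050", "rcp45_2080",
             "rcp85_2050", "rcp85_2080"]),
    }

# Each sample defines one model run; the loss from each run is recorded
samples = [sample_inputs() for _ in range(10)]
```

Running the model once per sample, and then relating the recorded losses back to the sampled inputs, is what allows each input's contribution to output uncertainty to be measured.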
We tested five model components, ordered below by their influence on results:
- Damage ratio – the relationship between flood depth and expected damage – was adjusted using a scale factor. This reflects uncertainty in how sharply losses increase with water depth for a given asset class.
- Exposure resolution was varied by using four portfolio formats, from full coordinate-level detail to aggregations at CRESTA province and state level. Coarser resolutions require the model to disaggregate exposures across wider areas, affecting how many flood events intersect with assets.
- Asset detail level describes the amount of structural information available. We compared portfolios with rich detail (including line of business, first-floor height and number of storeys) to those with only line of business information, requiring broader vulnerability assumptions.

- Flood depth was scaled using factors drawn from a uniform distribution to reflect uncertainty in hazard intensity. This approach provides a simple way to explore how uncertainties in physical modelling and event set construction might influence modelled losses.
- Climate scenario was represented by five event sets: a present-day baseline and four future sets under RCP4.5 and RCP8.5 for 2050 and 2080 (see here for an explainer on climate scenarios). Unlike our latest Australia Inland Flood Model, all were derived from a single global climate model for this experiment.
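The two continuous inputs above, the damage-ratio and flood-depth scale factors, both act on the depth-damage relationship. A sketch of how they might be applied is shown below; the curve shape, ranges, and values are illustrative assumptions, not JBA's calibrated vulnerability functions.

```python
import numpy as np

# Hypothetical depth-damage curve for one asset class: flood depth (m) vs
# fraction of insured value damaged. Purely illustrative numbers.
depths = np.array([0.0, 0.5, 1.0, 2.0, 3.0])            # metres
damage_ratio = np.array([0.0, 0.15, 0.35, 0.60, 0.75])  # fraction of value

def scaled_loss(event_depth, depth_factor, damage_factor, value):
    """Apply the two continuous uncertain inputs from the experiment:
    a flood-depth scale factor and a damage-ratio scale factor."""
    d = event_depth * depth_factor                  # perturbed hazard intensity
    ratio = np.interp(d, depths, damage_ratio)      # look up the curve
    return min(ratio * damage_factor, 1.0) * value  # cap at total value

# Example: a 1 m flood on a $500k asset, with both factors set to 1.1
loss = scaled_loss(1.0, 1.1, 1.1, 500_000)
```

Because both factors multiply into the same loss calculation, their effects can compound, which is exactly the kind of interaction a one-at-a-time analysis would miss and GSA captures.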
For each of 210 combinations of these inputs, we ran our flood model and recorded the average annual loss (AAL). To explore how each input influenced modelled loss, we visualised the results across all simulations – both the individual relationships between inputs and loss, and the overall contribution each input made to loss variability. These are summarised in Figure 1.
Each dot in the top panels of Figure 1 represents one model run. For continuous inputs (like damage ratio and flood depth), the x-axis shows the adjustment factor applied, while the y-axis shows the resulting AAL. For inputs with distinct categories (such as portfolio resolution), each boxplot summarises the spread of losses across all relevant simulations. Variation within each panel reflects the influence of the other inputs, which are fixed within any single run but take different values from run to run. The bottom panel summarises the relative contribution of each input to overall loss uncertainty, using a sensitivity index.
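One way to compute such a sensitivity index is variance-based: measure how much of the output variance is explained by conditioning on a single input (a first-order, Sobol'-style index). The sketch below estimates this by simple binning on a toy loss model; the study's actual index and model are different, and everything here is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the catastrophe model: loss depends strongly on one
# input and weakly on another, plus noise. Illustrative only.
n = 20_000
x1 = rng.uniform(0.5, 1.5, n)   # e.g. a damage-ratio factor (influential)
x2 = rng.uniform(0.8, 1.2, n)   # e.g. a flood-depth factor (less influential)
loss = 100 * x1 + 10 * x2 + rng.normal(0, 1, n)

def first_order_index(x, y, bins=20):
    """Variance across bins of E[y | x in bin], divided by Var(y):
    a crude estimate of the first-order sensitivity index of x."""
    edges = np.quantile(x, np.linspace(0, 1, bins + 1))
    idx = np.clip(np.digitize(x, edges[1:-1]), 0, bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(bins)])
    return cond_means.var() / y.var()

s1 = first_order_index(x1, loss)  # close to 1: x1 drives the variance
s2 = first_order_index(x2, loss)  # close to 0: x2 contributes little
```

Ranking inputs by such an index is what produces the bottom panel of a figure like Figure 1: the larger the index, the more of the spread in modelled losses that input explains on its own.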
What we found
The analysis shows that exposure and vulnerability assumptions dominate loss uncertainty in this case study. Damage ratio, exposure resolution and asset detail level were the three most influential variables. Each of these directly shapes how assets are classified and how losses are calculated, through the vulnerability curves assigned in the model.
Uncertainty in flood depth – used here as a proxy for hazard intensity – had less impact on AAL variation than might be expected. Of the variables tested, the climate scenario had the least effect. However, this result is specific to the event sets and region in question: Queensland, using scenarios based on a single climate model.
The apparent insensitivity likely reflects the spatial complexity of climate signals in this region, where projected changes in flood hazard vary widely in both direction and magnitude. Importantly, sensitivity to flood depth or climate input is unlikely to be uniform across Australia. For instance, steeper catchments or more confined river valleys may respond more strongly to shifts in flood depth than broad, low-gradient floodplains. These findings should therefore not be generalised beyond this test case or assumed to apply to all future climate analyses.
Figure 2 helps explain how exposure assumptions affect loss. As exposure resolution becomes coarser, two things happen. First, the number of flood events causing loss increases, as buildings are spread more widely and intersect more events. Second, the model assigns higher-risk building characteristics to assets – most notably, more residential classifications and lower first-floor heights. These shifts reflect a loss of structural variability during disaggregation and contribute to higher and more variable loss estimates.
In short, when exposure data is coarse or incomplete, the model fills the gaps – and those inferred assumptions can have a measurable impact on loss.
Implications for practice
Catastrophe models bring together many moving parts. As they grow in complexity, it becomes harder to trace which assumptions have the greatest influence on results. Global sensitivity analysis provides a transparent, systematic way to identify the key drivers of loss.
In this case study, exposure and vulnerability assumptions mattered more than hazard or climate scenario. That doesn’t diminish the importance of climate modelling – especially as multi-model ensembles become standard. But it highlights where model improvement and validation efforts might yield the most immediate gains in loss credibility.
Let’s talk about your risk
We’ve developed a flexible GSA tool that can be applied to different regions, portfolios and model setups. It can support validation, disclosure, regulatory submissions – or simply help you understand what’s driving the spread in your modelled losses.
To find out more, email hello@jbarisk.com.