Modelling and Uncertainty - the extra dimension

As JBA continues to consider new data and methodologies, members of the climate change team share insight into the uncertainties present in climate models and how you might handle uncertainty in future flood risk modelling. This is the first of two blogs on this subject.

Climate model uncertainty: why does it matter when modelling future flood risk?

At JBA we have a pan-company initiative to improve how we calculate and communicate uncertainty in all our products, including our flood maps, event sets, and loss models. In this blog, we explore the extra dimension that climate model uncertainty adds to an already complex problem when assessing how flood risk may change in the future.

Climate model output is used to inform us about possible future impacts of climate change. However, future climate change is uncertain in a myriad of ways and climate risk products and methods should acknowledge this uncertainty in their design and delivery.

So what are the sources of uncertainty in climate projections, how are they quantified, and what are the implications for those who use climate model output in their workflows?

A framework for climate model uncertainty

There are three main sources of uncertainty in climate projections: natural variability, model uncertainty, and scenario uncertainty.

Natural variability

Natural variability refers to climatic fluctuations that happen without any human influence. It results from internal variability in the climate system across a range of timescales and includes day-to-day changes in the weather, intra-seasonal changes in the strength of the jet stream, and multi-year fluctuations in atmosphere-ocean energy transfers such as El Niño Southern Oscillation (ENSO). At shorter timescales, natural variability is also referred to as “weather noise”.

Natural variability is an irreducible uncertainty, but it can be minimised using multi-year samples (e.g., 30 years of data is a “climate normal”). It can also be explored and quantified using an initial condition ensemble, which is where the same climate model performs a single experiment multiple times with the only difference between the simulations being the initial conditions. At the global scale, the different ensemble members will all evolve in line with the specified climate scenario; however, there will be differences between the ensemble members at a regional scale due to the chaotic nature of weather (also referred to as the “butterfly effect”).
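To illustrate how an initial condition ensemble can be used to estimate natural variability, the sketch below builds a synthetic ensemble (all numbers are made up for illustration; real ensemble output would come from climate model data files) and compares the 30-year climate normals across members:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for an initial condition ensemble: 10 members, each with
# 30 years of annual-mean rainfall (mm) for one grid cell. All members share
# the same forced trend; only the "weather noise" differs between them.
n_members, n_years = 10, 30
forced_signal = np.linspace(1000.0, 1050.0, n_years)          # identical in every member
weather_noise = rng.normal(0.0, 40.0, (n_members, n_years))   # member-specific internal variability
ensemble = forced_signal + weather_noise

# A 30-year mean (a "climate normal") smooths out much of the weather noise...
climate_normals = ensemble.mean(axis=1)

# ...and the spread of those normals across members estimates the natural
# variability that remains even after multi-year averaging.
natural_variability = climate_normals.std(ddof=1)
print(f"Climate normals span {climate_normals.min():.0f}-{climate_normals.max():.0f} mm")
print(f"Cross-member spread: {natural_variability:.1f} mm")
```

Note how the cross-member spread is much smaller than the year-to-year noise: averaging over a climate normal minimises, but does not eliminate, natural variability.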

Model uncertainty

Model uncertainty arises from our incomplete knowledge of the climate system and shortcomings in our ability to represent environmental processes using physical equations. This includes the need to develop approximations (“parameterisations”) of processes at sub-grid scale (e.g., convection and clouds) because limitations in computing power place limitations on achievable model resolution.

Given the same inputs of greenhouse gas emissions (or “forcing”), different models will simulate different levels of climate change. For instance, for the same climate scenario and future time period, some models will warm more and some will warm less, related to having a higher or lower climate sensitivity, respectively. Models can also have divergent responses to identical forcing, such as a given region getting wetter or drier, due to contrasting atmospheric responses (e.g., a poleward versus an equatorward shift in the jet stream).

While model uncertainty will never be fully quantified (due to the presence of “unknown unknowns”), it can be explored by comparing simulations from models that differ from one another. Multi-model ensembles are used to explore large differences in model structure (i.e., where different choices have been made about how to capture climate system behaviour in physical equations): different models, typically from different institutions around the world, run the same simulations. Perturbed parameter ensembles are used to explore the effect of different choices for the values of parameters used in the physical equations: one climate model runs the same experiment multiple times, varying the values of key parameters within a plausible range. Multi-model perturbed parameter experiments are rarer because of the challenges in mapping equivalent parameter changes between models.
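A minimal sketch of what a multi-model ensemble tells you: given hypothetical warming projections (illustrative numbers, not real CMIP output) from several models run under identical forcing, the cross-model spread gives a simple first estimate of model uncertainty:

```python
import numpy as np

# Hypothetical end-of-century warming (degC) from five structurally different
# models, all run under the same forcing scenario. Names and values are
# invented for illustration only.
projections = {
    "model_a": 2.1, "model_b": 2.8, "model_c": 3.4, "model_d": 2.5, "model_e": 4.0,
}
values = np.array(list(projections.values()))

# The multi-model mean is a central estimate; the cross-model standard
# deviation is a simple (lower-bound) measure of model uncertainty, since the
# ensemble cannot sample "unknown unknowns" shared by all models.
mm_mean = values.mean()
mm_spread = values.std(ddof=1)
print(f"Multi-model mean warming: {mm_mean:.1f} degC (+/- {mm_spread:.1f} degC, 1 s.d.)")
```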

Scenario uncertainty

Scenarios are self-consistent and plausible imaginings of the future. Examples include Representative Concentration Pathways (RCPs) and Shared Socioeconomic Pathways (SSPs). Scenario uncertainty describes our imperfect knowledge of future socio-economic and technological trajectories, and therefore what our future greenhouse gas emissions might be. In the most straightforward sense, this uncertainty is captured by considering a range of different climate scenarios (e.g., by comparing an SSP1-1.9 world to an SSP2-4.5 world).

In the near-term (until at least the middle of the century), natural variability is the dominant source of uncertainty in climate projections. Longer term (i.e., towards the end of the 21st century), scenario uncertainty and model uncertainty become more important and tend to dominate the uncertainty at larger scales (continental and above), although natural variability can still dominate at smaller scales for certain variables (including rainfall).

Making sensible selections from a multitude of models

Hundreds of climate models have been developed by research institutes across the globe; for example, the latest set of simulations of the Coupled Model Intercomparison Project (CMIP6) includes over 100 models from more than 50 modelling centres. Those who ingest climate model output into their workflows must therefore make decisions as to 1) how many and 2) which models they use, with implications for capturing uncertainty in their products.

Model skill varies spatially and by variable; therefore, a tractable starting point to model selection is to use output from a model that compares favourably to historical observations in the region of interest. JBA Risk Management adopted a skill-based model selection process for the climate change tools that we launched last year. However, our approach to climate model selection will continue to evolve and we have been exploring options for sampling natural variability and model uncertainty, including the following:

  • Single model initial condition ensemble – Using multiple sets of output from a single climate model could keep the focus on performance for the region and variable of interest whilst better sampling natural variability. However, model uncertainty would remain unsampled.
  • Multi-model ensemble – Using output from multiple climate models is a good way of sampling model uncertainty. The ensemble can be weighted to give precedence to more skilful models. However, using output from tens of climate models generates large amounts of data.
  • Reduced ensemble – Using output from a few climate models selected via intelligent sampling can provide an indication of model uncertainty without the complexity of the full ensemble. Models can be subset based on their climate sensitivity (i.e., how much they warm for a given amount of CO2) or their response to forcing (e.g., models whose jet stream shifts poleward in response to climate change versus models whose jet stream shifts equatorward). We explore this option further in the second blog in this series.
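To make the skill-based selection idea concrete, here is a hedged sketch (synthetic observations and invented model names; JBA's actual selection process is more involved) that ranks candidate models by their error against historical observations, keeps the best few as a reduced ensemble, and derives skill-based weights:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic historical annual rainfall observations for a region (mm) and
# hindcasts from four hypothetical candidate models -- for illustration only.
obs = rng.normal(1000.0, 50.0, 30)
hindcasts = {
    "model_a": obs + rng.normal(0.0, 20.0, 30),    # small errors -> skilful
    "model_b": obs + rng.normal(60.0, 30.0, 30),   # wet bias
    "model_c": obs + rng.normal(-80.0, 40.0, 30),  # dry bias
    "model_d": obs + rng.normal(0.0, 90.0, 30),    # unbiased but noisy
}

# Skill metric: root-mean-square error against observations (lower is better).
rmse = {name: float(np.sqrt(np.mean((sim - obs) ** 2))) for name, sim in hindcasts.items()}

# Reduced ensemble: keep only the most skilful models for this region/variable.
ranked = sorted(rmse, key=rmse.get)
reduced_ensemble = ranked[:2]

# Weighted multi-model ensemble: inverse-RMSE weights give precedence to
# more skilful models while retaining all members.
inv = {name: 1.0 / rmse[name] for name in ranked}
total = sum(inv.values())
weights = {name: inv[name] / total for name in ranked}
print("Skill ranking:", ranked)
print("Reduced ensemble:", reduced_ensemble)
```

The same skeleton applies whichever skill metric is used; in practice the comparison would be made against gridded observational or reanalysis datasets for the variable of interest.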

We are reliant on climate model output to project future climate change and its impacts. However, it is important to be aware of – and quantify where possible – the range of uncertainties in this data so that we can better understand the uncertainty in our climate risk products.
