Catastrophe models: what should we fix?

Problems with catastrophe models and how we should mend them

Here at JBA, we’ve been making catastrophe models (cat models) successfully for over a decade. We believe it’s time to challenge the way these models are built, allowing us to fill gaps in model coverage and shift the balance of control from the vendor to the user. We want to help our clients understand, manage and quantify their exposure to flood risk more effectively and efficiently than ever before.

But why should we change our modelling mindset? And how can this be done?

The need to change the way we build and run cat models

Several factors tell us that a step-change is needed:

  • Expense: Traditional cat models are expensive and time-consuming to build well, and are invariably limited to certain geographies and perils. They may be assembled from the best-quality data available, but because so much is pre-processed at build time, they are costly to construct and to change.
  • Control: How many models to date have been truly transparent? It’s too often difficult to thoroughly investigate and interrogate the behaviour of traditional cat models. If users can’t control the datasets a model uses or the parameters that run it, their understanding is limited to what the model provider tells them. It’s almost impossible to run the model the way the user wants.
  • Flexibility and ease of update: Updating a model is a lot of work for both model developers and clients, and there are plenty of market obstacles to releasing frequent, versioned model updates. Often, model versions in production use with our clients are several years old. The result is an over-reliance on the view of risk the vendor held when the model was compiled, rather than a current view informed by the vendor, user experience and the latest science.
  • Accessibility: However good a new model is, if it doesn’t run on a user’s existing platform, it’s often easier to stick with an older, coarser, less reliable and ultimately unsuitable model. For a specialist firm like JBA, making our models accessible to those who wish to evaluate and use them is critical.
  • The simplicity vs accuracy struggle: Traditional cat models often contain over-simplifications that dilute the value of the source data. Summarising a 30m or 5m resolution flood map across a whole province during the model build feels wrong: data quality is lost because high-resolution flood depths are never carried through to the damage calculations. Yet the same models can also contain over-complexity, for example using convoluted means to derive an event-location flood depth from an original event return period; ironically, this often goes hand-in-hand with the over-simplification. Why not simply sample n depths directly from the flood map and use those values in the damage calculation, as sketched below this list?
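
To illustrate that simpler approach, here is a minimal Python sketch of sampling depths straight from a flood map and feeding them into a damage calculation. The depth-damage curve, the sampled cell depths and the function names are hypothetical placeholders for illustration, not JBA components.

    import numpy as np

    # Hypothetical depth-damage (vulnerability) curve: flood depth in metres -> damage ratio.
    CURVE_DEPTHS = np.array([0.0, 0.5, 1.0, 2.0, 4.0])   # metres
    CURVE_RATIOS = np.array([0.0, 0.2, 0.45, 0.7, 0.9])  # fraction of insured value lost

    def damage_ratio(depths_m):
        """Interpolate damage ratios for an array of sampled flood depths."""
        return np.interp(depths_m, CURVE_DEPTHS, CURVE_RATIOS)

    def sample_event_loss(cell_depths_m, insured_value, n=100, rng=None):
        """Sample n depths directly from the high-resolution flood map cells covering
        a location and average the resulting losses - no intermediate summarisation."""
        rng = rng or np.random.default_rng()
        sampled = rng.choice(cell_depths_m, size=n, replace=True)
        return float(np.mean(damage_ratio(sampled)) * insured_value)

    # Depths (m) read from the flood map cells intersecting one property footprint.
    cell_depths = np.array([0.0, 0.3, 0.6, 0.8, 1.2])
    print(sample_event_loss(cell_depths, insured_value=250_000, n=1000))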

We believe that a balance is needed: keep the logic as simple as possible whilst directly using the best quality data for your modelling need. We can compute more today than yesterday, for the same relative cost, so we can move on from decade-old modelling techniques. Why summarise and over-engineer when this is no longer essential to get the model to run?

Steps towards our model Utopia

  • There must be an easier, quicker, more efficient way to make a cat model, and to run it. It’s time to find a better way of making our models so that they address the limitations outlined above.
  • We should make the question, “Have you got a flood model for country x?” redundant!
  • We must offer more control to the informed model user so they can understand and use cat models in the way they want, with their decisions taking effect at run time; to achieve this, we should design a more continuous modelling workflow.
  • Our new workflow must also easily accept improvements to components and different assumptions. We must design a more flexible model, where users can adjust individual datasets without affecting other parts of the model.
  • We should anticipate future demands on the model. For example, climate change is finally a serious topic within cat modelling. In future, we should be able to model different climate scenarios as a matter of course, for whichever spatial domain we choose.
  • Models should have a widening range of access options, be that as an analysis service, an integrated in-house setup, or hosted access via API or UI… the list will no doubt grow.

How will we get there?

JBA has well-established Global Flood Maps and an event set with global coverage. Combined, these form the basis of a new Global Flood Model. Today’s technology enables us to handle data of this size in a probabilistic model. So we’re doing it, and in a way that overcomes the above challenges and creates a new modelling mindset.

Flexibility is key. We are enabling users to supply their datasets of choice to the software which runs the model. We have moved away from pre-baked models and towards a single runtime which incorporates more and more of those traditional model building stages. We are aiming to remove everything that ties our models to limited spatial domains, vulnerabilities or assumptions. We will run our future models on a single, interoperable modelling system with a variety of means of access, capable of supporting different levels of sophistication as the user or territory demands.
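
As a purely illustrative sketch of what run-time control over model components could look like, the short Python configuration below is hypothetical and is not JBA's actual interface; it simply shows the idea of choosing hazard, event set, vulnerability and climate-scenario inputs when the model is run rather than when it is built.

    from dataclasses import dataclass

    @dataclass
    class ModelRun:
        """Illustrative run-time configuration: each component is chosen by the user
        when the model runs, rather than baked in when the model is built."""
        flood_map: str                       # hazard data to sample depths from
        event_set: str                       # probabilistic event set for the chosen domain
        vulnerability: str                   # depth-damage curves to apply
        exposure: str                        # the user's own portfolio
        climate_scenario: str = "baseline"   # alternative scenarios swapped in at run time
        n_samples: int = 100                 # depths sampled per event-location

    # The user adjusts any single component without rebuilding or re-releasing the model.
    run = ModelRun(
        flood_map="global_flood_map.tif",
        event_set="global_event_set.parquet",
        vulnerability="residential_depth_damage.csv",
        exposure="my_portfolio.csv",
        climate_scenario="2050_high_emissions",
    )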

In doing this, we intend to free up cat modellers to concentrate on their key principles, and on the analyses they want to perform, rather than trying to bend fixed models to new purposes. As a skilled model provider, we will continue to provide a consistent, stable “JBA” view, but we will also make our models easier to challenge, explore, and influence. We aim to turn the concept of transparency from a buzzword into an approach that offers real user control.

Let's discuss this in person

If you're interested in finding out more about the current challenges in catastrophe modelling and how we're moving towards a new way of modelling with our Global Flood Model, please get in touch.
