Extending the record: using
historical evidence to
improve flood estimation

Flood frequency estimates are an essential part of flood risk management. They relate the severity of a flood event to the likelihood of that flow occurring in any given year, which in turn helps us quantify risk. In February 2019, colleagues from JBA Risk Management attended a lecture, hosted in partnership with the Royal Geographical Society and the Yorkshire Philosophical Society and given by Dr Neil Macdonald of the University of Liverpool, who reiterated a common problem in the estimation of more extreme flood events: the lack of extended records.
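As a simple illustration of that relationship between severity and annual likelihood, the sketch below fits a Gumbel (EV1) distribution to an invented 20-year series of annual maximum flows and converts it into return levels. Both the data and the method-of-moments fit are illustrative assumptions for this blog, not JBA's methodology.

```python
import math

def fit_gumbel(annual_maxima):
    """Method-of-moments fit of a Gumbel (EV1) distribution to annual maxima."""
    n = len(annual_maxima)
    mean = sum(annual_maxima) / n
    var = sum((x - mean) ** 2 for x in annual_maxima) / (n - 1)
    beta = math.sqrt(6 * var) / math.pi      # scale parameter
    mu = mean - 0.5772156649 * beta          # location (Euler-Mascheroni constant)
    return mu, beta

def return_level(mu, beta, T):
    """Flow exceeded on average once every T years (annual exceedance prob. 1/T)."""
    return mu - beta * math.log(-math.log(1 - 1 / T))

# Invented 20-year record of annual maximum flows (m^3/s)
amax = [112, 96, 145, 88, 130, 101, 170, 92, 118, 105,
        138, 99, 160, 110, 125, 94, 150, 103, 121, 135]
mu, beta = fit_gumbel(amax)
print(f"1-in-10-year flow:  {return_level(mu, beta, 10):.0f} m^3/s")
print(f"1-in-100-year flow: {return_level(mu, beta, 100):.0f} m^3/s")
```

Note that the 1-in-100-year estimate extrapolates far beyond a 20-year record; that extrapolation is precisely where short records make estimates unreliable.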

The problem

In flood risk analysis, understanding large flood events is key because these are typically the most damaging and costly. However, extremes by their very nature are the most poorly represented by conventional methods, largely because the average length of UK river flow series is only 40 years (NRFA, 2018). Of around 1,500 current gauging stations across the UK, only around 40 have a daily flow record exceeding 70 years, and the earliest known flow monitoring site (Wendover Wharf, Buckinghamshire) dates back only to 1841 (NRFA, 2018). This lack of extended data series makes it difficult to reliably estimate extreme events and so presents a considerable challenge for determining flood risk.

The value of historical evidence

Dr Macdonald began his career in hydraulics, hydrological modelling and the statistical aspects of flood frequency analysis but in recent years his research has explored a variety of alternative yet complementary sources of flood information. With a scarcity of measured river data, historical evidence sometimes provides the only insight into extreme events. Examples of the different types of historical records include:

  • Epigraphic markings – inscribed water levels on structures, often bridges, to mark the height of a flood (seen below)
  • Documentary sources – parish records, newspapers, economic records, military sources, estate records and diaries
  • Images – wood carvings, paintings, photographs
  • Markers/objects, e.g. floodstones – placed to mark the greatest spatial extent of a flood
  • Sediments – reconstructing floods from sediment accumulations (palaeoflood evidence)

Pictured right: An example of epigraphic markers. The high watermarks reached by floods have been carved into the stonework of Trent Bridge in Nottingham. (Image provided by Dr Macdonald)

These historical data are found in many towns and cities across the UK (see photos) but also worldwide, and offer a much longer record of flooding from which we can reconstruct flood magnitudes. For example, York has one of the longest UK records of river levels (dating back to 1877), but documentary records of flooding extend much further back, to 1263 AD. These records can provide information on flood timing, the damage and impact of an event, the societal response and the flood generating mechanisms. This provides a fuller picture of flooding than the quantitative elements of flood magnitude and frequency alone. Therefore, by embedding these records in flood frequency analysis, we can decrease uncertainty and increase our understanding of flood risk.

Admittedly not all records can or should be incorporated into our analysis – the reliability of the data must be considered. Historical records can be inherently subjective and may not include details on the exact water level or, if they do, they may be relative to structures that do not exist anymore. Also, we may only know the flow depth at a singular fixed point with no information about the extent or duration of flooding, which can be key in defining the ‘size’ of an event. Nonetheless, Dr Macdonald raised the valid point that surely it is better to include these uncertain records of extreme floods than to have no representation at all, especially as we can statistically represent this uncertainty.
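One standard way to "statistically represent this uncertainty" is a censored-likelihood flood frequency fit: gauged years contribute their exact flows, while a historical period contributes only the count of years known to have exceeded a perception threshold (a floodstone level, say). The sketch below is a minimal illustration with invented numbers and a crude grid search; it is not the method used in any of the studies discussed here.

```python
import math

def gumbel_logpdf(x, mu, beta):
    z = (x - mu) / beta
    return -math.log(beta) - z - math.exp(-z)

def gumbel_cdf(x, mu, beta):
    return math.exp(-math.exp(-(x - mu) / beta))

def neg_log_lik(mu, beta, gauged, h_years=0, h_exceed=0, threshold=None):
    """Gauged years contribute exact densities; the historical period enters
    as a binomial (censored) term: h_exceed of h_years topped the threshold."""
    ll = sum(gumbel_logpdf(x, mu, beta) for x in gauged)
    if h_years and threshold is not None:
        p = min(max(1 - gumbel_cdf(threshold, mu, beta), 1e-12), 1 - 1e-12)
        ll += h_exceed * math.log(p) + (h_years - h_exceed) * math.log(1 - p)
    return -ll

def grid_fit(gauged, **hist):
    """Crude maximum-likelihood search over a fixed parameter grid."""
    best = None
    for mu in range(60, 140):
        for b10 in range(50, 400, 5):        # beta from 5.0 to 39.5 in steps of 0.5
            beta = b10 / 10
            nll = neg_log_lik(mu, beta, gauged, **hist)
            if best is None or nll < best[0]:
                best = (nll, mu, beta)
    return best

# Invented 15-year gauged record (m^3/s), plus 3 known exceedances of a
# 220 m^3/s floodstone level during a 250-year historical period
gauged = [112, 96, 145, 88, 130, 101, 170, 92, 118, 105, 138, 99, 160, 110, 125]
nll, mu, beta = grid_fit(gauged, h_years=250, h_exceed=3, threshold=220)
```

With the historical term included, the fit is pulled towards a distribution whose probability of exceeding the 220 m^3/s level is consistent with roughly 3 events in 250 years, which is exactly how an uncertain but long record can constrain the tail.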

Studies in this field

Pictured left: A newspaper clipping from 19 January 1866, documenting the ‘Great Flood’.

Clearly historical records can add value to flood risk analysis, but how do they really improve our understanding of high-magnitude floods? Macdonald and Sangster (2017) use historical records to examine spatial and temporal variability in river flooding across Britain since 1750 AD. Historical flood levels were estimated from documentary sources, physical evidence and epigraphic markings, and river discharges were then reconstructed, although greater emphasis was placed on ranking event severity than on obtaining precise discharge estimates. This presents a more extensive record of flooding, allowing longer-term trends and patterns to be explored in more detail, such as identifying 'flood-rich' and 'flood-poor' phases and their correlation with climatic drivers.

Studies into the added value of historical data are not limited to the UK. Engeland et al. (2017) investigated the use of historical data in flood frequency analysis for four catchments in Norway, where the typical length of a river gauge record is only 40–50 years. They analysed the methods of incorporating historical data as well as their associated challenges, and concluded that both the reliability (the ability of a model to predict flood levels) and the stability (the sensitivity to the underlying data) of the analysis can be improved.

How JBA extends the record

At JBA, we recognise the limitations of the observed record and use scientifically justified statistical and physical principles to simulate events across all sources of flood risk (river, surface water and coastal), generating a long stochastic event set that extends the measured flood record. For further information on the problem of relying on short record lengths, you can read our recent blog on using a flood catastrophe model to estimate financial loss.
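As a toy illustration of a stochastic event set (far simpler than a full catastrophe model), the sketch below draws 10,000 synthetic years of annual maximum flows from a hypothetical fitted Gumbel distribution by inverse-CDF sampling, so that a 1-in-100-year flow can be read off empirically rather than extrapolated from a few decades of data. The parameter values are invented for illustration.

```python
import math
import random

def simulate_event_set(mu, beta, n_years, seed=0):
    """Inverse-CDF sampling from a Gumbel distribution: if U ~ Uniform(0,1),
    then mu - beta * ln(-ln(U)) follows Gumbel(mu, beta)."""
    rng = random.Random(seed)
    # 'or 1e-12' guards against the (vanishingly rare) draw U == 0.0
    return [mu - beta * math.log(-math.log(rng.random() or 1e-12))
            for _ in range(n_years)]

# Hypothetical fitted parameters (m^3/s)
synthetic = simulate_event_set(mu=100.0, beta=20.0, n_years=10_000)

# The 1-in-100-year flow is simply the 99th percentile of the simulated maxima
synthetic.sort()
q100 = synthetic[int(0.99 * len(synthetic))]
print(f"Empirical 1-in-100-year flow: {q100:.0f} m^3/s")
```

A real event set would also carry the spatial footprint and dependence between perils for each simulated event, which is what makes it usable for portfolio-level loss estimation.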

Another established method of improving flood frequency analysis that we haven’t discussed in this blog is regionalisation, which is particularly useful for catchments where no gauge data is available. You can read more about this process in our blog on estimating flows in ungauged catchments.

Finally, our UK Flood Model incorporates a range of historical flood events to reduce uncertainty, allowing for validation and portfolio stress testing.

If you would like to find out more about probabilistic catastrophe models or how we can help you understand your flood risk, please get in touch.

You can find further information on Dr Macdonald’s work here.


Engeland, K., Wilson, D., Borsányi, P., Roald, L. and Holmqvist, E., 2017. Use of historical data in flood frequency analysis: a case study for four catchments in Norway. [online] Hydrology Research 49(2): pp.466-486.

Macdonald, N. and Sangster, H., 2017. High-magnitude flooding across Britain since AD 1750. [online] Hydrology and Earth System Sciences 21: pp.1631-1650.

National River Flow Archive, 2018. Early Records: About Data. [online] Available at: <https://nrfa.ceh.ac.uk/early-records> [Accessed 6 March 2019].
