Some thoughts on working with vulnerability indicators

Submitted by Daniel Morchain | published 3rd Apr 2012 | last updated 17th Mar 2020

The recently released IPCC special report ‘Managing the risks of extreme events and disasters to advance climate change adaptation - Summary for policymakers’ (2012) defines vulnerability as “the propensity or predisposition to be adversely affected”. This description is complemented by previous IPCC reports, which define vulnerability as “a function of the character, magnitude, and rate of climate variation to which a system is exposed, its sensitivity, and its adaptive capacity” (2007).  

Defining vulnerability in such broad terms implies that working with the term will normally require the use and combination of data from different sources, including socio-economic and environmental data, as well as estimates of future climatic conditions. 

The wide scope for the understanding of vulnerability - or, in other words, the complexity of expressing it mathematically and of assigning numerical values to its components - largely explains why it is not feasible to define and agree on a single vulnerability measure or 'index' and rank countries in a straightforward and purely ‘scientific’ manner; at least not without introducing considerable normative preferences.

Additionally, in order for vulnerability to be measured, its elements need to be sub-divided and defined within narrow boundaries. This means that many ‘vulnerability values’ need to be aggregated in order to build a complete picture of the vulnerability of, for example, a given community in a given place to a given climate stimulus over a given time period.
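To make the aggregation step concrete, here is a minimal sketch of how sub-indicator values might be normalised, weighted and combined into a composite vulnerability score. The indicator names, values and weights are entirely hypothetical and chosen only for illustration; real indices use far more elaborate (and contested) methods.

```python
# Hypothetical sketch: aggregating sub-indicators into a composite score.
# All indicator names, values and weights below are illustrative only.

def min_max_normalise(values):
    """Rescale raw indicator values onto a common 0-1 range."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

# Raw sub-indicator values for three hypothetical communities
indicators = {
    "exposure":          [0.8, 0.3, 0.5],  # e.g. flood frequency
    "sensitivity":       [0.6, 0.7, 0.2],  # e.g. share of rain-fed agriculture
    "adaptive_capacity": [0.2, 0.9, 0.5],  # e.g. income, literacy (higher = better off)
}

# Weights are a normative choice, not a scientific given
weights = {"exposure": 0.4, "sensitivity": 0.3, "adaptive_capacity": 0.3}

normalised = {k: min_max_normalise(v) for k, v in indicators.items()}

# Higher adaptive capacity *reduces* vulnerability, so invert that component
normalised["adaptive_capacity"] = [1 - v for v in normalised["adaptive_capacity"]]

# Weighted sum per community: one number hiding many assumptions
composite = [
    sum(weights[k] * normalised[k][i] for k in weights)
    for i in range(3)
]
print(composite)
```

Every step here - the choice of indicators, the normalisation, the inversion of adaptive capacity, and above all the weights - embeds exactly the kind of normative preference discussed above.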

The complexity of data aggregation, and the difficulty users face in understanding the process and its implications, further complicate the development and application of these tools. Introducing normative values can, furthermore, lead to bias - even if well-meaning.

These obstacles to developing robust vulnerability indices have led Fuessel (2009), for instance, to argue that “all existing indices of (social and outcome) vulnerability to climate change show substantial conceptual, methodological and empirical weaknesses including lack of focus, lack of a sound conceptual framework, methodological flaws, large sensitivity to alternative methods for data aggregation, limited data availability, and hiding of legitimate normative controversies”.

Still, many organisations have dedicated considerable efforts to trying to overcome these barriers and develop vulnerability indices that can support decision making amidst very complex and uncertain environments. A clearly laid out and not overly complex (aggregation) methodology, together with a well-founded selection of indicators, can enhance the transparency and accessibility of any given vulnerability index.
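The “large sensitivity to alternative methods for data aggregation” that Fuessel notes can be illustrated with a small sketch: two hypothetical regions whose ranking flips depending on whether their (already normalised) sub-indicator scores are combined with an arithmetic or a geometric mean. The values are purely illustrative.

```python
import math

# Two hypothetical regions scored 0-1 on three normalised sub-indicators.
# Region A is moderately vulnerable across the board; region B has one
# extreme component and one very low one. Values are illustrative only.
scores = {
    "A": [0.5, 0.5, 0.5],
    "B": [0.9, 0.6, 0.1],
}

def arithmetic_mean(xs):
    return sum(xs) / len(xs)

def geometric_mean(xs):
    # Penalises uneven profiles: one very low component drags the score down
    return math.prod(xs) ** (1 / len(xs))

for name, xs in scores.items():
    print(name, round(arithmetic_mean(xs), 3), round(geometric_mean(xs), 3))
```

Under the arithmetic mean region B comes out as more vulnerable than A, while under the geometric mean the ranking reverses - the same data, two defensible methods, two different 'most vulnerable' regions. This is why understanding the aggregation method behind an index matters as much as the indicators themselves.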

Indeed, there are several applications for vulnerability indicators. They can be useful as communication and awareness-raising tools that are easy for non-experts to grasp and work with, as well as for comparing elements (countries, regions, species, ...) and for informing policy makers. Another common use of vulnerability index rankings is prioritising the allocation of adaptation funding - even if this can be a contentious issue, considering the points discussed above.

In a nutshell: When using vulnerability indices, make sure to have a clear understanding of the purpose and focus areas of the index (e.g. water, governance, extreme events, holistic approach), what indicators have been used to build it, how these are aggregated and what normative choices may have been made in the process. Indices can be valuable tools for gaining a general understanding and comparing regions, as well as powerful communication instruments. At the same time, understand the limitations and potential biases of vulnerability indices and support your decision-making process with other sources.