Last week I participated in a reproduction of the climate change vulnerability score calculation for Malawi, following the Malcomb 2014 study. The result was not satisfactory, and I identified several defects in the study while reproducing it.

Constructing a vulnerability model requires a wide range of data sources, and the entire process involves many subjective decisions. For researchers attempting a reproduction, the first step in evaluating the validity of a social vulnerability model is to clarify the steps the author took to construct it. Next, after following the author's instructions, we check whether they lead us to the same results as the original study. Ambiguity in the original model setup, or a failure to disclose arbitrary decisions, makes it less likely that we can reproduce the exact vulnerability score. A failure to reproduce the model would mean we should not consider applying it in our own study, because a model that cannot be reproduced is not trustworthy to us.
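
To make these subjective decisions concrete, here is a minimal sketch of the kind of hierarchical weighted score such models compute. Everything in it is a hypothetical stand-in, not Malcomb's actual specification: the indicator names, the quintile scaling, and the weights are all invented.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Hypothetical district-level indicators on arbitrary raw scales
# (variable names and values are invented for illustration).
df = pd.DataFrame({
    "assets": rng.uniform(0, 1, 20),
    "water_access": rng.uniform(0, 1, 20),
    "livelihood_sensitivity": rng.uniform(0, 1, 20),
})

# Subjective decision 1: bin each indicator into quintile scores 1..5.
scores = df.apply(lambda col: pd.qcut(col, 5, labels=False) + 1)

# Subjective decision 2: expert-assigned weights; changing them
# changes which districts come out as most vulnerable.
weights = {"assets": 0.4, "water_access": 0.3, "livelihood_sensitivity": 0.3}

# The composite vulnerability score is the weighted sum of quintile scores.
df["vulnerability"] = sum(scores[k] * w for k, w in weights.items())
print(df.sort_values("vulnerability", ascending=False).head())
```

Each line marked as a subjective decision is a place where an author's unstated choice can make exact reproduction impossible.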

If, fortunately, we do get the same results, then we have a clearer map of the model's internal structure and logic, and we can put that knowledge to use in our own study. To make the model work best for us, we may need to tune the parameters, methods, or any of the subjective decisions the author identified, once we have replicated the study. For example, we might keep the same variables and data but change how the scores are computed, or the scale on which they are presented. We must also keep in mind the real-world consequences of a calculated vulnerability score, since underestimating an area's vulnerability could be detrimental to the people who live there.
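
As a toy illustration of why such tuning matters, the sketch below (again with made-up data, not any real study's inputs) computes the same composite score under two scaling choices and checks how closely the resulting rankings agree:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Made-up indicators; "assets" is skewed on purpose to stress the scalers.
df = pd.DataFrame({
    "assets": rng.exponential(1.0, 30),
    "water_access": rng.uniform(0, 1, 30),
})
weights = {"assets": 0.5, "water_access": 0.5}

# Choice A: quintile scores (1..5), robust to outliers but coarse.
quintiles = df.apply(lambda col: pd.qcut(col, 5, labels=False) + 1)
score_a = sum(quintiles[k] * w for k, w in weights.items())

# Choice B: min-max rescaling to [0, 1], finer but sensitive to extremes.
minmax = (df - df.min()) / (df.max() - df.min())
score_b = sum(minmax[k] * w for k, w in weights.items())

# If this is well below 1, two "equivalent" models rank places differently.
print("Spearman rank correlation:", score_a.corr(score_b, method="spearman"))
```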

Reading Rufat et al.'s study of social vulnerability models for Hurricane Sandy, I appreciated their effort in applying four mainstream model types: an inductive model based on factor analysis, a hierarchical weighted model based on expert knowledge, a deductive model composed of thematic pillars, and a profile approach based on clusters. Using multiple models and comparing their results may reduce uncertainty and increase confidence in identifying vulnerable regions.
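
For a flavor of the first of these, here is a rough sketch of an inductive, SoVI-style index, with PCA standing in for factor analysis and invented indicator names; a real application would also involve rotation and manual sign adjustment of the components.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)

# Invented census-style indicators (illustrative only).
X = pd.DataFrame({
    "pct_poverty": rng.uniform(0, 40, 50),
    "pct_over_65": rng.uniform(5, 25, 50),
    "pct_no_vehicle": rng.uniform(0, 30, 50),
    "median_income": rng.uniform(20_000, 90_000, 50),
})

# Inductive approach: standardize the indicators, extract a few
# components, and combine them into a single index.
Z = StandardScaler().fit_transform(X)
pca = PCA(n_components=2)
components = pca.fit_transform(Z)

# Naively summing the retained components; deciding how many to keep
# and which signs to flip is itself a subjective analyst decision.
sovi_index = components.sum(axis=1)
print("explained variance ratios:", pca.explained_variance_ratio_)
```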

To my disappointment, Rufat et al. conclude on a negative note, finding a “mismatch between the rising application of social vulnerability models and understanding of their empirical validity”. The outputs of the mainstream models failed to align with the empirical outcomes of the disaster. I suspect that our existing data cannot capture all the relevant variables, and that statistical reasoning may deviate from reality. Given global climate change and increasingly frequent extreme weather events, past vulnerability models should be updated and re-evaluated to help prevent high casualties during disasters.


Bibliography:

Cutter, S. L., Boruff, B. J., & Shirley, W. L. (2003). Social vulnerability to environmental hazards. Social Science Quarterly, 84(2), 242–261. https://doi.org/10.1111/1540-6237.8402002

Rufat, S., Tate, E., Emrich, C. T., & Antolini, F. (2019). How valid are social vulnerability models? Annals of the American Association of Geographers, 109(4), 1131–1153. https://doi.org/10.1080/24694452.2018.1535887