Is risk communication always needed in risk management?

The dangers of “over-communication”

In our day and age, information spreads more quickly than ever before thanks to the internet, social media and smartphones. While these technologies allow us to communicate with the public with unprecedented speed, their rise also brings novel issues and challenges. Perhaps the most talked about lately is the rise of “fake news”, or alternative facts, which also applies to risk communication. However, another issue that is perhaps less often discussed, but similarly important, is that of “over-communication”.

General context

Nowadays, it is widely accepted that knowledge is derived from scientific research, in opposition to the “lay knowledge” accumulated by the public through experience, tradition or belief. This vision gives scientists (certified experts in a domain) credit and lends their knowledge a sense of factual truth of which the public’s (everyone else’s) is deprived. Dismissing as inaccurate, or as clashing with scientific facts, the knowledge amassed by the public, who when it comes to risks can often witness the phenomena first-hand, can lead to a general distrust of the “authorities” and make the creation and implementation of policies difficult. By communicating the results of their research, scientists give themselves a sense of legitimacy, which can be perceived as inappropriate by a public that might be confronted with the risk on a daily basis. So, is communication always needed, and how should it be done in order to minimise these issues?

Knowledge creation: different approaches

Three main theoretical approaches to knowledge creation were proposed by Callon (1999), in which the public is involved to different degrees alongside scientists. The first, coined the Public Education Model, depicts the widest divide between scientists and the public. In it, local (or lay) knowledge is disregarded in favour of scientific facts produced by scientists alone. The findings are then communicated to the public to “educate” them, hence the name of the approach. While this approach has its issues (trust issues towards scientists, notably), its importance cannot be ignored in certain scenarios. Indeed, it can be the only logical choice in situations where the risks are very high or where the public does not have the ability to understand them (risks related to nuclear power plants, for example).

The second approach is called the Public Debate Model. While the creation of knowledge is still performed by scientists alone, their findings are then presented to the public so that people can express their opinion on them. With this model, scientists acknowledge that their views can be incomplete and can therefore benefit from the experience of the people directly affected by the phenomenon studied. However, as commendable as this may sound, the opportunity given to people to express themselves does not guarantee that their opinion will be considered or that the debate will affect the end result of the knowledge creation. It is sometimes argued that these debates are more often than not used by scientists to persuade the public that the scientific facts produced are correct and to minimise the controversy they might engender.

The final model is the Co-production of Knowledge Model. In this approach, scientists and the public create knowledge together from start to finish. Here, scientists acknowledge that the knowledge amassed by the public through their experience is central to the creation of scientific facts and cannot be disregarded. Ideally, since the public plays a central part in creating the knowledge, there would be no need to communicate the results to them. The reality, however, is not so simple, and some degree of communication is still needed. While this approach seems to solve the problems of trust and controversy, it is much more difficult to implement and highly time-consuming, which makes it unsuitable for some situations.

Communication and the “cry wolf” effect

In theory, the three models described require different amounts of communication with the public, given that the public helped create the knowledge to different extents. In the first model, communication is key, as the public cannot otherwise be aware of the scientists’ findings. This can, however, result in controversy if the knowledge advanced by scientists clashes with that of the public. In the second model, communication is partially achieved through debate. However, the entirety of the public affected by the findings is usually not present during those debates and must thus be informed afterwards. Moreover, once the public’s opinion has been heard, scientists will adjust (or not) their findings and will have to communicate the final results to the public. The final model requires the least amount of communication, as the public took part in the creation of the knowledge. However, as with the previous model, it is rarely the entirety of the people concerned that takes part in the process; some communication from the group of researchers (comprising scientists and members of the public) to the rest of the public must still be undertaken.

A major issue with communication is the risk of “over-communicating”, a phenomenon otherwise known as the “cry wolf” effect. Because of the amount of monitoring taking place nowadays, and because almost everyone carries a smartphone, we are surrounded by alarms informing us of all kinds of dangers at all times. As much as technologies have evolved over the years, warning systems are not perfect and can produce false alarms, which can quickly become an issue if too many occur: the public may then start ignoring future warnings. Communication to the public must thus be done carefully. The information transmitted must be as reliable as possible in order to minimise the risk of false alarms and thus maximise the public’s responsiveness to them. The model most jeopardised by this phenomenon is the Public Education Model, as the public might already have trust issues towards scientists and the knowledge they claim as true. Conversely, the more the public partakes in the creation of knowledge, the better it will understand why false alarms can occur, without disregarding future warnings.
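To make the “cry wolf” dynamic more concrete, here is a minimal toy sketch (not taken from the sources cited below; the decay and recovery parameters are invented purely for illustration) in which each false alarm erodes the public’s willingness to respond to the next warning, while a confirmed alarm restores a little of that trust.

```python
import random

def simulate_cry_wolf(n_alarms=20, p_false=0.7, decay=0.85, recovery=0.05, seed=42):
    """Toy model of the 'cry wolf' effect (illustrative assumptions only).

    - responsiveness starts at 1.0 (everyone heeds the first warning)
    - each false alarm multiplies responsiveness by `decay`
    - each confirmed alarm adds `recovery`, capped at 1.0
    """
    random.seed(seed)
    responsiveness = 1.0
    history = []
    for i in range(n_alarms):
        false_alarm = random.random() < p_false
        if false_alarm:
            responsiveness *= decay  # trust erodes after a false alarm
        else:
            responsiveness = min(1.0, responsiveness + recovery)  # partial recovery
        history.append((i + 1, false_alarm, round(responsiveness, 2)))
    return history

if __name__ == "__main__":
    for alarm, was_false, resp in simulate_cry_wolf():
        label = "false" if was_false else "true "
        print(f"alarm {alarm:2d} ({label}): expected response rate ~= {resp}")
```

Running this with a high false-alarm rate shows the expected response rate dropping steadily, which is the erosion of attention described above; lowering the false-alarm rate keeps responsiveness close to its initial value.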

Is risk communication always needed in risk management?

Communication is needed to different extents depending on how the knowledge was created. If scientists were alone in the process and did not include the knowledge of the public at any point, thorough communication is needed in order to inform the latter as well as possible. Controversy can arise from this situation, however, if the knowledge presented does not agree with that already held by the public, which can lead to a mistrust of scientists as a whole. Were the public involved in the process from the beginning, communication would be unnecessary (provided the whole community affected by the research took part in it). Whether the targeted population already knows the information, and whether the knowledge is controversial, are also crucial in determining whether informing the public is necessary. The more delicate a subject is, the more careful and thorough the communication needs to be in order to maximise the public’s acceptance of the new knowledge. People must be informed when the risks are very high or when the subject is very complex, because they have no other way of knowing.

As discussed above, several issues can spawn from communicating about risks. One of these is the “over-communication” of known information to the public, which can lead to people ignoring the warnings issued. This phenomenon, also known as the “cry wolf” effect, can have severe consequences and should not be overlooked. The more the public is involved in the creation of the knowledge itself, however, the less likely it will be to ignore warnings, even after repeated false alarms. Communication is thus important, but should not be overdone to the point that people no longer pay attention to what is said.

References

  • Atwood, L. E., & Major, A. M. (1998). Exploring the ‘Cry Wolf’ Hypothesis. International Journal of Mass Emergencies and Disasters, 16(3), 279–302.
  • Bliss, J., Dunn, M., & Fuller, B. S. (1995). Reversal of the Cry-Wolf Effect: An Investigation of Two Methods to Increase Alarm Response Rates. 28.
  • Callon, M. (1999). The Role of Lay People in the Production and Dissemination of Scientific Knowledge. Science, Technology and Society, 4(1), 81–94. https://doi.org/10.1177/097172189900400106
  • Kearnes, M. B., Klauser, F., & Lane, S. (2012). Critical Risk Research: Practices, Politics and Ethics. John Wiley & Sons.
  • Lievrouw, L. (1990). Communication and the social representation of scientific knowledge. Critical Studies in Media Communication, 7, 1–10. https://doi.org/10.1080/15295039009360159
  • Starr, C. (1969). Social Benefit versus Technological Risk. Science, 165(3899), 1232–1238.
