Slovic et al. (1982) wrote extensively about the role that controllability and observability play in risk perception.
Next we make the link to management.
There are four broad categories:
Risks that we know we know, and can mitigate.
Risks that we know we know, and can’t mitigate.
Risks that we know we don’t know, and can mitigate.
Risks that we know we don’t know, and can’t mitigate.
So let’s break it down with respect to a new advergaming banner ad going out.
We know that there’s a small risk that the banner ad’s interactivity isn’t going to work, but we can reduce that risk by allotting two hours to quality assurance.
We know that there’s a small risk that the site we’ll be hosting it on might crash, as it has frequently in the past, but there’s nothing we can realistically do about that.
We know that there’s a risk that we’re going to offend somebody with our ad. We don’t know the chances of that happening, or who it might be, but we can mitigate some of that risk by not offending any of the major religious and cultural groups that visit the website.
We know that there’s a risk that somebody might hack our ad, but we don’t know how much risk there is, or whether it’s even realistic given some new technology. Even if we could quantify those odds, there’s nothing we could do about it.
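The four example risks above can be sketched as a tiny lookup keyed on the two questions in the taxonomy: can we quantify the risk, and can we mitigate it? (The names and flags here are just a hypothetical illustration of the examples, not a real risk register.)

```python
# Hypothetical sketch: mapping the four example risks onto the
# (quantifiable, mitigable) taxonomy described above.
risks = {
    "banner interactivity fails": {"quantifiable": True,  "mitigable": True},
    "hosting site crashes":       {"quantifiable": True,  "mitigable": False},
    "creative offends a group":   {"quantifiable": False, "mitigable": True},
    "ad gets hacked":             {"quantifiable": False, "mitigable": False},
}

# The risks we can actually act on are the mitigable ones.
actionable = [name for name, r in risks.items() if r["mitigable"]]
print(actionable)  # ['banner interactivity fails', 'creative offends a group']
```

Splitting a risk list this way makes the later communication step mechanical: mitigable risks get an action item, the rest get flagged as background risk.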
Now, let’s take this really simple example, and translate it into communication:
When asked, “We have a new advergaming banner ad going up; what are the chances of this thing really failing?”
You might reply:
“The chances of it really failing are small if we take a few precautions. First, I think we need to allot two hours to quality assurance, and also make sure that our creative doesn’t offend any major group, though there’ll still be a chance that we offend somebody. Realistically, there’s a risk that our ad server will crash, but that’s under our vendor’s control. There’s also the risk that we might end up getting hacked by a new technology, but there isn’t much we can do about that either.”
Frequently, you’ll be asked about the odds of failure, or the odds of a risk coming to fruition. How can you really estimate this kind of risk?
Well, I’ve given you a hint, right? There are two types of risk that we can actually estimate, and then there are two types that we don’t have much of a good grip on. To make matters worse, most people don’t really understand how to combine probabilities.
Let’s put the risk of the banner not working at 2 in 10 if we don’t do QA, and 1 in 10 if we do. Let’s put the risk of a crash at 1 in 10.
What’s the probability of something going wrong if we mitigate risk where we can? If we treat the two risks as mutually exclusive (they can’t both happen), it’s simply 1/10 + 1/10 = 2/10.
The general formula is P(A or B) = P(A) + P(B) – P(A and B), and for mutually exclusive risks the overlap term P(A and B) is zero. If the risks are instead independent (one is not linked to the other), then P(A and B) = P(A) × P(B) = 1/100, and the combined risk is 19/100. That’s close enough to 2/10 that the simple sum works as a back-of-the-envelope figure, but it’s worth knowing which assumption you’re making.
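The arithmetic above can be checked in a few lines. This is a minimal sketch using the example’s numbers (1 in 10 for each risk after mitigation); the variable names are just for illustration.

```python
# Hypothetical post-mitigation probabilities from the example above.
p_banner_fails = 0.1   # banner interactivity, after two hours of QA
p_site_crashes = 0.1   # hosting site crash, per its past record

# Simple sum: exact only if the two risks are mutually exclusive.
naive_sum = p_banner_fails + p_site_crashes

# General formula: P(A or B) = P(A) + P(B) - P(A and B).
# If the risks are independent, P(A and B) = P(A) * P(B).
p_either = p_banner_fails + p_site_crashes - p_banner_fails * p_site_crashes

print(round(naive_sum, 4))  # 0.2
print(round(p_either, 4))   # 0.19
```

The two answers differ by only one percentage point here, which is why the quick sum is a serviceable estimate for small probabilities.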
What of risks that you can mitigate, but can’t quantify?
I’d argue that these are the most dangerous risks of all, and ought to be mitigated within the realm of what is ‘reasonable’.
With risks that we can accurately measure and mitigate, the argument about just how much risk should be mitigated, and at what cost, can be scientifically deduced. It’s just an exercise in math.
With risks that cannot be accurately measured but can be mitigated, the question boils down entirely to either politics or ‘common sense’.
I have yet to find a solution for this.
The best I’ve been able to do in the past is say, along these lines:
“The odds of failure, based on what I know that I can quantify, are 20%. There are two risks that I know of – offending a culture and the threat of hacking – but I don’t know the chances of them happening. I think that we should briefly pass the proposed creative for the advergame in front of a sensitivity panel to mitigate the first, but there’s really nothing we can do about the hacking. We just have to accept that as background risk.”
Typically, the stakeholder will take the 20% as ‘controllable’ risk, and will accept the ‘background risk’.