People tend to perceive more risk in what they can't see and don't understand. Food irradiation (sterilizing food with radiation) is one example. Dying in a plane crash is another (the physics of flight continue to elude many). New and unfamiliar technologies tend to provoke heightened risk perception: the first microwaves, cell phone technology, and environmental toxins with little history of exposure.

There's an element here of people fearing the unknown: "The devil you know is better than the devil you don't."

One key way that an analyst can communicate risk is to make the unobservable, or the unknown, known.

One method is to sort the risks into three categories:

What we know we know
What we know we don't know
What we don't know we don't know
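As a rough sketch, the three buckets above could be represented as a simple risk register that an analyst walks through systematically (the risk names and labels here are hypothetical, purely for illustration):

```python
from collections import defaultdict

# Hypothetical risk register: each entry is (risk, category).
# Category labels follow the three buckets above.
RISKS = [
    ("server outage during launch", "known known"),      # odds estimable from history
    ("ad creative offends a segment", "known unknown"),  # possible, but hard to quantify
]

def group_by_category(risks):
    """Group risks so each bucket can be reviewed systematically."""
    buckets = defaultdict(list)
    for risk, category in risks:
        buckets[category].append(risk)
    # Unknown unknowns can't be enumerated; the empty bucket stands
    # as a reminder that the register is never complete.
    buckets.setdefault("unknown unknown", [])
    return dict(buckets)

print(group_by_category(RISKS))
```

The point of the structure is not precision; it is that the first bucket gets probabilities, the second gets a discussion, and the third gets an explicit acknowledgment rather than silence.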

Then it's a matter of systematically running through the risks. Some risks are very well known, with fairly well-established probabilities of occurrence, either through experience and observed patterns of failure or success (and note that we should be careful with these patterns; they just might be monkeys on typewriters), or because they are easily calculable.

Then there is what we know we don't know. These are risks that are less well understood, but still known, and they are very difficult to quantify. For instance, if we're trying to communicate the risk of a campaign failing, what are the chances of a piece of creative within the ad offending some segment of the population? We certainly know it's possible, but assigning odds to it is very difficult, in large part because 'offense' is not observable to all audiences. If one is from a culture that enjoys pork, and has never heard of a religion or sect that forbids it, how would one possibly know that portraying somebody who resembles a revered figure eating a pork chop might be offensive?

Finally, there's what we don't know we don't know. Well, if you knew what you didn't know, you'd know it, right? It's a big, black, unobservable hole.