I’m working on another 5-part blog post series on “How individuals decide”.

I’ve hit a snag. And it’s a bad one. It has to do with triggers of search.

There are reasons why people ask for evidence the way they do, and why they react to follow-up questions the way they do.

For instance:

Questioner: “I need to know how many of the people who visited the Vegan Microsite also saw my tweet about Chicken two months later.”

Alright. So, there’s obviously a reason why the questioner is asking the question. And it’s a pretty strange one from the outset.

  • What do they mean by ‘saw’?
  • Two months from when?
  • Cause and effect appear to be badly confused, at least from the way I understand the world.

It needs to be acknowledged:

  • Visitors aren’t people.
  • There exists no unique key between a given Twitter app state at any given time and the Microsite, at least none that you, the client-side analyst, can access.
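The missing-key problem can be sketched concretely. Assuming, hypothetically, that the analyst sees microsite logs keyed by an anonymous cookie ID and Twitter impression data keyed by handle, there is simply no field to join on:

```python
# Hypothetical illustration: the two datasets a client-side analyst can actually see.
# Microsite analytics identifies "visitors" by anonymous cookie ID; Twitter
# impressions are keyed by handle. The identifier spaces never intersect,
# so the requested cross-reference cannot be computed.

microsite_visits = [
    {"cookie_id": "a1f3", "page": "/vegan", "ts": "2012-03-01"},
    {"cookie_id": "9bd0", "page": "/vegan", "ts": "2012-03-02"},
]

tweet_impressions = [
    {"handle": "@meatfan",    "tweet": "chicken", "ts": "2012-05-01"},
    {"handle": "@salad4life", "tweet": "chicken", "ts": "2012-05-02"},
]

def joinable_keys(left, right):
    """Return field names shared by both datasets -- the candidate join keys."""
    # A shared timestamp column is not an identity, so it is excluded.
    return set(left[0]) & set(right[0]) - {"ts"}

print(joinable_keys(microsite_visits, tweet_impressions))  # -> set()
```

An empty set of candidate keys is the formal version of the bullet above: no amount of querying produces a join that the data model never supported.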

People frequently ask questions because there’s some sort of trigger. Instead of responding with everything that is wrong with the question, asking about the trigger can really help.

Analyst: “Interesting question – what are you trying to figure out with the information?”

You may get back any of the following replies:

Questioner: “Why are you asking? Just give me the god damned data.”

Whoa, they’re in a hurry, they don’t like you, they’re really defensive – what’s going on? This might end poorly. You can explain the five bullet points above, but they expect that because it’s digital, it’s all perfectly cross-referenced.

Questioner: “I know my anti-vegan strategy worked, I just need the numbers to prove it.”

This line of engagement leads to the campaign effectiveness report that either was, or was not, assembled while the campaign was live. Even if such a document and tracking existed, this particular information wouldn’t have made it in, because it isn’t possible to collect. Sometimes, dissatisfaction with what a report says triggers a search for additional evidence that supports their point of view.

Questioner: “I want to see the effectiveness of Twitter.”
Questioner: “I’m working on a cross-channel synergy strategy.”

Whoa, okay, so they’re exploring for answers. Their original inquiry was extremely specific and poorly suited to that exploration. In other words, even if you had the answer (and you don’t), the answer to the question wouldn’t actually answer their trigger question.

Who causes the trigger is just as important as what the trigger is, moderated by the culture of the organization. The more senior the person causing the trigger, the more energy flows downstream in attempting to resolve it. If the query was poorly formed in the first place, or, worse, reinterpreted at each level of the hierarchy like a game of broken telephone, the problems compound.

The snag is that I don’t have a complete collection of search triggers. There’s a lot missing. This, in turn, triggers a search for those answers. In the meantime, this is the general route I’m thinking of taking.


You can be a far better analyst simply by asking better questions.


I’m Christopher Berry.
I tweet about analytics @cjpberry
I write at christopherberry.ca