Nearly three-quarters of respondents to the WAA Outlook Survey cited ‘making the data actionable’ as a top concern and priority. Just under one-third of respondents reported using web analytics as an input into budgeting and planning.

Great. So what are we going to do about it?

To make something actionable, you have to understand what people are trying to act on. And there’s huge variance across industries. KPI identification is at the core of what web analytics consultants and leaders do. So that’s all known – it’s not a research question.

And sometimes the website just isn’t that central to how a firm makes money. It could be, though, and relevance can be built through compelling business cases. That, too, is known and effective.

What is it that we don’t understand?

We don’t yet have general laws about the impact of analytical evidence on decision making and judgment.

Do we really understand the impact of previous evidence on current evidence (anchoring and adjustment)? Do we really understand why variation in web analytics communication exists? Do we really understand why some analysts scream ‘that’s not an insight’ while others say ‘that is an insight’, in response to the exact same information?

So, I’ve raised a lot of questions. What of solutions?

Research Methods

Since joining the research committee years ago, I’ve personally tried to shy away from hitting the membership with surveys. We already have the periodic studies that rely on surveys – membership satisfaction, outlook, and compensation – and those are important, direct, and relevant. One other project, on the horizon, has collective intelligence written all over it, and I’m cheering that on. So we’re good there. That instrument is in use.

We also have secondary research happening, by way of the Peer Review Journals, and that line of research is rocking.

So what else is available?


A reactor of sorts.

Let’s expose different groups of people to different data structures and ask them to make decisions. Let’s watch people make decisions. Let’s put in place a reward for the best performance, and then measure the meta – how the decisions themselves get made, not just what was decided.
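The reactor idea above could be piloted as a simple between-subjects experiment: randomly assign participants to a data-presentation condition, record a decision-quality score for each, and compare conditions. A minimal sketch, where the condition names, participant IDs, and scores are all illustrative assumptions, not real study data:

```python
import random
import statistics

# Hypothetical presentation conditions (assumed names, for illustration only).
CONDITIONS = ["raw_table", "trend_chart", "annotated_dashboard"]

def assign_conditions(participants, seed=42):
    """Randomly assign each participant to one presentation condition."""
    rng = random.Random(seed)
    return {p: rng.choice(CONDITIONS) for p in participants}

def score_by_condition(assignments, outcomes):
    """Average decision-quality score per condition.

    `outcomes` maps participant -> score (e.g. reward earned in the task).
    """
    buckets = {c: [] for c in CONDITIONS}
    for participant, condition in assignments.items():
        buckets[condition].append(outcomes[participant])
    return {c: statistics.mean(s) for c, s in buckets.items() if s}

participants = [f"p{i}" for i in range(30)]
assignments = assign_conditions(participants)

# Simulated outcomes in [0, 1) – stand-ins for measured decision quality.
outcomes = {p: random.Random(p).random() for p in participants}
print(score_by_condition(assignments, outcomes))
```

The interesting measurement isn’t the scores alone but the comparison across conditions: if one data structure reliably produces better decisions, that starts to look like the kind of general law the questions above are asking for.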

It’s a question paired with a research method.

What do you think?