The Complex Quest For Simplicity in Social Media Measurement
The Quest for Simplicity in Social Media Measurement (#smm) is one that will dominate the year.
Trying to produce something simple out of something complex is…complex.
There are seven axioms that are guiding a lot of my thought in dealing with that complexity:
1. The purpose of analytics is to derive competitive advantage for the organization / firm / entity.
It follows that the purpose of Social Media Measurement is to drive competitive advantage. If the end result isn’t competitive advantage – then it has no value. That unto itself is a value statement.
Simplicity drives competitive advantage because simple is more actionable than complex. I’m often asked questions that have very complex, comprehensive answers, and I have to sort through that complexity based on relevance and actionability. Reality is always so much more complex. And yet, people can’t act on the complexity.
They act on simplicity. And if action is the vital link that closes the gap between insight and competitive advantage, then this mandates a simplified approach.
2. Data alone does not yield competitive advantage.
A major brand might be mentioned 2.5 million times a week on Twitter alone. Having all of that data in a database is of no value if it doesn’t result in competitive advantage.
I’ll go ahead and make a statement: very few people on Earth have the capacity to read and understand what 2.5 million tweets mean in a given week.
3. A sequence of progressive hypothesis testing is the most efficient and effective method to derive competitive advantage from data.
I still hold that the scientific method is the best one we have for learning right now. Someday, somebody will figure out a better algorithm. Until then, the scientific method has this wonderful blend of flexibility, creativity, and evidence.
Progressive hypothesis testing means acting deliberately with marketing messages. The goal might be known, like ‘drive sales’, but the opportunity to message a community becomes all the more useful when, over a sequence of messages, a specific hypothesis is tested. One really basic test might be: “will the community respond more to content about special features than to content about where our spokesperson is going to be?”
Acting deliberately isn’t always possible, especially in a reactive world, but there’s opportunity to derive learning or insight that can drive the next wave. In social media, the tempo is that much higher. This isn’t 2-year website redesign land.
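To make that concrete, here is a minimal sketch of what a single round of that kind of test could look like in Python. The message types, impression counts, and engagement numbers are all invented for illustration; the point is the mechanic of stating a hypothesis, splitting the messaging, and letting the data answer.

```python
# One round of progressive hypothesis testing, sketched with made-up numbers.
# Hypothesis: feature-focused posts earn more engagement than
# spokesperson-appearance posts.
from scipy.stats import chi2_contingency

impressions = {"features": 12_000, "spokesperson": 12_000}  # posts served
engaged = {"features": 540, "spokesperson": 450}            # clicks/replies/shares

table = [
    [engaged["features"], impressions["features"] - engaged["features"]],
    [engaged["spokesperson"], impressions["spokesperson"] - engaged["spokesperson"]],
]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"features rate:     {engaged['features'] / impressions['features']:.2%}")
print(f"spokesperson rate: {engaged['spokesperson'] / impressions['spokesperson']:.2%}")
print(f"p-value: {p_value:.4f}")
# A small p-value suggests the difference is real; either way, the result
# feeds the next hypothesis in the sequence.
```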
4. Predicting the future requires an understanding of cause and effect.
At the core of prediction is previous cause and effect. If I touch a hot pan, it will cause my hand to burn. Therefore, I can predict, by touching a hot pan, my hand will burn. Very predictive.
Not everything, especially in marketing, is so clean. At some of the more basic roots – If I spend 500,000 dollars on commercials and run them constantly, I will get 11 GRP. If I get 11 GRP, I’ll move 25,000 toasters.
Statisticians, or social science statisticians, are so incredibly jaded by such simple linear models. Sure, you might get 11 GRPs, but not all GRPs are created equal. Moreover, what type of commercial are you going to run? Will it resonate with those who are already looking for a toaster? Will it cause people who don’t have a toaster to suddenly desire one? Will it cause people who want to judge others to go out and buy the toaster so they have a platform from which to judge? Will it cause people who already have a perfectly good toaster to want, and remember, that toaster, so that five years down the line they buy that brand?
So frequently, especially when a cause-and-effect model doesn’t jibe with our own mental models, we go out and try to discredit it by introducing other factors that we ourselves deem salient to the situation.
In the end, it comes down to R-squared: the percentage of the variation in a variable we care about that our model explains. A big reason why I rattle on about the importance of goals and KPIs is that we can anticipate a world where everybody will care about the R-squared.
This is especially true in Social Media Measurement. Many people speak of things ‘going viral’. Yet how many people have truly explored the causes of going viral? There are multiple reasons why something goes viral.
Predicting anything comes from cause and effect.
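Since the axiom leans on R-squared, here is a toy illustration of what it actually measures. The spend and sales figures are invented, in the spirit of the toaster example above; nothing here is real campaign data.

```python
# Toy illustration of R-squared: the share of variation in an outcome
# that a simple model explains. The spend/sales figures are invented.
import numpy as np

spend = np.array([100, 200, 300, 400, 500], dtype=float)  # ad spend, $000s
sales = np.array([120, 210, 260, 380, 410], dtype=float)  # toasters moved, hundreds

slope, intercept = np.polyfit(spend, sales, deg=1)
predicted = slope * spend + intercept

ss_res = np.sum((sales - predicted) ** 2)     # variation the model misses
ss_tot = np.sum((sales - sales.mean()) ** 2)  # total variation in sales
r_squared = 1 - ss_res / ss_tot

print(f"R-squared = {r_squared:.3f}")  # fraction of the variation explained
```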
5. Correlation is not always Causality.
Even a high R-squared doesn’t guarantee truth. There might be a great correlation between affinity for John Cena and a love of peanut butter, but I’d be hard-pressed to derive a clean causal link between the two. (Perhaps John Cena’s fan base is concentrated in regions where peanut butter is given to young children early?) Unlikely.
Correlation is useful, but without an overarching respect for your own theory and your own mental models, it’s dangerous.
This is especially true in Social Media Measurement – where correlations abound – but causality can be fleeting.
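A small simulation shows how easily that happens: give two otherwise-unrelated series a hidden common driver (region, age, season, whatever) and they will correlate strongly even though neither causes the other. Everything below is simulated, not real data about anyone’s fan base.

```python
# Simulated example: a hidden common driver makes two unrelated series correlate.
import numpy as np

rng = np.random.default_rng(0)
hidden_driver = rng.normal(size=1_000)  # the confounder (e.g., region or age)

cena_affinity = hidden_driver + rng.normal(scale=0.5, size=1_000)
peanut_butter_love = hidden_driver + rng.normal(scale=0.5, size=1_000)

r = np.corrcoef(cena_affinity, peanut_butter_love)[0, 1]
print(f"correlation = {r:.2f}")  # strong, yet neither series causes the other
```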
6. Accuracy over Precision.
Would you take a thermometer that is right 95% of the time but that, when it misses, is off by 5 degrees, or one that is right only 50% of the time but that, when it misses, is off by just 0.01 degrees?
In Social Media Measurement you can have it both ways!
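One way to read the thermometer question is the textbook accuracy-versus-precision distinction: bias versus spread. A quick simulation, with an invented true temperature of 70 degrees, makes the trade-off visible.

```python
# Two simulated thermometers measured against a known true temperature.
import numpy as np

rng = np.random.default_rng(1)
true_temp = 70.0

# Thermometer A: unbiased on average, but noisy (accurate, not precise).
reads_a = true_temp + rng.normal(loc=0.0, scale=5.0, size=10_000)
# Thermometer B: extremely consistent, but always about 5 degrees high
# (precise, not accurate).
reads_b = true_temp + 5.0 + rng.normal(loc=0.0, scale=0.01, size=10_000)

for name, reads in [("A (accurate)", reads_a), ("B (precise)", reads_b)]:
    bias = reads.mean() - true_temp
    spread = reads.std()
    print(f"Thermometer {name}: bias = {bias:+.2f}, spread = {spread:.2f}")
```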
7. It is possible for there to be two optimal, equally true answers to a problem. (And sometimes more!) (x² = 4 has two solutions: x = −2 and x = 2.)
If there are two equally true answers to a problem, surely there could be millions of wrong ones. I’m certain that will make certain people happy to hear.
In Social Media Measurement, it is perfectly possible for two solutions to be both equally right.
A specific instance would be the sentence:
“The boy crossed the busy road carefully.”
I’ll ask you: What was that sentence about? I can see a situation where one of you says, “The boy” and another person says “The road”.
Well, in my view – they’re equally true.
There are multiple right answers. There are multiple wrong ones too.
Simplexity.
The quest for simplicity is complex.
Simplification involves obliteration. It’s possible to take a column of 300,000,000,000 numbers, a massive amount of information, and summarize it in a single figure. In fact, there are several numbers that can describe the central tendency of all that information: mean, median, mode. We have a number that describes the dispersion of that data: standard deviation. We have a number that describes the peakedness: kurtosis.
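As a concrete sketch of that obliteration, here is one column of numbers collapsed into exactly those summary figures. A million simulated values stand in for the 300,000,000,000.

```python
# Collapsing a large column of numbers into a handful of summary figures.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
column = rng.lognormal(mean=1.0, sigma=0.6, size=1_000_000)  # stand-in data

values, counts = np.unique(np.round(column, 1), return_counts=True)
mode = values[counts.argmax()]  # most common value, after light rounding

print(f"mean:               {column.mean():.3f}")
print(f"median:             {np.median(column):.3f}")
print(f"mode (approximate): {mode:.3f}")
print(f"std deviation:      {column.std():.3f}")
print(f"kurtosis:           {stats.kurtosis(column):.3f}")
```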
What should get obliterated in the quest for simplicity?
Going back to Axiom 1, variables that do not matter to competitive advantage should be obliterated. Going back to Axiom 4, you need to identify the variables that cause a desired effect, in particular looking for reinforcing effects, all the while knowing that Axiom 5 applies (your theory of how the world works could be wrong even if it works mathematically) and that Axiom 7 does too: it’s perfectly possible for two models to be equally right.
It all comes down to an acknowledgment that Axiom 2 is right: data alone isn’t going to yield competitive advantage, and that Axiom 3 describes the best way to turn that data into the insights that do, namely a sequence of progressive hypothesis testing.
I don’t believe we’ve even begun at the beginning yet: what is salient in social media measurement?
We’ll need to get all of those salient variables on the table before we can talk about causality and reinforcing effects, and arrive at a resolution. I’m pessimistic that there will be a single resolution that suits everybody, but there is probably a solution that will satisfy 90% of situations.
What say you?
3 thoughts on “The Complex Quest For Simplicity in Social Media Measurement”
Sometimes you just have to let something go.
One thing I have learned over the years about marketing measurement is this: if you have to go to extraordinary lengths to measure something, there may not be anything worth measuring there.
In other words, the best measurement of the value of social might just be exactly the same as the way we measure anything else – what do visitors from a social source accomplish on our web site?
This is going to be one of those interesting replies:
I’m not entirely certain that all the activity that really matters actually happens on a commercial, owned, website.
Of course, it depends on the nature of the product and its lifecycle; yes, that should go without saying. (But it doesn’t.)
Directly to your point: there are categories of products and industries where I think a simple attribution statement is going to be enough, and where anything more is just not relevant.
(I’ve said this before at conferences, much to the boos, hisses, and horror.)
And yet, there are instances where something will have to give.