John Lovett and Jeremiah Owyang have written (with others) a white paper on Social Marketing Analytics. I’ll be referencing the report throughout this post, so go check it out.
This response is divided into three parts: ‘I see where you’re coming from’, then ‘a few questions and inquiries’, and then ‘a few caveats and ways I’d improve it’.
First, I see where John is coming from.
John states, clearly, that “The objectives and metrics defined….in this report are a starting point for the infrastructure of social media measurement.” (p. 6). The whole document then lays out a very transparent goal alignment strategy – four business objectives are set out, each based on a goal, and then KPIs are identified for each. He uses a bull’s-eye sort of device to describe that process (which I like, because it’s a lot more accessible than a formal hierarchy, while retaining the relationships you have in a goal architecture framework). He then defines 12 KPIs and shows how they align back.
I’ll say that I see where he’s coming from. I’ve been practicing goal alignment strategy for the better part of my career – and this is a very disciplined one. The approach is excellent, and as analytics strategists, we’d all be better off if more of us used this general methodology.
Next, this quote: “By making learning and continuous improvement a primary goal, your social marketing activity will develop in a positive direction” (p. 6). It’s one of the nicer, less jargony ways of saying ‘evidence-based marketing should result in sustainable competitive advantage’. I enjoyed that passage too.
John outlines four broad business objectives associated with social media marketing: foster dialog, promote advocacy, facilitate support, and spur innovation.
These are not the only objectives possible in social media marketing. He never said that they were. In fact, “Not all objectives and metrics will resonate with each audience nor will our foundational framework give you all the elements necessary for success.” (p. 7).
It’s at this point, again, that I accept where he’s coming from. We set that aside, and we start to dig in.
Secondly, a few questions and inquiries:
The initial list of questions was around 100 deep, at which point I realized that there was little utility in going that far. Instead, I’ll focus on just three points of inquiry:
The third metric, conversation reach, is defined as total people participating divided by total audience exposure. Are unique visitors people? Are unique visitor figures traceable? “Conversation reach can be evaluated in both volume and location across social media channels”. (p. 13). Is this indeed the case? Can they? How are creators – the people actually conversing – accounted for, as opposed to lurkers, who merely observe the conversation (the exposure)? Is conversation reach better understood as the total number of people who have actually been exposed to the conversation, rather than the ratio between participation and exposure?
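To make the definition concrete, here is a minimal sketch of the ratio as the report defines it – total people participating divided by total audience exposure. The counts are entirely hypothetical, for illustration only:

```python
# Conversation reach per the report's definition:
# total people participating / total audience exposure.
# All figures below are made up for illustration.

participants = 450          # unique people who actually posted in the conversation
audience_exposure = 12000   # unique people the conversation reached (incl. lurkers)

conversation_reach = participants / audience_exposure
print(f"Conversation reach: {conversation_reach:.2%}")  # Conversation reach: 3.75%
```

Note that the number this produces is a participation rate, not a headcount – which is exactly the ambiguity raised above: the lurkers sit in the denominator, while ‘reach’ intuitively suggests counting them.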
The fourth metric, active advocates, is a marketing one. I applaud John for using the word advocate over influencer (which I think blurs a fundamental marketing line). Could somebody be considered an advocate if they are constructively critical of the product and yet refer people to it? Indeed, this is very common among innovators at the beginning of the product lifecycle. The devil remains in the terms ‘positive’ and ‘negative’, and in what an advocate actually is. The recency aspect is particularly excellent.
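The recency idea can be sketched in a few lines. This is my own illustrative take on the concept – the names, dates, and 30-day window are assumptions, not the report’s specification:

```python
from datetime import date, timedelta

# Hypothetical sketch of the recency idea behind 'active advocates':
# count only advocates who have acted within a recent window.

today = date(2010, 4, 20)
window = timedelta(days=30)

# (advocate, date of last advocacy action) -- made-up data
advocates = [
    ("alice", date(2010, 4, 15)),
    ("bob",   date(2010, 2, 1)),
    ("carol", date(2010, 4, 1)),
]

active = [name for name, last in advocates if today - last <= window]
active_ratio = len(active) / len(advocates)
print(active, f"{active_ratio:.0%}")  # ['alice', 'carol'] 67%
```

The window length is where the judgment call lives – and, per the point above, whether a constructively critical referrer counts as an advocate at all is a definitional choice the window cannot make for you.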
Which leads to the eleventh metric: sentiment ratio. First, is the positive/neutral/negative paradigm really indicative of innovation? I.e., does it measure ‘innovation’? As applied to a topic area, raw general sentiment scores have indeed been used – but it’s only done well if a rigid topic-object hierarchy is defined. NextStage Sentiment Analysis (NSSA) is the closest that I’ve seen to taking into account additional dimensions over and above the straight positive/neutral/negative paradigm.
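For readers who want the arithmetic, here is one common formulation of a sentiment ratio – the share of positive mentions among all classified mentions. This is an illustrative formulation with hypothetical counts, not necessarily the report’s exact definition:

```python
# One common sentiment-ratio formulation: positive mentions as a
# share of all classified mentions. Counts are hypothetical.

mentions = {"positive": 320, "neutral": 540, "negative": 140}
total = sum(mentions.values())

sentiment_ratio = mentions["positive"] / total
print(f"Positive sentiment ratio: {sentiment_ratio:.1%}")  # Positive sentiment ratio: 32.0%
```

Note how flat this is: the three buckets collapse everything into one number, which is precisely why it says little about ‘innovation’ without the topic-object hierarchy described above.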
Finally – ‘a few caveats and how I would improve it’.
A confluence of three thoughts. The first is Claude C. Hopkins who, eighty years removed, implored me to think of analytics and scientific advertising as a profit center, not a cost center. The second is Jim Novo (of course), who has been imploring us to link up with the CFO. The third is a baptism at Syncapse – which is the closest thing to a PhD in management science that I could hope for, and which is responsible for reinforcing an underlying bias about innovation.
There should be three central goals with social media: to make money, to offset cost, and to realize sustainable competitive advantage.
I would improve the framework by calling that out: to make money.
There are many products that are high consideration and where word of mouth / social influence plays a huge role. Try ordering a cheap malt liquor at Bier Markt on a Thursday night and watch the reaction from your developer friends. (What? No Delirium?). There is real money to be made in social marketing because the consumption of certain products is indeed a social exercise. Always has been. It’s now, increasingly, in a medium where we can observe and quantify it (the actioning of that intelligence continues to be a sore point). I think that’s what has really changed: the observable WOM.
Some of these metrics can be worked into a cause-effect model of that. Earned Media Value (EMV) might very well be an excellent metric as part of that cause-effect model. There will be no one-size-fits-all attribution model for sales driven by social. (At least, not within the next 2 years).
Offsetting cost is another one. And that’s attractive in the current state of the economy. Cost offsets may very well be realized through the ‘facilitate support’ business objective.
Sustainable competitive advantage can be realized through learning and spurring innovation. The accumulation and actioning of intelligence and real insight is key. To John’s credit, he uses the term ‘spur’ innovation, not ‘do innovation’ or ‘action innovative ideas’ – an organizational KPI best left to the mythical balanced scorecard.
There are other dimensions from a different paradigm, for a different time.
In general –
I applaud and thank John Lovett and Jeremiah Owyang for coming out with this. The approach is solid. You can make it your own. It’s an excellent document for what it does.
While this is termed a ‘response to John Lovett’, I’d like to carry on this discussion through cross-blogs, in comments, and at eMetrics London in May with anybody who is interested in this area. There is so much to discuss.