Listicles in analytics communication
The listicle is an amazing communication device: a schema for communication that always takes the form of a list. Sometimes that list is unordered, but often it is ranked. I continue to be in awe of the ongoing effectiveness of the listicle.
Lists are effective communication devices in analytics. Why not listicles?
Effective analytics dashboards are filled with lists.
- “The top 10 performing landing pages”
- “The top 5 posts”
- “The top 7 competitor ads…they don’t want you to know about!”
Lists are visually compact and editorially appropriate.
- An executive might scan a list for the top performers and the bottom performers.
- An analytics executive might scan a list for the top 20% and verify that it accounts for 80% of some volume.
- A content producer will scan the list for what they produced, look at the top, and gauge the performance gap.
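Any of the "top N" lists above can be produced straight from raw metrics. A minimal sketch (the page names and pageview counts are hypothetical):

```python
import heapq

# Hypothetical pageview counts per landing page
pageviews = {
    "/home": 51230,
    "/pricing": 18877,
    "/blog/announcement": 9042,
    "/docs/quickstart": 7511,
    "/about": 3120,
    "/careers": 1204,
}

def top_n(metrics, n):
    """Return the n best-performing items as (name, value) pairs, best first."""
    return heapq.nlargest(n, metrics.items(), key=lambda kv: kv[1])

for page, views in top_n(pageviews, 3):
    print(f"{page}: {views}")
```

`heapq.nlargest` avoids sorting the full dataset when only the head of the list is needed, which is exactly the listicle's editorial move: discard everything below the cut.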
Ranked lists contain a huge amount of information. Spearman's Rho is one statistical measure built entirely on that order. Lists communicate order, and they can communicate which metric has primacy.
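Spearman's Rho quantifies how much of that rank-order information two lists share. A minimal implementation of the standard formula, assuming no ties in either sequence:

```python
def spearman_rho(xs, ys):
    """Spearman's rank correlation for two equal-length sequences with no ties."""
    n = len(xs)

    def ranks(values):
        # Rank 1 = smallest value; ranks are unique because we assume no ties.
        order = sorted(range(n), key=lambda i: values[i])
        r = [0] * n
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r

    rx, ry = ranks(xs), ranks(ys)
    d_squared = sum((a - b) ** 2 for a, b in zip(rx, ry))
    # Classic formula: rho = 1 - 6 * sum(d^2) / (n * (n^2 - 1))
    return 1 - 6 * d_squared / (n * (n ** 2 - 1))
```

Two identically ordered lists score 1.0; a fully reversed ordering scores -1.0 (for data with ties, a library routine such as `scipy.stats.spearmanr` handles the tie correction).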
Consider the medal count table below:
The table above is ranked by count of gold medals.
In so doing, the author makes an editorial judgment about which key performance indicator takes precedence. It’s gold.
The algorithm that underpins a ranked list forces that editorial judgment, as opposed to, say, an inoffensive report that makes no actual normative statement about performance.
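That forced judgment lives in the sort key itself. In the sketch below (team names and medal counts are hypothetical), ranking by gold-then-silver-then-bronze crowns one team, while ranking by total medals crowns another:

```python
# Hypothetical medal counts: (team, gold, silver, bronze)
medals = [
    ("Team B", 9, 12, 4),
    ("Team A", 11, 3, 7),
    ("Team C", 9, 2, 15),
]

# The sort key IS the editorial judgment: gold takes precedence,
# with silver and bronze as tiebreakers.
by_gold = sorted(medals, key=lambda row: (row[1], row[2], row[3]), reverse=True)

# A different key makes a different normative statement: total medals.
by_total = sorted(medals, key=lambda row: row[1] + row[2] + row[3], reverse=True)
```

Here `by_gold` puts Team A first (11 golds), while `by_total` puts Team C first (26 medals): same data, different KPI, different winner.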
Lists are a great way to communicate rank order along a primary axis, and are terrific for tabular data.
I’m intrigued by the listicle as a communication device because I’ve seen evidence of their effectiveness in the field. My Twitter feed is routinely littered with blogspam. I’ve poked fun at the tactic on numerous occasions. And I’ve seen the figures. They work.
Are listicles far more reliable than dramatic structure or the Minto method?
Are presentations in the format “The top 7 things you should do to supercharge your digital strategy this quarter!” really spam if they’re wildly effective?
Isn’t the structure of recommendation-evidence / recommendation-evidence likely to be far more effective?
Does the listicle reduce thought, and in so doing, make the message easier to absorb?
Shouldn’t the community be undertaking efforts to reduce confusion?
Ultimately, it comes down to the audience.
If a given audience is heterogeneous, and contains those who cannot follow a causal chain longer than one link (whether because of mood, patience, lack-of-slack, or institutionally rewarded unconsciousness), a listicle is appropriate. Listicles of single-link causal statements may be sequenced in such a way that those who can follow two-link chains will see the relationship. That subtext can be woven in without alienating the lowest common denominator in the group.
If a given audience is homogeneous and contains people who can all follow a causal chain longer than two links, and who are in a state to handle it, then the scenario-based approach of modelling and trial is best. That’s purely because they’re in a state where they can understand tradeoffs and are in a position to make strategic judgements.
The listicle is worthy of testing.