There’s a problem with Key Performance Indicators (KPIs) as the general religion of Digital Analytics.

Specifically:

  • (1) On its own, a list of KPIs does not constitute a system of thought or logic.
  • (2) A list of KPIs is extremely difficult to optimize against as a whole.
  • Statement (1) compounds the problems in Statement (2).

Definition of KPI:

Let’s go back to the 2007 Web Analytics Association Standards document for the definition:

  • KPI (Key Performance Indicator) — while a KPI can be either a count or a ratio, it is frequently a ratio. While basic counts and ratios can be used by all Website types, a KPI is infused with business strategy — hence the term, “Key” — and therefore the set of appropriate KPIs typically differs between site and process types.

To unpack that: a KPI is important because it’s strategic, and it is indicative of performance.
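To make the count-versus-ratio distinction in that definition concrete, here’s a minimal sketch in Python. The metric names and figures are invented for illustration, not taken from any real site:

    # A count KPI is a raw tally; a ratio KPI divides one count by another.
    # All names and numbers below are hypothetical, for illustration only.

    orders = 1_250   # count: completed checkouts this week
    visits = 48_000  # count: sessions this week

    # "Conversion rate" is a classic ratio KPI: it is strategic because it
    # ties traffic volume to revenue-generating behaviour.
    conversion_rate = orders / visits

    print(f"Orders (count KPI): {orders}")
    print(f"Conversion rate (ratio KPI): {conversion_rate:.2%}")  # -> 2.60%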

KPIs as a defense mechanism against too much data

I once enumerated over 500 metrics that were candidates for being KPIs. I estimated that, for a site with 1,000,000 pages, there were at least 500,000,000 candidate metrics (500 per-page metrics across 1,000,000 pages). It’s likely that there are an uncountable number of candidate metrics. There is no limit.
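As a rough sketch of how that candidate space compounds: the 500 base metrics and 1,000,000 pages are the figures above, while treating every ordered pair of candidates as a candidate ratio is my own illustrative assumption.

    # Back-of-the-envelope count of candidate metrics.
    # 500 base metrics and 1,000,000 pages come from the estimate above;
    # the pairwise-ratio step is an illustrative assumption about how the
    # space keeps compounding.

    base_metrics = 500
    pages = 1_000_000

    per_page_candidates = base_metrics * pages
    print(f"{per_page_candidates:,}")  # 500,000,000 -- the estimate above

    # One more step -- ratios of pairs of candidates -- already dwarfs it:
    pairwise_ratios = per_page_candidates * (per_page_candidates - 1)
    print(f"{pairwise_ratios:.2e}")    # ~2.50e+17, and it keeps compounding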

So, it’s only natural that web analysts, starting around 1992, invented mechanisms to winnow down this complexity. By focusing only on the metrics that are key to the business, digital analysts had a chance to be effective. Without focus, there’s absolutely no chance.

And then a cycle kicked off, one that has repeated in many companies since 1992. KPIs would be focused and defined. And then the outputs would bloat with more figures and more data points. The omnipresent force behind this is KPI Creep.

KPI Creep

KPI Creep is the beatification of metrics-previously-not-called-KPIs by elevating them to KPI status.

I’ve watched many disciplined reports turn into 120-tab Excel monstrosities. There are many reasons for KPI Creep.

These include:

  • Desire to make a report ‘better’ by adding metrics that, while not KPIs, are indicators of KPIs.
  • Desire to elude accountability by sandbagging perception and diluting the effectiveness of the report.
  • Exploration of additional salient features because the report, as is, does not contain (or no longer contains) sufficient explanation of why something is happening.
  • Increasing sophistication in understanding salient features that correspond with competing models of marketing or with differing incentive structures.
  • Desire to highlight one’s own, specific, contribution to the system.
  • Random walk / changing leadership / analyst churn / changing technology.

As a result, those who run analytics programs confront expanding lists of KPIs, which augment complexity in many ways and cause much more severe problems downstream.

Tomorrow, we’ll unpack why a List of KPIs is not a System or a Model.

***

I’m Christopher Berry.
I tweet about analytics @cjpberry
I write at christopherberry.ca

2 thoughts on “The problem with Key Performance Indicators (Part 1)”

  1. Uncountable number of KPI? Bah, Georg Cantor is rolling in his grave right now.

    That definition for KPI sounds like it’s straight out of one of my old MBA books:

    “… a KPI is infused with business strategy — hence the term, “Key” …”

    or straight out of a Simpsons episode:

    “Mono means one, and rail means rail”

  2. Ha!

    The total list is at least as large as the set of rational numbers. Yet, much, much bigger because the rationals can be folded in upon one another.

    Think of assigning a score to a page, and how we might mechanically generate those scores. Enter Cantor, and we’re done. 🙂

    I heard about the Standards Committee calls of 2007.

    It’s no surprise, then, that the committee couldn’t agree on a more robust definition of what constitutes ‘key’. I’d argue that’s because the metric is infused with opinion.
