The previous two parts explained what a Key Performance Indicator is, and the cause of KPI Creep.

How do Data Scientists Cope?

Data Scientists are frequently confronted with datasets that contain thousands of variables. If we tried to understand the relationship of everything to everything else using the methods at our disposal, we’d fail.

Data Scientists don’t say, “we want to understand everything”. We know we can’t.

We would fail because:

  • There’s too much complexity for a single human to understand.
  • There’s no way to tell a coherent story.
  • There’s no recommendation that would mean anything to anybody.

The Data Scientist copes by optimizing for a single variable. In every step of their work, they focus on a single optimization objective. Concretely, it’s the most valuable lesson I learned from Andrew Ng:

Effective optimization mandates a single real number to optimize against
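To make that concrete, here’s a minimal sketch in Python. The campaign data, the column names, and the choice of ‘profit’ as the objective are all invented for illustration, not a prescription:

```python
# A minimal sketch of "one real number to optimize against".
# The column names (revenue, cost) and the profit objective are
# illustrative assumptions, not anything prescribed above.
import pandas as pd

def objective(df: pd.DataFrame) -> float:
    """Collapse a campaign's results into a single scalar: profit."""
    return float((df["revenue"] - df["cost"]).sum())

# Two hypothetical campaign result sets.
campaign_a = pd.DataFrame({"revenue": [120.0, 95.0], "cost": [40.0, 30.0]})
campaign_b = pd.DataFrame({"revenue": [110.0, 140.0], "cost": [80.0, 85.0]})

# Every decision reduces to comparing one number against another.
print("A:", objective(campaign_a), "B:", objective(campaign_b))
```

However the scalar is defined, the point is that every comparison collapses to one number against another.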

The question isn’t “what is the list of indicators that are thought to be predictive of performance?”

The question is “what performance are you optimizing?”

So what does the answer to “what performance are you optimizing?” look like? It depends on context:

  • For those of you in mature industries, the answer might be ‘profit’.
  • For those of you in startup mode, the answer might be ‘users’.
  • For those of you trying to move from 5th place to 3rd place in a given sector, the answer might be ‘market share’.

There should be only one (1) optimization objective in any given context. This forms your dependent variable.

The Basis for calling an indicator ‘key’

For an indicator to be useful, it must rise to the level of being predictive of the dependent variable / optimization objective. The more predictive it is, the better an indicator it is.

It’s not enough for a variable to be ‘interesting’, or for somebody to have heard that it’s important. It has to be actually predictive of the single optimization objective.
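Here is a hedged sketch of what that test could look like: score candidate indicators by how predictive they are of the single objective. The data is synthetic and the column names are invented; real work would demand proper out-of-sample validation rather than a simple in-sample correlation.

```python
# A sketch: rank candidate indicators by how predictive they are of
# the single dependent variable. Data and column names (profit,
# paid_clicks, email_opens, page_depth) are invented for illustration.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "paid_clicks": rng.normal(100, 15, n),
    "email_opens": rng.normal(50, 10, n),
    "page_depth": rng.normal(4, 1, n),
})
# Synthetic dependent variable: mostly driven by paid_clicks.
df["profit"] = 3 * df["paid_clicks"] + 0.5 * df["email_opens"] + rng.normal(0, 20, n)

# Squared correlation with the objective as a crude predictiveness score:
# the more predictive an indicator is, the more 'key' it is.
scores = (
    df.drop(columns="profit")
      .corrwith(df["profit"])
      .pow(2)
      .sort_values(ascending=False)
)
print(scores)
```

Whatever scoring method you choose, the discipline is the same: an indicator earns the word ‘key’ by its relationship to the one objective, not by anyone’s opinion of it.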

If you demand that predictiveness, and go far enough back in the model, you can identify the levers and actions that are likely to have the biggest impact. This forms the basis of a system of thought that is rooted not in opinion or feeling, but in real marketing science.

By executing enough tests, you can update the model. This generates an evidence-based approach to updating deliverables and communication. The format of the deliverables – what’s included and what’s excluded – evolves as digital marketing evolves.

Finally, all marketing is subject to constraints. By focusing on a single dependent variable / optimization objective, you can deduce the impact of a given constraint (say, paid spend) on the dependent variable. This is where true optimization comes into play.
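One way to picture that, as a sketch only: assume a made-up diminishing-returns response of profit to paid spend, and express the cost of a spend cap directly in units of the dependent variable.

```python
# A sketch of reasoning about a constraint (here, a paid-spend cap)
# in terms of the single dependent variable. The response curve and
# the numbers are invented; the point is that a constraint's cost can
# be expressed as forgone units of the objective.
import numpy as np

def expected_profit(spend):
    """Hypothetical diminishing-returns response of profit to spend."""
    return 400 * np.log1p(spend) - spend

spend_grid = np.linspace(0, 2000, 2001)
profit = expected_profit(spend_grid)

unconstrained_best = spend_grid[np.argmax(profit)]
budget_cap = 200.0  # the constraint
constrained_best = min(unconstrained_best, budget_cap)

# The 'price' of the constraint, in units of the objective:
impact = expected_profit(unconstrained_best) - expected_profit(constrained_best)
print(f"Optimal spend: {unconstrained_best:.0f}, capped at {budget_cap:.0f}; "
      f"profit forgone: {impact:.1f}")
```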

This approach reduces the odds of KPI creep and maintains the utility of the report for a period exceeding 15 months.

The Problem With KPIs

The problem with the presently deployed method, in which 15 KPIs bloat into 90, is that it prevents the analyst from ever focusing enough on a single variable to optimize against. It makes it exceedingly difficult for the analyst to generate incremental recommendations given dozens of opinion-based, and frequently hidden, ideas of cause and effect.

If we define what performance is, regardless of how horrible that experience may be, we can generate an evidence-based model of what constitutes a KPI worth including.

The solution exists, and is proven to be quite effective in another field. Maybe you should give it a shot.

Go ahead – what’s your single optimization objective?

This concludes the three-part series on Key Performance Indicators and the argument for establishing a single, in-context optimization objective.

***

I’m Christopher Berry.
I tweet about analytics @cjpberry
I write at christopherberry.ca

4 thoughts on “The problem with Key Performance Indicators (Part 3)”

  1. Mark says:

    Nice series, Chris!

    I’ve generally found that a long list of KPIs is symptom of a lack of strategy.

    No one team can sincerely be trying to move 15 metrics. That’s being in the weeds, flailing around. In a large company there could be 15 teams trying to move 20 KPIs, but each team should have a strategy, and a strategy should not have more than one or two KPIs.

    I’m OK with reports that have a lot of data (info) in them, so people can feel comfortable answering questions like “how many people do we get to our site? how big’s the cart size?”, and because it’s important to have a breadth of information available when you start to decide on what your strategy will be.

    Of course, few companies do strategy. Most manage the boat, and hope the wind blows them in the right direction.

  2. @Mark

    Thank you for thinking of it. And, I agree. KPIs mean different things in different contexts, and there are real instances where one department may be trying to maximize a given parameter while another is trying to optimize that very same one!

    Not everyone is aligned, and there are certain instances where it’s even good that not everyone is working towards the same parameters.

    Finally – I do believe that many people engage in satisficing decision making. That is to say, so long as a given arrangement is good enough, don’t mess with it. That type of behavior is completely different from optimizing behavior.

    Analytics has the capability of changing such behaviors. At least, that’s the general theory!

  3. Adrian P. says:

    Hi Christopher,
    I love the idea of one optimization objective. I agree KPI creep is a problem. However, something didn’t quite sit well with me after reading this post.

    I read the post this morning and after stewing over it for a while, I finally had an experience today that made me realize my hesitation (and it may be due to a misunderstanding on my part): how does this work across multiple teams? (ps–I’m coming at this from a “web analyst” context)

    Large-ish companies have different teams with different interests and egos. I feel like walking into a meeting and saying “We are going to optimize for this one variable” would cause a revolt… some people can just imagine the angry hordes beating down their doors with pitchforks! Paid search wants some love, but organic search does too because, well, that big trend upward is just around the corner, and then the social media team is always on the defensive and really is feeling desperate, whereas the UX team feels their recent changes really drove up conversion rates, but the gal/guy that just launched the remarketing campaign has data to prove they’ve really nailed that cart abandonment problem. You get the picture…

    How would you approach that? You’ve previously posted about egos…would that tie into this?

  4. Jim Novo says:

    Nice work Christopher! It’s indeed weird seeing people call something a KPI that is not directly predictive or causal of “performance”.

    The fact there is not a single definition of “performance” in a company is a management problem.

    Down the road, these companies / people will find that letting each silo optimize to their own definition of performance is very often cannibalistic, and so suboptimal.

    A corollary to the above: if you can’t measure something properly in the context of performance, just say so. Go ahead and track / count it, but don’t invent KPIs that will then give a false sense of knowledge to managers.
