In the previous post, The Economics of Analytics (I), I set the stage for the complications of risk management and trust in agent-client relationships.

Assume that I’m a client. I know I have an optimization need, but I don’t know what is involved in fixing the problem. Assume, also, that I have perfect trust in an agent.

I tell the agent that I have a problem, and I ask for a fixed-fee estimate. (Tell me how much it’s going to cost).

The agent can certainly be glib about it and give me an estimate without probing the true nature of the problem. Assume that, as the client, I have some idea of the volume of work and skill required to fix it, but I don’t know exactly.

The biggest problem an agent faces in providing an estimate is risk.

Like most humans, agents rely on a combination of pattern recognition (experience / familiarity) and a perception of control to provide an estimate.

For instance, let’s assume that I, as the client, have an installation of Coremetrics, and I need a pre-click analysis done on the top 1000 terms.

The agent might know how to do such an analysis with Google Analytics, easily, because they’ve done it before, but Coremetrics might have a very distinctive nuance to it.

The natural thing for the agent to do would be to take the base cost of a Google Analytics pre-click analysis, be honest about the uncertainties with Coremetrics, and quote me a range: X dollars if it’s straightforward, X plus 25% if it isn’t.

The risk is managed in the range of the fixed fee, and if the agent is wrong, the agent bears the consequence if the estimated risk is far, far off.
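
To make the arithmetic concrete, here’s a minimal sketch in Python. The function names, the $4,000 base cost, and the 25% premium are invented purely for illustration; the point is only to show a fixed-fee range with an uncertainty premium, and who absorbs the difference when the estimate is off.

    # Hypothetical sketch: quoting a fixed-fee range with an uncertainty premium.
    # The numbers and the 25% premium are illustrative, not a prescription.

    def fixed_fee_quote(base_cost: float, uncertainty_premium: float = 0.25) -> tuple[float, float]:
        """Return (low, high) for a fixed-fee quote.

        base_cost: what the job would cost on a familiar platform (e.g. Google Analytics).
        uncertainty_premium: extra margin for unfamiliar territory (e.g. Coremetrics quirks).
        """
        low = base_cost
        high = base_cost * (1 + uncertainty_premium)
        return low, high

    def agent_outcome(quoted_high: float, actual_cost: float) -> float:
        """Under a fixed fee, anything beyond the quoted ceiling is absorbed by the agent."""
        return quoted_high - actual_cost  # negative means the agent eats the overrun

    low, high = fixed_fee_quote(base_cost=4_000)  # quote: $4,000 - $5,000
    print(f"Quoted range: ${low:,.0f} - ${high:,.0f}")
    print(f"Agent margin if the work actually costs $7,000: ${agent_outcome(high, 7_000):,.0f}")

The ceiling of the range is where the agent’s risk lives: anything beyond it comes out of the agent’s margin.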

Given perfect trust. No problem.

Well, that was easy! Wasn’t it? Not so fast.

What if the agent is not aware of what they don’t know? What if I, through my own lack of knowledge (why else hire an agent?), don’t know what I don’t know?

The asymmetrical knowledge between the agent and the client is a significant source of risk: even in an environment of perfect trust, it can severely damage the accuracy of an estimate.

There’s a second type of asymmetrical knowledge: something that I’ll generously call “the asymmetry of ‘discovered’ knowledge”.

Frequently, what starts off as a tiny project ends up getting requirements added on. Some call it feature creep. Others call it scope creep. Some call it ‘shifting sands’.

Assuming perfect trust between the agent and client, it’s not unreasonable to expect that a client will acquire new stakeholders over the course of a project, and, as a direct result, feature creep increases and complexity expands. This is no fault of the agent, and, as the client, we’d make the necessary adjustments in a fixed-fee relationship.

What I came to understand, in the course of multiple discussions and walks, was that these two asymmetries make the hourly model more attractive. The hourly model dictates that the agent tells the client, from the outset, an estimate of the amount of effort for a given amount of scope, and an associated hourly rate. As requirements are ‘discovered’, either temporally or through discussion, the number of hours billed increases. On the surface, the hourly model provides a nice spring, and a low-thought method of managing risk.
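
For contrast, a comparable sketch of the hourly model, again with invented numbers and hypothetical function names: the agent quotes an effort estimate and a rate, every hour of ‘discovered’ scope simply gets added to the invoice, and the risk of growth lands on the client’s budget rather than on the agent’s margin.

    # Hypothetical sketch of hourly billing: scope that gets "discovered" along the way
    # simply adds hours, and the bill grows with it. Numbers are illustrative.

    def hourly_bill(estimated_hours: float, discovered_hours: list[float], rate: float) -> float:
        """Total invoice: the original estimate plus every hour of discovered scope."""
        return (estimated_hours + sum(discovered_hours)) * rate

    estimate = hourly_bill(estimated_hours=40, discovered_hours=[], rate=150)
    actual = hourly_bill(estimated_hours=40, discovered_hours=[8, 12, 6], rate=150)

    print(f"Initial estimate: ${estimate:,.0f}")        # $6,000
    print(f"Invoice after scope creep: ${actual:,.0f}")  # $9,900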

And then we turn to agent-client ‘trust’.

The hourly model is really funny, even false, because, frequently, as a client, I have a fixed upper limit on my budget. My Pareto-optimal choice is to maximize the efficiency of my spend: get the biggest bang for my buck. My Nash trap is to try to minimize cost. The result is that I’m not getting the biggest bang for my buck, and, if anything, more of my bucks are going to produce sub-standard results.

The hourly model doesn’t necessarily incent good behavior on the part of either the client or the agent, especially when trust is compromised.

The client is incented to disregard the actual volume of effort in the belief that the hourly rate already has wastage built in.

The agent is incented to stuff meetings with 25 people, 20 of whom are redundant.

The Hourly Model is still subject to knowledge asymmetries:

It might very well be that a client doesn’t know what they don’t know, and that they could be getting a very good deal in having an analysis done in 4 hours by a very experienced analyst; but in the absence of any sort of yardstick, the client’s focus might become ‘why does it take so long?’. Hilarity ensues.

Of course, such events come out in the wash, and the client-agent relationships either thrive or deteriorate.

Where I’m at right now is: the Hourly Model, under which I’d say the majority of agent-client relationships still operate, incents behaviors that are less than desirable, and perhaps focuses energy on the human-hour as the base unit. The Fixed-Fee Model still has asymmetry problems, and it pushes the risk back onto the agent, which contributes to inflation in the long term.

In the next post, I want to write about the Scalability of Analytical methods in light of both the Fixed-Fee and Hourly Models.

One thought on “The Economics of Analytics (II)”

  1. Jen Day says:

    I am trying to absorb these posts and struggling, so apologies in advance if my question is a bit dull. I personally have never had to suffer with sizing something where I didn’t fully know each and every step, but I am wondering why part of the sizing process isn’t a deeper dive into what would need to be done. It seems like the difference between data vendors (in any industry) can be very vast when it comes to the detail level, and there is a HUGE risk in not knowing (for instance) if the data you want is even “in there somewhere”. I get that it may not be critical to your point, but for some reason it is a point I am stuck on. Is it just that it’s not sensible to invest the up-front sizing time in a project a client may simply decline? Then all that sizing time is considered wasted?