This is a dense post.

In 1981, Feldman and March wrote “Information in Organizations as Signal and Symbol”. It makes good predictions about what a management-scientist type would say about the purpose of information in an organization. Indeed, just last month, I hyped Carl Anderson’s 2015 position yet again, in the framing of information as assisting learning.

Feldman and March are cited by another piece that’s been weighing on me heavily since February. Alvesson and Spicer’s 2012 hit “A Stupidity-Based Theory of Organizations” explains why seemingly intelligent people pretend to be dumber than they are. Please don’t misinterpret this passage: it’s not the case that everybody is stupid. Sometimes people act dumber than they are because they have to go along to get along. Are you a team player or what? Because come on.

There’s a branch of data science that rejects bringing anything to middle managers. Indeed, they’ll argue, why the hell bother? Bring the decision that matters directly to the consumer because we simply can’t wait for organizations to bumble into learning. We simply can’t absorb all the cost of convenient reasoning and gossip fuel. You can lead a horse to water but you can’t make it drink. You can lead a man to data but you can’t make him think.

I don’t find this line of argument appealing because I believe in the power of talented managers to make great decisions. It goes against a core belief, the optimism in a founding team to discover product-market-solution fit through learning. I believe in the ability of talented people to become more talented. Institutions and organizations can be set up to learn, and accelerate their rate of learning. And this can be a key source of competitive advantage. It has worked since the Enlightenment. It’s still working. I gotta believe.


Feldman and March describe a process by which managers ask for data in order to not be surprised by things. They call this surveillance. And that term makes sense. They’re surveilling a system.

This is the stuff of dashboards. Dashboards are created with the assumption that the manager understands why the dials on that dashboard go up and down. It’s assumed that they have a working theory. And in fact, to make sure they have a working theory, you do a Goal Alignment Strategy (GAS) to make sure they have a story about why the Goals they want to see align to the Strategy. Ever get the impression that sometimes people don’t have a working theory and they’re just asking for more data because they’re executing a search for understanding? Yeah, that’s usually because they are executing a search and just not admitting it.

I think we’ve taught too many people to ask for a dashboard when they’re really asking questions about strategy.

And as a direct result, we end up with artifacts that start out as dashboards and degenerate into reports. A relatively simple dashboard of 7 metrics can bloat into a 500-page tome. The desire to not be surprised by anything produces artifacts by which the manager becomes overloaded with data. Managers can end up drowning in data and having absolutely no idea why anything is happening. In part, the desire to have easy access to everything that could matter overwhelms all. And worse, as these things inflate in complexity, quality assurance and maintenance costs inflate with the square of what’s going in. It erodes confidence in the underlying data. A predictable reset occurs and the cycle starts anew.

It’s as though there’s no learning about this cycle. Everybody is exceptional. It won’t happen to me. Other people drown in data, I swim in it. And yet, even the best machine learning engineers turn off from surveillance if the flow is too high. Literally not every outlier deserves an explanation.

Several products have tried to address the desire to not be surprised with alerts. These systems all generated too many alerts, which had the effect of making the manager less aware of what was going on. And since digital data is really, really noisy, it becomes incrementally harder to understand what is signal, what is an expected signal, and what is merely irrelevant spam. Most of these efforts happened before machine learning really came into its own, so maybe there’s a training set somewhere in there? Maybe? Somewhere between the appropriate number of alerts (2 to 4 a week, probably, based on the latest marketing science) and the false desire to explain the story behind every outlier (they don’t all matter) there’s some new technology. Maybe we can now teach a machine what’s important to be alerted about?
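To make that concrete, here’s a hypothetical sketch (mine, not any of those products’) of what rate-limited alerting could look like: score each point against a rolling baseline, and surface only the few most extreme outliers instead of paging on every wiggle. The window size, z-threshold, and alert cap are all invented numbers.

```python
import statistics
from collections import deque

def alert_filter(values, window=30, z_threshold=3.0, max_alerts=4):
    """Flag only extreme outliers, capped at max_alerts per pass.

    A toy sketch: score each point against a rolling baseline,
    then keep only the most extreme few, so the manager sees a
    handful of alerts instead of a firehose.
    """
    history = deque(maxlen=window)  # rolling baseline of recent values
    candidates = []
    for i, v in enumerate(values):
        if len(history) >= 5:  # need a minimal baseline before scoring
            mean = statistics.mean(history)
            stdev = statistics.stdev(history) or 1e-9  # guard against flat data
            z = abs(v - mean) / stdev
            if z > z_threshold:
                candidates.append((z, i, v))
        history.append(v)  # append after scoring, so a point isn't its own baseline
    # Surface only the most extreme outliers, not every one.
    return [(i, v) for z, i, v in sorted(candidates, reverse=True)[:max_alerts]]
```

Under made-up thresholds like these, a steady series with one genuine spike yields one alert, not thirty. Not every outlier gets a story.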

Sometimes, managers ask for dashboards about other lines of business. Now, they might claim that they aren’t engaged in covert surveillance. But they are. You know they are.

My thesis on this was simply to turn over the keys to the system and let the managers who can editorialize the data for themselves…do so. Democratize data access. Trust them to use good judgement under the constraint of uncertainty, and good things happen. Imposters are gonna imposter. And those who have learned to be helpless will continue to be helpless. If data is essential in decision making, and managers make decisions, the ultimate onus of learning how to make decisions lies with the manager. I want to believe that the empowered will empower themselves. I gotta believe.

Surveillance, and managing yourself through surveillance, seems like a distinct job.


Learning means that you update your system of beliefs in response to information. So that means, usually, pulling a lever and seeing a result. Or looking at some stuff and updating your beliefs. Learning involves active cognitive engagement. And it involves admitting that you were wrong about something at one point in time and saying that it’s okay because you’ve learned.

Learning often means that you go into a set of decisions with a hypothesis, you try a bunch of things, you get results back, and you update what you know.
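That loop (hypothesis in, results back, beliefs updated) has a textbook mathematical skeleton. Here’s a minimal sketch using a Beta-Bernoulli update on a made-up A/B example; the numbers are invented, not from anything above.

```python
def beta_update(prior_a, prior_b, successes, failures):
    """Conjugate Beta-Bernoulli update: prior beliefs plus evidence in, posterior beliefs out."""
    return prior_a + successes, prior_b + failures

# Start agnostic about a new feature's conversion rate: Beta(1, 1) is uniform.
# "Pull the lever" on 100 users, observe 60 conversions, and update.
a, b = beta_update(1, 1, successes=60, failures=40)
posterior_mean = a / (a + b)  # updated point belief about the rate
```

The point isn’t the formula; it’s the stance. You wrote down what you believed before, you let the results move it, and you can say out loud how much it moved.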

The job here is to search for new knowledge. And sometimes, as in a data-driven culture, managers are able to do their job and learn. They’re empowered to try. So much the better. And the risk is controlled via sampling and the accumulation of knowledge. Data scientists would simply call this science. And it’s fine and good. Failure is managed, not feared.

To a massive extent, learning about how a machine learns is also learning. The entire world of prescriptive analytics, of learning from how a machine automates its own learning, is unto itself learning.

I fundamentally believe in the advantage of learning. And that most institutions and organizations are capable of learning how to learn if the conditions are managed right. Because of the Enlightenment. I have to believe.

It just seems like learning is such a distinct job, and it only appears to occur outside of functionally stupid circumstances.


Persuasion goes three ways (at least).

In one scenario, managers are trying to justify a decision post hoc. This is either because something went wrong and they’re being held accountable, or something went right and they’re angling for credit. The intent isn’t to learn the truth. It’s to generate a narrative that is either skin-saving or skin-building. It’s rarely the case that a perfect piece of exculpatory evidence can be found. And the shame, the terrible shame, of not finding that evidence.

In another scenario, they’ve already made up their mind about what to do in the present, and they’re looking for evidence that supports that decision. Exclusively. This often happens when somebody in high authority says something on the fly and a hundred documents magically show up justifying that position. Sometimes a manager can repeat the same question, with authority, over and over again until they nag the institution into submission. They just know the answer. It’s your fault you can’t find it. And any evidence to the contrary is just obstruction. How do they know the answer? They just know. Okay? And any difficulty in rallying the perfect evidence is in no way indicative that their idea could be wrong.

In a third scenario, they’re trying to persuade themselves, and others, of something to do in the future: using evidence to plan, and to argue that there is some good course of action. This is the stuff of predictive analytics and planning. The organization wants to predict the future so that it may select the best future. In a big way, people are trying to persuade each other, during the decision opportunity, about what the best course is.

These seem like three distinct jobs. One is a search for evidence that probably isn’t there, to the exclusion of a bunch of gossip, hearsay, and conjecture, or even a piece of confirmatory evidence. One is a search for evidence in support of an anchored bias, to the exclusion of contrary evidence. And the third is a search for reasons about the future, where none of the data really exists – it has to be created through a web of causal statements that generate a set of projections about the future.

A Dense Post

It could be the case that there are at least five distinct classes of jobs:

  • Surveillance;
  • Learning, including Learning about how Machines themselves Learn, and how people learn;
  • Post-hoc persuasion;
  • Hierarchical Persuasion / Hierarchical Pandering / Convenient Reasoning;
  • Prediction, evaluation, and selection.

They involve different stances towards reality – from curiosity to doublethink. They involve different tools – from dashboards to Causal Random Forests! They involve very different experiences – from shame to triumph.

One idea to leave you with: institutions tend to demand a great deal of design thinkers in terms of justifying their judgement through prediction, evaluation, and selection. Perhaps a better way of managing the artists is through the system designed for learning, instead of trying to jam them into prediction? It doesn’t work for the business and it doesn’t work for the artists, so perhaps we should just set up sandboxes for them to learn in, and manage them there?

What do you think? Do you think these are separate jobs? Do you think learning as the accumulation of knowledge is distinct from the persuasive activities of prediction, evaluation and selection?

Let me know on twitter — @cjpberry