The Data Driven Culture: The Role of Learning
Data scientists spend so much time focused on learning: both machine learning and human learning.
A machine can learn. A data scientist spends a lot of time just trying to persuade a machine to learn. It takes a lot of labelled data.
What about collections of people?
Organizations can learn too. It’s just that the data isn’t all labelled well.
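To make that contrast concrete, here is a minimal sketch of what learning from labelled data looks like, assuming scikit-learn and a synthetic dataset (a toy illustration of my own, not anything from the original argument):

```python
# A minimal sketch of supervised learning, assuming scikit-learn and a
# synthetic dataset: the machine only learns because every row arrives
# with a label attached.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# 1,000 labelled examples: X holds the features, y holds the labels.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```

Take away the labels, or scramble them, and the same code learns nothing useful, which is roughly the position an organization starts from.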
Why Organizational Learning is Important
I was so impressed with Carl Anderson’s synthesis on Data Driven Cultures two years ago that I unpacked it and applied it to startups and strategy.
Coming back to it now, in 2018, a lot of what he was saying is purely about learning.
Carl Anderson (2015) described a data driven culture as one that:
- Is continuously testing;
- Has a continuous improvement mindset;
- Is involved in predictive modeling and model improvement;
- Chooses among actions using a suite of weighted variables;
- Has a culture where decision makers take notice of key findings, trust them, and act upon them;
- Uses data to help inform and influence strategy.
Consider that set of bullets as though it’s a strategy – from a machine intelligence perspective, the logic is obvious:
- Why test? To learn.
- Why learn? To improve.
- Why predict? To improve the future before it happens.
- Why make better choices? To make better strategy.
- Why make better strategy? To compete for scarce resources (and maximize the valuation/equity position of the founders, or the return to society/institution/organization).
As leaders of an organization (startup, institution, company, nonprofit, committee, department), we train people on some fairly small, unlabelled datasets (at time of writing) to make choices at decision opportunities. We rely on humans to be intelligent.
Learning
Silicon Valley has learned that learning fast matters to the competitive effectiveness of a firm. Firms that learn and execute faster than others have a better chance of winning. So Silicon Valley invested a lot of time and effort in learning how to get faster at learning, and it does this well. To a Canadian writing these words, that is just evident.
Removing impediments to learning, and then to executing, is a strategic choice. The alternative is to maintain the present level of learning, or sometimes even to unlearn things and regress.
How Organizations Learn
Organizations are collections of people, so they don’t learn the same way that an individual person learns.
Organizations:
- Decide to do something;
- Get some response from the environment;
- Different groups remember a different story about the response;
- Cite a version of that story when making another decision (a toy sketch of this loop follows the list).
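Purely as an analogy, the loop can be dramatized in a few lines of code. The group names, memory-drift rate, and outcomes below are all invented; the point is only that one response from the environment becomes several remembered stories, and those stories, not the response itself, are what get cited next time.

```python
# A toy, hypothetical sketch of the loop above; nothing here is a real model,
# it only dramatizes how one response becomes several remembered stories.
import random

random.seed(7)

groups = {"product": [], "finance": [], "marketing": []}

def environment_response(decision):
    """The environment returns an outcome for a decision."""
    return {"decision": decision, "outcome": random.choice(["win", "loss", "unclear"])}

def remember(group, response):
    """Each group stores its own, possibly distorted, version of the story."""
    story = dict(response)
    if random.random() < 0.5:  # memory drift: half the time the story mutates
        story["outcome"] = random.choice(["win", "loss", "unclear"])
    groups[group].append(story)

def stories_cited_next_time():
    """The next decision cites whatever each group remembers about the last one."""
    return {g: (memories[-1]["outcome"] if memories else None)
            for g, memories in groups.items()}

response = environment_response("launch feature A")
for g in groups:
    remember(g, response)

print("what actually happened:   ", response["outcome"])
print("what each group will cite:", stories_cited_next_time())
```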
There are four kinds of rate-limiting factors in how organizations learn:
- The decision to do something may require facts, resources, social buy-in and/or permission;
- The response from the environment may require facts to be collected, assembled, synthesized and communicated;
- The decision liquidity event may produce valuable social capital opportunities (there are reputations to credit and debit, all of which takes time);
- The knowledge used in one decision may be contested at the next decision opportunity.
And there are four responses, among many, from a data scientist:
- Decision speed can be increased by making learning central to strategy formation and to the strategy itself, and by using that strategy as a way to get truth aligned;
- Responses from the environment may be processed in real time if the strategy is disciplined and the data pipeline aligns with the decision pipeline;
- Social capital can be managed by recognizing teams and individuals when they produce positive learning outcomes;
- Separating social capital concerns from learning concerns enables the organization to learn faster (because less time is spent fighting over feelings about memories).
Having solid ground truth is essential for training a machine, just as it’s essential for training an organization. The strategy, the instrumentation, the social capital, and the memory can be optimized to improve the rate of learning.
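To see why ground truth matters, here is a minimal sketch, assuming scikit-learn, synthetic data, and 30% of the training labels flipped at random, comparing the same model trained on a clean record and on a corrupted one:

```python
# A minimal sketch, assuming scikit-learn and synthetic data: the same model
# trained on solid ground truth and on a corrupted version of it. The
# corruption stands in for contested or distorted organizational memory.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

# Flip 30% of the training labels at random.
rng = np.random.default_rng(1)
noisy = y_train.copy()
flip = rng.random(len(noisy)) < 0.30
noisy[flip] = 1 - noisy[flip]

clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
noisy_model = LogisticRegression(max_iter=1000).fit(X_train, noisy)

print(f"accuracy with solid ground truth:     {clean_model.score(X_test, y_test):.2f}")
print(f"accuracy with contested ground truth: {noisy_model.score(X_test, y_test):.2f}")
```

For this toy model the drop is usually modest, but the direction is the point: the noisier the record of what actually happened, the slower and shakier the learning.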
First, Second, Third Order Learning
When it comes to First Order Decisions (Hall 1993) and First Order Learning, where the choices are narrow and the constraints known, we assume that those checklists and interfaces are relatively isolated and the individuals are empowered. And we tend to think of these jobs as prime candidates for automation. For instance, moving the dials on paid media spend is a good candidate for that sort of activity. We assume that most of these problems are solved or are solvable. (These decisions boil down to ‘we tried this common thing and it worked, let’s do more of that.’)
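A hypothetical, minimal version of "moving the dials on paid media spend" could look like an epsilon-greedy bandit. The channel names, conversion rates, and budget units below are invented for illustration:

```python
# A hypothetical sketch of automating a First Order decision: an epsilon-greedy
# bandit that shifts spend toward whichever channel has been converting best.
# Channel names and conversion rates are invented, not from any real campaign.
import random

random.seed(42)

channels = ["search", "social", "display"]
true_conversion = {"search": 0.05, "social": 0.03, "display": 0.01}  # hidden from the agent
observed = {c: {"spend": 0.0, "conversions": 0.0} for c in channels}

def observed_rate(c):
    o = observed[c]
    # Untried channels get priority so each one is explored at least once.
    return o["conversions"] / o["spend"] if o["spend"] else float("inf")

def pick_channel(epsilon=0.1):
    """Mostly exploit the best-observed channel; occasionally explore."""
    if random.random() < epsilon:
        return random.choice(channels)
    return max(channels, key=observed_rate)

# Spend 10,000 unit-sized increments, one small decision at a time.
for _ in range(10_000):
    c = pick_channel()
    observed[c]["spend"] += 1.0
    if random.random() < true_conversion[c]:
        observed[c]["conversions"] += 1.0

for c in channels:
    print(c, observed[c])
```

Most of the spend drifts to the best-performing channel: ‘we tried this common thing and it worked, let’s do more of that,’ executed by a loop rather than a person.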
When it comes to Second Order Decisions (Hall 1993), and Second Order Learning, the choices are less clear. You’re dealing with people who are trying to shove different problems and different solutions into the mix in an effort to get something decided or done. I assume that most of these problems are not solved. Truth is contested depending on which silo one is conveniently reasoning from, there are feelings involved, and there are memories that are tampered with.
When it comes to Third Order Decisions (Hall 1993), and Third Order Learning, the choices are intensely ambiguous. I assume that the reason there’s such a disparity in the literature about it is that these problems are not even close to being solved. In many ways, when it comes to the Third Order, you have to have figured something out during your Second Order life to really succeed. (What terrifies me is that there is no systematic way to bring about Third Order change.)
The Cultures we Build
Where I do not have a verdict is on the extent to which variance in truth is acceptable, good, bad, or merely tolerated.
In some contexts, in particular during strategy formation, it’s a very good idea for people to contest the future and draw from multiple sources of experience and evidence. Superior learning during strategy formation ought to generate a superior strategy. Strategy formation isn’t the time for functional stupidity (Alvesson and Spicer, 2013, 2016).
In some contexts, in particular during strategy assessment, it’s a very bad idea for people to contest the outcomes and draw from sources of social credit and feelings. It doesn’t contribute to learning, and worse, it damages the rate at which future learning can be realized. That, in turn, has implications for the formation of future strategy, as those responsible are now working with false memories (at worst) or ambiguous feelings (at best) about the last effort.
Do we, as data scientists, want to build cultures that privilege continuous improvement in the Second Order Decision layer, and encourage faster learning — or do we want to privilege something else? Because that’s a choice.
Do we tolerate deviation in truth among First Order decision makers, our optimizers, because we need to get things done? If so, how much? At one end of the spectrum we get the Borg. At the other end we get a failed state. There’s a balance there. I want to argue that sometimes we need everybody to be aligned, but that alignment doesn’t generate enough variance to make for good Second Order decision making.
What are some ways to induce collections of people to learn faster – to manage both stages of the formation-evaluation cycle – and to manage the talent portfolio?
Posts in this series include: