Artificial Perspective Coordination
The activity of perspective coordination is unreasonably effective at producing fantastic outcomes. In addition to creating legitimacy, it can offer a wider array of choices from which to select. The more diverse and courageous the set of people engaged, the wider the array of choices. There’s a processing cost, though. So, I’d like to believe that recent advancements in the processing power underlying artificial decision making can help people coordinate their perspectives faster and more completely.
There’s a kind of brutal paradox embedded in all of this. It’ll take a bit of unpacking.
Back in 2020, in The Humanity of Productive Meetings, I used an objective-based segmentation to explain the frustration each segment felt when experiencing ORID facilitation. Briefly: some people just want to get on with the decision, and some people just want to explore all the options. There are people in between. The hazard for the team is that if a decision is derived without ownership, that is, without a person taking personal accountability for a set of actions leading to an outcome, then something fundamental has gone wrong with the decision making process. Decisions seemingly fall through the organization, uncaught, with resultant poor or disastrous execution. If, however, discussion is pressed to the point of actual or imagined filibuster, then the team is paralyzed and there is flight from the decision opportunity. People just stop showing up. And all of the benefits of a great decision greatly executed come at the expense of the time spent arriving at a great decision with the legitimacy and ownership of great execution.
My personal experience with the perception of filibustering is ambiguous. There have been times when I believed that efforts to slow-roll a decision, to rag the puck, were a deliberate attempt to frustrate action. In retrospect, perhaps some individuals were merely trying to unpack the full implications of a proposed course of action. And, even more generously, some individuals possessed risk perception functions that were n-shaped as opposed to u-shaped, such that they wanted the assurance that nothing horrific was going to happen.
Conversely, my personal experience with the perception of false urgency is ambiguous. There have been times when I believed that efforts to roll me into a decision, to overwhelm me, were a deliberate attempt to cause me to suspend my disbelief and drive compliance. In retrospect, perhaps some individuals were merely reactive pleasers, obeying orders and displaying compliance as a social good. More generously, perhaps they didn’t understand risk, or didn’t know the difference between an n-shaped and a u-shaped risk perception function.
There are contexts in which speed is the only competitive advantage that an organization has. As a result, a diversity of perspective is unwanted. Functional stupidity and unanimous consent are desired in these circumstances. Don’t ask, just do.
Such stories are common in the autobiographical hero business pop-lit books that contaminate airport bookshelves. The first-person accounts tend to be…unconscious about how they went about manufacturing outsiders and scapegoats and single-handedly trouncing them. And since there’s no second voice to balance the story, their recounting goes unchallenged. It’s curious that you never read of ignored perspectives that turned out prescient in any of these retellings. Memory is curious that way. Almost as though there’s an invisible selection bias.
If a leader chooses decision speed over decision quality, and the choice is conscious, it would be advisable that this choice be communicated. Whether or not the leader accepts that judgement substitution is more likely to generate inferior outcomes is a function of that leader’s development. Therein lies a real horror. The feedback cycle is muffled not only by the ambiguity of experience: it’s distorted by the narrative leaders tell themselves, potentially reinforced by what their imagined peers in the pop-lit are telling them. It’s all too easy to fall into the muck without understanding what the muck is.
Aren’t there processes to prevent this?
Yes!
If business is private government, then what form of government do most businesses operate under?
Democracies are slow to act because perspective coordination takes a lot more time when everybody is involved. However, the quality of democratic actions is far more durable. Authoritarian regimes are relatively quick to act because they don’t factor as many perspectives into their decisions. Often, it comes down to the opinion of just one … man’s … perspective. However, the quality of the actions is far less durable. Examples and counter-examples abound. What of the private sphere?
Like feudal monarchies, startups tend to live and die by the quality of their leadership. Massive firms are much more resilient to catastrophic judgement precisely because of processes intended to catch catastrophic judgement before it can be executed.
Massive firms, those that aim for immortality, tend to take their time with major decisions for another reason: it simply takes a lot of processing power for everybody to align their perspectives. They also enjoy the benefits of greater alignment and legitimacy.
Could it all be more efficient?
Information technology has already accelerated the speed of perspective coordination on large scales. The town crier, the speech, the ballot, writing, the book, the printing press, the newspaper, the ad, the telegraph, the radio, the television, the website, the BBS, IRC, e-mail, the feed, and productivity software all assist in efficient diffusion and, more recently, genuine dialogue. Social cohesion technologies reduce the cost of cooperation and expand the ability of groups of humans to control more territory.
Have we been almost too effective at generating cohesion?
Memory itself is a lock-in mechanism. Look down at your keyboard and notice the emotion that runs through you when I suggest that the keys should be re-arranged for greater efficiency. Doesn’t feel great, does it? It’s as though you want to conserve the layout, because you’ve invested muscle memory into that layout. And besides, why should you change? Is the marginal benefit really worth rewiring all of your memory?
Culture is a lot like that keyboard your fingers spend time touching. People have invested in the way things are done around here. It doesn’t matter what the following text is: “something something something, whatever, get over it, welcome to France!” Substitute any other culture you can name for the word ‘France’, and the sentence would still compile. It’s the way things are. We accept it. To be accepted as an insider you must accept it. Suck it up. So it goes with all the features of a culture: morals, the way we do technical documentation here, and the RFP process.
For any non-routinized decision to be accepted, memory has to be adjusted. The source of the inertia, the lock-in, has to be confronted. Minds have to change. And the process of changing one’s mind varies by individual. Might recent advancements in narrow machine intelligence help people reason through their preferences more efficiently?
If self-discovery is at the root of meeting duration inflation and subsequent decision delay, then that is where the application must focus for greatest effect.
What if an individual could spend time talking with an artificial agent to both emotionally feel their way through a decision space, and, eventually, rationalize their way to a utility function they could express in a decision making context? Even if the individual isn’t aware that the artificial agent is helping them learn how to think structurally about their unstructured networks, they’d benefit from the exhaust of the process.
It’s a rather brutal perspective: if people spent time understanding their utility function, how change makes them feel, prior to a meeting about aggregating utility functions, then they’d spend less time in meetings discovering their own utility function and more time enumerating options and converging on a decision.
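A minimal sketch might make that brutality concrete. Assume, purely for illustration, that a utility function can be approximated as weights over a handful of criteria, and that the agent has already helped each person surface their own weights before the meeting; the names, criteria, and numbers below are all invented for the example.

```python
# Purely illustrative sketch: pre-meeting elicitation of a personal utility
# function over decision criteria, then in-meeting aggregation across people.
# All people, criteria, options, and numbers are invented for the example.

from typing import Dict

Criteria = Dict[str, float]   # criterion name -> weight (normalized to sum to 1.0)
Option = Dict[str, float]     # criterion name -> score in [0, 1]


def elicit_weights(raw_importance: Criteria) -> Criteria:
    """Turn rough 'how much do I care about this' answers into normalized weights."""
    total = sum(raw_importance.values())
    return {name: value / total for name, value in raw_importance.items()}


def personal_utility(weights: Criteria, option: Option) -> float:
    """One person's utility for one option: a simple weighted sum."""
    return sum(weights[name] * option.get(name, 0.0) for name in weights)


def aggregate(people: Dict[str, Criteria], options: Dict[str, Option]) -> Dict[str, float]:
    """The meeting's remaining job, if self-discovery already happened: average the utilities."""
    return {
        option_name: sum(personal_utility(w, option) for w in people.values()) / len(people)
        for option_name, option in options.items()
    }


if __name__ == "__main__":
    # Pre-meeting: each person works through their own weights with the agent.
    people = {
        "Avery": elicit_weights({"speed": 5, "risk_avoided": 2, "cost": 1}),
        "Blake": elicit_weights({"speed": 1, "risk_avoided": 5, "cost": 2}),
    }
    # In the meeting: only the options and the aggregation remain to be discussed.
    options = {
        "ship_now": {"speed": 0.9, "risk_avoided": 0.3, "cost": 0.6},
        "pilot_first": {"speed": 0.4, "risk_avoided": 0.8, "cost": 0.5},
    }
    for name, score in sorted(aggregate(people, options).items(), key=lambda kv: -kv[1]):
        print(f"{name}: {score:.2f}")
```

The arithmetic is trivial; the point is that it only becomes possible once each person has done the self-discovery, which is exactly the work the meeting currently absorbs.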
There’s a nugget of optimism embedded in there. Maybe the technology can liberate more people from their anchors, enabling them to generate more positive narratives to exchange during the meetings about change?
And yet it neglects a core feature of perspective coordination: some people value the experience of relatedness as a virtual social good. Some may value the time spent engaging in collective discovery, and as such, artificially-assisted pre-processing would be an unwanted technical solution. Some people, it would appear, enjoy the attention they receive from others while they talk, as a social good unto itself. In that context, such technology would deprive them of the very good that they either consciously or unconsciously seek.
Even the emotional exhaustion from such events could be valued as a good unto itself, as it increases the susceptibility of some to rhetoric. It changes the reality on the ground by changing the utility functions themselves. Rather than a meeting which mechanically sorts and weighs alternatives in an effort to select a better future, such meetings dynamically change commitment to the alternatives, with, perhaps, a conscious nod toward creating a better future.
In this way, we’re right back to a source of friction in the generation of decisions: differing values and the relative (in)flexibility of changing them.
For those who value speed to a decision, the time spent generating attention is unwanted. For those who value attention, the speed to a decision is unwanted. And to what extent does anybody know what they’re valuing?
As it always does, it falls to those in between to find enough common ground. I’d like to believe that for these people, recent advancements in artificial intelligence could be used to discover more creative ground, faster.
It’s a hope.