The Risks of Product Development: What’s the best that can happen?
Marty Cagan lists four big risks in product development:
- value risk (will they buy it?)
- usability risk (can they figure out how to use it?)
- feasibility risk (can engineers build it?)
- business viability risk (can the business work with it?)
Cagan’s framework is a great read, and I’d like to acknowledge his ideas and build on them here.
The intimate relationship Canadians have with risk
Water is to fish as risk is to Canadians. If you aren’t from here, maybe you’d be in a better position to see it.
There’s a skill, called inversion, that I think Canadians are pretty good at. You imagine the worst that can happen, and then you write plans to keep those nightmares from coming true. It’s a good skill. I have to write that.
And it’s sort of reactive.
What if we flip the language from negative risk probabilities to positive opportunity probabilities? Would such a flip help us imagine different options?
What’s the best that can happen?
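To make the flip concrete, here’s a minimal sketch of the inversion in Python. The risk names are Cagan’s; the opportunity labels, the RISK_TO_OPPORTUNITY mapping, and the invert helper are my own playful illustration of the “* -1” idea, not anything from Cagan’s framework.

```python
# A toy sketch of the "* -1" flip: each of Cagan's risks, restated as the
# opportunity explored below. Usability risk is folded into value creation,
# since usability is almost always requisite for value to be realized.
RISK_TO_OPPORTUNITY = {
    "value risk": "value creation",
    "usability risk": "value creation (usability is requisite for value)",
    "feasibility risk": "technological trigger opportunity",
    "business viability risk": "business transformation opportunity",
}

def invert(risk: str) -> str:
    """Flip a negative risk question into a positive opportunity question."""
    return f"Instead of asking about {risk}, ask: what's the best {RISK_TO_OPPORTUNITY[risk]} that can happen?"

for risk in RISK_TO_OPPORTUNITY:
    print(invert(risk))
```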
(Value Risk) * -1 = Value Creation
Value creation could be considered the core driver of all product development.
Why bother with product development in the first place? What’s the point? It has to be about value. Valuable to who? It’s a lot of fun to think about it from different perspectives.
If you’re in the private sector, you may think of value creation as something that accrues first to the customer, then to the shareholder. In the non-profit and public sectors, value accrues to different clusters – to the clients, the beneficiaries, and the philanthropists.
What counts as value depends so much on who is asking the question. Value for who? The who matters a lot. They may be in a context where they can’t think too hard about using your software. They may be particularly technology literate. They may have the Willingness To Pay, but they may not have the Ability To Pay. They may depend on the complexity of your software to justify their entire job. Can you imagine such a thing?
Usability and value could be causally linked. There have been several companies that took very complex technology and rendered it usable for consumers. Several of them have been astonishingly successful.
I can think of very few instances where a company deliberately made their products less usable. Your Trash box is littered with the products of companies that accidentally made their products less usable over time. I bet you can think of a few.
Usability is core to product development because, more likely than not, it’s causally linked to value creation: value is created by somebody actually using your software. The claim has face validity, doesn’t it?
Usability is almost always requisite for value to be realized.
When the who and their usability are considered together, the best that can happen is total domination of a space. Once a global usability minimum has been discovered, it’s hard to imagine how a competitor could surpass it. It’s a global minimum! It’s extremely difficult to imagine that a group of customers for a given product would want to switch to a substitute, only to learn a harder way of doing the same thing.
Interfaces that are uncannily usable for their target market tend to be intensely sticky.
(Feasibility Risk) * -1 = Technological Trigger Opportunity
Star Trek: The Next Generation featured quite a few products I really wanted as a child. I wanted a laptop computer on my desk. I really wanted a little computer pad I could walk around with. Voice commands to a computer, voiced by Lwaxana Troi, was a pretty awesome idea. I reckoned I could trust Lwaxana Troi. I got some of them. I’m like Bones, in that I’m not a fan of the idea of teleportation, but what does it say about the modern state of air travel that I’d entertain the idea of being stripped apart at the atomic level, converted to energy, and reassembled at a remote distance? I still really want a holodeck, anti-matter/deuterium power, food replicators, and a tricorder.
I imagine most people in my field worry about holographic and synthetic life. We all create the environment that nurtures nascent consciousness. There is considerable cause for concern and some cause for hope.
I imagine nearly everybody my age frets about onboarding everybody into the 21st century. A shoe on every foot, a mattress under every sleeping body, clean water in every mouth, nutritious food in every belly, waste in the right pipes, electrons flowing through every home, optimistic dreams in every mind, opportunity for every soul, and a hospitable temperature for everybody to thrive in. Each individual technology has already been invented; it just hasn’t diffused to the point that everybody has all of those things yet.
Value and technical feasibility are linked by our imagination and our best dreams.
Before the hype cycle really takes off on any technology, there has to be a technological trigger. Much technology remains deep in the research stage, and we don’t typically think about developing it into a product. Such technologies always seem to be on the horizon because they have to be positioned as such in funding proposals. How long have we been thirsting for fusion, holodecks, and equal opportunity?
There is some technology that is kind of getting ready for product development and adoption by some niche segment of the population. These are typically a very small set of customers with enough capital to will a new technology into a product.
Sometimes there’s a new organizing idea that sparks a trigger, and off we go on hype. Trustless computing is one field. TCP/IP is another. You can think of dozens of ideas like that.
The best that can happen is that a great team can bring a technology over the trigger line, and enjoy all the advantages and joy of doing so. Sometimes the patent system, in association with market dynamics and a whole bunch of things going right, adds up to a sustainable first-mover positional advantage. (YES! It can really happen!) Sometimes the first mover wins. Sometimes the first mover becomes the last.
A lot of great things happen when new knowledge is pulled out of the void and put to use for the benefit of humanity.
(Business viability risk) * -1 = Business Transformation Opportunity
Sometimes an institution doesn’t manage technological change.
That goes for technical change that one part of the institution has discovered and is diffusing, and for technical change that is being inflicted on the institution from the outside. There are many reasons for this. Most often, it’s because the new technology creates contradictions that cannot be reconciled with what the institution believes about itself, or believes about the world.
It’s almost inevitable that institutions would have this problem. Collective beliefs are so much stronger than individual ones. Collective belief is stubborn. It’s quite possibly the strongest form of lock-in I’m capable of imagining at this time. The way that collective belief is maintained through the group identity mechanism, how inputs and outputs are sorted for legitimacy, and who has discussions, and how, are all wrapped up in the core DNA of a firm. And that DNA is very well defended.
What if an organization invented a technology that enabled it to reduce the amount of energy it takes to change a collective belief? What if it got really good at learning?
I’ve always found this to be the most optimistic of all ideas. Everything that I was taught about businesses and institutions was rooted in how beliefs are protected and technology is locked in. Knowledge needs to be preserved…almost sheltered…from change. A team could make these short, almost episodic, changes to a few base pairs of the belief genome. And that team may enjoy some short-term success. However, it’s as though there’s a machine that’s responsible for making sure that beliefs don’t change. I wonder if that social mechanism, that guardian machine, is actively conscious of what it’s doing. Or maybe it’s an unconscious process?
If a collective consciousness was aware of what it was doing, would it have the skills to transform itself? And if it did, would it? Or would it find itself blocked?
The very best that could happen would be for such consciousness to actively manage what it believes in, what it is doubting, and what it is changing. If an organization could ever develop to the point that it could understand the why, and accept it, would it be much more open? Could it be so much more creative? Could it thrive under conditions of continuous technical change?
It’s wonderful to imagine how awesome the products of such an organization would be, isn’t it? When usability, technological change, and culture work together, without fear, it stands to reason that learning could continue at a steady rate, rather than banging up against a culturally imposed limit.
Conclusions
The beauty of Cagan’s model is its accessibility and the gateway it offers into the discussion. It’s great, and I acknowledge Cagan for the contribution.
The best that can happen is for an organization to learn something new, believe in it, and purposefully develop a technology over the trigger line, thereby creating the technical disruption. Hype and contagion follow. It learns how to make the technology usable for a defined group, discovering the global usability minimum that makes it difficult for a competitor to displace (imitators will imitate; there’s no preventing that). Then more of that group discovers the incredible value it delivers. Some form of that value is recognized by the organization that helped enable it. The organization earns sustainable competitive advantage and achieves immortality as a result.
Is that all? Is that all there is?
Can you imagine bigger?