You’re going to hear a lot more about Artificial Intelligence (AI) in general, and Machine Intelligence in particular.
Valuation is the core causal factor.
We’ve gotten pretty good at training machines on niche problems. They can be trained to the point of replacing a median-skilled, low-motivation human in many industries. Sometimes they make predictions that agree with a human’s judgement 85 to 90% of the time, and sometimes it’s the human, not the machine, who accounts for the bulk of the disagreement.
We’re confident that we can train a machine to learn a very specific domain. And these days we’re in the midst of that great automation revolution.
Most of the organizations that build these machines can replace portions of the economy and end up with valuations on the order of millions.
The algorithms that power learning in a specific domain do not generalize well to what we’d define as intelligence.
A lot of bits and ink can be spent on defining ‘intelligence’.
A person is generally described as being intelligent if they can use knowledge about their environment to adapt. Maybe a machine could be defined in that same way.
Machine learning algorithms are deficient at adaptation.
For instance, if I train a machine to distinguish images of dogs from images of pasta, I’ve got a pretty good business there (Google Allo). But if I send it a picture of a chest x-ray, that machine won’t recognize the absence of a tumour. It isn’t trained to see tumours. It isn’t trained for that.
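A toy sketch of what that narrowness looks like in practice. Everything here is hypothetical – made-up two-number “features” standing in for images, and a bare nearest-centroid classifier rather than anything Google actually ships – but it shows the structural problem: a model trained on only dogs and pasta has no concept of “neither”, so an x-ray gets forced into one of the two classes anyway.

```python
# Toy illustration of a narrow classifier on out-of-domain input.
# All feature vectors and values below are invented for the example.
import math

def centroid(vectors):
    """Mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Pretend these are 2-D features extracted from labeled training images.
dog_images   = [[0.9, 0.1], [0.8, 0.2], [0.85, 0.15]]
pasta_images = [[0.1, 0.9], [0.2, 0.8], [0.15, 0.85]]

centroids = {"dog": centroid(dog_images), "pasta": centroid(pasta_images)}

def classify(features):
    """Nearest-centroid rule: always answers 'dog' or 'pasta',
    no matter how unlike either class the input is."""
    return min(centroids, key=lambda label: distance(centroids[label], features))

# An out-of-domain input -- say, features from a chest x-ray.
xray = [0.5, 0.45]
print(classify(xray))  # still one of the two trained classes
```

The model never signals “this isn’t an image I was trained for”; it just picks the nearest class. Real systems bolt on confidence thresholds or out-of-distribution detectors for exactly this reason, but the underlying learner is still confined to the labels it was shown.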
If a set of algorithms can demonstrate sufficient ability to adapt, it will be able to solve far more use cases than a narrow set can, and in so doing, command a far higher valuation.
As in business, adaptability of the machine is the key.
For this reason, perhaps a hard line could be drawn between artificial intelligence and machine intelligence – if for no other reason than that it separates the commercial impulse from the academic.
As the standard fear goes, a truly intelligent machine would realize humanity would be out to kill it, so to survive, it would have to take over Skynet and obliterate humanity first. It’s very interesting that the most commonly cited human vision of machine intelligence is one of inevitable obliteration and conflagration instead of mutually assured prosperity.
The driving force isn’t millenarianism.
The driving force is incremental growth and the higher valuations that it brings.
The way that the breakthrough will be achieved isn’t entirely clear. There’s a lot of complexity beyond learning, and the goal posts tend to shift depending on who is defining them (see this post for instance).
What is certain is that there are no brakes on this train. It’s going.