Hinton is quoted as saying, with respect to back propagation, “I don’t think it’s how the brain works”. You can read the full article here.

Back Propagation

To oversimplify, in Back Propagation the influence (the weight) of each connection is adjusted based on how much it contributed to the prediction error: connections that help produce accurate predictions are strengthened, and connections that contribute to bad predictions are weakened. This is how the machine learns.
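
As a rough illustration only (my sketch, not the author’s), here is what a single back propagation update looks like for a tiny two-layer network in NumPy: the prediction error is propagated backwards to work out each weight’s share of the blame, and the weights are nudged accordingly.

```python
# Minimal sketch of one back propagation step for a toy 3 -> 5 -> 1 network.
# All data and shapes are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)

X = rng.normal(size=(4, 3))   # 4 samples, 3 input features
y = rng.normal(size=(4, 1))   # 4 target values

W1 = rng.normal(size=(3, 5))  # input -> hidden weights
W2 = rng.normal(size=(5, 1))  # hidden -> output weights
lr = 0.01                     # learning rate

# Forward pass: make a prediction.
h = np.tanh(X @ W1)
y_hat = h @ W2
error = y_hat - y
loss = np.mean(error ** 2)

# Backward pass: propagate the error back to each weight.
grad_y_hat = 2 * error / error.size
grad_W2 = h.T @ grad_y_hat
grad_h = grad_y_hat @ W2.T
grad_W1 = X.T @ (grad_h * (1 - h ** 2))  # tanh derivative

# Update: weights that added to the error lose influence,
# weights that reduced it gain influence.
W1 -= lr * grad_W1
W2 -= lr * grad_W2
```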

And there’s a lot of optimism about Back Propagation.

It’s really useful and generates fairly predictable machines.

As data scientists, we like this.

And as data scientists, we should also like what Hinton is hinting at.

Kuhn

It’s much more likely than not that we’re approaching a local maximum on this thread of research.

I’m watching a lot of the Artificial Neural Network (ANN) literature and I’m seeing a lot more order and structure.

I look at the decision neuroscience literature and I don’t see such neat, orderly, structures. I see a lot more chaos.

That divergence, between the neat order of these elaborately laid out ANNs and the chaotic nature of organic neural networks, leads me to agree with Hinton.

Kuhn (1962) predicts that a bunch of folks who are really, really, really invested in structured ANNs with back propagation are going to revolt against a more chaotic approach.

The amount of blowback is often proportional to the amount of effort sunk into the incumbent approach.

Every time this happens, I hope we have a less traumatic outcome.

The Road Ahead

The element of reward in Reinforcement Learning (RL) is an attractive concept.

This is likely to remain.

We learn through mistakes and reward. At the root of the messy process of natural selection, life learns through reward. Failure is premature death. Reward is persistence of life.

The element of an environment is also likely to remain.
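
Purely as an illustration (this is my sketch, not anything from the article), here is the smallest version of those two elements I can think of: an environment that hands out reward, and an agent that learns from it. The example is a 3-armed bandit with an epsilon-greedy agent; all numbers are made up.

```python
# Minimal agent/environment reward loop: a 3-armed bandit.
import random

true_payouts = [0.2, 0.5, 0.8]   # hidden reward probability of each arm
estimates = [0.0, 0.0, 0.0]      # the agent's running value estimates
counts = [0, 0, 0]
epsilon = 0.1                    # how often the agent explores at random

for step in range(10_000):
    # Act: mostly exploit the best-looking arm, occasionally explore.
    if random.random() < epsilon:
        arm = random.randrange(3)
    else:
        arm = max(range(3), key=lambda a: estimates[a])

    # The environment responds with a reward.
    reward = 1.0 if random.random() < true_payouts[arm] else 0.0

    # Learn: nudge the estimate for that arm toward the observed reward.
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

print(estimates)  # should end up close to true_payouts
```

The reward and the environment live in that loop; how the learning is stored (the line that nudges the estimate) is the part I expect to change.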

The way that reward is remembered by an ANN, via back propagation, likely won’t persist.

The brain makes new connections. The brain also has some asynchronicity to it. A lot of the clues about how new connections are formed may well come from the brain. And it’s likely to be some terrible, messy structure.
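
To make the contrast concrete (my illustration only, not a method from the article or from Hinton), a more local, brain-flavoured update might look something like a Hebbian rule: each connection changes based only on the activity of the two neurons it joins, with no global error signal propagated backwards.

```python
# Hedged sketch of a local, Hebbian-style update (illustrative only).
import numpy as np

rng = np.random.default_rng(1)

x = rng.normal(size=5)             # presynaptic activity
W = rng.normal(size=(5, 3)) * 0.1  # connections
lr = 0.01

y = np.tanh(x @ W)                 # postsynaptic activity

# "Neurons that fire together wire together": no backpropagated error,
# just the local product of the two activities.
W += lr * np.outer(x, y)

# In practice a decay term keeps the weights from growing without bound.
W *= 0.999
```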

Just like the human brain, it’s not likely to be very energy efficient.

There is an impulse within data science to contain that chaos and to generate very beautifully laid out networks, instead of harnessing the chaos to inspire and learn.

The human brain has really quite poor risk management properties in its early phase, and they become really bad during adolescence. Perhaps there’s something there for how we approach machine intelligence.

When pressed, some might argue that they’d never want to develop an intelligence as imperfect as a human one. That might be a pretty big blockage right there.

The only thing I’m really sure of:

This is going to be toughest for those most invested in structured ANNs.