wrote a decent summary about the Norvig-Chomsky debate.

I don’t think we have a common understanding of how science works, in general.

Proponents of an extreme Norvig perspective say that Chomsky’s ever more complicated theories and models are proof that the traditional scientific method has failed, and that machine learning is the future. Those proponents may misunderstand how science works with models.

A model is a representation. It’s like looking at the shadow produced by an umbrella: a deliberate abstraction that lets you understand some aspect of the thing itself. And you can learn a lot from the shadow.

Why not just look at the umbrella?

An umbrella is extremely complex, and really difficult to analyze if you have no concept of metal or mechanical energy. We might not have the tools or the strength to take it apart to understand it. There’s the fabric, that neat lattice structure that allows it to expand, a tube with a spring in it, the latch that causes the umbrella to open. An umbrella is a pretty complex instrument.

Applied Prediction and Science

If there were utility in answering where a shadow is projected, and at what time, a machine learning algorithm could do it, given enough data and time. The shade produced by the umbrella is of particular applied relevance; it may be worth predicting. And the algorithm might do a very good job relating the position of the sun to the projection of shade on a surface.
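A minimal sketch of that kind of prediction, using made-up data for a hypothetical 1-metre pole (the numbers and the choice of feature are illustrative assumptions, not anything from the debate). The fitted model predicts shadow length very well, yet it encodes nothing about why shadows happen:

```python
import math

# Toy data: sun elevation (degrees) -> shadow length (metres) for a
# hypothetical 1 m pole. The "physics" generating the data is
# length = height / tan(elevation), but the learner never sees that rule.
elevations = [20, 30, 40, 50, 60, 70]
lengths = [1.0 / math.tan(math.radians(e)) for e in elevations]

# Fit a one-parameter linear model, length ≈ w * cot(elevation),
# by ordinary least squares: prediction without explanation.
xs = [1.0 / math.tan(math.radians(e)) for e in elevations]
w = sum(x * y for x, y in zip(xs, lengths)) / sum(x * x for x in xs)

# Predict the shadow at an unseen elevation.
predicted = w * (1.0 / math.tan(math.radians(45)))
print(round(w, 3), round(predicted, 3))  # w ≈ 1.0, predicted ≈ 1.0
```

The fit is essentially perfect, because the feature happens to match the generating law. The algorithm still has no concept of sun, pole, or light; it has only related one column of numbers to another.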

From the point of view of science, that’s great. But it doesn’t, on its own, add to our understanding of why the shadow happens. The machine reads the time and senses a shadow, nothing more.

A scientist, like Chomsky, knows that the more he figures out about the umbrella itself, the more he realizes he doesn’t know. He might see the shadow, and some of his knowledge about the umbrella may even be used to make bigger or better shadows, but that’s not his primary objective. He might not even focus on the umbrella, but on the thing casting the shadow.

His model grows ever more complex as he tries to explain shade by understanding why sometimes the sun shines, and sometimes it doesn’t. To say that he’s failed because now he’s talking about the exception of clouds is contemptible.

Paradigms grow ever more complicated with models over time. And then we get scientific revolutions. Those are just great. That’s how science works, or at least, that’s the best model we have for how science works right now.


There’s nothing wrong with Norvig contending that machine learning algorithms and artificial intelligence are useful. I agree. They’re incredible pieces of engineering. And, they’re remarkably useful for scientists to make predictions about the future and update their understanding of their models.

Machine learning isn’t a substitute for human ingenuity in framing the questions or building the models. (And we’re a long way off from that being so.)

Machine learning is a power tool in the belt. It isn’t a substitute for the objective itself.


I’m Christopher Berry.
Follow me @cjpberry
I blog at