Paul Graham wrote on May 9:

“Observation suggests that people are switching to using ChatGPT to write things for them with almost indecent haste. Most people hate to write as much as they hate math. Way more than admit it. Within a year the median piece of writing could be by AI. … I warn you now, this is going to have unfortunate consequences, just as switching to living in suburbia and driving everywhere did. When you lose the ability to write, you also lose some of your ability to think. I don’t have the slightest hope of averting this switch. I often tell startups it’s safe to bet on laziness, and this is one of the biggest bets on laziness in history. The switch is going to happen, and we won’t know the consequences till it’s too late. I’m not warning about the switch to AI in the hope of averting it, but to warn the few people who care enough to save themselves, or their kids. Learn to use AI. It’s a powerful technology, and you should know how to use it. But also learn how to write.”

It’s worth reflecting on.

Tools

Roger Martin once argued [1] that your stance (Who am I in the world, and what am I trying to accomplish?) guides the tools you use (With what tools and models do I organize my thinking and understand the world?), which in turn guide your experiences (With what experiences can I build my repertoire of sensitivities and skills?).

I’ve long railed against definition by tool. You define your stance. You use tools to realize your stance. You simply can’t let the tools you use define your stance. While feedback may run the other way (your experiences inform your tools, which inform your stance), you have to start with both feet planted on ground of your choosing.

Generative Pre-trained Transformers (GPTs) and Large Language Models (LLMs) are tools.

Beliefs

There’s evidence of a few beliefs in what Paul wrote:

  • It’s safe to bet on laziness;
  • Writing and thinking are linked;
  • The future is locked-in;
  • And this technology will have consequences.

I’ll unpack each one.

It’s safe to bet on laziness

In a world of unlimited ambition and limited resources, we have to be choiceful, strategic, about what we decide to do and what we decide not to do. There’s always the option to search for alternatives, and the search itself, how much creativity we’re going to put into generating options, is a decision of its own.

People trend towards satisficing decisions. Take something as mundane as mustard selection. Chances are, by now, you go by habit. You pick up mustard when you’re buying other things, and, in spite of the multiple containers staring at you from that shelf, you usually pick up whatever you picked up before. Typically, you only start exploring your options if you’re dissatisfied [2]: maybe the price has gone up too much, the package has shrunk, or you remember that the previous mustard just wasn’t cutting it.

Given the expense of changing a habit, and our satisficing tendencies, betting on laziness is a pretty safe bet.
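The mustard loop is simple enough to sketch in code. This is a toy illustration of satisficing in March’s sense, not anyone’s actual algorithm: the names, scores, and the `aspiration` threshold are all made up for the example.

```python
# Toy satisficing: stick with the habitual choice unless it falls below
# an aspiration level; only then search, and stop at the FIRST option
# that's "good enough" -- not the best one on the shelf.

def choose(options, habit, satisfaction, aspiration=0.6):
    """options: iterable of names; satisfaction: name -> score in [0, 1]."""
    if satisfaction(habit) >= aspiration:
        return habit  # no search at all: the habit still satisfices
    for name in options:  # search begins only after dissatisfaction...
        if satisfaction(name) >= aspiration:
            return name  # ...and stops at the first acceptable option
    return habit  # nothing acceptable found; keep the habit anyway

scores = {"house brand": 0.5, "dijon": 0.7, "stone ground": 0.9}
pick = choose(scores, habit="house brand", satisfaction=scores.get)
# pick is "dijon": it clears the aspiration level, so the search stops
# before ever evaluating the higher-scoring "stone ground"
```

Note that an optimizer would scan everything and take `max`; the satisficer’s whole advantage is that it usually returns before searching at all.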

Writing and thinking are linked

I find it difficult to keep more than seven things in my mind at the same time. I rely on lists, diagrams, and writing to make connections and understand complexity. They’re great tools. And so, I only really know what I think about something complex when I organize it, see the patterns, and draw something simple enough out of it.

I suspect that some people are able to hold a lot more than seven things in their minds at the same time, and that some people might only hold a handful of things. Most things in nature are a lot more complicated than seven things, so, writing, as a tool, is pretty useful.

Writing is a tool for thinking.

The future is locked-in

Hard lock-in is often unintentional [3]. An entrepreneur is rarely aware that their fingerprints will become the labyrinth that employees will still be stumbling through decades later. Most English speakers still use the QWERTY keyboard, don’t they?

I can trace a general pattern in the NLP community from the mid-2010s to today. Either a researcher had access to a large language model, or they didn’t. The names of the platforms and the number of parameters changed over time, but the core research questions didn’t seem to evolve nearly as rapidly. If you had access and the funding to use it, you engaged in optimization, because that’s the path of least resistance. If you didn’t, you had to get creative, because the easiest path was closed. Scientific paradigms [4] create their own insidious lock-in.

I’ll over-simplify the present paradigm as Add-More-Layers. There’s benefit to the approach: more layers have certainly optimized (often convenient) metrics. Adding more layers was always inevitable. It’s the obvious path.

Off to the side, though, anomalies are accumulating. And there are a few researchers over there, studying those pieces and whispering to each other. It takes a while for them to synthesize a new paradigm. But it’s inevitable that they will. They always do. And there’s always a consequence.

This technology will have consequences

You take the good with the bad.

The discovery that a little bit of diesel fuel vapour will push a piston changed our lives. And yet, the source of that fuel, and its byproducts, have had consequences. It isn’t too difficult to see that each new technology brings greater benefits, and greater consequences. LLMs are a bit like that. I don’t think they’re to information processing what the diesel engine was to industrialization. They remind me more of the first engines, before we learned good metallurgy and materials science.

With LLMs, you’ll have to take the good with the bad.

Paul’s apparent concern is that the consequence of the technology is that it’ll make people less capable of thinking. That’s probably valid. Writing is hard work.

Natural General Intelligence is quite a bit harder to quantify than Artificial General Intelligence. That quantification challenge sure does seem central to so many problems, doesn’t it?

What to do?

Paul’s advice is good.

Learn the tool, but also learn how to write on your own.

Do what you need to do. It’s in your own best interest to make your mind.

So make it.

[1] Martin, R. L. (2009). The opposable mind: How successful leaders win through integrative thinking. Harvard Business Press.

[2] March, J. G. (1991). How decisions happen in organizations. Human-Computer Interaction, 6(2), 95–117.

[3] Thelen, K. (2004). How institutions evolve: The political economy of skills in Germany, Britain, the United States, and Japan. Cambridge University Press.

[4] Kuhn, T. S. (1962). The structure of scientific revolutions. University of Chicago Press.