Paul Graham wrote on May 9:  “Observation suggests that people are switching to using ChatGPT to write things for them with almost indecent haste. Most people hate to write as much as they hate math. Way more than admit it. Within a year the median piece of writing could be by AI. … I warn you now, this is going to have unfortunate consequences, just as switching to living in suburbia and driving everywhere did. When you lose the ability to write, you also lose some of your ability to think. I don’t have the slightest hope of averting this switch. I often tell startups it’s safe to bet on laziness, and this is one of the biggest bets on[…]

The whole thing, all of it, depends on optimism. Optimistic expectation is a natural force generated by humans and amplified by the networks that humans create. At the core of the entire traditional liberal paradigm, since the Enlightenment, is the expectation that things will be better in the future. If things get better at a rate of just 2% per year, compounded annually, things get twice as good in just 35 years. If things get better at just a little bit more than that, 3.6%, things get twice as good in just 20 years. We’ve come to expect things to become better, dependably and predictably. The Enlightenment is an important event to call out. It’s way easier to shrug one’s[…]
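Those doubling times fall out of the standard compound-growth formula: a quantity growing at annual rate r doubles after ln(2) / ln(1 + r) years. A minimal sketch, checking the two rates above:

```python
import math

def doubling_time(rate: float) -> float:
    """Years for a quantity to double at a given annual growth rate,
    compounded annually: solve (1 + rate)**t == 2 for t."""
    return math.log(2) / math.log(1 + rate)

print(round(doubling_time(0.02), 1))   # 2% per year  -> 35.0
print(round(doubling_time(0.036), 1))  # 3.6% per year -> 19.6
```

At 2% the doubling time is almost exactly 35 years, and at 3.6% it is just under 20, matching the figures in the paragraph above.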

The other day I likened the process of taking apart a Job To Be Done to taking apart a lobster. There’s a very effective way to decompose any problem with enough energy. And then I watched The Founder on Netflix and admired the McDonald brothers using a classic technique in management science to refine a system on a tennis court. And I loved it. They really refined hamburger and french fry delivery. And then this morning I read that Andrew Ng is working on a new Coursera course for AI. And I’m thankful for his initiative and optimism. Out of those three threads, this one post.

The Assembly Line

The assembly line was an American invention for Americans. It could[…]

You’re going to hear a lot more about Artificial Intelligence (AI) more generally, and Machine Intelligence more specifically. Valuation is the core causal factor. Here’s why: we’ve gotten pretty good at training a machine on niche problems. Machines can be trained to the point of replacing a median-skilled, low-motivation human in many industries. Sometimes they can make predictions that agree with a human’s judgement 85 to 90% of the time, and sometimes it’s the human who accounts for the bulk of the disagreement with the machine. We’re confident that we can train a machine to learn a very specific domain. And these days we’re in the midst of that great automation revolution. Most of the organizations that build those machines can[…]