Generative Pre-trained Transformers (GPTs) have captured the public’s imagination. There’s a lot of fear. The relevance of that fear, as always, depends on who you are.

The technology first caused a surge of panic in December 2022 among some communications professionals. They fear a massive increase in synthetic media, along with all of the misinformation that comes with it. Because attention is inelastic, a surge in the supply of content will collapse its price, and take their wages down with it.

They aren’t the only ones likely to be affected [1]. It’s curious that you don’t read much, publicly, from developers about their experiences with the technology.

GPTs are a tool. A good GPT is capable of displacing a pretty terrible writer or developer. Of course, those on the margins are always under threat from substitutes and are more likely to experience the consequences of minor macroeconomic tremors.

The technology, presently, is not capable of displacing an excellent writer or an excellent developer. Could it someday? Maybe. Perhaps GPT-5 could? I haven’t tried it, so I don’t know. What about the effect of GPT on those who are excellent?

Learning is accelerated by feedback. All things equal, the faster the feedback, the faster the learning. This applies to humans and neural networks alike. A good writer using GPT can become excellent. A bad writer using GPT can imitate their way into becoming a decent one. I don’t know what a GPT does in the hands of an excellent writer. Maybe it makes them more productive?

I’m not sure how many people are using GPTs as a learning tool, and how many are using them to outsource their job to a machine. It’s hard to tell, and I can’t estimate how large the cohort of learners truly is from readily available data. The truth table on self-improvement is the same today as it was before December 2022, though. The only way you’re going to succeed is if you learn, so you might as well try to learn.

I’m certain that you’re about to experience an increase in advertising personalization [2].

I’m certain that this technology will accelerate the creation of synthetic audiences.

A lot of personalized, generative advertising will be based on your behaviour – what you click, what you follow, and what you create. And all of those signals about who you are – your age, gender, location, household income, and marital status – will be used. Marketers who make good predictions and learn from their mistakes will excel. Those who don’t will fail. In the very short run, a few companies will surge ahead. And then, in the middle run, it starts to get adversarial.

What if those signals were to become contaminated? Not merely obfuscated Apple-style [3], or encrypted, but active contamination injected into GPTs?

What happens when many of the signals about you are generated by GPTs? What happens if you hire a layer of protection for your attention – bots with a GPT at their core? The training data that feeds GPTs ends up poisoned. Welcome to the wonderful world of adversarial machine intelligence, in which consumers and companies compete to obliterate the commons. After all, if firms are going to leverage all the data regardless of copyright, then it’s a state of nature. It’s all against all.
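To make the contamination concrete, here is a minimal, hypothetical sketch in Python. It models a marketer estimating a user segment’s interest from click signals while GPT-driven bots inject junk clicks. The distributions, the 10% bot share, and every variable name are assumptions for illustration, not measurements of any real system.

```python
import random
import statistics

random.seed(42)

# Genuine interest signals: clicks per session for a user segment,
# centered around 2.0 (an assumed, illustrative distribution).
genuine = [max(0.0, random.gauss(2.0, 0.5)) for _ in range(900)]

# Bot-injected signals: a hypothetical GPT-driven "attention shield"
# emitting junk clicks, modeled here as implausibly high counts.
bot = [random.gauss(10.0, 1.0) for _ in range(100)]

# The marketer only ever sees the mixture.
observed = genuine + bot

print(f"true mean interest:     {statistics.mean(genuine):.2f}")
print(f"contaminated mean:      {statistics.mean(observed):.2f}")

# A robust estimator (the median) largely shrugs off 10% contamination,
# which is why the contest turns adversarial: bots must look genuine.
print(f"median of contaminated: {statistics.median(observed):.2f}")
```

The naive mean drifts well above the true interest level, while the median barely moves – which is exactly why the next move in the game is bots that mimic the genuine distribution, and counter-moves that try to detect them.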

And it gets more interesting.

Consider facts. A fact is a string of text that represents a truthful description of reality.

Is there still a demand for facts? Well, I still demand facts. So there’s at least a segment of one (Yippee?). I know at least one other person who demands facts, and we talk, so that meets the definition of an addressable market. And there has to be at least one supplier of facts for there to be an operating market. So, maybe it’s safe to assume that there will be strong demand for facts in some form or another.

Facts are expensive to create. No, really, there’s a lot of work that goes into creating a fact. Somebody has to organize reality enough to be able to record it, and usually cross-reference it with somebody else who has organized or recorded reality. A fact has to be thought about for it to be proven. I can’t see how GPTs change the supply-side economics of creating a fact. Maybe they help a journalist compose a story or a scientist write a grant, but they don’t, unto themselves, generate facts.

GPTs create content. All facts are content, but not all content is fact. GPTs are very unlikely to make fact generation any cheaper. And here lies the crux. If there is a demand for facts, then there’s a demand for trust. Which technologies enable trust? Who do you trust with your attention?

The classical response by centralized institutions is to increase centralization. After all, whoever controls the information networks controls the levers of power (this has been true since as early as 1876!) [4]. The effort to centralize is predictable. And it’ll fail, because public trust in most of those institutions is already extremely low [5] and capital-intensive to reverse. And there is widespread denial, within centralized institutions, that trust has eroded so much. The route to the freedom to make up your own mind with facts won’t ever run through centralized institutions, authoritarian, quasi-independent, or otherwise.

The information itself will need a trail – from the point of origin to your device – that can’t be altered. And, it’ll have to be auditable, meaning, anybody can independently inspect it. While it could be useful for contemporary institutions to participate as nodes, their days of centralized gathering and distribution are long gone.
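One minimal way to read “a trail that can’t be altered, and that anybody can inspect” is an append-only hash chain, the primitive underneath audit logs and blockchains. The Python sketch below is illustrative only; the record fields and function names are my own invention, not a reference to any particular system.

```python
import hashlib
import json

def chain_append(log, record):
    """Append a record to a tamper-evident log: each entry commits to
    the previous entry's hash, so an edit anywhere breaks every later link."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = json.dumps({"prev": prev_hash, "record": record}, sort_keys=True)
    entry = {"prev": prev_hash, "record": record,
             "hash": hashlib.sha256(body.encode()).hexdigest()}
    log.append(entry)
    return log

def verify(log):
    """Anyone can independently re-walk the chain and confirm nothing changed."""
    prev_hash = "0" * 64
    for entry in log:
        body = json.dumps({"prev": prev_hash, "record": entry["record"]},
                          sort_keys=True)
        if entry["prev"] != prev_hash:
            return False
        if entry["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev_hash = entry["hash"]
    return True

log = []
chain_append(log, {"source": "reporter", "claim": "event observed at 09:14"})
chain_append(log, {"source": "wire", "claim": "syndicated"})
print(verify(log))   # the intact chain verifies

log[0]["record"]["claim"] = "event observed at 11:30"  # tamper with history
print(verify(log))   # every later link is now broken
```

The point is the trust model: verification is cheap, requires no central party, and any institution can participate as just another node that appends and re-checks entries.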

In sum, as the cost of producing and distributing mistruth, for political and privacy purposes among many others, collapses, while the cost of producing accurate facts remains static, one can predict an explosion of synthetic media chasing synthetic audiences. And this might be okay.

Or we might find another way.

[1] Eloundou, T., Manning, S., Mishkin, P., & Rock, D. (2023). GPTs are GPTs: An early look at the labor market impact potential of large language models.

[2] Urban, G. L., Liberali, G., MacDonald, E., Bordley, R., & Hauser, J. R. (2014). Morphing banner advertising. Marketing Science, 33(1), 27-46.


[4] Wu, T. (2011). The master switch: The rise and fall of information empires. Vintage. p. 320.