I want to believe that the current generation of applications powered by Large Language Models (LLMs) doesn’t represent the height of the state of the art for prediction machines, and that no single firm will reach 80% market share and go on to dominate the generative era.

I want to believe that the future is quite open, and that the early gains we’ve made in applied machine learning can compound.

It’s because I want to believe so much that it’s worth questioning the assumptions.

What might cause the future to become closed?

Does OpenAI scan its API logs for good ideas?

In 2023, a surge of founders developed skins for OpenAI’s ChatGPT. Some based their startup on enabling a user to upload a PDF and get a response from ChatGPT. Their strategy was to integrate a commodity technology, in this case PDF upload and extraction, and structure a POST request to OpenAI’s API. It’s all very curious. Was it a blunder? Is there a Judas Goat somewhere in that flock? What’s going on there?
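To make the mechanism concrete, here is a minimal sketch of what such a skin might look like, assuming the pypdf library for extraction and OpenAI’s public chat completions endpoint; the file name, model choice, and prompt framing are illustrative assumptions rather than any particular startup’s product.

```python
# A hedged sketch of a "PDF skin": extract text with a commodity library,
# then structure a POST request to OpenAI's API. Names and parameters are
# illustrative assumptions, not a specific product's code.
import os

import requests
from pypdf import PdfReader


def ask_about_pdf(pdf_path: str, question: str) -> str:
    # Commodity step: pull plain text out of the uploaded PDF.
    reader = PdfReader(pdf_path)
    text = "\n".join(page.extract_text() or "" for page in reader.pages)

    # Structure a POST request to OpenAI's chat completions endpoint.
    response = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-3.5-turbo",  # assumed model choice
            "messages": [
                {"role": "system", "content": "Answer using only the provided document."},
                {"role": "user", "content": f"{question}\n\nDocument:\n{text[:12000]}"},
            ],
        },
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]


if __name__ == "__main__":
    print(ask_about_pdf("example.pdf", "What are the key findings?"))
```

The point isn’t the code; it’s how little of it there is, and how visible every one of those requests is in OpenAI’s own logs.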

It doesn’t seem to be a great strategy, given its vulnerability to OpenAI’s capital, attention, and tenure. The team at OpenAI could pretty easily observe the unmet need by noticing the API’s usage patterns, and just imitate the feature, meet the need, and take the margin. It’s a convenient way to do it. I suppose convenience is almost always a great bet, and maybe there’s some kind of longer-tail arbitrage play here. Since the internal incentives at OpenAI are aligned for surveillance, it’s likely that they engage in surveillance. I’d guess, based on the incentives, that they scan to imitate.

There’s an issue of trust. If one participates in the ecosystem, invests capital, and is then surveilled and imitated out of the market by OpenAI, what is the incentive to participate? On the other hand, should OpenAI not improve its product for the benefit of its shareholders and customers? This is the crux of the dissonance. In the short run, OpenAI needs an ecosystem of developers to wick complexity away from its core, accelerate adoption, and dominate the sector. But many entrepreneurs do not appear to anticipate OpenAI’s logical roadmap. Theirs doesn’t appear to be a defensible strategic position, even in the short run.

What’s the short run, anyway?

Many didn’t believe that Dropbox was particularly defensible in the short run either. But they played and won for a while. That’s how it works, doesn’t it? Lower a barrier and get a cookie. Somebody observed that transferring large files involved several barriers. They looked at how regular people moved files using a graphical user interface. They adopted that design pattern. No command-line interface required. Orchestrating that on the back end wasn’t trivial. A lot of effort goes into creating elegant products. They saw an unmet need and met it.

The critics argued that they could transfer large files with just a few commands. It was easy and trivial. And what Dropbox had done was easy and trivial. And yet… the general public wasn’t learning those commands. Most people live their digital lives without ever encountering a terminal. There’s a barrier.

That’s the thing with barriers: it can’t be just any barrier. A need has to exist beyond the barrier. The bigger the need and the bigger the barrier, the bigger the reward for lowering it. Dropbox lowered the barrier to moving large files. Lots of entrepreneurs lowered the barrier to uploading PDFs into ChatGPT.

And within a few quarters, OpenAI dropped that barrier. Was that fast? Or was that strangely slow? It depends on your beliefs about markets.

I still can’t shake the belief that markets are hyper-efficient. To this day, in spite of all the evidence to the contrary, I still over-estimate how fast they are. This belief is countered daily when I’m disturbed to see senior citizens rummaging through public trash cans for carelessly discarded recyclables. Not all of them have to do it. Some are motivated by independence. Just as there are sticky margins in those trash cans, there are sticky margins at the margins of ChatGPT. Markets may be efficient, but not so efficient that margins sublimate instantly. Somebody has to throw away the bottles. Those bottles have to remain in the trash can long enough for the independent elderly to pick them up. There’s lag, inefficiency, built right into the fabric of spacetime itself.

In some ways, ChatGPT’s advantage could be just as sticky as the soda-soaked bottles in the trash. Right now, in November 2023, OpenAI has lowered the barriers to engaging a Large Language Model. Before ChatGPT, only a few people had the skills and capital to engage one. Now, anybody who can afford Netflix can afford ChatGPT.

Some people use it as a thinking tool, and other people rely on it to think for them. Leaning on ChatGPT to do their thinking, to filter information, is pretty convenient. Many convenience-value users aren’t investing a whole lot of effort into figuring out how to use the tool. Convenience is a great bet because customers who prioritize minimizing effort aren’t likely to spend effort searching for an alternative.

Those alternatives are flooding in. Some come from the other tech giants. There are competing solutions that run in the cloud. And then there’s everybody who isn’t in big tech. In the short run, OpenAI is in a good position.

I want to believe that Open Source Software (OSS) is too.

Open Source Software and the networks it fosters

Then there is a lot of Open Data training a lot of Open Source Software (OSS) that runs on local machines. The models are getting cheaper to train and small enough to run on smaller machines. Moreover, raising an intelligence is better with a large, supportive community. So there’s an organic advantage to raising an LLM in a community. ChatGPT is vulnerable to the dual forces of radical distribution and the decentralization of machines and OSS.
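As a rough illustration of how low that barrier has become, here is a sketch of running a small open-weight model on a local machine with the Hugging Face transformers library; the specific model identifier and generation settings are assumptions for illustration, and any similarly small open model would do.

```python
# A minimal sketch of running a small open-weight LLM locally.
# The model identifier and sampling settings are illustrative assumptions.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # assumed small open model
)

prompt = "In one paragraph, why do open-source language models matter?"
output = generator(prompt, max_new_tokens=120, do_sample=True, temperature=0.7)
print(output[0]["generated_text"])
```

A few lines of Python and a consumer GPU, or a patient CPU, is a long way from needing a dedicated research team, which is the point about decentralization.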

One perspective holds that OSS is a great social good because it accelerates the commodification of a novel technology. It’s a catalyst that lowers costs, and barriers, for everyone. Another perspective holds that OSS is a great weapon against competitors. Why not attack opponents by accelerating the commodification of the technology they’ve invested in? There’s truth in both perspectives. It wouldn’t be the first time that Google and Meta nurtured OSS for both reasons at once. Google arguably did so with Android, using it to reduce Apple’s market share on mobile (and to preserve search advertising).

Meta may have both motivations with respect to LLMs. Llama is a part of the LLM OSS ecosystem, but it isn’t alone. Meta’s broader influence may be magnified, since a relatively small investment can have multiplier effects as the capital circulates through the ecosystem. A lot of good happens when OSS is supported and grows [1].

Tight Money, OSS, and Competition

This is the era of tight money and fragmentation, when war, climate pressure, and de-globalization are driving interest rates higher. Money is more expensive. The fragmentation of the Internet is a decentralizing force. Inflation has prompted central banks to raise interest rates to foster demand destruction. The re-industrialization of North America, driven by a desire for certainty and secure supply chains, competes for capital that would ordinarily flood into tech.

I don’t know how tight money will affect OSS activity. I couldn’t find anything on the relationship between OSS activity and interest rates. Even the term activity itself seems nebulous as a measure. It likely varies with each contributor’s individual motivation, and there are likely many conflicting effects. It could be that the most curious, most passionate people in tech contribute to OSS, and that the most curious and passionate people often end up becoming the best. How that population ebbs and flows, how communities emerge from the foam of the waves, forming beautiful weaves only to dissolve quickly, is hard to see. How big are the waves?

OSS is a fantastic accelerator of networks, network density drives specialization, and specialization drives productivity growth, so there’s a possibility here that OSS contribution rates are both a cause and an effect, circular, with confusing paths woven within the loop.

I want to believe in a pattern where OSS becomes the dominant method of technical advancement and reinforces itself toward the positive, and that its wobbles are akin to tossing a brick into a running washing machine. That’s a lot of words to explain why I don’t know the relationship between tight money and OSS activity.

There are many interesting counter-responses. One reaction to watching decentralizing forces accelerate the commoditization of Artificial Intelligence would be to use government authority to restrict competition. Regulatory capture would be too blunt a way of putting it. Successfully steering collective choice towards collectively beneficial decisions, given the perception of existential risk, would be a fluffier way of stating it. There may be genuine concern about how AI might be used. The returns in this space compound, and setting up a way to fund OSS activity with a flow of those returns would address a few of those concerns, if you could manage it.

OSS is not convenient. Running your own LLM is still more complex than running your own server. The barriers to running your own continue to come down, though perhaps not at the rate that barriers came down in the nineties. The barriers are still not nil. It hasn’t been Dropboxed yet. And so the general public generally doesn’t run its own web servers. The vast majority hire a major platform to do their publishing for them. Mobile apps are simply more convenient for the job they’re hired to do. The barriers to self-hosted and self-managed LLMs are coming down, to be sure. They’ll be part of the future, but in the mid-run, they aren’t the future.

Convenience is a mega-trend and it produces its own inertia.

Are LLMs as good as it gets?

There’s a belief that LLMs, on the current generation of technology, are about as good as they’re going to get. OpenAI is using its capital advantage to brute-force the solution. Brute force is a strategy. It may even be a strategy that is locked into OpenAI’s operating culture.

Given the volume of publications explaining different aspects of LLMs, and inventing new metrics to justify publication, it’s very likely that LLMs, as a class of models, will keep improving, but at a slower pace. There are still a few surprises left and a few hyper-parameters to be tuned. LLMs are likely to be with us as a tool in the medium term.

LLMs are unlikely to be as good as narrow machine intelligence gets. There’s another paradigm waiting to be discovered. The environment isn’t ready for some of the ideas that are already in print. We’re tripping over the concept of consciousness.

I have yet to read a great theory for how that awareness is manifested in anything. I’d love to read a compelling one. There’s something curious happening in your brain as you read these words. Unlocking that mystery could be at the root of artificial consciousness. At the same time, humans fly all the time and they didn’t grow wings. Machines may be pushed to evolve consciousness someday but they probably won’t grow skulls. There’s something beyond our knowledge, so in the short-run, we’ll have to keep looking for that paradigm.

A Desirable Future

In a way, the winner-take-all dynamic of big tech is unfortunate. Every ecosystem attracts predators, and we’re better off when there’s competition. I have yet to be convinced that two decades of the domination of information retrieval by a single firm has been optimal. I’m not convinced that the domination of generative AI by a single firm will be great either. I’d like to believe that a better path runs through OSS, but it’s awfully inconvenient. I don’t yet understand how macroeconomics affects OSS talent and capital flows, so it’s hard to forecast how this era of tight money is going to affect Generative AI development.

Wouldn’t it be great if the future was more decentralized and developed faster?

Sources

[1] Ye, Y., & Kishida, K. (2003, May). Toward an understanding of the motivation of open source software developers. In 25th International Conference on Software Engineering, 2003. Proceedings. (pp. 419-429). IEEE.