
Filter foam
Last month, in Free speech and free deliberation, I argued that information-rich free speech is the best kind of free speech because it enables free deliberation.
The core tension is that in the short run, the return on misinformation is high. In the medium run, the cost of misinformation is ruinous.
Given that most social media and information retrieval platforms have embraced misinformation contrary to their own medium-run interests, how might we, as individuals and in communities, remain open to receiving information while protecting our freedom of deliberation in the short run?
In this post, I'll argue that one potential set of short-run measures involves the creation of filter foam. Rather than absorbing the externalities of misinformation, amplified by centralized algorithms on centralized distribution networks, what if a self-empowered, federated approach, a set of filter bubbles, a foam, were a better response?
Centralized Distribution: It’s never different this time
There's an entire supercycle that goes like this: inventors create a new medium, like print, film, radio, or the Internet, then a bunch of innovative enthusiasts go in and open it up, and there's an explosion of democratic, decentralized activity. For a while, anything seems possible. Then, inevitably, distribution is consolidated, regulated, and the medium is throttled (Wu, 2011). Every time, the innovators say that it'll be different this time. Every single time, the usual suspects show up and consolidate it. The state is only too happy to participate for its own reasons (Scott, 2020). It's never different.
Later, a group of inventors creates a new medium, and they swear that this time, it'll be different.
In many countries, media outlets in particular, and creative industries in general, are regulated by the state, the audience, the marketers, the insurers, and the tangle of norms, incentives, and understandings that they weave together. It isn't quite all, entirely, censorship. Censorship, strictly defined, is when the state intervenes to prevent the distribution of some content through a channel. The American federal state did this through various boards for film and print, and through the USPS censoring the mail for content. The state doesn't even need a dedicated organ to monitor and censor the channels: often the threat of consequences is enough to induce self-regulation.
Not all filtering is censorship. There are ways that the content flowing through distribution channels is modified and controlled by participants. For example, in advertising-based video on demand (AVOD) contexts, the advertisers shape what is distributed and what isn't. In subscription video on demand (SVOD) contexts, the audience shapes it, either affirmatively through their engagement or negatively through their churn. The audience produces the signals, and executives are supposed to be able to read their responses and make decisions accordingly.
Both are their own form of control.
This frustrates the creative side of the creative class because they’re triply throttled: first by access, then by content, and then by attention. The business side of the creative class experiences it differently: they like the margins that only centralized audience attention can generate, but they sure don’t like paying those margins to buy attention from other centralized platforms.
Centralized distribution
One response to a power gradient is to grade it. You level it as you would any gradient. You chop it up and spread it out.
A social distribution platform has five fundamental faces: one for consumers, one for advertisers, one for creators, one for its operators, and, in most countries, one for the state. Consumers experience a list of content and perhaps a few controls, creators get tools to assist them in publishing, advertisers get options for placement and targeting, the state gets to see like a state, and the operator gets to arbitrage it all.
Let’s take it from the consumers’ perspective.
If, as a consumer, you want to reduce the amount of misinformation, abuse, and noise in the sequence of content that you see in the window, what are your options?
You can ask the centralized platform to place better controls or shape better incentives for creators. They’ve chosen to go a different direction with that. So that’s out.
You can starve the centralized platform of your attention by joining another platform. It’ll be different this time. And some people are doing that. That’s fine. Let’s see how that works out.
The key conflict here is the misalignment between your interests and the operator of the centralized platform. Because they're either unable or unwilling to align, you have to align them for yourself. It follows that you'd need to separate the client you use from the platforms themselves. That would enable you to make decisions that are aligned with your interests. Your own, owned platform would be aligned with your own interests.
The application of that alignment is a filter.
That's because your interests are always there and, to one degree or another, always filter what and how you understand. If you're like the 1 in 10 Americans who don't understand satire, then the temporary state of confusion you feel when you encounter something that isn't real is what you know about satire. You're learning that sometimes people deliberately generate fiction for some reason you don't quite understand just yet, but you've definitely learned to mistrust the fake news at The Onion. You aren't interested in satire. You've set up a filter for satire.
If you’re like a lot of people, you might not want to experience distress, or feel manipulated into feeling worse, by any content creator or platform. So you learn to ignore certain programs and even mediums. You aren’t interested in manipulation.
The fact that you have interests isn’t novel. They’ve always been there.
The technology to align decisions about the filter with your own interests is novel. And because you contain multitudes, the number of filters ought to reflect that. As a result, I’m talking more about a foam than a singular filter bubble.
Your filter
You'd have an app that you'd own. It would be on the devices where you consume newsfeeds. It would have a technology, potentially an agent or a community of agents, that would monitor content sources, construct a feed from them, and then watch your responses. It would learn.
You could, potentially, enable it to signal back to other platforms your engagement. Or, you may instruct it to be entirely silent. Or, to deliberately engage in deception in an adversarial manner. More on that later.
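To make that concrete, here is a minimal sketch of such an agent, with every name, field, and threshold invented for illustration rather than taken from any existing product: it monitors sources you choose, scores items against preferences learned from how long you actually dwell on things, and carries a setting for whether engagement is mirrored upstream, withheld, or faked.

```python
from dataclasses import dataclass, field
from enum import Enum


class SignalMode(Enum):
    """How the agent reports engagement back to upstream platforms."""
    MIRROR = "mirror"   # pass your real engagement back upstream
    SILENT = "silent"   # report nothing at all
    DECOY = "decoy"     # report deliberately misleading engagement


@dataclass
class FilterAgent:
    sources: list                                   # feeds or handles the agent monitors
    preferences: dict = field(default_factory=dict) # topic -> learned weight
    signal_mode: SignalMode = SignalMode.SILENT

    def score(self, item):
        """Score one item by summing learned weights for its topics."""
        return sum(self.preferences.get(topic, 0) for topic in item.get("topics", []))

    def build_feed(self, fetched_items):
        """Rank fetched items against learned preferences; drop anything negative."""
        ranked = sorted(fetched_items, key=self.score, reverse=True)
        return [item for item in ranked if self.score(item) >= 0]

    def observe(self, item, dwell_seconds):
        """Learn from how long you actually spent with an item."""
        delta = 1 if dwell_seconds > 30 else -1
        for topic in item.get("topics", []):
            self.preferences[topic] = self.preferences.get(topic, 0) + delta
```

The point of the sketch is the shape, not the scoring: the loop of fetch, rank, observe, adjust is the whole product, and the signal mode is a single, user-owned switch rather than a platform policy.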
We know from a lot of user testing that the obstacle isn't the technology itself. People hate dials, knobs, switches, and forms. They don't want to fiddle with a recommender engine to get it all just right. That's what they were arguably hiring TikTok, Facebook, YouTube, and Twitter for: to know what they want. We don't want a lot of friction. That hasn't changed.
What has changed now is that it's comparatively more convenient, though not necessarily easier, to train a set of agents over time than it is to spend a lot of time onboarding a recommendation engine. The chat box design pattern is attractive to consumers because it's how they engage other people already. All the tensions and nuances of working with a computer are still present, and perhaps aggravated, when using a chat interface. However, it is a more convenient interface for consumers to use. The friction is still there. It's just graded over a much longer time so you don't abandon the process.
Moreover, you aren’t static, so the idea of co-evolution seems better aligned with how you grow. We tend to get to know one another through conversations over time, so it follows that we’d get to know ourselves, reflecting through an agent, over time.
The most basic of the filters, a content filter, could be built this way. It's a gradual accumulation of filters that you're applying, yourself, in conjunction with the agent. So, if you don't want to hear about a specific topic, you'd see less of that topic. If you don't want to hear from a specific source, you'd see less of that source. If you don't want to read long-form articles after 10pm, just infographics and video, then you wouldn't see long-form after 10pm.
That would be fine for a synthetic monologue.
Congratulations, it’s just a layer on top of RSS.
This would control the one-way flow of information, and for the majority of newsfeed users, who do not engage with content through likes, comments, or subscriptions, that is sufficient.
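As a sketch of what that layer on top of RSS might look like, assuming items arrive as plain dicts from an RSS-style fetcher and with every name below being illustrative, the accumulated filters could be simple predicates the agent collects on your behalf:

```python
from datetime import datetime, time

# Items are assumed to be plain dicts out of an RSS-style fetcher, with
# "topic", "source", and "format" fields. Each rule is a predicate; the
# feed keeps only the items that pass all of them.

def mute_topic(topic):
    return lambda item, now: item.get("topic") != topic

def mute_source(source):
    return lambda item, now: item.get("source") != source

def short_form_after(cutoff=time(22, 0)):
    """After the cutoff, only infographics and video get through."""
    def rule(item, now):
        if now.time() < cutoff:
            return True
        return item.get("format") in {"infographic", "video"}
    return rule

def apply_rules(items, rules, now=None):
    now = now or datetime.now()
    return [item for item in items if all(rule(item, now) for rule in rules)]

# Example: the rules accumulated after a few weeks of conversation with the agent.
my_rules = [
    mute_topic("celebrity gossip"),
    mute_source("example-outrage-farm"),
    short_form_after(),
]
```

Nothing here requires a recommendation engine; the rules are legible, reversible, and owned by the person they filter for.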
Federated filtering
What about the social aspect of social media? Does this mean that you get to see what your friends are seeing? Does this filter the signals of your engagement away from content creators (assuming a centralized platform would even let those signals flow back to creators in the first place)?
I prefer to see whole people because interesting people tend to be interesting across domains. I like hearing about zoning bylaw issues in San Mateo, the latest developments in Large Language Models, and opinions about American Primeval on Netflix, all from the same account. Most don't. They only want to hear about American Primeval, or Entertainment, or Large Language Models. It's about OR selection, not AND selection.
This is to say, most content consumers sit somewhere between paying attention to content attached to personalities and paying attention to specific content with a particular attribute. Quite a few consumers simply want to know what's popular and trending because they engage in trend-following. They need other people to make up their minds for them. Some creators simply want to know what's popular and trending because they too want to be popular and trending. Many consumers care deeply about such signals. The essence of commercial creativity is getting the balance between relevance and divergence just right. Majority audiences want the same story told again and again, just the tiniest bit different each time.
A centralized platform aggregates and amplifies these common signals. The social scientists at ByteDance understood this better than anyone. After all, there’s utility in fitting in.
There’s also the relationship between the audience and the creator. Sometimes this is mediated in meatspace, through direct contact. One day you’re anonymous and the next there’s a mob of fans excited to see you. And sometimes, you only know you’ve created something of value because there’s an amplified response. You go viral!
As a consumer, I may want a person, artist, an author, a developer, a researcher or a commentator to know that I enjoyed what they created.
But I may not want the publishing platform to know that I saw what they did and approved of it, because I do not trust that platform not to use that information as leverage against me. In fact, I may have such a relationship with a centralized social media platform that I want them to have a fun-house mirror image of my interests so as to interfere with their monetization efforts. This is the kind of misinformation that is truly weaponized, and a predictable end-state of trust erosion: it becomes adversarial.
In this way, through the manufacturing of false interests and deliberate oversampling of the content space, it may be possible to route contextual information to friendly agents while routing misinformation to unfriendly agents.
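A hedged sketch of what that routing might look like, with every function name and parameter invented here for illustration: genuine interests go to trusted recipients, while everyone else receives a profile deliberately oversampled from the wider topic space.

```python
import random

def decoy_interests(real_interests, topic_universe, oversample=5):
    """Drown genuine interests in a much larger random sample drawn from the
    whole topic space, so the receiving platform can't separate signal from noise."""
    pool = sorted(set(topic_universe) - set(real_interests))
    k = min(oversample * len(real_interests), len(pool))
    return list(real_interests) + random.sample(pool, k)

def interests_for(recipient, real_interests, topic_universe, trusted_recipients):
    """Route real context to friendly agents, a fun-house mirror to everyone else."""
    if recipient in trusted_recipients:
        return list(real_interests)
    return decoy_interests(real_interests, topic_universe)
```

Whether doing this is wise, or even permitted by a platform's terms, is a separate question; the sketch only shows that the asymmetry is technically cheap to produce once the filter sits on your side.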
Brand Safety
Won’t somebody think of the sponsors?
Remember the way that marketers exert influence over what gets distributed and what doesn't? As a marketer, it isn't that I don't want entire kinds of content to exist. It's just that I don't want my brand to be placed alongside certain content, because that association generates the opposite of the outcome I'm pursuing. That isn't censorship, because it isn't the state that's regulating creative content. But it is an expression of commercial interest, mediated by my estimation, as a marketer, of what my target audience wants. There's that tyranny of the audience again. They decide.
Not all marketing is pollution.
Some people do indeed opt into marketing! No, really: some people love specific brands, and they invite them into their attention window. I believe in this approach, consent-based marketing, right down to my bones. So much so, in fact, that I built an entire startup around that premise in the early 2010s. And, even now, I'm not that amenable to revising my stance on this.
I'd argue that brands that appeal for attention, that position well rather than presume entitlement to it, would thrive in such a disaggregated environment over the short and medium term. In the short term, they wouldn't be as vulnerable to the market concentration and pricing power that centralized platforms leverage over them. In the medium term, an inevitable reconsolidation would leave those who thrived with much better dynamic capabilities to compete more effectively, in particular by viewing platforms properly as rivals rather than partners.
You could direct your agent to invite the brands you like hearing from into your attention, while, potentially, directing your agents to punish the bad actors who violate your trust with a stream of misinformation.
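As a small illustrative sketch, with placeholder brand names rather than real ones, that directive could reduce to a consent ledger the agent checks before admitting any sponsored item:

```python
# A consent ledger the agent consults before letting any sponsored item through.
# Brand names are placeholders, not real brands or endorsements.
consented_brands = {"a_bicycle_maker", "a_local_bookshop"}
distrusted_brands = {"a_repeat_misinformer"}

def admit_sponsored(item):
    """Only invited brands reach your attention; distrusted ones are never shown
    and never receive engagement signals at all."""
    brand = item.get("brand")
    if brand in distrusted_brands:
        return False
    return brand in consented_brands
```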
Moderated Consensus
To be maximally generous: moderation policy is an exercise in learning. The motivation to learn varies by individual.
Some people create for the status of reach. So they learn how to appeal to a specific audience and how to differentiate themselves from others. Meta took a rather unusual stance in the early 2020s that it was news that was driving the polarization, misinformation, and disengagement. Is it possible that the pressure to differentiate drove a lot of the extremism? It's gotten to the point that celebrities purity-test each other constantly in an effort to one-up and out-hard-core one another. And there's an audience for that. Extremity has been growing those audiences because valence gets attention. It isn't just in politics. You can see it in a range of arts. In a way, it's part of an ongoing trend.
Recall, you saw this trend on mass media television long before amplified social media was invented: Extreme Makeover. Ancient Aliens. Kitchen Nightmares. The Chamber.
Some people have already learned that there is no market for their content and don’t care. They will continue to create content that is intended to cause societal change regardless of the harm it creates. Indeed, for some, harm is the entire point. Some people are cruel. Hurt people hurt people.
And some people test the lines. Comedy, in part, is a process of plotting the boundary between an acceptable violation (one that evokes thought, surprise, joy, and delight) and an unacceptable one. That boundary varies from culture to culture, depending on what you can and cannot say. It's linked to moral reasoning and attitudes towards sanctity, authority, order, fairness, and liberty, among other dimensions.
Where are the lines for you?
See, those lines aren’t static or even in a two-dimensional space. They’re dynamic and depend on who, when, and where you’re sending or receiving them. Without feedback signals, some content creators have less to go on. Without feedback signals, you have less to go on.
The Lines
There is a kind of regret in all of this. Arguably, all a centralized platform had to do was manage the lines between the consumer, creator, marketer, and the state, and then extract any surplus. Which, if you've been reading the case against Google, isn't even that hard to do. It's a fantastic business model because it has sharp gradients and deep network moats. Why do so many fail?
Maybe, just as farmers individually took control of the diesel engine and put it to work, maybe we should be taking control over our own attention engines and putting them to work?
Information-rich free speech is the best kind of free speech because it enables free deliberation.
What happens when we get to draw more of our own lines, for ourselves?
References
Wu, T. (2011). The master switch: The rise and fall of information empires. Vintage.
Scott, J. C. (2020). Seeing like a state: How certain schemes to improve the human condition have failed. Yale University Press.