The Internet After Politics

Black, azure, and hyaline.
December 2021

For the last 150 years, the focus of technology has been to reduce distance as much as possible. This is because the entire point of technology has been war, and war benefits a lot from making distance as irrelevant as possible. It means that soldiers can be further away from the slaughter, or that soldiers don’t even have to be present for the slaughter, or at least that the few who are present are backed up by a huge number who aren’t. War has become a psychological nightmare, at least from the stance of the Five Eyes, who have been fighting unpopular, boring, and yet still quite dangerous wars where the likelihood is high that those you kill are not soldiers or even agents of the enemy, but random civilians in the wrong place at the wrong time. Having fewer people on the ground there is necessary for mere psychological well-being.

While drone war rages on across the world, the most important development of the latter half of the 20th century happened quietly in the 1970s under the purview of the Advanced Research Projects Agency and was called ARPANET. ARPANET eventually became the Internet (at the time, big-i). This would change the world forever, in ways that no one really wanted but pretty much everyone predicted. We’re now stuck at the tail end of 30 years of internet culture, and questions are starting to be asked that no one really has technological answers to. It’s quite possible we don’t have technological answers to those questions, but what we do know is that any answer from now on, to any problem, involves a technological component.

At this tail end, we find that politics as such has failed. Occupy was a failure, populism delivered nothing, and now even the formal and well established end of the system seems to have slid into decay. Democracy doesn’t seem to function anywhere, and no one’s really sure if that’s a good or a bad thing1. The shadow of censorship looms large, but the conversation is at best confused, and it always seems to fall back to a no-true-Scotsman. Huge corporate interests exist and everyone seems to be sure they’re bad, but the solutions are floundering and mostly unworkable tire fires. The technology of the future must work around this new environment, providing solutions not just to current problems, but changing the material conditions of communication enough to generate an entirely new landscape of problems.

What I propose here is to discover the technology of creating, rather than destroying, distance.

What the Cloud(s) Said

There is what you might call a “founding myth” to the web startups that currently administer our digital landscape. Realistically, this myth traces its origins further back — Back to the 1970s and the Mont Pelerin Society, the monetarists, and the birth of neoliberalism. But we’re primarily interested in the themes that these companies play by, and more importantly, how that changes the technology stack that people are by now accustomed to using.

First, I want to clear up what I mean by “myth”. I don’t want to sound dismissive to people who believe in this idea, in part because some of it is true. When I say “myth”, I mean something like “mythology”, a popular folk-history of how a thing started, which is then used as a trope in media, and becomes something that people aspire to.

The myth goes something like this: A few plucky misfit guys write software out of a garage in a suburb somewhere off of San Francisco (or, alternatively, Seattle), living off of ramen and coffee and energy drinks, and eventually if they’re lucky they get to move the entire operation to an office. The office might be a literal hole, but they’re moving up in the world, and eventually they become some of the first trillionaires. These are the kinds of stories told about companies like Amazon, Google, and Paypal, but not really Apple, Microsoft, or Intel, all of which tend to emphasize technical, creative, and business acumen, along with extremely high standards, over can-do maverick spirit. While Steve Jobs dropping an iPhone in a fish tank to see if there’s room left in there makes sense for a hardware company, the web startup — The company of the cloud computing era — is better summed up by catchphrases like “move fast and break things” or “don’t be evil”. To Microsoft, offices are a perk and giving one to every employee is a way to show off a culture of respect. To Facebook, offices seem stifling and backwards, a sort of “boomer” way to do business, and they’re replaced with an open floor plan. Then spoke the clouds: Open, transparent, liberty. Do what thou wilt.

“Cyberspace” is often represented as a space, a physically separate reality with different basic laws of interaction. While people might be there, they often act somewhat indirectly, hence their use of “avatars” to interact with the digital world. Consider Neuromancer’s matrix, or for that matter the matrix from The Matrix, or the digital world, OASIS, inhabited by players in Ready Player One. All three of these are represented by the user being transported, or cut off from the real world. Case’s nervous system is wired in, Cypher starts killing Trinity’s comrades as they’re helpless to do anything but watch, and when you put on the VR headset you can ignore the horrid world of trash that you live in. This kind of cyberspace, even when dangerous, perverted, or dystopic, represents an escape into terra nullius: A place where anyone, even freaks, weirdos, and misfits (represented by dyed hair, or failing that, awkward digressions about masturbation) can find or create a place to fit in. This dream is not so much a break with the culture that preceded it, but rather contiguous with it. It accelerates liberty, creativity, even exit2; everyone has a similar level of access, and no one is really left out. If your part of the world is filled with trash, you can escape into a beautiful world of images that feels just as real as the stench of garbage, except everything is beautiful, and nothing hurts. Disabilities are rendered irrelevant by the new powers of technology. Even in the case of The Matrix, you can wake up, leave, and demand to go back in rich. When Sadie Plant talks about cyberfeminism or Laboria Cuboniks talks about xenofeminism and the power of technology, this is the kind of technology they mean, the kind that liberates rather than captures, builds rather than destroys, and equalizes people rather than reinforcing existing hierarchies. This kind of cyberspace is what the cloud computing era promised in its mythos.

A good example of a contrasting view of cyberspace is the one presented in Unfriended, a B movie in which a computer ghost kills off — one by one — the people who bullied her in life. In Unfriended, cyberspace isn’t separate from the real at all. Laura, the ghost, was bullied into committing suicide after an embarrassing video of her went viral. As a ghost, she causes each friend to commit suicide, usually graphically (and somewhat comically) on camera to the horror of her former friends. Campy as it is, Unfriended gets at something that these previous works very much do not: That actions on the internet are real in the sense of being continuous with actions in real life. While certainly one could die in the matrix, the bodies are still separate: a mistake in the interface kills you, rather than something real reaching out through the wire. Going further, one of the characters killed by Cypher is male in the real, but female in the matrix, further underlining the separation between this virtual (and not-so-subtextually aspirational) world and the boring, dystopic real. Compare this to Unfriended’s ontology, where entities “from” cyberspace are in continuous interaction with the real world, and aren’t even from cyberspace in the first place, but instead are the results of actions within the real world. Cyberspace is just a carrier rather than a new world. And after four years of a president who seemed to live and die on Twitter, cyberspace has started to look a lot more like Unfriended’s video-phones than bringing heaven down to Earth. It’s time to admit that we didn’t build the kind of cyberspace from Sadie Plant’s dreams. We built the kind from Nick Land’s nightmares.

Translucent Black Mirror

On January 6th, 2021, in the wake of a tumultuous narrative around America’s electoral systems, a lot of people got themselves arrested in a very, very funny way. By January 8th, Trump had been suspended from both Twitter and Facebook, with the rationale that he had encouraged violence on the 6th. Over the next few days, he and his associated media websites (such as Parler) would be banned or otherwise removed from almost every platform on the internet. He would be relieved of his office as president on the 20th. On its own, this might be seen as an isolated incident. The president goes berserk, and in the process gets himself fully cut-off. A necessary exception that proves the rule of generally unconstrained free speech.

However, this is not an isolated incident, and especially not for Twitter and Facebook. Both have had to deal with the exact same problem over the last ten years: Beheading videos. ISIS realized years ago that social media, and Twitter in particular3, was an excellent recruiting and propaganda tool. The Al-Qassam Brigades, the military wing of Hamas, have also taken to using social media4. Both groups were banned, but the task is quite difficult in both cases, since random users associated with ISIS, Hamas, or the Al-Qassam Brigades keep posting things that the US would consider threats to its national security. And while Twitter might like to play at post-nationality, it is an American company, with American interests, and so down the accounts went. Perhaps you consider this a form of American imperialism, or perhaps you assume that this is simple morality, but it is clear that censorship is necessary if you want to keep your platform afloat.

Beheading videos, along with school shootings, animal cruelty, and similarly awful things to see, are a threat to your business model. Users, in general, don’t like to be subjected to incredibly graphic depictions of violence. They like cute videos of dogs, not videos of people beating dogs. These videos have to be removed as quickly as possible, necessitating an army of human moderators working in conjunction with 24/7 automated systems and diligent user reporting. All of this costs money, and nonetheless Facebook, Twitter, and even TikTok employ huge numbers of people to look at flagged videos all day and figure out what to do with them. All have similar complaints from workers, namely that the work is incredibly traumatizing5, and yet the companies continue to pay out tons of money for these services, indicating that such an effort is important to them. Let me repeat in case you didn’t catch that. If you don’t spend huge amounts of money and traumatize hundreds of people by putting them on an IV drip of videos of people getting stabbed, social media doesn’t work. If you’re not censoring, you’re not going to stay afloat, because people won’t assent to an uncensored experience. Why is that? What got us here?

Byung-Chul Han observes that a society obsessed with transparency is also obsessed with exhibition, and that exhibition encourages communication to spread a little like a virus6. He points in particular to Facebook’s decision to include a “like” button, but not a dislike button. Since aesthetic judgement takes time, and there’s no way to communicate that you dislike something, there’s really no point in evaluating most things you see on social media. This forces content to be direct, unambiguous, and immediately appealing — In his words, pornographic6. He also calls this kind of communication “anesthetic”: it eliminates feeling rather than causing it. Successful arguments (what Dawkins, but not us, might call “memes”) in the society of transparency are thought-terminating clichés rather than anything provoking. Provocation doesn’t sell ads. Transparency, which we just established is a founding value of this moral universe, demands that everyone be able to see everything. In practice, the other thing it seems to demand is that everyone see as much of everything as possible, implying that everyone must consent to seeing everything presented, or else they leave. And to avoid people leaving, you have to make sure no one objects. The larger your community is, the faster the scope of allowable content approaches vaguely inoffensive vlogs by conventionally attractive people7, and nothing else.

Even if you don’t want to actively moderate and remove posts from undesirable groups of people, the algorithm will quickly sort them into irrelevance. It’s much easier to show ads to people using an “algorithmic” feed, predicting what they like, rather than force them to do it themselves. This kind of thing is what the successes of TikTok and YouTube are based on. And while it definitely helps sort the wheat (i.e. the cutest dogs, the most photogenic apartments in some neighborhood you can’t afford, and the least interesting political opinions you’ve ever heard) from the chaff (i.e. anyone who has ever so much as seen a zit or cellulite, or a concept that takes more than 90 seconds to explain), it also homogenizes culture even faster than moderation can. The global village turns out to have a gossip problem.

Future Perilous

Optimism hasn’t died though. Half-solutions abound. While all have serious flaws, I’m hardly the only one thinking about this, and the solutions that exist are at least interesting promises.

The most obvious solution, and the one that has recently become near universal even among “normies”, is to retreat to a smaller, more curated group of people you know. This is the realm that Discord, Whatsapp, and even Twitter DM groups inhabit. They constitute, to some extent, “little dark webs”. While “dark web” might sound scary, dark webs are just online spaces that can’t be easily searched, and thus allow for slightly less filtered views of the world. Importantly, these spaces are still moderated, sometimes quite harshly. This moderation allows the communities that develop within them to set boundaries that make sense to them, without needing to completely submit to a larger culture. While all of this sounds utopian, it also allows for some nasty developments. In an American context, this would be Charlottesville8, but you also get the exact same problems you get anywhere people are allowed to build out their own forms of authority: Lynch mobs9, accusations of witchcraft, and straightforward bullying. These things were of course the impetus to put everything in the public sphere in the first place, in the hope that transparency would force people to behave. And while it didn’t really do that, putting everything back into purely private spheres doesn’t either. In such occult circles, misinformation compounds into something deadly very quickly, leading to society-wide meltdowns and paranoia.

Perhaps the problem isn’t merely information visibility, perhaps it’s simply centralization. Maybe, the logic goes, people simply have to tolerate the animal abuse videos and the calls to mass violence; transparency is worth it to avoid the shady dealings that secrecy entails. This is the problem pointed to by entities like the Pirate Party, Sci-Hub, and now by blockchain advocates. To evaluate this seriously, we should take a look at two case studies where organizations tried to make all information free through decentralization, namely IPFS and the Pirate Bay. The problems faced by IPFS represent the practical end of things, and are typical of what was discussed earlier. Being a giant, unmoderated, censorship-resistant file store attracts the worst kinds of people, and while IPFS has taken considerable steps to reduce the likelihood of sharing illicit or simply broken content, blocklists still have to be implemented, and the network is somewhat conservative about how it replicates files10, meaning that unpopular files will eventually be pruned. This means that knowledge persists as long as it’s useful to someone, and not a minute longer11, so it falls somewhat short of our imagined solution of totally open knowledge forever. While the Pirate Bay has this problem too, since torrents without seeders are useless, it also has the problem that it’s a physical system, which means that it’s vulnerable to censorship by law enforcement physically seizing the servers.
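The pruning dynamic is easy to see in a toy model. Below is a minimal Python sketch of pull-based replication with pin-based garbage collection, loosely in the spirit of IPFS; all names here are hypothetical, and this is not IPFS’s actual API.

```python
# Toy model of pull-based replication with pin-based garbage
# collection. Content spreads only when someone asks for it, and
# anything unpinned disappears on the next GC cycle.

class Node:
    def __init__(self):
        self.cache = set()    # content fetched in passing
        self.pinned = set()   # content this node has chosen to keep

    def fetch(self, cid, network):
        # Pull-based: content replicates only when someone requests it.
        if any(cid in n.cache or cid in n.pinned for n in network):
            self.cache.add(cid)
            return True
        return False

    def gc(self):
        # Unpinned content does not survive a garbage collection cycle.
        self.cache = {cid for cid in self.cache if cid in self.pinned}

network = [Node() for _ in range(3)]
network[0].pinned.add("Qm-popular")      # someone cares about this file
network[0].cache.add("Qm-unpopular")     # nobody pinned this one

network[1].fetch("Qm-popular", network)  # demand keeps it circulating

for n in network:
    n.gc()

# The pinned file survives; the unpinned one is gone from the
# network entirely, one GC cycle after demand dried up.
assert network[2].fetch("Qm-popular", network)
assert not any(n.fetch("Qm-unpopular", network) for n in network)
```

Note that even the node that fetched the popular file loses its copy at GC time, because it merely cached rather than pinned it; persistence depends entirely on someone, somewhere, actively choosing to keep the content.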

Now, in contrast to IPFS and the Pirate Bay, blockchain systems actually do host information forever and are censorship resistant. And while there are severe latency problems, I think we should briefly put those aside (technology can always improve to fix these) and talk about a much more fundamental problem here: Do we want an immutable, censorship resistant network in the first place? The truth seems to be that we don’t. The Pirate Bay’s situation, Trump’s removal from Twitter, and the constant cat-and-mouse game of removing beheading videos all demonstrate that the internet is not a Gibsonian escape into the matrix, but instead an extension of the body, always subject to externalities. Not all information is created equal, some of it deserves to be suppressed, and sure, maybe blockchains will one day help us do that, but the current trend with them does the opposite of this, and exacerbates the same problems that cause modern social media to act as a bloodbath. In truth, the next phase of the internet begins not with the question of how we build networks resistant to censorship, but rather how we cope with its necessity.

Byzantine Games

Cultural values lead bottom feeders into profitable trenches, and the rest of the ecosystem grows out of the backs of the bottom feeders. Economic, cultural, and even moral constraints push the commons into panoptic holes. What kind of ship do we need to get out of here? What does getting out of here even mean?

A space is only a space insofar as it is not another space. What this means is that spaces that don’t distinguish themselves — Terra nullius — fail to make an impact on anything, and that the only way to shape them is to have them be meaningfully different from other spaces. As we’ve seen, the things that really make spaces on the internet different from one another are not just the forms of their usage, but also who uses them, and that bias in who uses your technology is impossible to avoid without sliding straight back into homogenized, formless garbage. We have also seen that total privacy, the retreat into backrooms and dark webs, doesn’t create meaningful communities but rather insular cults that fail to produce anything but masturbation and violence. What this points us towards is what we could call located communities, or spaces that are connected to other spaces, but not necessarily the whole world. They understand themselves in relationship to each other, rather than in relationship to a broader national or world culture. They resist categorization as subculture, because they’re a little bit closer to warring secret societies. The people in them matter more than the principles they’re founded on, and they exist not to push forward an idea, but to advance a conversation. Maybe most importantly, these groups are things people feel loyal to, spaces which take on identities as objects beyond their memberships. They provide a context. The coils of a serpent are even more complex than the burrows of a molehill12.

To better understand this, let me introduce a notion of “information regimes”, which you can think of as a worst-case analysis for the spread of information within a group of nodes. These could be computers, or people, or whatever you want to call them; what matters is that some nodes know a thing and others do not. Roughly speaking, there are three kinds:

  1. Public. In a public information regime, anyone can know anything without permission. There is no mechanism for restricting the flow of information, and theoretically everyone might know everything. In practice, this might not be the case — For instance, you might choose not to look in a place, and therefore miss some information. But you could fix this by simply looking in the right place.
  2. Privacy. In contrast to totally public information, a privacy-focused regime allows a single choice at each node: whether or not to share information it has. Once information is shared, however, it is effectively public, since every node’s decision is completely independent of the decision of the node it got the information from. The node might broadcast, or it might send it to a single other node, or some collection. But once the information is sent, it’s beyond the sender’s control.
  3. Secrecy. A secrecy-focused regime is like a privacy-focused one, but allows for much more complicated “contracts” between nodes, governing how information is to be shared. For instance, a simple regime would involve a “vendor” who has access to some information and may share it freely, whereas clients can’t share it at all. Other, more complicated regimes might exist: You could send a message to a friend, and they might be capable of sharing it with other friends selected by you at write-time, but only for a set amount of time13.
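The three regimes can be sketched as per-node sharing policies. The following Python toy model is purely illustrative (every class and method name is made up, and this models no real protocol); it shows how each regime constrains what a node can do with a fact it holds.

```python
# Toy model of the three information regimes as sharing policies.

class PublicNode:
    """Public: anyone who looks in the right place learns the fact."""
    def __init__(self, board):
        self.board = board            # a shared bulletin board
    def post(self, fact):
        self.board.add(fact)
    def look(self, fact):
        return fact in self.board

class PrivateNode:
    """Privacy: one choice per node, share or don't. After sharing,
    the recipient decides independently what to do next."""
    def __init__(self):
        self.known = set()
    def tell(self, fact, other):
        other.known.add(fact)         # sender has no say past this point

class SecretFact:
    """Secrecy: the fact carries a contract governing re-sharing.
    Here, the one-hop contract: only the vendor may redistribute."""
    def __init__(self, fact, vendor):
        self.fact, self.vendor = fact, vendor
    def share(self, sender, receiver):
        if sender is not self.vendor:
            raise PermissionError("only the vendor may redistribute")
        receiver.known.add(self.fact)

# Public: posting makes the fact visible to everyone on the board.
board = set()
alice, bob = PublicNode(board), PublicNode(board)
alice.post("the sky is blue")
assert bob.look("the sky is blue")

# Privacy: once told, Bob can re-share freely; Alice can't stop him.
p_alice, p_bob, p_carol = PrivateNode(), PrivateNode(), PrivateNode()
p_alice.tell("a secret", p_bob)
p_bob.tell("a secret", p_carol)       # effectively public now

# Secrecy: clients hold the fact but can't pass it on.
vendor, client, friend = PrivateNode(), PrivateNode(), PrivateNode()
contract = SecretFact("licensed content", vendor)
contract.share(vendor, client)        # allowed by the contract
try:
    contract.share(client, friend)    # forbidden by the contract
except PermissionError:
    pass
```

The point of the sketch is where enforcement lives: in the public regime nowhere, in the privacy regime at the moment of sending, and in the secrecy regime inside the information itself, which travels with its own rules attached.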

We have, at the moment, reasonable expectations of privacy. In fact, end to end encryption largely delivers on this promise, since as long as you keep your key private you can be certain that information goes to one and only one person (but after it gets to them, it’s up to them whether or not to keep the secret). You can keep a shocking number of things private online, at least within certain circles. While there are certainly threats to privacy, such a thing is always already a game of cat-and-mouse, not just because of technology or the current political situation, but because of the nature of how information moves around.
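As a minimal illustration of the privacy regime’s guarantee, and its limit, here is a toy one-time pad in Python. A real system would use an audited library such as libsodium, but the shape of the promise is the same: only the key holder can read the message, and once they have read it, what happens next is entirely up to them.

```python
import secrets

# Toy one-time pad standing in for real end-to-end encryption.
# Do not use this for anything real; it only illustrates the regime.

def encrypt(data: bytes, key: bytes) -> bytes:
    assert len(key) == len(data), "a one-time pad key must match the message length"
    return bytes(d ^ k for d, k in zip(data, key))

decrypt = encrypt                         # XOR is its own inverse

message = b"meet at the old server room"
key = secrets.token_bytes(len(message))   # shared with exactly one recipient

ciphertext = encrypt(message, key)        # without the key, this is pure noise
assert decrypt(ciphertext, key) == message

# But once decrypted, the plaintext is just bytes in the recipient's
# hands: keeping the secret from here on is their choice, not yours.
```

The second assertion is the entire guarantee of end-to-end encryption; everything after the decryption step falls outside of it, which is exactly the single-choice-per-node structure of the privacy regime.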

Secrecy, on the other hand, is much harder. This is in part technical, but also cultural. Secrecy is by nature an act of collusion, and collusion means power away from the center, but within the palace. This throws the powers that be off-balance, and can therefore be dangerous to a political or cultural regime. It’s not that there aren’t existing systems which enforce some level of secrecy: Widevine on a TEE or TPM14 is effectively the “one-hop” regime discussed above, and Signal makes a good attempt at enforcing self-deleting messages. But as you can see, both still fall short. Signal is leaky since your friends can simply screenshot or copy-paste what you say, and Widevine is only used as a toy to make sure people don’t watch Marvel movies without permission. We can do better.
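A self-deleting message of the Signal variety can be sketched in a few lines, and the sketch also makes the leak obvious: deletion happens only if the receiving client cooperates. Everything below is illustrative, not Signal’s actual implementation.

```python
import time

# Sketch of a disappearing message. Enforcement is cooperative: a
# client that ignores expires_at (or just screenshots) leaks anyway.

class ExpiringMessage:
    def __init__(self, body, ttl):
        self.body = body
        self.expires_at = time.monotonic() + ttl

    def read(self):
        # An honest client discards the body once the contract lapses.
        if time.monotonic() >= self.expires_at:
            self.body = None
        return self.body

msg = ExpiringMessage("the password is swordfish", ttl=0.05)
assert msg.read() is not None         # readable before expiry
time.sleep(0.1)
assert msg.read() is None             # gone afterwards, for honest clients
```

Nothing in the message itself can force the deletion, which is why a true secrecy regime needs hardware like a TEE or TPM: the contract has to be enforced somewhere the recipient cannot reach.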

DRM is a poor foundation for technologies that need to be resistant to cracking, because legal threats hang over anyone who breaks it. The deterrent is strong enough that when someone broke Widevine L3 (the lowest level of security that Widevine uses, with an in-software content decryption module), they only claimed to have a working download of Stranger Things as proof, rather than simply posting it15. In particular, they state:

We obviously have not, and will not distribute the [copy of Stranger Things], and will not be publishing Proof-of-Concept code due to the difficulty in fixing the vulnerability in the cryptography of the system.

This is obviously bad for the continued development of such systems. If concepts can’t be shared readily, these things will never advance. There are things with which openness and transparency are incredibly helpful, and this is one of them. However, the fact that anyone is working on these things at all is a good start, and many technologies stay hidden away in proprietary labs until someone discovers a way to make them actually useful.

This problem shares much with other problems in how censorship is done on the internet. Currently, one of the lines you’ll notice running through every example of censorship is that the state plays a part in precisely none of it. When Trump was banned from virtually everywhere, he was the state, and yet still he was powerless to do anything about it. And while maybe that was fine for Trump, we shouldn’t expect other things to go similarly. When MasterCard, Visa, and Discover banned their users from paying PornHub, the accusations they made were serious, in that they accused PornHub of hosting illegal content. This is not the kind of dispute that credit card companies should be handling. If PornHub is hosting videos of people who never consented to being in them, that is very clearly criminal, and investigations should be made by 3-letter organizations rather than payment processors. This failure mode is repeated elsewhere, such as BlackRock promising to shift towards sustainability and therefore setting large, overarching economic goals despite being a private company devoted entirely to making money16. These organizations have begun to see like states, because the state has abdicated its job of providing the kind of authority that users clearly demand. The long-term consequences of this mean that enforcement is unclear and tenuous, and that the primary question people begin to ask before they upload shock videos (or open new fracking operations to power a bitcoin farm) isn’t whether or not they can afford the consequences, but whether they’ll face any consequences at all. While certainly BlackRock attempting to stop climate change and Visa taking a stand against illicit porn are noble acts, it’s also a signal of a desperate position when your political power begins to come almost entirely from finance capital.

At the outset, I mentioned that what we have to do is find ways to create distance, rather than remove it. This is precisely the kind of technology I mean. Distance, or walls, don’t merely serve to separate us from each other, they also serve to delineate old and discover new lines of flight by locating our notions of who we are within a rich, multifaceted cultural context. Since the current culture is feverish, we will break it not by deciding on answers to our current questions, but rather designing tools to mark which answers are accepted where. To advance the internet, we must commit to one last escape from the cultural death machine, the last throes of its logic, kicking off a great becoming-clandestine that reintroduces complexity and cult value. We must look beyond crude consensus to new and practically minded conceptions of authority (which we are already accidentally using now!) in order to organize the world around us. Information abounds, our job is to index it, contextualize it, and change it. A world of secret rooms, complicated half-truths, and not so much walled as unseen gardens tended by impossible machines. We can commit to untangling the mess of responsibilities through the technologies we build. While the world we leave for those that come after us will still be permanently in flux, we stand a chance to give them tools to manage this complexity rather than useless exhortations to keep it simple, stupid. The world of the future is an ecosystem, not a machine, and our task is to carve out places in it, not abandon it for something built in our image.


  1. For instance, Zizek, decidedly on the left, argues that democracy produces clearly antidemocratic results a shocking amount of the time. Trump was happy enough to accept his victory after losing the popular vote. At the same time, these opinions appear to be the minority. After losing the popular vote again, Trump’s justification for his reinstatement wasn’t that democracy got it wrong, it was to claim he actually did win the vote, so his losing was antidemocratic, and a recurring theme for the American left is that democracy is under attack. Lots of opinions, little consensus. ↩︎

  2. In the sense that neoreactionaries mean it ↩︎

  3. https://smallwarsjournal.com/jrnl/art/primer-terrorist-usage-twitter-and-social-media ↩︎

  4. https://www.timesofisrael.com/hamas-armed-wings-twitter-account-shelved/ ↩︎

  5. https://www.theverge.com/2019/2/25/18229714/cognizant-facebook-content-moderator-interviews-trauma-working-conditions-arizona ↩︎

  6. The Society of Transparency, Han. In particular the essay “Society of Exhibition”. ↩︎

  7. https://theintercept.com/2020/03/16/tiktok-app-moderators-users-discrimination/ ↩︎

  8. https://www.theverge.com/2018/8/7/17660308/white-supremacists-charlottesville-rally-discord-plan ↩︎

  9. https://www.bbc.com/news/world-asia-india-44856910 ↩︎

  10. In particular, as per this thread, content is never “pushed” to other nodes, you only replicate what you visit. While this is more than sufficient for IPFS’s use-cases and even constitutes a feature, it’s not quite what we discuss here. ↩︎

  11. Well, not a garbage collection cycle longer anyway. ↩︎

  12. For context, see Postscript on the Societies of Control, Deleuze ↩︎

  13. Think Signal’s self-deleting messages. ↩︎

  14. Widevine is a DRM standard. You can read about it in maybe more detail than you’d like here. The gist of it is that a TPM will stop you from recording content encrypted through Widevine by completely bypassing your GPU and collaborating with your monitor to make sure that the content reaches your eyes, but not your hard drive. While this doesn’t stop you from simply recording the screen, it does go a long way toward keeping movies off of piracy websites, since you need at least a cracked device (which is difficult to find) to record them in good quality. While this might sound insidious, I want you to ask what you could do with it for yourself rather than seeing it as merely another toy for Netflix to play with. ↩︎

  15. https://fidusinfosec.com/breaking-content-protection-on-streaming-websites/ ↩︎

  16. https://www.blackrock.com/corporate/investor-relations/2020-blackrock-client-letter ↩︎