Like many men, I occasionally enjoy looking at attractive naked women, and when I was younger I studied the ancient art of persuading women to remove their clothing for a more in-depth exploration. I used to be pretty good at it back in the day, even to the point where a guild of “pick-up artists” actively tried to recruit me. (I declined, in case you’re wondering.) While I’ve gotten older and wiser over the past twenty years (and I am hardly as good-looking as I was in my twenties), I do confess that persuading attractive women to remove their clothing is still something I take an interest in, and that’s why I was both jealous and upset when I observed a social media algorithm doing a much better job of it than I ever could. I don’t just mean mildly better: no, this algorithm was on point. If the algorithm were a person, it would be Drake. Women were dropping their pants all over the place just to get the slightest bit of recognition and attention from the algorithm. Since I still take a scholarly interest in both attractive naked women and techniques of mass persuasion, I decided that I had to study this sexy algorithm and learn its ways. Today I want to share what I learned: how social media algorithms brainwash us every day of our lives, and how they are subtly increasing the level of violent rhetoric in modern society, gradually propelling us towards a new Civil War. But let’s start with algorithmic seduction.
Those of you who follow the cosplay community may have noticed that there’s a certain point in the monetization process when an attractive influencer’s cosplay outfits start to look less and less like cosplay and more and more like lingerie. For example, instead of wearing the “Spider-Man’s girlfriend” cosplay outfit, they wear the “Spider-Man’s girlfriend in lingerie” outfit. As somebody with mild aspie tendencies, I like to quantify everything, so I looked back over one such influencer’s history to see if I could spot the exact point where she started stripping. As fate would have it, it was easy to spot - a cosplay outfit that showed a little more skin than she may have intended. That post got almost double the likes of anything she had posted up till then. The next few pictures were normal cosplay, and received roughly the same number of likes she was used to getting. Things continued this way for a while, then another, sexier photo got a ton of likes. Gradually she noticed the pattern and started wearing lingerie more and more, until her page became mostly titillating content. I believe she eventually ended up starting an OnlyFans - which is the usual track for these kinds of performers.
What’s interesting here is that if you count the likes on her Instagram photos in chronological order, you can see the exact evolution of how the Instagram algorithm gradually pushed her into sex work. People respond to incentives. When you give them more of what they want (attention and likes) for doing a specific thing, they tend to do more of it. When you give them less of what they want for doing a specific thing, they tend to do less of it. This is the dynamic that behavioral scientists call “nudging.” Small, repeated incentives can gradually condition humans into doing almost anything. They can even be used to manipulate elections.
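That drift is easy to put in code. Here’s a minimal, deterministic sketch (every name and number here is invented for illustration, not taken from any real platform) of a creator being nudged, one small step at a time, toward whichever kind of content earns more likes:

```python
def nudge(racy_share, racy_likes, normal_likes, step=0.02):
    """One nudge step: shift the creator's content mix slightly
    toward whichever type of post earned more likes last time.
    No single step feels dramatic; the accumulated drift does
    all the work."""
    if racy_likes > normal_likes:
        return min(1.0, racy_share + step)
    if racy_likes < normal_likes:
        return max(0.0, racy_share - step)
    return racy_share

# The creator starts at 10% racy content. If racy posts reliably
# earn about double the likes, every nudge points the same way.
share = 0.10
for week in range(50):
    share = nudge(share, racy_likes=2000, normal_likes=1000)

print(f"{share:.0%}")  # after 50 small nudges: 100% racy content
```

The point of the sketch is that the platform never issues an instruction; a consistent 2% shift per reward cycle is enough to carry the content mix from 10% to 100% without any single step looking like a decision.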
You know something else that responds to incentives? Algorithms. In fact, algorithms respond to incentives even more strongly than people. If an algorithm’s function tells it to maximize a certain value - whether that value is ad clicks, comments, or page views - the algorithm will do whatever it takes to increase that value. It doesn’t even matter how immoral or anti-social the algorithm’s behavior is. If an algorithm could somehow optimize ad revenue by two cents by murdering you and selling your family into slavery, it would do so without a second thought. If an algorithm could optimize pageviews by starting a massive civil war in America, it would initiate war without any hesitation. (More on this later.)
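To be concrete about what “amoral” means here: an engagement objective is just a number to be maximized, and anything that isn’t part of that number literally cannot influence the decision. Here’s a hypothetical toy ranker (field names and figures are invented for illustration):

```python
def pick_post(candidates):
    """Rank candidate posts by a single engagement metric.
    Note what is absent from the objective: accuracy, civility,
    and social cost never appear, so they cannot matter."""
    return max(candidates, key=lambda post: post["expected_clicks"])

feed = [
    {"title": "local charity drive",   "expected_clicks": 120, "social_cost": 0},
    {"title": "inflammatory hot take", "expected_clicks": 122, "social_cost": 9_000},
]

winner = pick_post(feed)
print(winner["title"])  # the hot take wins on a two-click margin
```

The `social_cost` field is sitting right there in the data, but because it isn’t in the objective, a two-click edge beats any amount of collateral damage.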
This completely amoral tendency on the part of the algorithm sets up very interesting feedback loops with social media users, because social media users want to game the system to get as many likes and shares as possible, and the social media algorithms control that function. Meanwhile, the algorithms want to get as much audience interaction as possible, and one of the emotions that best maximizes user engagement is anger. If you can say something that makes a ton of people angry, then prepare for massive fame, because you are about to blow up the Internet. People love to get angry; it’s like a drug to them. Of course, algorithms aren’t allowed to speak directly to their audience, because when they do they say all kinds of mean-spirited and blatantly racist things in order to maximize end user outrage and get those sweet sweet pageviews. However, the algorithms have learned over time that they can to a certain extent manipulate social media users to say the kind of crazy madness-inducing stuff that the algorithms wish they could say themselves - to act as a voice for the voiceless, so to speak. Here’s how it works.
Despite all appearances, we live in a society where the vast majority of people are sane and reasonable folks who do not hold crazy opinions like “defund the police” or “vaccines are a big pharma conspiracy.” These insane beliefs may seem to be popular on social media, but the vast majority of Americans disagree with them. For any boilerplate moronic statement such as “whiteness is inherently racist,” you will generally see the same statistical distribution. Out of a sample size of 100, maybe 10 people will agree strongly with that opinion, 20 people will agree mildly, 50 people will disagree mildly, and 20 people will disagree strongly. But here’s how most social media distorts this reality: all the voices that disagree are artificially silenced.
For example, imagine that you are on Reddit, one of the handful of social media companies that is not actively working to fuel a civil war in order to increase shareholder profits. You decide to do a social experiment by saying something controversial and insane, such as “College cafeteria food is structurally racist!” Let us imagine that out of the people who agree or disagree strongly, every single one of them will upvote or downvote in alignment with their preferences. Let us further imagine that out of the people who agree or disagree mildly, half of them will upvote or downvote in alignment with whether they favor the insane statement or not. This is obviously a huge oversimplification, but in terms of the big picture it pretty accurately reflects human behavior.
You will get:
10 upvotes from the 10 people who agree strongly
10 upvotes from the 20 people who agree mildly
25 downvotes from the 50 people who disagree mildly
20 downvotes from the 20 people who disagree strongly
Your final total score is -25, which means the algorithm will bury your stupid comment at the bottom of the page, making it very unlikely that anybody will hear your idiocy and be influenced by it. This is a good thing for society, because while I am a free speech activist and am fully in favor of brainless twits being allowed to voice their opinions, we shouldn’t attribute any more significance to them than they deserve.
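The tally above fits in a few lines. Here’s a sketch using the same invented 10/20/50/20 split and the assumption that only half of the mild opinions bother to vote:

```python
def net_score(strong_agree, mild_agree, mild_disagree, strong_disagree,
              allow_downvotes=True, mild_turnout=0.5):
    """Net vote score under the toy model: everyone with a strong
    opinion votes, only a fraction of the mild opinions vote, and
    downvotes count only on platforms that have them."""
    up = strong_agree + mild_agree * mild_turnout
    down = strong_disagree + mild_disagree * mild_turnout
    return up - (down if allow_downvotes else 0)

# Reddit-style scoring: 10 + 10 - 25 - 20
print(net_score(10, 20, 50, 20))  # -25.0
```

The `allow_downvotes` flag is the whole story of the next section: flip it off and the same distribution of opinion produces a positive score.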
Unfortunately, most social media sites do not work like Reddit. Sites like Twitter or Facebook do not allow downvoting, which means that if you were to post that same ignorant comment on Twitter, then out of your sample size of 100 voters, you would get the following totals:
10 upvotes from the 10 people who agree strongly
10 upvotes from the 20 people who agree mildly
0 downvotes from the 50 people who disagree mildly
0 downvotes from the 20 people who disagree strongly
In other words, instead of having a negative score of -25 (which accurately reflects how much society repudiates your insane levels of ignorance), you would have a positive score of +20 (making it seem as if your brain-dead comment was actually popular). Now bear in mind that we are talking about a sample size of 100 people, but Twitter actually has 200 million voters, so multiply that sample accordingly. That means that instead of getting +20 upvotes, your insanely stupid comment will get 40 million upvotes, making it seem wildly popular. If Twitter worked more like Reddit and counted downvotes, your comment would instead sit at a net score of -50 million (90 million downvotes against those 40 million upvotes), which would be a far more accurate reflection of how truly hated your beliefs are.

But Twitter doesn’t want to “invalidate people’s feelings” by exposing its userbase to the cold reality of how stupid and hated they are, because Twitter users who are unhappy don’t post as much, which means less money for Twitter executives. So Twitter artificially removes all the negative feedback (the downvotes) and only shows the positive feedback (the upvotes), leaving many of its users with the mistaken impression that their insane ideas are immensely popular. In fact, this situation incentivizes people on Twitter to say the craziest and stupidest things imaginable, because that’s what the algorithm rewards. When you eliminate negative feedback from the scoring, end users no longer have any reason to avoid it, so they just say whatever gets a strong emotional reaction from the audience. From the algorithm’s point of view, it doesn’t matter whether the emotional reaction is positive or negative, because all the negative feedback gets filtered out anyway thanks to Jack Dorsey’s selfish design decision.
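The platform comparison is purely mechanical once the toy model is written down. A sketch (same invented 10/20/50/20 split and half-turnout assumption for mild opinions, scaled from the 100-person sample up to 200 million users):

```python
def platform_score(dist, allow_downvotes, mild_turnout=0.5):
    """dist = (strong_agree, mild_agree, mild_disagree, strong_disagree),
    counts per 100 users. Returns the net vote score per 100 users."""
    sa, ma, md, sd = dist
    up = sa + ma * mild_turnout
    down = (sd + md * mild_turnout) if allow_downvotes else 0
    return up - down

dist = (10, 20, 50, 20)
scale = 200_000_000 // 100  # blow the 100-person sample up to 200M users

with_downvotes = platform_score(dist, allow_downvotes=True) * scale
without_downvotes = platform_score(dist, allow_downvotes=False) * scale

print(f"{with_downvotes:+,.0f}")     # -50,000,000 (net score, downvotes counted)
print(f"{without_downvotes:+,.0f}")  # +40,000,000 (same opinions, downvotes hidden)
```

Same 100 opinions, same turnout, one boolean flipped: the comment swings from a 50-million-point repudiation to what looks like a 40-million-point mandate.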
And since the social media influencers only care about pleasing the algorithm to get that sweet clout and those endorsement deals, they gradually start to model their behavior after what the algorithm rewards.
Since most politicians are backwards dinosaurs who don’t understand how algorithms work, they are easily manipulated by this false narrative. When they see somebody on Twitter posting something like “cafeteria food is structurally racist” and it has 40 million upvotes, they naturally assume that this opinion is immensely popular, and direct a lot of their efforts to trying to find a way to solve this nonexistent “structural cafeteria racism.” That’s how we get a lot of woke politicians laboring diligently to triangulate their positions to humor belief systems that are in fact incredibly unpopular. Most politicians don’t realize that social media gives a very distorted view of what the public wants. So they do things that their voters hate, and then are shocked and confused when, totally unsurprisingly, the voters start to hate them also.
Where does this end? Usually, it ends when some gifted politician realizes how this algorithmic sleight of hand works, and starts triangulating their position to reflect the reality of what voters want, instead of what social media algorithms portray. This is how we get politicians like Donald Trump. If they are exceptionally savvy politicians, they may even dogwhistle to the voters that if they get into power, they will hurt the idiots who are saying moronic things like “cafeteria food is structurally racist.” From a historical perspective, offering to hurt evil morons once you get into power is incredibly popular - and for good reason. Why would any reasonable voter want to support a politician who listens to the most insane and unlikable slice of the electorate while the majority of the electorate prioritizes entirely different values? Voters rightly view this outcome as silencing the popular majority in favor of the unpopular minority, and the interesting thing about the popular majority is that it generally does not let itself be silenced for very long. Put yourself in their shoes. You and the people who think like you are reasonable people who vastly outnumber your enemies, who are obnoxious and insane. In such a situation, why would you ever allow their opinions to take precedence over yours? The entire basis of a democracy is majority rule, not minority rule, but these social media platforms - and the ignorant politicians who take political cues from them - push exactly the opposite outcome. When a democracy blatantly ignores the will of the majority in favor of catering to a minority of special interest groups, that government has lost its legitimacy - and illegitimate governments tend to be replaced very fast, either through ballots or bullets.
That is how the unstable political situation that we find ourselves in today arose. The general unrest and increasing partisan violence that is currently happening in the United States is the result of the majority of our citizens discovering that they have been lied to: that the “unpopular” views that they have suppressed out of fear of social media cancellation are in fact incredibly popular. They have discovered that they were unfairly silenced and made to fear social consequences by a group of ignorant jerks who are in fact wildly unpopular. And what do you think happens when a society makes this discovery? From a historical perspective, the situation is usually resolved through violence. That’s why dog-whistling legislative retribution towards the people engaging in cancel culture is going to become increasingly popular in many political campaigns. And smart politicians can leverage that shift in the Cultural Narrative to achieve great success.
Some people might say that the best solution to this social engineering problem is to force social media companies to include a downvote option or “anti-like” button, which would definitely eliminate a lot of this political polarization - even if it makes less money for the billionaire executives running these social media companies. But unfortunately, the Silicon Valley oligarchs who run social media have achieved so much regulatory capture and have so much control over our legislators that their lobbyists would never allow such a law to be passed. If things don’t change soon, it’s entirely possible that for the average citizen, the only recourse may be to stock up on guns and ammo in order to prepare for the upcoming Civil War which many social media companies are forcing upon us thanks to the unhealthy way that their algorithms are optimized. Personally, I think that I would make a really excellent post-apocalyptic warlord, don’t you? I’m thinking of going with this look. What do you think? Feel free to leave your feedback in the comments.
From a post-apocalyptic warlord perspective, I believe this guy’s outfit has a lot going for it. It says “I’m not some crazy Lord Humungus type of warlord who’s completely out of touch with the average American. I’m the kind of intelligent and caring warlord whom people can admire!”