This needs to be more widely understood by the public.
Our world-destroying AI paperclip maximizer is here. It's called a "news feed".
The thing that gets the most clicks is outrage, as the AI has discovered. We're setting people against each other ever more efficiently.
The result has been clear since the Arab Spring. Good things don't come from helping people hate each other in the most efficient way possible.
Banning AI-driven, click-maximizing news feeds would be a healthy start. Right now, they're doing serious damage to our world.
We are, but the blame has to be shared, too. In many cases the algorithm is being reinforced by your actions and everyone else's. It may not even be specifically trying to prioritize polarizing content: that may just be the content you engage with most, and the algorithm blindly follows your whims and preferences.
I've used YouTube for many hours per day for several years. I've almost never seen a single thing appear on my home page that was polarizing or even clickbaity. The very rare times I do (always after I watch something that's kind of adjacent), I just click "Not interested" and I never see it or anything like it again. It's done a pretty good job of predicting what I would and wouldn't be interested in.
Same with Twitter. I just unfollow anyone I find tweeting polarizing or charged things. My Twitter feed looks pretty close to the HN front page.
Many people crave these things, whether or not they want to admit it or even realize it. I think it's going to be this way for a very long time.
Sorry, but this is a very naive view of human psychology. The Social Dilemma on Netflix does a good job of explaining why asking people to just exert more willpower is not the solution.
Maybe it is. I'm just repulsed by stuff like that and try to get it out of my sight whenever possible. I figure most of HN is the same way. It pretty much never appears on any of my feeds, since I don't hesitate to click the "Not interested" button on the rare occasions it does.
The new AI is different from the algorithms of old. The old algorithms were one-size-fits-all: everyone got exactly the same schlock. The new AI knows you individually and shows you, as an individual, the kinds of things you are most likely to engage with. It doesn't matter at all if the things you personally engage with have less inflammatory headlines; the system is still doing the same thing to you as it's doing to everyone else.
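A minimal sketch of the dynamic I mean, in Python. All the names here are purely illustrative; this is obviously not any platform's actual ranking code:

    from dataclasses import dataclass

    @dataclass
    class Item:
        topic: str
        arousal: float  # how provocative/stimulating the item is

    @dataclass
    class User:
        affinity: dict  # topic -> how strongly this user engages with it

    def predicted_engagement(user: User, item: Item) -> float:
        # Stand-in for a learned model of clicks, watch time, shares, etc.
        return user.affinity.get(item.topic, 0.0) * item.arousal

    def recommend(user: User, candidates: list[Item], k: int = 5) -> list[Item]:
        # The objective is identical for every user: rank by expected
        # engagement. Only the inputs differ. A user who never rewards
        # outrage sees none, yet the system treats them exactly the same.
        return sorted(candidates,
                      key=lambda it: predicted_engagement(user, it),
                      reverse=True)[:k]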
That is, until you accidentally click on a single outrage-clickbait video; then YouTube will immediately suggest at least 20 other videos of the same kind, and you have to click “Not interested” on videos for several weeks until your feed is sane again.
And IME, it doesn’t even have to be videos watched by most people: a subset of “power users” who binge a certain kind of video is overrepresented in the recommendation engine.
> That is, until you accidentally click on a single outrage-clickbait video; then YouTube will immediately suggest at least 20 other videos of the same kind, and you have to click “Not interested” on videos for several weeks until your feed is sane again.
For me it takes a single round of a few "Not interested" clicks. Definitely not weeks, or even days.
No, it’s being driven by the lowest common denominator for engagement, which is distinct from personal preferences.
For a self-contained example, spam in large Facebook groups always rises to the top, because many people comment asking for it to be deleted, causing the algorithm to show it to more people, some of whom also comment, until a moderator finally kills the post.
These kinds of side effects do not happen in a voting-based system or a straight chronological feed.
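A toy model of that feedback loop, assuming a feed that scores posts by raw engagement and counts "please delete this" comments like any other (illustrative names, not Facebook's actual code):

    from dataclasses import dataclass

    @dataclass
    class Post:
        text: str
        created_at: int  # e.g. a Unix timestamp
        comments: int = 0

    def engagement_feed(posts: list[Post]) -> list[Post]:
        # Complaints are comments too, so spam that provokes them floats
        # upward: engagement begets visibility begets more engagement.
        return sorted(posts, key=lambda p: p.comments, reverse=True)

    def chronological_feed(posts: list[Post]) -> list[Post]:
        # No feedback loop: commenting on spam doesn't move it anywhere.
        return sorted(posts, key=lambda p: p.created_at, reverse=True)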
For Facebook groups, yes, this is a big problem. That's one reason why I don't use Facebook. For one's personal feed, I don't think the same issue necessarily applies.
> may just be the content you engage with most, and the algorithm blindly follows your whims and preferences
No, it's the content that other people engage with. Disregarding the whole "engagement is a good metric of how much you want to see something" bullshit, if you served me food based on what other people like to eat, it would be a weird mix of gluten-free vegan stuff, and also Coke, pizza, and Doritos.
I don't want any of that shit. I'm not "many people". I don't need to be fed the same irrelevant garbage as them. But the only way to achieve that is to unfollow everyone and not get many useful updates. Which is what I'm doing, but it's just barely useful, and the popular crap still seeps in at every opportunity.
That just hasn't been my experience with Twitter or YouTube. (I don't use Facebook or other things.) I agree that would be very annoying if it happened to me, but for some reason it just doesn't seem to. I probably watch so many videos that it's just picked up my preferences well.
I do want to see what other videos are watched by large numbers of people who've watched things I've already watched. That's how I discover new and interesting things. I've found tons of great content that way, and pretty much no clickbaity or LCD / popular crap seeps in (maybe once every few months, but the "Not interested" immediately takes care of it). I don't know how common my experience is.
Joe Edelman wrote a nice article about algorithms that are driven by metrics:
"This talk is about metrics and measurement: about how metrics affect the structure of organizations and societies, how they change the economy, how we’re doing them wrong, and how we could do them right."
Yugoslavia was well before the Arab Spring. Of course, that was broadcast, not clicks, but outrage was stoked.
(In Lonnie Athens' theory of violentization, there's a step where the person about to become radically violent says to themselves, "I'll become the biggest badass and this will never happen to me again." Milošević gave a famous TV speech where, after a spot of "spontaneous violence", he tells ethnic Serbs he isn't going to let them be beaten anymore.)
Yes, this is something I have realised of late. Very few people, except maybe the most psychopathic, start by believing that they are evil. Most of us believe that we are being victimised. Maybe I am an unemployed youth, outraged that some minority got a job while I am unemployed. I completely ignore the fact that I am not as qualified as the other person, and comfort myself with some victimhood myth.
These minor grievances are later amplified by demagogues. The reaction to this perceived victimisation is so out of proportion that the initial grievance looks minuscule in retrospect.
The main difference these days is that we don't need a visible demagogue. The various "engagement algorithms" play the role of an echo chamber, sending opinions into a positive feedback loop until the system enters the regime of social collapse.
You could also argue that the original rise of Fascism was enabled by mass propaganda becoming possible where it hadn't really been before, but that's so overdetermined it's hard to attribute to particular causes.
Then there's the original SARS of bad memetic propaganda, the Protocols of the Elders of Zion: a piece of leafleting from the early 1900s that has contributed to the deaths of millions.
And social media is actually less dangerous in this regard than past media like radio and books, because it makes it easy to comment in a way that reaches the consumers of the original media. You can run a radio channel where you rail against ethnic prejudice, but the listeners of Radio Rwanda are unlikely to listen to your channel. You can write a book refuting the "Protocols of the Elders of Zion", but the readers of the original are unlikely to read your book. Social media is different. If someone incites hatred against ethnic groups on Facebook, you can respond and reach the readers of the original post.
> If someone incites hatred against ethnic groups on Facebook, you can respond and reach the readers of the original post.
"Just tell people that racism is bad"
Somehow this seems to make the problem worse, not better, because if you're one of the outsiders you just get added to their mental list of conspirators. And if you've commented under your real name, you're at risk of reprisals.
Effective debunking is hard.
Oh, and algorithmic social media makes this worse - your act of commenting on a post tends to raise its profile and cause it to be shown to more of your followers.
>> If someone incites hatred against ethnic groups on Facebook, you can respond and reach the readers of the original post.
> "Just tell people that racism is bad"
What do you think "inciting ethnic hatred" means? Telling people that ethnic hatred is good, so that the only possible reply is that actually ethnic hatred is bad? No, it involves conspiracy theories about an ethnic group, accusations against them: in short, empirical statements that can be refuted.
They can be contradicted, but it's very hard to definitively refute them in a way that will convince somebody who wishes to believe it. Even people without a stake in it will often conclude, "I heard X, I heard Y, sounds to me like it could go either way." If anything, they're taken in by how short and empirical those statements are: it makes them seem more true because after all, if it were false I'd see the evidence myself.
There is no statement of fact so definitive that it can overcome a vast barrage of inflammatory conspiracy theory. If that's done in front of a crowd looking for a simple solution to their problems, they'll pay no attention to refutation of "short, empirical statements".
I'm not so sure it is, when there are so many well-funded, concerted efforts to deliberately induce ignorance in people. Radio and books at least have some bottleneck where you can try to cut off deliberate falsehoods.
Obviously that doesn't always work, but social media makes it effectively impossible. Not only do you have official sources of propaganda, you live in a swamp of anecdotes that are impossible to refute or contextualize. The radio says "Group X is bad", and you yourself may never have had any trouble with X, but a flood of "X did [bad-thing] to me" stories on social media can make you think that you've got personal exposure to it.
Ultimately, I believe the problem rests with the individuals who are willing to be manipulated. The human organism has a massive security hole that has been very well exploited, and I've got no idea how to patch it. No amount of coherent, cogent argument will change the mind of somebody bent on having a target to hate, and people have gotten very good at insinuating those targets.
> I'm not so sure it is, when there are so many well-funded, concerted efforts to deliberately induce ignorance in people. Radio and books at least have some bottleneck where you can try to cut off deliberate falsehoods.
I don't see how you can believe both of these things at the same time. If these efforts to induce ignorance are indeed so well-funded and concerted, how can bottlenecks be an obstacle? Indeed, why couldn't those same well-funded, concerted efforts use the bottlenecks to cut off people disagreeing with them?
I meant to suggest that if you have a few large, expensive propaganda organs, you have some hope of running your own counterpropaganda campaign: disprove it at the source, and maybe you can convince people.
Of course if the money can shut you down entirely, you can't succeed. But it's hard to completely control radio and books. It can be done, but only by making your totalitarianism clear. It's more effective if people think they came to their conclusions on their own, and have been exposed to "all sides".
Today, they can bolster their organized propaganda with a sufficiently effective astroturf campaign (magnified by social media). The two provide confirmation for each other, and appear to be independent. They may even actually be independent; they don't need to formally coordinate if they roughly agree on the ends.
So people believe they're getting "all sides" of the story, and there's no need to shut down disagreement. Instead, people hear the message from two sources that affirm each other, and disregard any disagreement willingly.
> You could also argue that the original rise of Fascism was enabled by mass propaganda becoming possible where it hadn't really been before
Genocides and other war crimes have been a staple of humanity since Ancient Greece. Authoritarianism, of which fascism is a subset, depends on only one thing: masses willing to be led.
...and technologies, including mass propaganda, have extended their reach and scale. They didn’t have trains delivering millions to gas chambers in Athens, helmed by people listening to Hitler’s speeches on the radio.
Saying “x has always existed” is different from reckoning with the scale at which x is present; scale is, in fact, one way we measure progress. Inarguably, we will never be able to stamp out our baser instincts and behavior, but we should strive to reduce their presence and impact.