
Personally I think that social media companies should be at least partly liable for content posted by their users. There is a tendency to treat social media as a mere conduit, like the post office, that should not be responsible for content. But the post office doesn't decide who sees what, nor does it profit from advertising inserted into the mail.

I thought about this a few months ago and came up with this <strike>rant</strike> completely reasonable proposal[0] that tries to balance internet freedom with assigning limited responsibility for user content published by web sites.

Summary: under some conditions based on the number of views, whether any money changed hands, and whether the post was widely shared or mostly private, a publisher should be liable for some of the damages caused by a post.

[0] https://sheep.horse/2025/3/section_230_and_internet_freedom%...
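
A minimal sketch of how those conditions might combine, in Python. Every threshold and name here is hypothetical, for illustration only, and not taken from the linked post:

    # Hypothetical sketch of the proposal's liability test.
    # The cutoff values are invented; see the linked post for
    # the actual conditions it argues for.

    def publisher_liable(views: int, monetized: bool, widely_shared: bool) -> bool:
        """True if the publisher shares liability for a user's post."""
        VIRAL_THRESHOLD = 10_000  # hypothetical cutoff for "widely distributed"

        if views < VIRAL_THRESHOLD:
            return False  # small audience: the poster alone is responsible
        if not widely_shared:
            return False  # mostly-private distribution stays exempt
        # Past the threshold, liability attaches when money changed hands
        # (ads shown against the post, paid boosting, etc.).
        return monetized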



I think it’s easy to implement this, but the flip side of making platforms responsible is that they become much more restrictive in what’s allowed to be discussed. They start banning topics preemptively, just to limit their exposure. And if you’re thinking “good”, it will make the public discourse sterile.

Then the same companies will be penalised in other jurisdictions for being overly censorious. There’s no way to simultaneously follow all the rules.

And if you’re thinking “good, I just want to see those companies fined”, that’s fine too. But then that’s just about feeling good, rather than setting good rules for discourse.


> if you’re thinking “good”, it will make the public discourse sterile

Maybe those topics shouldn't be discussed on general-purpose online forums?


Even today YouTubers and TikTokers go out of their way not to use certain words that lead to being demonetised or having their reach limited. They use euphemisms like "unalive" or "grape" instead of suicide and rape. These are terrible things which we'd like to see less of, but we can't discuss how to make things better if we can't discuss them at all.

If we force videos to avoid mentioning anything that could offend anyone anywhere, we're not going to be able to discuss very much at all.


> YouTubers and TikTokers go out of their way not to use certain words that lead to being demonetised or having their reach limited. They use euphemisms like unalive or grape instead of suicide and rape.

I'm with you on finding this personally annoying. But the question is whether a dedicated forum for discussing suicide or rape, one where the incentives of an unqualified influencer paid by views and product endorsements are better accounted for, is a better venue for these matters.

We don't, by analogy, randomly launch into suicide and rape in the middle of a cocktail party. Instead, we naturally seclude ourselves with the people we want to discuss it with, people we tend to have chosen thoughtfully, and usually with some warning that what we want to discuss is weighty. Not doing any of that online strikes me as, if not a problem, a legitimate concern.


Who is going to join a forum dedicated to discussing rape? Absolute weirdos, that's who. But you're not going to enact any kind of broad societal change by talking only to those weirdos. You need to reach a broad audience and convince them this is a problem worth tackling.


> Who is going to join a forum dedicated to discussing rape? Absolute weirdos

The folks who go to these [1] and these [2].

We're well into territory here that I, at least, am not knowledgeable about. I'd be curious about an expert's take on the value of unmoderated YouTube and TikTok content on this issue.

[1] https://www.nationalsexualassaultconference.org/

[2] https://www.survivorsofsexualassaultanonymous.com/


These guys are fine, but they can't drive broader societal change if the words "sexual assault" or "rape" are scrubbed from mainstream discourse. Imagine if HN autodeleted any comment with those words; we couldn't even have this conversation.


We have the postal system and telephone system, which are in theory (and I think most of us believe) content-neutral. You can say whatever you want over these channels, and, as far as we know, the USPS and phone company don't investigate the content and block naughty thoughts, nor are they held liable if we say naughty things or conspire to commit a crime over their channels.

Newspapers, magazines, and TV are at the other end: If they publish naughty stuff, they're going to be held accountable, and therefore they exercise editorial moderation and selection over what they publish.

Social Media and Internet forums are in this weird separate bucket that was simply conjured up by Section 230. They get to have their cake and eat it too. They can both 1. editorialize and moderate their users' content but 2. dodge liability over what they publish. What a great deal!

I think whether you are liable for what your users post -should- come down to whether or not you editorialize and put your thumb on the scale of what gets posted and shown. If you're truly a "dumb pipe" that allows everything, then you should not be liable for what your users send over the dumb pipe. But the minute you exercise any moderation or curation, you are effectively endorsing what you are publishing, and should share liability over it.


The USPS does investigate content.

You can draw a swastika, or an ad offering a machine gun for sale, on your regular mail envelope and it will show up in Informed Delivery, but only in black and white.

If you try to get it displayed more prominently in an advertising campaign, it violates their second set of 'guidelines', which restrict what you can put in the more prominent colored advertising image.

They use this mechanism in a manner different from most social media curation, but it's still a form of curation, favoring the particular kinds of speech they like, using two different sets of guidelines -- one for the de minimis B&W presentation and a second set of 'guidelines' (which even at USPS are a bit vague) about whether you can get the pretty color image in Informed Delivery.


Surely this curation by the USPS doesn't extend to content inside of envelopes, though! I guess my overall point is that Social Media and forums are "opening the envelope" and making moderation decisions based on what they find inside.


The problem is, if everything has to be reviewed, almost nothing will be posted. Why? Because the social media companies don't make enough off of each post to hire that many people.

(Do I trust AI to do a first-pass review? No, I don't.)

You can reduce that to some degree by putting in a threshold on the number of views. But that just moves the problem: you won't have many posts that exceed the threshold. (Though it reduces the problem somewhat, because the posts that do exceed the threshold will be the ones they make the most money from.)

But the worse problem is, who decides what's "damaging"? The politicians do. That means that posts that are damaging to the politicians are going to be among the first things removed. That makes this a very dangerous path.


> But the worse problem is, who decides what's "damaging"? The politicians do.

It is the courts that decide damages.

My argument in the post is that it is really only the widely distributed "viral" posts that cause enough damage for liability to be an issue. Since the social media companies have a lot of say in what goes viral and closely monitor popularity for advertising reasons, they are in a position to fix the problem and are liable if they do not.

They don't even have to remove posts - just stop pushing them.
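
In ranking terms, that's demotion rather than deletion. A toy sketch, with the score and flag names entirely invented for illustration:

    def amplification_score(base_score: float, flagged_risky: bool) -> float:
        """Demote instead of delete: a flagged post stays visible to
        followers but is never pushed into recommendations or trending."""
        # base_score and flagged_risky are hypothetical inputs from
        # whatever ranking and review pipeline the platform already runs.
        return 0.0 if flagged_risky else base_score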


USPS profits from advertising inserted into mail. https://www.usps.com/business/every-door-direct-mail.htm


Sorry, I was imprecise with my comparison. A better analogy would be that the USPS doesn't scan your post to figure out which pieces of mail you are likely to actually look at and then affix stickers to those letters advertising related products.


USPS scans your mail and then includes advertising with those scans when you view the feed in Informed Delivery.


Yes, but the advertising images are directly related (a T-Mobile mailpiece gets a T-Mobile PNG).


I'm definitely not saying it's the same as facebook, but it's an example of content curation based on 3rd party input. Selective advertising reaches a place of prominence, ty6853 doesn't get a special image when I send a letter to Santa.


No, it's buying a service. They aren't curating; they're taking anyone's money and sending anything that fits their guidelines. There's nothing stopping you from displaying your image to a bunch of people.

https://www.usps.com/business/informed-delivery.htm

The image that goes with the mail is even required.


Yes it's all very different when you use different rhetoric, right?

"Buying a service" is how advertisers get placement of certain content curated.

If it fits the guidelines, you get the special placement and image; if it doesn't fit the special guidelines, you can draw it on your mail, it gets scanned, and all they see is the scan. I can draw a swastika on the front of my envelope and it will show up in the feed (but only in black and white), but can I get the swastika on the advertisement image in color? I don't know, because the link you sent just tosses back what I already mentioned, Informed Delivery, rather than their actual policies. The policies themselves are a bit vague, but under them it appears not, and they definitely have stronger 'guidelines' than for the black-and-white scan, for instance regarding weapons.

If your content on Facebook 'fits the guidelines' and the guys 'buying the service' benefit from it, then it gets curated more strongly. If it fits other less strict guidelines, I can still see it. But there's nothing stopping you from paying facebook lots of money and getting something that fits their guidelines displayed more prominently, so that wouldn't be curation!

Your argument is highly specious. "Buying a service" is a total red herring, and "guidelines" is just a hack so you can pivot around the fact that they're the mechanism by which the curation happens.

As for the image, you claim it's required, but my mail doesn't get one; it appears to be 'required' only as part of a particular 'campaign'. I have no trouble believing that some services might require an image, but that doesn't disprove curation.


Sure, but it's the same business model for everyone. Give USPS money, get it delivered to a door.

Social media companies are actively curating what you see or don't see, which is the root of all their problems.


Because they were forced to be unreasonably profitable for a public service.


> There is a tendency to treat social media as a mere conduit, like the post office

I don't think anyone thinks of them like a government service (post office). Rather, when they talk about social media being a conduit/provider, what they have in mind is a variation on a search engine.

Do you think search engines should be liable for content? That's a slippery slope.

A stronger argument is that they are claiming to be like a search engine, but actually being nothing of the sort. So they should have a choice - make all algorithmic content opt in, with tunable, open algorithms, or be made liable for what the algorithm shows to users.


Social Media is more like the old newspaper "Letters To The Editor" than it is like the post office or phone system. On social media, you send your "post" to the S.M. company; the company reads it, decides whether or not to accept it, and then some time later publishes it to the site's readers. The review is done by a computer instead of an Editor and takes no time at all (measured in milliseconds), but that's what's happening under the hood. They are absolutely curating and deciding what to display and what not to.


They should be liable for manipulating the feed. Show everything the person is following and nothing else, and show it in chronological order.
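
A rough sketch of that rule in Python; the Post shape and field names are invented for illustration:

    from datetime import datetime
    from typing import NamedTuple

    class Post(NamedTuple):
        author: str
        created_at: datetime
        body: str

    def build_feed(posts: list[Post], following: set[str]) -> list[Post]:
        """No ranking, no injected content: followed accounts only, newest first."""
        return sorted(
            (p for p in posts if p.author in following),
            key=lambda p: p.created_at,
            reverse=True,
        )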



