
It's important to note that this article has its own biases: the author discloses at the end that they're on the board of Bluesky. But, largely, it raises very good points.

> This is the part that 230 haters refuse to understand. Platforms rely on the immunity from liability that Section 230 gives them to make editorial decisions on all sorts of content. Yet, somehow, they think that taking away Section 230 would magically lead to more removals of “bad” content. That’s the opposite of true. Remove 230 and things like removing hateful information, putting in place spam filters, and stopping medical and election misinfo becomes a bigger challenge, since it will cost much more to defend (even if you’d win on First Amendment grounds years later).

The general point is that Section 230 gives companies a shield from liability for manual curation, automated curation, and algorithmic recommendations alike, and that removing 230 would result in a wild west of we're-afraid-to-moderate-so-you'll-get-unmoderated-content that would be far worse than the status quo. But it's unfair to say that the NYT article is completely wrong - because, in such a case, recommendation algorithms would be made more carefully as well.

Realistically, the entire web ecosystem and thus a significant part of our economy rely on Section 230's protections for companies. IMO, regulation that provides users of large social networks with greater transparency and control into what their algorithms are showing to them personally would be a far more fruitful discussion.

Should every human have the right to understand that an algorithm has classified them in a certain way? Should we, as a society, have the right to understand to what extent any social media company is classifying certain people as receptive to content regarding, say, specific phobias, and showing them content that is classified to amplify those phobias? Should we have the right to understand, at least, exactly how a dial turned in a tech office impacts how children learn to see the world?

We can and should iterate on ways to answer these complex questions without throwing the ability for companies to moderate content out the window.



> The general point is that Section 230 gives companies a shield from liability for manual curation, automated curation, and algorithmic recommendations alike, and that removing 230 would result in a wild west of we're-afraid-to-moderate-so-you'll-get-unmoderated-content that would be far worse than the status quo. But it's unfair to say that the NYT article is completely wrong - because, in such a case, recommendation algorithms would be made more carefully as well.

It's not "recommendation" that's the issue with Section 230; moderation is. Prior to Section 230, even moderation as limited as removing offensive content resulted in liability for user-generated content.

Cubby, Inc. v. CompuServe established that a non-moderating platform was not liable for user-generated content. https://en.wikipedia.org/wiki/Cubby,_Inc._v._CompuServe_Inc.

Stratton Oakmont, Inc. v. Prodigy Services established that if an internet company did moderate content (even if it was just removing offensive content), it became liable for user-generated content. https://en.wikipedia.org/wiki/Stratton_Oakmont,_Inc._v._Prod....

If we just removed Section 230, we'd revert to the status quo before it was written into law. Companies wouldn't be more careful about moderation and recommendation; they straight up just wouldn't moderate at all, because even the smallest bit of moderation creates liability for any and all user-generated content.

People advocating for removal of section 230 are imagining some alternate world where "bad" curation and moderation results in liability, but "good" moderation and curation does not. Except nobody can articulate a clear distinction of what these are. People often just say "no algorithmic curation". But even just sorting by time is algorithmic curation. Just sorting by upvotes minus downvotes is an algorithm too.
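To make that concrete, here's a minimal sketch (Python, with a made-up Post type) of the two "non-algorithmic" feeds people usually propose. Both are ranking algorithms; a law would need some principled way to distinguish these from the "bad" ones:

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class Post:
        title: str
        created: datetime
        upvotes: int = 0
        downvotes: int = 0

    # "Just show newest first" -- a deliberate ranking choice, i.e. an algorithm.
    def sort_by_time(posts):
        return sorted(posts, key=lambda p: p.created, reverse=True)

    # "Just sort by votes" -- also an algorithm, and one that curates what you see.
    def sort_by_score(posts):
        return sorted(posts, key=lambda p: p.upvotes - p.downvotes, reverse=True)

Any rule that picks an order is curation. The question is where the legal line between these and a personalized recommender would sit, and nobody has articulated one.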


I guess most people who think Section 230 is excessive are not advocating for its complete removal, but rather for adding some requirements that platforms have to adhere to in order to claim such immunity.


Sure, but I find that few people are able to articulate in any detail what those requirements are and explain how they would lead to a better ecosystem.

A lot of people talk about a requirement to explain why someone was given a particular recommendation. Okay, so Google, Facebook, et al. provide a mechanism that supplies you with a CSV of tens of thousands of entries describing the weights used to produce a particular recommendation. What problem does that solve?
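To illustrate what a technically compliant "explanation" might look like (the file name, feature names, and 50,000-dimension scale are all hypothetical, not any real Google or Facebook API):

    import csv
    import random

    random.seed(0)
    # Hypothetical per-recommendation weight dump: tens of thousands of
    # opaque embedding dimensions, each with a tiny learned weight.
    with open("why_you_saw_this_post.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["feature", "weight"])
        for i in range(50_000):
            writer.writerow([f"embedding_dim_{i}", f"{random.gauss(0, 0.01):.6f}"])

Fully "transparent", and useless to the person it's supposed to inform.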

Conservatives often want to amend Section 230 to limit companies' ability to down-weight and remove conservative content. This directly runs afoul of the First Amendment: the government can't use the threat of liability to coerce companies into hosting speech they don't want to host. Not to mention, the companies could just attribute the removal or down-ranking to other factors, like inflammatory speech or negative user engagement.


IIUC, most large ad providers allow you to see and tailor what they use in their algorithms (e.g. [1]).

I think the big problem with "Should every human have the right to understand that an algorithm has classified them in a certain way" is just that they flat out can't. You cannot design a trash can that every human can open but no bear can. There is a level of complexity that the average person won't be able to follow.

[1]: https://myadcenter.google.com/controls


Yes, but it _appears_ that very different algorithms/classifications are used for deciding which ads to show vs. which content to recommend. Opening up this insight/control for content recommendations (instead of just ads) would be a good start.



Yeah, there are some controls, but they are much less granular than what ad-tech exposes. I've just never really been sure why Google/Meta/etc. choose to expose this information differently for ads vs. content.



