> The general point is that Section 230 shields companies from liability for manual curation, automated curation, and algorithmic recommendations alike, and that removing 230 would result in a wild west of we're-afraid-to-moderate-so-you'll-get-unmoderated-content that would be far worse than the status quo. But it's unfair to say the NYT article is completely wrong: in such a world, recommendation algorithms would also be designed more carefully.
It's not "recommendation" that's the issue. Even removing offensive content resulted in liability for user generated content prior to Section 230. Recommendation isn't the issue with section 230. Moderation is.
Stratton Oakmont vs. Prodigy Services established that if an internet company did moderate content (even if it was just removing offensive content) it became liable for user-generated content. https://en.wikipedia.org/wiki/Stratton_Oakmont,_Inc._v._Prod....
If we just removed Section 230, we'd revert to the status quo before Section 230 was written into law. Companies wouldn't be more careful about moderation and recommendation. They straight up wouldn't do any moderation at all, because even the smallest bit of moderation creates liability for any and all user-generated content.
People advocating for removal of Section 230 are imagining some alternate world where "bad" curation and moderation results in liability but "good" moderation and curation does not. Except nobody can articulate a clear distinction between the two. People often just say "no algorithmic curation". But even sorting by time is algorithmic curation, and sorting by upvotes minus downvotes is an algorithm too, as the sketch below illustrates.
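Here's a minimal sketch (hypothetical posts, invented numbers) of the point: "just show newest first" and "sort by score" are both ranking algorithms in exactly the same sense as any recommender.

```python
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    timestamp: float  # seconds since epoch
    upvotes: int
    downvotes: int

posts = [
    Post("A", timestamp=1000.0, upvotes=5, downvotes=1),
    Post("B", timestamp=2000.0, upvotes=2, downvotes=0),
    Post("C", timestamp=1500.0, upvotes=9, downvotes=4),
]

# "No algorithmic curation": newest first. Still an algorithm --
# a sort keyed on timestamp.
by_time = sorted(posts, key=lambda p: p.timestamp, reverse=True)

# "Simple" ranking: upvotes minus downvotes. Also an algorithm, and one
# that already embeds editorial choices (what counts as a vote, how ties break).
by_score = sorted(posts, key=lambda p: p.upvotes - p.downvotes, reverse=True)

print([p.title for p in by_time])   # ['B', 'C', 'A']
print([p.title for p in by_score])  # ['C', 'A', 'B']
```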
I guess most people who think Section 230 is excessive are not advocating for its complete removal, but rather for adding requirements that platforms have to adhere to in order to claim such immunity.
Sure, but I find that few people are able to articulate in any detail what those requirements are, or to explain how they would lead to a better ecosystem.
A lot of people talk about a requirement to explain why someone was given a particular recommendation. Okay, so Google, Facebook, et al. provide a mechanism that supplies you with a CSV of tens of thousands of entries describing the weights used to give you a particular recommendation. What problem does that solve?
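For illustration, a hypothetical dump of that kind might be produced like this (feature names and weights are entirely invented; no real platform's internals are implied):

```python
import csv
import random

# Hypothetical sketch: dump the per-feature weights behind one
# recommendation into a CSV. Every name and number here is made up.
random.seed(0)
features = [f"embedding_dim_{i}" for i in range(10_000)]
features += ["watch_time_decay", "coengagement_cluster_417", "freshness_boost"]

with open("why_you_saw_this.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["feature", "weight"])
    for name in features:
        writer.writerow([name, round(random.uniform(-1.0, 1.0), 6)])

# 10,003 rows of opaque numbers: technically a complete "explanation"
# of the recommendation, practically useless to the person receiving it.
```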
Conservatives often want to amend Section 230 to limit companies' ability to down-weight and remove conservative content. That runs directly afoul of the First Amendment: the government can't use the threat of liability to coerce companies into hosting speech they don't want to host. Not to mention, companies could just attribute the removal or down-ranking to other factors, like inflammatory speech or negative user engagement.
It's not "recommendation" that's the issue. Even removing offensive content resulted in liability for user generated content prior to Section 230. Recommendation isn't the issue with section 230. Moderation is.
Chubby Inc. vs. CompuServe established that a non-moderated platform evaded liability for user generated content. https://en.wikipedia.org/wiki/Cubby,_Inc._v._CompuServe_Inc.
Stratton Oakmont vs. Prodigy Services established that if an internet company did moderate content (even if it was just removing offensive content) it became liable for user-generated content. https://en.wikipedia.org/wiki/Stratton_Oakmont,_Inc._v._Prod....
If we just removed Section 230, we'd revert to the status quo before Section 230 was written into law. Companies wouldn't be more careful about moderation and recommendation. They straight up just wouldn't do any moderation. Because even the smallest bit of moderation results in liability for any and all user generated content.
People advocating for removal of section 230 are imagining some alternate world where "bad" curation and moderation results in liability, but "good" moderation and curation does not. Except nobody can articulate a clear distinction of what these are. People often just say "no algorithmic curation". But even just sorting by time is algorithmic curation. Just sorting by upvotes minus downvotes is an algorithm too.