The main point the author is making is that algorithms represent the opinion of the corporation/website/app maker and opinions are free speech. That is, deciding what to prioritize/hide in your feed is but a mere manifestation of the business's opinion. Algorithms == Opinions.
This is a fine argument. The part where I think they get it wrong is the assumption/argument that a person or corporation can't be held accountable for their opinions. They most certainly can!
In Omnicare, Inc. v. Laborers District Council Construction Industry Pension Fund the Supreme Court found that a company cannot be held liable for its opinion as long as that opinion was "honestly believed". Though:
> the Court also held, however, that liability may result if the company omitted material facts about the company's inquiry into, or knowledge concerning, the statement of opinion, and those facts conflict with what a reasonable investor would understand as the basis of the statement when reading it.

(from: https://www.jonesday.com/en/insights/2015/03/supreme-court-c...)
That is, a company can be held liable if it intentionally misled its client (presumably also a customer or user). For that standard to be met, the claimant would have to prove that the company was aware of the facts that proved its opinion wrong and decided to mislead the client anyway.
In the case of a site like Facebook--if Meta was aware that certain information was dangerous/misleading/illegal--it very well could be held liable for what its algorithm recommends. It may seem like a high bar but probably isn't, because Meta is made aware of all sorts of dangerous/misleading information every day but only ever removes/de-prioritizes individual posts and doesn't bother (as far as I'm aware) with applying the same standard to re-posts of the same information. It must be manually reported and reviewed again, every time (though maybe not? Someone with more inside info might know more).
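To make that last point concrete, here's a minimal, purely hypothetical sketch (all names made up; this reflects nothing about Meta's actual systems) of how a platform *could* auto-apply a prior takedown to identical re-posts instead of waiting for a fresh report each time:

```python
import hashlib

# Hypothetical illustration: once a post has been reviewed and removed,
# identical re-posts could be matched by content hash and actioned
# automatically instead of requiring a new report and a new review.

removed_content_hashes: set[str] = set()

def content_fingerprint(text: str) -> str:
    """Normalize case/whitespace and hash, so trivial re-posts collide."""
    normalized = " ".join(text.lower().split())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

def record_takedown(text: str) -> None:
    """Called after a human reviewer removes a post."""
    removed_content_hashes.add(content_fingerprint(text))

def should_auto_remove(text: str) -> bool:
    """True if an identical post was already reviewed and removed."""
    return content_fingerprint(text) in removed_content_hashes

# The second, identical upload never needs a fresh report:
record_takedown("Miracle cure! Drink bleach to beat the flu!")
print(should_auto_remove("Miracle  cure! drink bleach to beat the flu!"))  # True
```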
I'd also like to point out that if a court sets a precedent that algorithms == opinions, it should spell the end of all software patents. Since all software is 100% algorithms (aside from comments, I guess), that would mean all software is simply speech, and speech isn't patentable subject matter (though the SCOTUS should've long since come to that conclusion :anger:)
> That is, a company can be held liable if it intentionally misled its client
But only in a case where it has an obligation to tell the truth. The case you cited was about communication to investors, which is one of the very few times that legal obligation exists.
Furthermore, you would be hard pressed to show that an algorithm is intentionally misleading unless you can show that it has been explicitly designed to show a specific piece of information. And recommendation algorithms don't do that. They are designed to show the user what he wants. And if what he wants happens to be misinformation, that's what he will get.
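To make that concrete, here's a minimal, purely hypothetical sketch (all names made up) of the kind of content-agnostic ranking described above: the score is driven entirely by the user's own history, with no term for any specific piece of information, true or false:

```python
# Hypothetical sketch: a recommender ranks by predicted engagement,
# not by any judgment about specific content. If misinformation
# engages a given user best, it simply ranks first.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    topic: str

def predicted_engagement(user_history: dict[str, float], post: Post) -> float:
    """Score = how much this user engaged with this topic before.
    Note: no term for truthfulness, and no term for any specific post."""
    return user_history.get(post.topic, 0.0)

def rank_feed(user_history: dict[str, float], candidates: list[Post]) -> list[Post]:
    return sorted(candidates,
                  key=lambda p: predicted_engagement(user_history, p),
                  reverse=True)

# A user whose clicks skew toward conspiracy content gets more of it,
# without the algorithm ever being "designed to show" any one post.
history = {"conspiracy": 0.9, "cooking": 0.2}
feed = rank_feed(history, [Post("a", "cooking"), Post("b", "conspiracy")])
print([p.post_id for p in feed])  # ['b', 'a']
```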
Yeah, that's the motte-and-bailey argument about 230 that makes me scrutinize tech companies more closely by the day
motte: "We curate content based on user preferences and are otherwise hands-off. We can't be responsible for every piece of (legal) content that is posted on our platform."
bailey: "Our algorithm is ad-friendly, and we curate or punish content based on how happy or angry it makes our advertisers, the real customers for our service. So if advertisers don't like hearing the word 'suicide', we'll make creators who want to be paid self-censor." (A crude sketch of that kind of filter is below.)
If you want to take a hands-on role in deciding what content is allowed at that granular a level, I don't see why 230 should protect you.
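Here's that sketch: a deliberately crude, entirely hypothetical caricature (made-up word list, no platform's real rules) of the word-level demonetization the bailey describes, including the self-censorship it produces:

```python
# Hypothetical caricature of the "bailey": word-level rules that limit
# monetization based on what advertisers dislike, not on legality.

ADVERTISER_UNFRIENDLY = {"suicide", "war", "pandemic"}  # made-up list

def monetization_decision(transcript: str) -> str:
    words = {w.strip(".,!?\"'").lower() for w in transcript.split()}
    if words & ADVERTISER_UNFRIENDLY:
        return "limited ads"  # creator is punished...
    return "full ads"

# ...so creators who want to be paid route around the filter:
print(monetization_decision("Today we discuss suicide prevention hotlines"))  # limited ads
print(monetization_decision("Today we discuss un-aliving prevention"))        # full ads
```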
>I'd also like to point out that if a court sets a precedent that algorithms == opinions, it should spell the end of all software patents.
I'm sure they'd word it very carefully to prevent that, or limit it only to software defined as "social media".
This is the actual reason for s230 existing; without 230, applying editorial discretion could potentially make you liable (e.g. if a periodical uncritically published a libelous claim in its "letters to the editor"), so the idea was to allow some amount of curation/editorial discretion without also making them liable, lest all online forums become cesspools. Aiding monetization through advertising was definitely one reason for doing this.
We can certainly decide that we drew the line in the wrong place (it would be rather surprising if we got it perfectly right that early on), but the line was not drawn blindly.
I'd say the line was drawn badly. I'm not surprised it was drawn in a way that basically makes companies into the sorts of lazy moderators that are commonly complained about, all while they profit billions from it.
Loopholes would exist, but the spirit of 230 seemed to be that, since no platform could moderate every uploaded piece of content, that content was bound not to represent the platform. Enforcing private rules that do represent the platform's will seems to go against that point.
Remember your history, for one. Most boards at the time were small volunteer operations or side jobs for an existing business. They weren't even revenue-neutral, let alone positive. Getting another moderator depended on a friend with free time on their hands who hung around there anyway. You have been downright spoiled by multimillion-dollar, AI-backed moderation systems combined with large casts of minimum-wage moderators. And you still think it is never good enough.
Your blatant ignorance of history shines further. Lazy moderation was the starting point. The courts fucked things up, as they are wont to do, by making lazy moderation the only way to protect yourself from liability. There was no goddamned way anyone could keep up instantly with all of the posts. Section 230 was basically the only constitutional section of a censorship bill, and it was designed specifically to 'allow moderation' as opposed to 'lazy moderation'. Not having Section 230 means lazy moderation only.
God, people's opinions on Section 230 have been so poisoned by propaganda from absolute morons. The level of knowledge of how moderation works has gone backwards!
You say "spoiled", I say "ruined". Volunteer moderation of the commons is much different from a platform claiming to deny liability for 99% of content but choosing to more or less take the roles of moderation themselves. Especially with the talks of AI
My issue isn't with moderation quality so much as with platforms claiming to be a commons but in reality managing it as if they were feudal lords. My point is that they WANT to try to moderate it all now, removing the point of why 230 shielded them.
And insults are unnecessary. My thoughts are my own, from some 15 years of observing the landscape of social media dynamics change. Feel free to disagree, but not to sneer.