> Without Section 230, any moderation - even if it was limited to just removing abjectly offensive content - resulted in the internet service taking liability for all user generated content
Cubby ruled in the opposite direction -- that a service should not be held liable for user-posted content.
Stratton did rule that Prodigy was liable for content posted, but it was specifically due to their heavy-handed approach to content moderation. The court said, for example:
> It is argued that ... the power to censor, triggered the duty to censor. That is a leap which the Court is not prepared to join in.
And
> For the record, the fear that this Court's finding of publishers status for PRODIGY will compel all computer networks to abdicate control of their bulletin boards, incorrectly presumes that the market will refuse to compensate a network for its increased control and the resulting increased exposure
It is a tough needle to thread, but it leaves the door open to refining the factors, i.e. the specific conditions under which a service provider is liable for posted content -- it is neither a shield of immunity nor an absolute assumed liability.
Prodigy specifically advertised its boards as reliable sources as a way of driving adoption, and put policies and procedures in place to try to achieve that; in doing so, it put itself in the position of effectively being the publisher of the underlying content.
I personally don't agree with the decision based on the facts of the case, but to me it is not black and white, and I would have preferred to stick with the judicial regime until it became clearer what the parameters of moderation can be without incurring liability.
> Cubby ruled in the opposite direction -- that a service should not be held liable for user-posted content.
Because the service in Cubby (CompuServe) did zero moderation.
> Stratton did rule that Prodigy was liable for content posted, but it was specifically due to their heavy-handed approach to content moderation. The court said, for example:
What gives you the impression that this was because the moderation was "heavy-handed"? The description in the Wikipedia page reads:
> The Stratton court held that Prodigy was liable as the publisher of the content created by its users because it exercised editorial control over the messages on its bulletin boards in three ways: 1) by posting content guidelines for users; 2) by enforcing those guidelines with "Board Leaders"; and 3) by utilizing screening software designed to remove offensive language.
Posting civility rules and filtering profanity seems like pretty straightforward content moderation. This isn't "heavy-handed" moderation; this is extremely basic moderation.
These cases directly motivated Section 230:
> Some federal legislators noticed the contradiction in the two rulings,[4] while Internet enthusiasts found that expecting website operators to accept liability for the speech of third-party users was both untenable and likely to stifle the development of the Internet.[5] Representatives Christopher Cox (R-CA) and Ron Wyden (D-OR) co-authored legislation that would resolve the contradictory precedents on liability while enabling websites and platforms to host speech and exercise editorial control to moderate objectionable content without incurring unlimited liability by doing so.
What are you reading in that decision that suggests Prodigy was doing moderation beyond what we'd expect a typical internet forum to do?
This is the relevant section of your link:
> Plaintiffs further rely upon the following additional evidence in support of their claim that PRODIGY is a publisher:
> (A) promulgation of "content guidelines" (the "Guidelines" found at Plaintiffs' Exhibit F) in which, inter alia, users are requested to refrain from posting notes that are "insulting" and are advised that "notes that harass other members or are deemed to be in bad taste or grossly repugnant to community standards, or are deemed harmful to maintaining a harmonious online community, will be removed when brought to PRODIGY's attention"; the Guidelines all expressly state that although "Prodigy is committed to open debate and discussion on the bulletin boards,
> (B) use of a software screening program which automatically prescreens all bulletin board postings for offensive language;
> (C) the use of Board Leaders such as Epstien whose duties include enforcement of the Guidelines, according to Jennifer Ambrozek, the Manager of Prodigy's bulletin boards and the person at PRODIGY responsible for supervising the Board Leaders (see Plaintiffs' Exhibit R, Ambrozek deposition transcript, at p. 191); and
> (D) testimony by Epstien as to a tool for Board Leaders known as an "emergency delete function" pursuant to which a Board Leader could remove a note and send a previously prepared message of explanation "ranging from solicitation, bad advice, insulting, wrong topic, off topic, bad taste, etcetera." (Epstien deposition Transcript, p. 52).
So they published content guidelines prohibiting harassment, they filtered out offensive language (presumably slurs, maybe profanity), and the moderation team deleted offending content. This is... bog standard internet forum moderation.
"additional evidence" your quote says. Just before that, we have:
> In one article PRODIGY stated:
> "We make no apology for pursuing a value system that reflects the culture of the millions of American families we aspire to serve. Certainly no responsible newspaper does less when it chooses the type of advertising it publishes, the letters it prints, the degree of nudity and unsupported gossip its editors tolerate."
The judge goes on to note that while Prodigy had since ceased its initial policy of direct editorial review of all content, they did not make an official announcement of this, so were still benefitting from the marketing perception that the content was vetted by Prodigy.
I don't know if I would have ruled the same way in that situation, and honestly, it was the NY Supreme Court, which despite the name is a trial court rather than an appellate court, and the case was settled before any appeal could be heard, so it's not even clear that the ruling would have stood.
I think a situation where each individual case was decided on its merits, until a reasonable de facto standard could evolve, would have been more responsible and flexible than a blanket immunity standard, which has led to all sorts of unfortunate dynamics that significantly damage the ability to have an online public square for discourse.