> If you don't police enough, then users have an expectation that anything goes. When you do decide to moderate behavior, you experience a backlash because you've changed stances.
You experience a backlash from some users, sure. If most users consider it a positive change, then no big deal.
> When you're a moderator, there's often very little that you can do that won't generate backlash unless the user in question is so toxic that the community as a whole agrees they need to go.
This seems to be assuming ban-like tactics. They're a poor option and I don't favour them.
> Your idea of random promotions to mod status w/ limited mod terms is an incredibly bad idea as well, because the moment a bad faith actor gets promoted to mod your entire community will quickly go up in flames.
You're making a lot of assumptions:
1. There's an innate trade-off between the size of the mod set, the probability that it contains a bad actor, and the probability that such an actor can cause appreciable disruption. The larger the mod set, the less likely a bad actor can do anything meaningful, and you can likely make this probability arbitrarily small if that's a realistic threat model (a rough sketch of the arithmetic follows this list). Random appointments work quite well to increase efficiency in various consensus-driven systems [1].
2. Mods shouldn't have absolute power. No single mod should be able to destroy a whole community, any more than a single bad judge or a single bad politician can destroy a city, county or state.
3. A transparent appeals process is always needed, in which other mods and community members review mod decisions. It took us millennia to develop our robust legal systems; technology can eliminate some of the bureaucratic inefficiency of the legal system in this setting, but the legal system still contains robust patterns that should be copied.
4. You're assuming an open signup process which is vulnerable to DoS/brigading tactics. Maybe there's a way to allow open signups too (with reputation systems), but it's not strictly necessary.
5. Various reputation systems can be overlaid on this, and they interact well with a transparent judgment/appeals process, i.e. someone with a long record of violating conditions and losing appeals would be less likely to be given mod power (but never 0%).
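On point 1, here's the back-of-the-envelope arithmetic I'm gesturing at. The numbers are purely hypothetical (a 10,000-user community with 5% bad-faith users), and uniform random selection is just the simplest possible appointment rule, but it shows how quickly the chance of a bad-faith majority on a review panel shrinks as the panel grows, so long as bad actors are a minority of the pool:

```python
import math

def p_bad_majority(pool_size: int, bad_actors: int, panel_size: int) -> float:
    """Probability that a uniformly random panel of `panel_size` reviewers,
    drawn from `pool_size` users of whom `bad_actors` act in bad faith,
    contains a bad-faith majority (hypergeometric tail)."""
    need = panel_size // 2 + 1  # smallest possible majority
    total = math.comb(pool_size, panel_size)
    return sum(
        math.comb(bad_actors, k) * math.comb(pool_size - bad_actors, panel_size - k)
        for k in range(need, min(panel_size, bad_actors) + 1)
    ) / total

# Hypothetical community: 10,000 users, 5% acting in bad faith.
for panel in (3, 5, 9, 15, 25):
    print(f"panel of {panel:2d}: {p_bad_majority(10_000, 500, panel):.2e}")
```

With 5% bad actors, a 3-person panel ends up with a bad-faith majority well under 1% of the time, and by 15 reviewers the probability is effectively negligible; that's what I mean by making it arbitrarily small.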
In general, today's moderation systems are intentionally vulnerable to a number of problems, because sites optimize for growing a user base rather than fostering community; that's how they raise money.
Consider something akin to Stack Overflow, which randomly shows you messages to review or triage. Every now and again you get 5 messages or mod decisions to review, and you vote your approval/disapproval. This narrows the gap between traditional mod status and ordinary user status, where true mods would be relegated to reviewing illegal content that places the whole community in jeopardy.
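To be concrete about what I mean, here's a minimal sketch of such a review queue, with the reputation idea from point 5 folded in. The names, the quorum size, and the reputation weighting are all hypothetical, and this isn't how Stack Overflow actually implements its queues; it's only to show the mechanism is simple:

```python
import random
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class ModDecision:
    decision_id: str
    votes: Counter = field(default_factory=Counter)  # counts of "approve" / "reject"

def pick_reviewers(users: dict[str, float], n: int) -> list[str]:
    """Weighted random selection without replacement (Efraimidis-Spirakis keys):
    higher reputation means more likely to be picked, but never a zero chance."""
    def key(u: str) -> float:
        w = max(users[u], 0.01)  # floor keeps every user eligible
        return random.random() ** (1.0 / w)
    return sorted(users, key=key, reverse=True)[:n]

def review_batch(pending: list[ModDecision], batch_size: int = 5) -> list[ModDecision]:
    """The small batch of mod decisions one reviewer is asked to look at."""
    return random.sample(pending, min(batch_size, len(pending)))

def outcome(decision: ModDecision, quorum: int = 5) -> str:
    """A decision stands once a quorum has voted and approvals outnumber rejections."""
    if sum(decision.votes.values()) < quorum:
        return "pending"
    return "upheld" if decision.votes["approve"] > decision.votes["reject"] else "overturned"
```

The only deliberate design choice is in pick_reviewers: reputation biases selection, but the floor keeps everyone eligible, echoing the "never 0%" point above.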
Of course, there might also be considerations for avoiding the tyranny of the majority, but my point is only that the space of possible moderation strategies is considerably wider than most seem to think.
I mean, what you're arguing for is essentially turning forum administration into miniature governments. And as reality has proven, government is very easily gamed by people seeking power. You seem to make a lot of assumptions which, as someone who has acted as a moderator in the past, I can say do not bear out.
Have you acted as a moderator before? What are some communities which you believe have the idealized form of moderation? Because even Stack Overflow, which you referenced, has issues with high toxicity among users chasing off moderators all the same.
> I mean what you're arguing is to essentially turn forum administration into miniature governments.
Forum administration already is a limited government, typically authoritarian in its current incarnations. Mods are the police, judges and juries. That works fine if you have the resources and the mods are fair, and maybe that's typical for minor infractions, but the conflict of interest is clear.
Authoritarian moderation doesn't scale though, and you disenfranchise a lot of people with every mistake, particularly since a) there's rarely a transparent appeals process, and b) people don't typically like owning their mistakes. Doubly so when "it's my platform, so I can do what I want with it". Maybe that's not something you care about, but given the increasing importance of social media to democratic government, it's a problem that will likely worsen.
> And I would hope that as reality has proven, government is very easily gamed by people seeking power.
Government is a system of rules for governing people's interactions. A moderation system is a system of rules for governing people's interactions on a specific site. You can't speak of these things as if they're that different. Either a system of rules is vulnerable to exploits, or it's not.
> Because even Stack Overflow as you've referenced has issues with high toxicity among users chasing off moderators all the same.
I mentioned Stack Overflow specifically for its unobtrusive, random review process and nothing else. SO doesn't feature any of the other ideas I listed.
Finally, I have no "idealized form of moderation" in mind, I have knowledge of where existing systems fail, and how other systems have already addressed very similar problems. Designing a secure and scalable moderation system is a big task, so if you want to hire me to research and build such a system, then I will be happy to address all of your questions in as much detail as you like.
[1] https://www.sciencedirect.com/science/article/pii/S037843711...