I only use Hacker News and Reddit. And I always manage to get banned every few months, so I guess I'm a "bad actor".
Usually I start off OK. I can rack up 1000 points on HN or 20k karma on Reddit quickly because some of the discussions are interesting and non-inflammatory and I have interesting things to say.
But then some topics veer into domains that make me angry (hello politics), or I find a comment inappropriate or unfair, or mod behavior hypocritical. And I share that in a post that gets flagged or downvoted, and I get banned.
It is hard to detect because I think anyone has potential to become a bad actor in the eyes of a mod. There is no such thing as an unbiased mod. They will rub some people the wrong way with their comments or actions.
Some personality types are just not compatible with what mods want to see in their well-tended gardens. I've never stalked or hounded anyone, and I've disagreed with dang's assessment of my posts once or twice (FYI, dang is a paid mod, not a volunteer, which is probably why he's so calm about it, even though he gets riled too: read the New Yorker article about him), but this appears to be a problem techies cannot solve. Why do I say this? Because it's been going on since the '80s with Usenet. Almost 40 years of trolls, and tech hasn't solved it, but it has created some systems that have worked to keep the weeds out of the gardens. But gardens still need weeding.
> But then some topics veer into domains that make me angry (hello politics), or I find a comment inappropriate or unfair, or mod behavior hypocritical. And I share that in a post that gets flagged or downvoted, and I get banned.
Kurt Tucholsky gave great advice on how to write a letter to a government agency that is applicable to all potentially heated discussions:
* write letter.
* put letter in drawer.
* wait three days.
* without looking at the letter in the drawer, write a new letter and send that one.
This is absolutely true, and it's unfortunate that the current format of sites like Reddit and HN discourages this kind of behavior. The comments that get the most votes are the ones that have been present the longest, which incentivizes early and fast, "shoot from the hip" commenting so your comments get into the flow early where people can see and vote on them. In most large subreddits, it's basically not worth your time to comment on anything more than 12 hours old or so: no one but the original poster (and sometimes not even them!) will see or read your comments.
I'm not sure how best to fix that. Some subreddits like /r/scenesfromahat use a timed-release system that only displays vote scores on comments after a fixed period (12 or 24 hours, I forget which). That at least helps reduce the first-comment effect, but it still means that anything after the votes are revealed is basically not going to be seen.
At a certain scale, I think threaded conversations are incredibly difficult to follow without a voting system to sort it out, and once you introduce a voting system you end up with voting system problems as you mentioned.
I honestly think that flat forums are more fit for purpose. Multiple concurrent conversations end up being a bit messy, but there was at least a reasonable chance that someone would reply to any given post, since everybody in the thread was on the same page.
Heck, you can even gamify engagement with a flat forum - one of those that I still frequent allows you to "react" to any given post. It doesn't actually do anything except have a number beside the post tick up, but people still use it.
> I'm not sure how best to fix that. Some subreddits like /r/scenesfromahat use a timed-release system that only displays vote scores on comments after a fixed period (12 or 24 hours, I forget which). That at least helps reduce the first-comment effect, but it still means that anything after the votes are revealed is basically not going to be seen.
Isn't that a standard subreddit setting? I don't think that actually affects the problem you were describing. I think it's only meant to help keep the vote tallies from influencing user behavior (e.g. bemoaning how many up/down votes some comment got, being extra motivated to karmawhore).
In my experience, quick-feedback scores seem to have a negative influence on people's behavior and emotional experience. IIRC, HN used to show comment scores, but stopped some years ago. I personally try to disable such "features" as much as possible.
> The comments that get the most votes are the ones that have been present the longest, which incentivizes early and fast, "shoot from the hip" commenting so your comments get into the flow early where people can see and vote on them.
Reddit's default "best" sorting algorithm is designed to mitigate that effect [1]. It does a good job of not biasing old comments in terms of sort order, but it doesn't help the related problem that older comments tend to accumulate the most replies so newer stuff can still get drowned out in terms of quantity of other content.
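The idea behind a "best"-style sort is usually the lower bound of the Wilson score confidence interval: rank by a pessimistic estimate of the true upvote fraction rather than the raw ratio or net score, so a 5-for-5 comment doesn't outrank a 90-for-100 one. A minimal sketch of the standard formula (function name is mine, not Reddit's actual code):

```python
import math

def wilson_lower_bound(ups, downs, z=1.96):
    """Lower bound of the Wilson score interval (z=1.96 ~ 95% confidence).

    Returns a pessimistic estimate of the comment's true upvote fraction.
    Note there is no time term: sort position doesn't decay with age,
    which is what keeps old comments from being favored outright.
    """
    n = ups + downs
    if n == 0:
        return 0.0
    p = ups / n
    denom = 1 + z * z / n
    center = p + z * z / (2 * n)
    margin = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return (center - margin) / denom
```

With this scoring, a comment at 90 up / 10 down sorts above one at 5 up / 0 down, even though the latter has a perfect ratio, because the small sample earns a wide (pessimistic) interval.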
I guess one way to combat this is to just not worry so much about whether a comment gets lots of upvotes or whether or not it is even read. They’re fake internet points, don’t mean anything, and are not worth anything. If you feel you can contribute to the conversation, do so. Don’t worry about whether you’re commenting early enough or whether you are betting on the right “lucrative” thread. Your comment could be one out of a thousand—who cares? HN karma is not some kind of competition.
Instead of getting warned or banned, a user is placed in a timeout box where all of their posts get a three day lag and remain editable, and when the user logs in, they must re-read their posts before proceeding to the site. Kind of an 'in-your-face' reminder not to be a troll.
It wouldn't stop the insane trolls, but maybe it would give the borderline trolls time to think.
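As a rough sketch of how that timeout box could work (this is a hypothetical mechanism, and all the names here are made up for illustration): posts from a flagged user are held for three days, stay editable during the hold, and must be re-read at login before the user can continue.

```python
import datetime

HOLD = datetime.timedelta(days=3)

class TimeoutBox:
    """Hypothetical 'timeout box': held posts lag three days before
    publication and must be re-read by their author at login."""

    def __init__(self):
        self.held = []  # each entry: {"author", "text", "at"}

    def submit(self, author, text, now):
        """A flagged user's post goes into the box instead of going live."""
        self.held.append({"author": author, "text": text, "at": now})

    def publishable(self, now):
        """Posts whose three-day lag has elapsed."""
        return [p for p in self.held if now - p["at"] >= HOLD]

    def must_reread(self, author):
        """Posts the author must confirm on login before using the site."""
        return [p for p in self.held if p["author"] == author]
```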
I like this idea too. Also temporary bans are good for offering a cool-down period for users who generally are constructive but tend to lose their tempers from time to time.
> (FYI, dang is a paid mod, not a volunteer, which is probably why he's so calm about it, even though he gets riled too: read the New Yorker article about him)
I've long argued that the downvote is terrible for civil discourse. I've had to tread carefully and have been warned many times when I comment on it, including when writing in-depth constructive criticism and not simply whining.
It's strange to me that, especially on a technology forum, discussing the negatives or pitfalls of certain mechanisms gets suppressed and censored, just as with politics, since politics is inherently intertwined with everything, including non-action.
The upvote/downvote thing was way, way worse when vote counts were shown per-post. That went away years ago and I think the current system seems to work pretty well when viewed through a macro lens. (In the sense of "is this a good community overall?" and "would it likely be worse if the up/down voting were entirely removed?")
I'm on another forum that opted for an up/down vote system and showed counts per-post much like news.yc used to. It created so much drama and anxiety that people were openly antagonistic towards each other over what amounted to fake-internet-points.
The solution the admins came up with over there was "keep the counts shown publicly" but also "make public the specific votes that were up/down on any given post" (with an anonymization of all historical downvotes before the policy change, but not afterwards). Within days, the community adapted and it was a huge net benefit, IMO.
What benefits do downvotes provide over just an upvote sorting the best content to the top and ability to report a link or comment for some greater infraction than difference of understanding?
Downvoting for being blatantly inaccurate or blatantly anti-community seems a worthwhile community-sourced signal for the posts of bad actors (whether temporary or permanently bad actors). This should absolutely factor into the sorting and eventual hiding of sufficiently negative posts.
Speaking as someone who thinks of himself as a good actor: I will reflect on posts that attract downvotes and try to figure out if I should have been more constructive on a given post. (I don't "care" about the score per se, but I do care about the community that I'm part of, and if people are telling me that I'm being an asshole on a given post, I should reflect on that, decide if I agree with them, and if I do, change next time. Often, I think my content was fine and someone just disagreed and used a downvote to express that. That's not how I use downvotes, but if they do, so be it...)
I like the Stack Exchange approach, where upvotes count 10x toward karma: no matter how obviously wrong and indecent a thread gets, we still care about the share of good answers, and more fine-grained controls can take care of the rest (sometimes manually). This isn't perfect: 1:10 is an arbitrary proportion, it might give a wrong impression of being successful, and it might take too long to register a downward slope. But, very importantly, this detaches voting on content, which is immediately visible, from karma, which is immediately personal and maybe not that important to other users.
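The decoupling is easy to see in a toy model. The point values below follow Stack Exchange's defaults for answer votes (+10 per upvote, -2 per received downvote); the exact numbers are incidental, and the function names are mine:

```python
def reputation_delta(ups, downs, up_value=10, down_value=2):
    """Karma change for the author: asymmetric, so a handful of upvotes
    outweighs a larger number of downvotes."""
    return ups * up_value - downs * down_value

def sort_score(ups, downs):
    """What readers sort content by: the plain net vote count,
    entirely independent of the author's karma."""
    return ups - downs
```

A post at 3 up / 5 down sorts below zero (net -2) yet still nets its author +20 reputation: voting on content and judging the person are kept apart.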
I had a funny incident where I misunderstood a comment, thinking that "all your answers are negative" was very true, because I often disagree and try to be contrarian. SE has a policy that discourages discussion, so if users find it uncomfortable having their views challenged when they really wanted an easy answer, however simplistic, that may be legitimate.
The situation is different when discussion is the aim of the game. And it's a bit disingenuous to say that SE doesn't have "discussion"; rather, they avoid controversy, heated arguments, and open-ended debate.
Anyhow, regarding your comment that downvoting is terrible for civil discourse: I agree inasmuch as it's often hard to tell what a downvote signals, since it might lump many different opinions into one false agreement. Say five different opinions compete, but only one takes on all of the answers and accrues downvotes for the attacks, whereas the others get basically ignored by competitors for being irrelevant, but still yield feedback, so that the vote is really one of popularity, as opposed to ...
Really says something about diplomacy. And I hate it so much.
If you look at Hacker News, however, a lot of nonsense makes it to the front page: one-word titles, clickbait, old links to Wikipedia articles, submarine advertising, something no one has heard of announcing a new version, something few care about announcing it is shutting down, a point release of Rust, a program that is only relevant because it was written in a certain language, pointless, trivial side projects that only seem relevant to people unaware of the same idea having been done before, etc.
I think a lot of that wouldn't make up such a giant part of the site if there were downvotes on stories.
The downvote/upvote counts should be informative, and the display preferences should be the user's choice. I find the HN practice of dimming downvoted comments downright Orwellian.
You can always click on a faded comment's timestamp to go to its page, where the comment should be in a readable font. I'd love to know what Orwell would say about requiring an extra click, but oh well.
> I'd love to know what Orwell would say about requiring an extra click
He would say probably something along these lines: "The chief danger to freedom of thought and speech at this moment is not the direct interference of … any official body. If publishers and editors exert themselves to keep certain topics out of print, it is not because they are frightened of prosecution but because they are frightened of public opinion."
Your "click to read" today is the "buried on page 12 in 6pt font" of the past. Same train of thought.
I was banned from r/hotsauce because I posted a coupon code for a brand I was interested in trying. They were trying to sell stock to help offset the downturn from COVID-19. The mod said I was an advertising shill and needed to be purged... as if people posting their collections all day didn't count as advertising.
A lot of subreddits end up being a snake telling its tail how it's the best tail. A lot of those can turn into ads as people start making whatever brands to get in on the circlejerk. Some subreddits, I suspect, go further and have companies working with moderators to do very potent marketing without anyone drawing attention to it: /r/mechanicalkeyboards, /r/gadgets, the list goes on.
And my desperate attempts to become unbanned were met with paranoid skepticism and unfounded rebuttals. I had literally only posted one other time, showing off a meager 10-bottle collection. And I was trying to help: the community, by sharing a discount code and starting a discussion about the aforementioned sauces, and also the struggling company. I didn't really see a problem with it. The community liked the post and engaged, but somehow I was a first-offence marketing shill worth banning for life.
Not sure what I'm missing here. You clearly did not read the subreddit rules before posting. However, the fact that you have no recourse after being banned other than debating a mod is a clear issue.
Another "bad actor" here. I think that it's a combination of two phenomena:
* I have accounts not just here, but also at places like Lobsters and Something Awful. In those places, because accounts are rare and can be banned so easily, discourse is constantly trying to stay much more civil than here or Reddit.
* As a former community moderator, I don't respect moderation actions on sites where anonymous signup is allowed. You asked for hoi polloi to wander in off the street and give their opinions; you can't then wonder why discourse is trash. Here, it's even worse; the moderators are paid for their work, which lends a clear bias to every moderation action. Similar happenings on Reddit led directly to user protests and revolts, and it's amazing that the community tolerates paid moderation here.
The idea of the well-tended garden is a potent one. I have had to tolerate obviously toxic but helpful people before and it is always irritating to not ban them, despite knowing that they are good for the garden.
> I don't respect moderation actions on sites where anonymous signup is allowed.
We don't put barriers to signup because we want it to be easy for authors, experts, and people with firsthand knowledge of a situation to step into a thread. Those are some of the best comments HN receives. If you put up barriers to keep out hoi polloi, you end up keeping out the likes of Alan Kay and Peter Norvig too, and plenty of lesser known people who have made first-rate contributions.
Besides that, there are legitimate cases when throwaway accounts are needed in order for a person to post on a topic, often when they have first-hand knowledge of a situation as well. How do you allow that while keeping out trash?
Obviously, if there were a way to allow the above good stuff while keeping out trolls, toxic comments, etc., that'd be grand. But as long as there's a tradeoff, I'd rather have the long tail at both ends—I think the forum would be more mediocre and stale without it.
p.s. I'm puzzled by your comment about paid moderation. It seems to me that unpaid moderation would be more likely to be biased, since people are going to extract compensation for the work in some form or other. If it isn't money, it's probably going to be power or an ideological or personal agenda, or something else that manifests as bias. In any case I'd be curious to hear what sort of bias you think is showing up in mod actions on HN.
> I have had to tolerate obviously toxic but helpful people before
I understand where you are coming from here. I struggle with this. I think there is a legit theory for it, usually given in the context of how to reconcile shitty behavior of geniuses (Picasso comes to mind: legendary artist, shitty human.)
Even if toxic people have something good to say once in a while, do the ends justify the means if they stomp all over the roses in the process?
> You asked for hoi polloi to wander in off the street
The garden analogy is potent because where I live there is a huge rose garden that anyone can wander in off the street and visit. Some people come in and do stamp on the roses. And it sucks for everyone else. Which is why I can understand the desire to keep those people out.
However, shouldn't the gardeners KNOW that there are and always will be shitty humans?
I'm truly ambivalent on this one: I want to participate, but I lack impulse control, so I'm excluded. That's not fair. And if I was tending a garden, I'd want to keep the "me"s out.
> I want to participate, but I lack impulse control, so I'm excluded. That's not fair.
Yes, it is, because the problem is not the garden, it's you. You want to participate, but you don't have a basic skill (impulse control) that is required for participation. It's like saying you want to be a concert pianist, but you don't know how to play the piano, so you're excluded and that's not fair.
That is why I said I'm ambivalent to the previous comment's statement about benefits from toxic personalities.
I think your argument mixes up things you can control (skill) with things you cannot control (impulsivity); if the latter could be controlled, it wouldn't be impulsive.
And I admit that is a big gray area. There's a continuum of toxicity online, and there are going to be some moderation rules that are subjective.
Unlike a pianist, I see the argument as more akin to web developers choosing not to implement alternate or semantic constructs which in turn excludes blind people. A visitor can't get better at not being blind. Of course, the analogy breaks down because blind people aren't adding noncritical discourse (aka what one mod may consider "flamebait"), but now we are back to subjectivity and affordance as to what is noncritical. We clearly know how to make the web accessible to blind people, but we don't have a universally clear way to make discourse available to people who sometimes suck at it.
However, I can create as many accounts as I want, so I got that going for me.
> I think your argument mixes up things you can control (skill) with things you cannot control (impulsivity), if the latter could be controlled it wouldn't be impulsive.
First, we're not talking about a binary distinction; things aren't either "can control" or "can't control". It's a continuum.
Second, if it's really true that you can't control your impulsive behavior, that still doesn't change the fact that that behavior will make it virtually impossible for other people to deal with you in certain contexts. It's still up to you to recognize the impact that your behavior has on others, and to make choices about what you can realistically do or not do--or about how much work you are willing to do or how much risk you are willing to take to be able to participate in certain activities (for example, if it turned out there was a drug that enabled you to control your impulsive behavior, would you take it in order to enable you to do something you wanted to do?).
> I see the argument as more akin to web developers choosing not to implement alternate or semantic constructs which in turn excludes blind people.
Ok, so what "alternate or semantic constructs" could the programmers of HN, for example, put into their code so it won't exclude people who can't control their impulsive behavior?
> we don't have a universally clear way to make discourse available to people who sometimes suck at it.
It's not that we don't have a "universally clear way" to do this. We don't have a way at all. "Sucking at discourse" is simply not something we know how to accommodate for. The only way we know of to deal with it is for the person who sucks at discourse to learn how to not suck at it.
Perhaps at some point we'll have an AI or something similar that can mediate such discussions so all parties can participate. But we don't have anything now.
from what, playing the piano? Do you maybe see a connection here to why somebody might not know "how to play the piano"?
Or in other words: A garden without "you" is not really a garden, except in theory, if the proverbial tree makes a sound when nobody can hear it fall. That's a slippery slope argument.
Many people may lack impulse control, but preemptive judgement can't weed them all out. That's one reason why it's "not fair". It's fair to those who have "impulse control", maybe, but it is perhaps unfair that they get to decide what that is, when a moderator might act out of impulse, or experience, all the same. It is, however, futile to just assume that life is not fair, because then "you" have already lost.
If entry is gated to fund the gatekeeper, it is not an open garden anymore, open to the public. At least not if the submission requirements are arbitrary to an uncertain degree. Maybe it's the wrong approach to hold that internet discussion is not important and that impulse control can therefore be let down too easily. But then again, the impulse to post or visit at all might be the problem to begin with, as in this post.
Really, who's aspiring to become a concert pianist in this day and age? That's a weak analogy, unless you meant to imply that the reddit moderator cabal were playing the readers like an instrument.
I didn't; the person I responded to did, by using the word "I". They were specifically talking about themselves.
> from what, playing the piano?
From being a concert pianist. Read what I actually wrote.
> It's fair to those that have "impulse control", maybe, but it is perhaps unfair that they get to decide what that is, when a moderator might act out of impulse, or experience, all the same.
My statement that impulse control is a basic requirement for participation applies just as much to moderators as to any other participants.
Who gets to decide what the forum rules and norms are is whoever owns the forum. That's as fair as it gets.
There are some forums where lack of impulse control isn't much of a problem, because nobody else on that forum has it either. So strictly speaking, I should have restricted my comments to forums where that is not the case. I don't think that makes much difference in practice for this discussion, since as far as I can tell the forums where lack of impulse control is the norm don't have moderation problems since they don't have moderation at all.
> who's aspiring to become a concert pianist in this day and age?
Googling "how to become a concert pianist" gets plenty of hits, so it looks like plenty of people are trying to help aspiring concert pianists. Perhaps they're all speaking to an audience of zero, but I doubt it.
> unless you meant to imply that the reddit moderator cabal were playing the readers like an instrument
Let's finish the analogy. Rose gardens and other community parks are usually community-funded; my local gardens are funded with taxes. There are not only paid moderators (police), but paid curators (gardeners and arborists) who deliberately build up and cultivate an appearance for the garden. Some of the more expensive gardens, like the local zoo, also have an entrance fee, because taxes alone would not fund the garden at its given size and occupancy.
There are communities like this; Something Awful is the first which comes to mind. These communities deliberately acknowledge that money is required to fund community spaces, and use the money to improve the space.
There are also extensions to the analogy. A local park has a bulletin board. Postings to this board are generally made by community consent; anything that any community member feels strongly enough about can be removed immediately. This is also how postings on telephone poles work. Sometimes a community will lock up their bulletin board after a wave of abusive listings. This is analogous to primitive message board moderation, as seen here on HN.
Are we here to advertise to each other, like on a bulletin board? Are we here to produce a great knowledge base, like in a garden? What should the shape of conversation be?