Hacker News

Such as ... food production? "But your food is consumed by literal fascists"... water supply? "Did you know that evil people also drink the water you are providing and hydrate their bodies so they can do more evil things?"

For people who believe that everything is political, no project has less moral ambiguity than any other; the ambiguity is just more or less openly visible.



Upon reading your comment, and the GP's, it occurred to me that we've lost the idea that it is the buyer who is responsible for their purchases.

It used to be that people and organizations making unethical purchases were the ones we considered, and held, responsible. For a long time we had good, positive movements centered on informing the buyer: we added expiration dates, ingredient lists, nutritional information, crashworthiness scores and reliability ratings, country-of-origin labels, even ethical-sourcing labels. Perhaps too much of a good thing caused information overload and resulting numbness? Somehow, between Prohibition, the "war on drugs", and supply-side moral regulation, we've lost the spirit of "well-informed free agents making decisions".

Most of the services (FB and the like) we're discussing here are morally neutral by nature; it takes concerted effort to make them non-neutral[1]. It is the particular use they are put to that is moral or immoral. Let's not shift vast moral power from society at large to a narrow cadre, shall we? The economy is a neat distributed system. It was popular democracy before democracy became popular. Let's not give it up.

--

[1] example of non-neutrality: the current trend of algorithmic manipulation


> Most of the services (FB and the likes) we're discussing here are morally neutral by their nature

I don't think that's the case. Is it moral to exploit human psychology when developing addictive features that pull people into the site over and over? Is it moral to sell user information to advertisers so they can emotionally manipulate you into buying crap you don't need? Is it moral to design interactions that evoke outrage and disagreement in order to increase engagement? Is it moral to track user activity across the web, outside the company's site?

I don't think any of these things are moral. These practices might not be necessary for a site like FB (then again they might), but this is the model they all seem to choose. And that's what actually matters.


I hear your objections, and I should have worded my idea better.

The gist was: a bare messaging+microblogging platform is, by its own nature, morally neutral[1]. Of course, if the operator starts making editorial decisions - like algorithmic timelines, propping up or pushing down content, or manipulating user mood - then the operator clearly is making moral judgements and decisions.

Funny how respecting user privacy does, at least partly, absolve the operator from a lot of risks related to making moral judgements on a mass scale in a hurry.

--

[1] with the only caveat that, if somebody believes facilitating communication to be evil or good, then it would be considered respectively evil or good.


I agree that the mere concept of a bare messaging+microblogging platform is morally neutral, but frankly I just don't see what the point is of making that observation, because we don't have one of those, at least not something that's wildly successful enough to matter. (By that I mean that a platform that has 100 or 1000 or even a million users can do whatever it wants; unethical behavior just doesn't move the needle on a global scale.)

It's the classic argument, "technology is neutral; how it's used determines the ethics". Well, yes, I agree with that, but here we have a company that's using it unethically, and has no desire or need to stop their bad behavior. And that bad behavior has been instrumental to their success. That's what matters.


If you provide catering services to ICE maybe you'd have people getting political about who's eating your food. If you're producing water and only allowing hate groups to consume it, sure maybe you'll have this happen, but otherwise these examples are strawmen.

Everything has some political issue around it, but Facebook has politics baked into it because it's using political issues as a means of making money. They sell advertising to politicians when they know the ads are lies. Their platform is filled with fake accounts pushing genocidal agendas from dictators, and in many cases Facebook is sweeping it under the rug.

Their platform is set up to manipulate people, and it is being used at scale to do so in ways Facebook knows are fucking up the world. Its very existence is political at this point.


> They sell advertising to politicians when they know the ads are lies.

I don't really take issue with that; Germany even has that codified, and we're very far from being free-speech absolutists. Media companies are compelled by law to air political ads from all political parties without checking them, judging them, or adding commentary. Unless an ad is obviously illegal, there's nothing they can do about it, which led to our center-left state media being ordered by the supreme court to air a spot from the NPD, a far-right party (actually far right, with skinheads, boots and all the stops, not just anti-low-skill-immigration conservatives).

> Their platform is filled with fake accounts pushing genocidal agendas from dictators, and in many cases facebook is sweeping it under the rug.

But not really. They exist, but the platform isn't "filled" with them. The vast majority of what's on FB is not political.

I'm sure that FB would be quite okay with not having politics at all. Sure, people are on the platform, but they'd rather have engagement around cat pictures, celebrity news and similar things, because people shouting at others about their ideology aren't buying sneakers. They're not a political advertising company that relies on political ads as their primary funding.

Banning political speech is simply not an option, because some people sometimes want to argue about politics, and you're going to have to fight your users if you don't allow that. You never want to fight your users.


What a weird example to pick...

Food and water production has VERY big ethical issues: palm oil, the mass slaughter of animals, deforestation, Nestle taking water away from locals, CO2 emissions, etc.

So yes, there are problems in the food and water industry, but I don't really get what your point is? Should we just close our eyes, ears and mouths and say "fuck it, not my problem"?


> Palm oil, mass-slaughter of animals, deforestation, Nestle taking away water from locals, CO2 emissions etc.

Not at all what I was aiming at. The problem people have with FB isn't how they produce the product, but who uses it.

The problem with food and water in the equivalent scenario would be who consumes it. If you let everyone consume it, "woah, that's a political choice". But it really isn't. It's the default; deviating from it is a political choice.


> The problem people have with FB isn't how they produce the product, but who uses it.

No, the problem is that the product they produce is _specifically designed_ to be used in this manner because conflict and argument increases "engagement" and for a large portion of the employee base their bonus depends on performing work that leads to this outcome.



