
How about bringing the same rules to the wider FB? I just want to look at baby pictures and connect with friends. I don't want to be part of a machinery that spreads misinformation and conspiracy theories.


This is a copy/pasta of a previous post of mine, but I feel it's very fitting here.

Why would they give up control of the world by doing something silly like that? Think about how much political influence Twitter has based solely on which tweets they show the President and the corporate press. Consider how many untraced in-kind donations these companies can make by tweaking which news stories you see. The crazy thing about it is that these things can be tweaked by humans, but it's largely controlled by AI now, and no one person completely understands what's happening in any of these systems. We're in the early stages of AI controlling the global political future, and it will tend to create whatever kind of future generates the most clicks. It's kind of like the game Universal Paperclips, except with clicks/rage/ads.


I believe the word is "paste" and you can link instead.


"pasta" is a meme replacement of "paste" is there copy/paste context. It's relatively popular, albeit uncommon on HN.



Someone's addressed the "paste" part so I'll address the rest: the vast majority of people are not going to want to click on a link to a short post in the middle of an HN discussion. The content belongs here, not behind a link, and mentioning that it's a copy of what the user has said elsewhere is a courtesy to us.


> Think about how much political influence Twitter has based solely on which tweets they show the President and corporate press. Consider how much untraced in-kind donations these companies can make by tweaking which news stories you see.

I hope you take this as kindly as I intend it, but what you're proposing is a conspiracy theory. This is a rather nice attribute for a theory to have, because it gives you a handy heuristic for deciding whether the theory is true!

The likelihood of a conspiracy being true decreases as the number of people with knowledge of the theory and an incentive to report on it increases.

To take an extreme example, if the moon landing was faked, tens of thousands of people have somehow held on to that secret. Tens of thousands of people who could gain overnight notoriety by telling their story, and hundreds would have the proof required to gain even more popularity. The fact that nobody has ever broken ranks is a strong sign that the moon landing was not faked.
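To put a rough number on that heuristic (a minimal sketch; the per-person leak probability is an assumption made up purely for illustration, not anything measured): if each insider independently has even a small chance of ever going public, the probability that everyone stays silent collapses quickly as the group grows.

```typescript
// Probability that a secret held by n people never leaks, assuming each
// person independently has probability p of eventually going public.
// p = 0.001 is an arbitrary illustrative figure, not a measured one.
function probabilityNobodyLeaks(n: number, p: number): number {
  return Math.pow(1 - p, n);
}

for (const n of [10, 200, 10_000]) {
  console.log(`n=${n}: P(no leak) ~ ${probabilityNobodyLeaks(n, 0.001).toPrecision(3)}`);
}
// Roughly 0.99 for n=10, 0.82 for n=200, and about 0.00005 for n=10,000:
// even small per-person odds make total silence implausible at scale.
```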

"Twitter and Facebook are secretly tweaking which news stories Trump and the rest of us are seeing" isn't a conspiracy on nearly the same scale as a faked moon landing. It requires some pretty incredible things to be true though.

- Maybe every employee knows, and none of them have decided to say anything, despite the large incentives to reveal the secret and win their moment in the limelight.

- Maybe not every employee knows, just enough employees know to implement it and hide that implementation from the others. Maybe every employee on the Algorithmic News Feed team knows. I don't know how Twitter and Facebook are structured, and the team probably isn't called Algorithmic News Feed, but as one of the more important systems both Facebook and Twitter must dedicate at least a hundred engineers to it. So, 200 people were quietly chosen for their ideological purity and ability to keep a secret from their peers. These 200 people write code in secret. Somehow they commit lies to the monorepo and apply private patches to the code before deploys. The SREs must also be in on it, because those private patches will still show up in traces and their bugs will show up as errors. All of this happens inside Facebook, a company notorious for employees who speak up and expect transparency. It also happens inside Twitter, a company with such lax controls that until just recently thousands of people could use the internal admin tool to take over any account.

I don't know, I guess it's possible? Maybe you have a better idea for how it could be happening, but it just doesn't seem very likely at all.


> I don't know, I guess it's possible? Maybe you have a better idea for how it could be happening, but it just doesn't seem very likely at all.

I’ve seen this kind of thought pattern a few times and frankly the way you are thinking doesn’t match reality.

I work on a 1000+ person enterprise software project.

Less than 5% of those 1000+ understand our customers' requirements and use cases in any real depth. This is despite trying for years to incentivise developers to have a broader understanding of our business.

Within that core 5% most decisions are driven by the 3-5 people who care about the particular area.

So for a 1000+ person org, you would only need to corrupt 3-4 people to drive a hidden agenda.

This is for a project not trying to be secretive in any way.

To relate it back to Twitter you would probably need the right 3-4 people to push hard for content moderators to be hired in San Francisco instead of Bangalore in order to push hard left views.


You don't even need to discuss your "evil plans" with anyone. Hell, it doesn't even need to be a plan. You just only hire people who already agree with you. You don't even have to do this consciously, it's the default human behavior.


> You just only hire people who already agree with you. You don't even have to do this consciously, it's the default human behavior.

Exactly - our product uses angular because two of our core engineers loved angular, helped people who were having trouble with angular, and hired people who also liked angular.

Not because angular was the best tech choice. We didn’t even do a proper evaluation.

And this is for a hundred-million-plus/year project...


I'm not imagining an intentional conspiracy by anyone. Everyone need only respond to incentives. The AI responds to the incentives it was programmed to respond to, such as engagement. The workers are responding to incentives such as profit. They tell you they censor some people because they fear it will radicalize you, will harm profits, or other similarly non-nefarious incentives. No conspiracy or underhanded behavior is required.


You're making the assumption that the people running these companies are trying to take over the world with AI or whatever. They're not. They're just trying to make the most money and do what's best for the company. The AI, the political influence, etc. are all just side effects of that. There's no conspiracy because nobody is conspiring. Everyone is just doing their jobs the best they can.


And if someone even mentions the idea of creating exactly this service -- baby pictures and friends' contact information can be exported from Facebook and imported into such a service -- a torrent of HN commenters would decry "fail" before they even tried it. Tech bloggers and "journalists" would also inject doubt into the minds of their readers. To get to where you want to go, you would need to ignore the critics. To accomplish what you describe, there is no necessity that every user logs in to the same network. Each network would only need to contain family and friends. Connecting one network to another would be optional. You are not asking to be in a graph with the entire world, to be connected to total strangers. Yet that is precisely what FB is constantly trying to achieve. Every person's information collected in a single database controlled by a single entity. One "social" network for everyone. Hence you are connected to people and companies you do not know, who are not your friends or family. Your behaviour can be studied. The ads, marketing and misinformation can flow freely.

(FB == Fish Bowl)


I wonder what happened... in the early days of FB you were still somewhat close to your friend circle and therefore most of the content was personal updates. Then at some point Facebook started suggesting links, posts that you had not explicitly followed, news articles that may or may not have been verified. Now every time I check the comments under such posts, it's people arguing with each other, and then people share and spread whatever they see from these suggested posts, looking to confirm their existing beliefs even more strongly. And then people for some reason started believing they "own" the right to write whatever they desire on their wall, and that it's a platform for spreading their political opinions.

Less than 10 years ago, it would've been considered very rude to push your religious or political opinions onto others, and in a professional setting it would've been considered highly unprofessional. But nowadays that line doesn't seem to exist anymore.


Worse: Facebook decides that the "most relevant" comments on such posts are the most inflammatory comments, because their algorithms select for engagement. They want to show you something that will anger you, so that you write an inflammatory reply.


But do they decide? Can they actually read the comment and decide it's inflammatory? Or do they just expose comments that people like to engage with (filtering out only the most obvious insults)?


Not giving high exposure from the get-go to the comments they predict will meet their metrics would be a big loss to give up, given the small scale of many threads.


I'd give that about 10 minutes before someone started claiming that their baby was immune to covid, and then you're right back where you started.


But isn't that only happening because of FB allowing disinformation to spread so easily?


Curious how you think it's possible for FB to prevent the spread of disinformation? Everyone likes to pretend like Facebook has the ability to just stop disinformation, when in reality, even defining "disinformation" is basically impossible. Sure, you can bring up examples of blatant lies, but most of the effective disinformation is a lot more subtle and depends on what side of the issues you are on.


Facebook's algorithms exploit the brain's attraction to divisiveness. They've known this and have chosen not to take action, except in small amounts so Mark can tell Congress they are improving things. Source: https://www.wsj.com/articles/facebook-knows-it-encourages-di...

So besides straight-up changing the algorithms to promote non-divisive content, these are a couple of things I think could help:

- Limit the spread of information in general in favor of content created by the people you follow

- Un-personalize advertising


Facebook doesn't profit from divisiveness, they profit from engagement. The fact that divisive posts encourage more engagement tells me more about people in general, rather than Facebook's business model.

> Limit the spread of information in general in favor of content created by the people you follow

I don't think that's what people want from their social networks nowadays. FB, Twitter, YouTube, TikTok, Snapchat, etc all do not work this way anymore. Suggesting that Facebook revert their app to what it was 10 years ago is not a serious suggestion because there are many other apps that will fill that void. If it's not FB, another app will take its place and give people the outrage they're looking for.

> Un-personalize advertising

Advertising plays a very small part in this. Most of what you would call "disinformation" is spread through reposts, which are not affected by advertising.

Sure, there might be some hostile actors out there spending money on pushing propaganda to the masses. But from my experience, people actively seek this nonsense out; the algorithms just make it easier for them to find it.

In my eyes, the real problem is that most people aren't equipped with the right tools to identify bullshit. Simple things like an inability to gauge scale, e.g. "9,000,000 gallons of oil have been spilled from pipelines in the last 10 years." Is that a lot? I have no idea, but what I can do is compare that against other forms of oil transportation. Most people won't do that work though; they will go straight to outrage.


> I don't think that's what people want from their social networks nowadays. FB, Twitter, YouTube, TikTok, Snapchat, etc all do not work this way anymore.

They don't work this way because it makes shareholders the most money, not because it is the best experience for the user.

> Advertising plays a very small part in this. Most of what you would call "disinformation" is spread through reposts, which are not affected by advertising.

A completely false ad about a candidate of a different political party is much less likely to be called out or reported because it is only shown to a highly targeted group of people. This lack of accountability creates disinformation. These ads could not be run as a billboard advertisement or in a non-personalized ad space.

All of the counter arguments always come down to this: Facebook would make less money. And, yes, of course that is going to be the case because if any of these changes would make them more money they would have implemented them themselves. It requires a public corporation to accept that they are making the world a worse place, and to choose to make less money to stop doing that.


> All of the counter arguments always come down to this: Facebook would make less money. And, yes, of course that is going to be the case because if any of these changes would make them more money they would have implemented them themselves. It requires a public corporation to accept that they are making the world a worse place, and to choose to make less money to stop doing that.

And it would also require them to make a product that people desire less, and risk losing to a competitor that gave people what they want. People want to cluster in silos, chase novelty, and spout off with 100% confidence about topics they know nothing about.


> Facebook doesn't profit from divisiveness, they profit from engagement. The fact that divisive posts encourage more engagement tells me more about people in general, rather than Facebook's business model.

"Crack dealers don't profit from drug addiction, they profit from the pleasurable effects of consuming crack. The fact that very addictive drugs are pleasurable to consume tells me more about people in general, rather than crack dealer's business model."


there are a lot of pro-legalization folks on this forum that would likely be inclined to agree.


I’m fine with legalizing crack as long as producers and distributors are heavily regulated and held accountable for their impact on public health, just like the alcohol and tobacco industry.

Social media conglomerates manipulate how billions of people perceive the world around them, with disastrous effects. They should be held accountable for that.

Absolving them of all guilt, and blaming all the nefarious effects of social media on the consumers, accomplishes nothing.


Yes, and I don't see crack dealers as the problem in your example. The bigger problem is how society views drugs, addiction, and the criminalization of both. We've all seen how well that's worked for us. Thinking we could apply similar bans on speech we don't agree with is just as stupid.


Both things can be true. Some dangerous drugs are excessively criminalized and that causes enormous problems. Others are legal but heavily regulated, like alcohol and tobacco. Some are legal and insufficiently regulated, which causes enormous problems too: see for example the ongoing opioid crisis in the US.

Nothing good comes from denying the dangers of an addictive drug, or leaving its distributors free to misbehave without consequence. That is the current situation with social media companies and their enormous influence on our minds.

I don’t think we should ban social media. But not holding multi-billion dollar social media conglomerates accountable at all is lunacy.


I have seen this question asked many times on such threads but have never seen a workable solution. Usually the response is something like "I'd shut the whole website down" or "I'd employ millions of moderators" or "I'd allow people to only post once a month" or "I would remove sharing links, only baby pictures allowed". Nothing practical. If you respond to any of these with examples of positive speech that would be harmed, there is no response. For example, if you curb political speech, people wouldn't be able to organise political protests. People wouldn't even agree on what should be considered 'political'. For example, is organising a BLM event political? Should it be allowed under the proposed rules?

I'm happy to be proven wrong though. Maybe this is the thread where people will make practical suggestions.


I want people to pay. It will

- Reduce the majority of bots by making them unsustainable.

- Provide direct money to improve moderation and make platforms liable.

- Make journalism cater to individuals rather than ad networks.

- Remove toxicity because trolls won't pay after getting banned regularly. No need for other fingerprinting methods.

- Reduce the number of users and silo them automatically.

Free business models are anti-competitive and result in worse service for the users by making platforms accountable to advertisers (other companies) rather than to consumers.

Force Facebook to introduce a minimum payment based on purchasing power. Outlaw free/freemium models in software, or limit them to a time period (3-6 months).

This won't apply to non-profit services. And open source will be fine since it will only apply to services or for-profit companies.


What is "impractical" about "shut the whole website down"?

When it was discovered that tetraethyl lead was widespread in the environment and caused neurological damage, it was banned. Yes, that materially harmed several chemical companies whose livelihood was based on producing tetraethyl lead.

So what?

If your business model harms people, I don't care if stopping harming people eliminates your business. People matter. Businesses do not.

Are we supposed to just go, "Yeah, we know Facebook is harmful to millions, but won't someone think of the poor shareholders?" Then shrug and accept it?


Banning a harmful substance, and banning a platform for communication are not comparable.

There is no shortage of sites that will take Facebook's place.

What is the legislation you propose to prevent Facebook, or the millions of other existing or soon-to-be existing apps, from doing harm to people?


> Banning a harmful substance, and banning a platform for communication are not comparable.

OK, consider online gambling. That is simply a kind of software that enables people to engage in behavior that turns out to be harmful for a large number of them. And, because of that fact, it is heavily regulated.

> What is the legislation you propose to prevent Facebook, or the millions of other existing or soon-to-be existing apps, from doing harm to people?

I don't know if we know what sort of regulations would help yet. But I do know that if we assume a priori that corporations cannot be forced to change their behavior because it might hurt the poor corporation, then we will never figure out the answer.


Well that's all I'm asking for here. A practical proposal.

Removing tetraethyl lead was certainly doable. Removing every car from the road was not. One was a targeted change that improved the industry, while the other was so impractical that it was never considered.

Here's a thought - you assume a priori that shutting down social media would be a net win. How did you come to that conclusion? Did you spare a thought for the people whose social lives revolve around spending time with friends online? You'd advocate for taking away these people's social networks because you're certain you know what's best for them?


> you assume a priori that shutting down social media would be a net win.

I didn't actually say that. I think many social media sites are net positives, like this one here. I think Facebook specifically is a net negative.

> How did you come to that conclusion?

Performing experiments on users' emotional state without their consent: https://www.theatlantic.com/technology/archive/2014/06/every...

Cambridge Analytica: https://en.wikipedia.org/wiki/Facebook%E2%80%93Cambridge_Ana...

Facebook makes users feel worse about themselves: https://www.bbc.com/news/technology-23709009

You get the idea. None of this is new. Some communities are more toxic than others. Some businesses are less ethical than others. I believe Facebook is an unethical business led by an unethical man making a product that is more harmful than good for most people.

> Did you spare a thought for the people whose social lives revolve around spending time with friends online? You'd advocate for taking away these people's social networks because you're certain you know what's best for them?

I did not advocate that.


> Did you spare a thought for the people whose social lives revolve around spending time with friends online?

I don't agree that banning social networks would be productive or even possible, but this argument doesn't make sense. People had social lives before social networking. People had friends online before social networking. Social networking is not required for these things.


I don’t think you have to shut down Facebook. Just shut down targeted advertising, algorithmic filtering of content, and a lot of their surveillance practices (e.g. shadow profiles).


Should Google's targeted advertising also be prohibited?

Should Google's algorithmic filtering of content also be prohibited?

Should Doubleclick/Google's surveillance practices (which almost certainly include some kind of "shadow" profile) be prohibited?

Also, people have been talking about shadow profiles for at least a decade now, and yet no disgruntled FB employee has revealed all. Why do you think that is?


Yes, Google's similar practices should also be prohibited. You ask as if it's obvious that someone would say no and show themselves to be a hypocrite, which is weird.


OK cool, I was interested in whether or not you were being consistent.

I don't find it weird at all. I've noticed that people often get really upset about FB or GOOG doing something while ignoring the other, hence my questions.


How do you define "algorithmic" filtering?


Yes, of course? Because Facebook employees are highly paid, I imagine. The reason that whistleblower got so much coverage is that usually all it takes to keep someone quiet is to pay them off.


Like, there have been soooo many FB whistleblowers/leakers in the past four years, and yet shadow profiles (which would seem to be important to leak) have never been corroborated by any of them here.

I don't think that this is a coincidence, and I 100% disagree with the notion that this is because FB employees are well paid.

I honestly think that it's because shadow profiles essentially don't exist in any meaningful form (there's probably some logs for non-FB users, but I don't think that they are aggregatable to a specific individual without an account, mostly because that would be super low value and really hard).


I am not concerned about hurting the poor corporation. What I am concerned about is having actual laws in place, not enforcement based on outrage. If Facebook is doing thing X, we decide we shouldn't allow thing X, we make thing X illegal. If FB continues to do thing X, we apply the law to them. We don't just dish out punishments for vague "You're hurting America" crimes. You need to define the thing you want banned, and it can't be "Facebook."


Why is it our responsibility to figure out how Facebook can fix their algorithm? They employ thousands of the smartest engineers in the world and pay them absolutely ridiculous amounts of money. If they wanted to fix it, they would. As just random people on the Internet, it’s not our job to figure out how to fix it. It’s our job to complain about it enough to make them fix it themselves.


Doesn't HN have some sort of "flame war" prevention feature where people are automatically prevented from commenting/replying in rapid succession? Rate-limiting posts/sharing and comments on Facebook seems like a good place to start. Maybe it isn't necessary (and probably not possible) that we stop disinformation entirely so much as we slow it down.
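For what it's worth, the mechanics of such a throttle are simple. Below is a minimal sketch assuming a per-user sliding window; the limit of 5 shares per hour and the API shape are made-up illustrative choices, not anything HN or Facebook actually uses.

```typescript
// Sliding-window rate limiter: allow at most `limit` shares per user
// within `windowMs` milliseconds. The defaults are illustrative only.
class ShareRateLimiter {
  private history = new Map<string, number[]>();

  constructor(private limit = 5, private windowMs = 60 * 60 * 1000) {}

  tryShare(userId: string, now = Date.now()): boolean {
    const cutoff = now - this.windowMs;
    const recent = (this.history.get(userId) ?? []).filter(t => t > cutoff);
    if (recent.length >= this.limit) {
      this.history.set(userId, recent);
      return false; // over quota: ask the user to slow down
    }
    recent.push(now);
    this.history.set(userId, recent);
    return true;
  }
}

// Usage: the sixth share attempt within the hour is rejected.
const limiter = new ShareRateLimiter();
for (let i = 0; i < 6; i++) {
  console.log(limiter.tryShare("user-123")); // true five times, then false
}
```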


> Curious how you think it's possible for FB to prevent the spread of disinformation?

For example, by doing something when they have been warned for years by multiple entities that FB is being used as a tool to support genocide, as in Myanmar.

It seems that only when such things blow up publicly and the stench of bad publicity gets too strong do they send out the Zuckerbot to announce his usual platitudes, then get back to business as usual. And that's far from the only example where their product was used for oppression by authoritarian regimes.

This company could do a hell of a lot more to counter this. But they just don't give a shit, unless publicity gets too bad.

edit : word change


How would you achieve that?


Don’t be friends with those people


To me, this is a valid response. If you can remember back when we as humans used to gather together in public places, we had lots of options on who we talked to at that gathering. If someone always talked about something you just didn't care about, you could walk away and talk to other people. People that you found yourself regularly talking to about things that everyone found pleasant were called friends. People you talked to occasionally were called acquaintances, and people you preferred not to talk to were called many things, but friend was not one of them. If some of your friends liked the people you did not, they were called a friend of a friend, but you would not refer to them as your friend.

In Facebook, there are only "friends". So, if you don't like what they are always carrying on about, don't have them as a friend. Just like in real life.


> claiming that their baby was immune to covid

Some people are indeed immune to covid, babies too, most probably. I've personally heard of numerous cases of persons not getting the virus at all while their spouse was in intensive care or worse.


My wife had the cold once and I didn't get it. Must mean I'm immune to the common cold.


Yes, you were probably immune to that particular cold strain. Or you weren't in close contact with your wife during that timeframe, but that wasn't the case for the persons I've written about.


It's hilarious that in a thread about misinformation the idea that children are immune is being treated as not true. Death rates for COVID for people under 20 are ~zero. Babies are in fact immune, perhaps due to relatively lower levels of ACE2 expression compared to adults.

HN readers seem to have totally lost it w.r.t. COVID and misinformation. It's practically guaranteed that in any thread about misinformation/FB/Twitter/etc someone will state something about COVID that's true and then describe it as misinformation, or state something about it that's false and then decry the conspiracy theorists who don't believe it.


First off, you're moving the goalposts a bit here. No one claimed that the death rate for people under 20 was anywhere near that of older age groups.

Your assertion that "babies are in fact immune" is demonstrably false: https://data.cdc.gov/NCHS/Provisional-COVID-19-Death-Counts-... shows 20 deaths for children under 1 year old. Presumably many more than that were infected but survived (unfortunately covid.cdc.gov is timing out for me right now and a quick search didn't give me infection rates for that age group).

Yes, the number is small compared to cases in older people. But "babies are in fact immune" is in fact the same kind of misinformation you're railing against.


Remember you're dealing with a very noisy, FP-prone test, and there have been millions of tests by now. At those levels there are bound to be some babies that tested positive and then died, so you can't infer from it that COVID actually killed them.

To put it in perspective, according to the UK govt's own analysis, it's very likely that all currently reported positive infections are false positives!


While it’s possible that some people might be immune, your anecdata is certainly irrelevant to determining that.


Not getting it is not the same thing as immune.


What is it, then?

Later edit: To add to my comment, what do you call sleeping in the same bed, eating from the same plate, and having direct physical contact with a person who gets the virus and ends up in the ICU or dead, while the other person tests negative for the virus?

Let's not forget that ever since February we've all known that this virus is particularly easy to transmit/get, so you cannot say "that person got really lucky, that's why he/she hasn't got it".


Immune means you won't get sick. Not getting sick just means you didn't catch it.


You have no scientific evidence to support your claims. Take your L and stop spreading misinformation.


If you're going to ask him to share sources, please show some of your own with the level of rigor you would like to see to the contrary. I'm tired of this trend of people shouting "Sauce! Sauce!" at each other. Collaboration is key.


I distinctly remember during the 2012 election, my friends began posting political materials extensively. Which makes sense because of the Obama campaign's unprecedented spend on social media.

Pre-2012 Facebook was awesome. Now the feed is almost exclusively bullshit from people I don't know.


Totally this. I get having ads injected into the home timeline (gotta keep the lights on!), but inundating people's feeds with 'publisher's stories' showing them news that is blatantly false/overly negative/polarizing is just not wanted by a vast majority of people.


I can't see how that is even possible at this point. You'd have to remove groups, pages, public profiles, and sharing, which would wreck the advertising and revenue ecosystem. Or, come up with magical AI which could detect politics/memes/disinformation and remove it instantly after it's been posted.


> You'd have to remove groups, pages, public profiles, and sharing

Or they could just not show posts from groups you're not in and from pages and public profiles you don't follow! Allowing something to interject into your newsfeed should be opt-in, but right now it isn't even opt-out, except by not logging on at all. It would also be cool if there were a way to selectively opt out of seeing shared posts from people on your friends list, e.g. I want to see things that Overly Political Relative posts themselves, but not things that they share from other places.

That being said, I deactivated my Facebook account a couple years ago, so I'm no longer a user whose opinion they should theoretically care about anymore.


I don't really use FB all that much these days, because the people I care to keep up with have largely moved on from it. But when I do log in, I get mostly the experience you want with the https://www.fbpurity.com/ Chrome plugin, which I've spent the time to heavily customize.

My timeline shows as strictly chronological, and only text and photos posted by my immediate friends. No groups, ads, publisher's bullshit, promoted things, no trending, no nothing. Just photos and plain text.


Yep, back when I still used Facebook, I used that extension. I think that installing a third-party extension is probably a lot more than the average person knows to do, though. I was mostly making a point that if Facebook finds discussion over politics and the like too divisive for their internal company chat, maybe they should consider what they can do for the rest of us to keep things similarly sane.


I’m sure they could develop a classifier that would catch most (~90%) of political content, and make it opt-in - if people want to see it they can, but it could be hidden by default. This would be my preferred approach, so that I can use it for connecting with people but avoid listening to everyone’s political outrage.


You could def make this not suck by just having Facebook automatically 'tag' content and letting users filter out certain tags. Since it'd be public, users could decide for themselves if the tags are reasonable for their filtering needs. But they'll never do it, because ad conversion rates and engagement would likely drop significantly.
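As a rough illustration of the mechanism (the post shape, tag names, and function below are hypothetical, not Facebook's actual data model), the user-side filtering is trivial once tags are public; the hard and expensive part is the tagging itself.

```typescript
// Hypothetical post shape with publicly visible, platform-assigned tags.
interface Post {
  id: string;
  author: string;
  text: string;
  tags: string[]; // e.g. ["politics", "news"]
}

// Drop any post carrying a tag the user has chosen to hide.
function filterFeed(feed: Post[], hiddenTags: Set<string>): Post[] {
  return feed.filter(post => !post.tags.some(tag => hiddenTags.has(tag)));
}

// Usage: a user who opted out of "politics" never sees those posts;
// a user with an empty filter sees everything.
const feed: Post[] = [
  { id: "1", author: "aunt", text: "Baby photos!", tags: ["family"] },
  { id: "2", author: "uncle", text: "Hot take on the election", tags: ["politics"] },
];
console.log(filterFeed(feed, new Set(["politics"])).length); // 1
console.log(filterFeed(feed, new Set()).length);             // 2
```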


> I’m sure they could develop a classifier that would catch most (~90%) of political content, and make it opt-in - if people want to see it they can, but it could be hidden by default. This would be my preferred approach, so that I can use it for connecting with people but avoid listening to everyone’s political outrage.

Eh, I don't really like that idea. For one, it only really addresses the problem of being exposed to content you find unenjoyable.

Honestly, sometimes I do wonder if consumer-level broadcast technology is the psychic equivalent of doing something like letting everyone fly planes without any training. It might be better to adopt communication technologies with a little more friction.


I spent the entire month of September 2016 flagging every single political post, whether it was a news article, friend's status, shared post, whatever, as "See less like this". It was completely ineffective. At the end of the month I was seeing just as much politics as before. So I'm not sure they can, or maybe want to. I was giving them all the input they needed to make a good classifier, and it was a lot of work. Maybe classifiers have improved enough in the past four years.


You could build this as a browser extension... Call it "de-politics", and have it scan the HTML of popular sites (facebook, twitter, etc) and simply collapse/hide all content matching some filter.

I bet a simple keyword filter for names of politicians could catch 90%.

I wonder if people would pay for it?
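A minimal content-script sketch of that idea, under the stated assumption that a plain keyword filter is enough; the keyword list, the '[role="article"]' post selector, and the hiding behavior are all placeholders for illustration, since real Facebook/Twitter markup differs and changes often.

```typescript
// "de-politics" content script sketch: hide any post whose text mentions
// a name on the keyword list. Selector and keywords are placeholders.
const KEYWORDS = ["trump", "biden", "congress", "parliament"]; // illustrative

function hidePoliticalPosts(root: ParentNode = document): void {
  // '[role="article"]' is a guess at a generic post container, not a stable API.
  root.querySelectorAll<HTMLElement>('[role="article"]').forEach(post => {
    const text = (post.textContent ?? "").toLowerCase();
    if (KEYWORDS.some(keyword => text.includes(keyword))) {
      post.style.display = "none"; // collapse the matching post
    }
  });
}

// Re-run whenever the infinite-scroll feed adds new nodes.
new MutationObserver(() => hidePoliticalPosts()).observe(document.body, {
  childList: true,
  subtree: true,
});
hidePoliticalPosts();
```

Whether people would pay for it is another question, but the filtering itself is not the hard part; keeping the selectors working as the sites change their markup is.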


If you take the time to set it up you can get close using the https://www.fbpurity.com/ Chrome plugin.


Ah, of course, censoring political discussion is the answer!


Censoring is when an authority removes content without your consent. Installing something that lets you decide what content to see (or not see) on a site is not censorship.


It is, however, sealing oneself in a chamber of like-minded opinion.


That implies Facebook posts are an accurate assessment of opinion, and not whatever the algorithms promoted to increase engagement.


I don't think you need to do that. I think just making the timeline a reverse-chronological firehose, and not filtering any posts out or making any posts more prominent, would do wonders. That's how Facebook used to present the feed.

Giving people tools to make sub-lists of certain friends/groups/etc. in order to organize their experience better (on their terms, not at FB's whim) would be great, too.


Agreed, except maybe with an option to filter out the baby pictures. I never want to see those. Becoming a parent made me want to see them even less (my child is the best and the prettiest and that's all I need). :).


> I don't want to be part of a machinery that spreads misinformation and conspiracy theories.

You're swearing off the internet entirely?


Internet? I can get all of that from the dude sitting in a wheelchair outside my coffee shop. (Pre-Covid, at least. Hope he's doing okay.)


People use Facebook for different reasons. I use it for Groups and to follow artists but definitely not to see baby pictures.


I want an option to pay a yearly or monthly fee, that lets them still make money, but also protect my privacy.


Ha ha ha, they make too much money on you to allow that. Protecting your privacy can't happen at the platform end, because of the nature of the network itself. And there would be too few people willing to pay for this feature; it would limit their ability to grow. Which is ultimately what it's about: the stock price.


> I don't want to be part of a machinery...

No one has to use Facebook.


It would hurt engagement.


Simple, just remove humans from the platform


[flagged]


It's more akin to "a virus is less likely to spread if fewer people are in contact with known carriers", a.k.a. deplatforming.


In fairness, for a lot of the people here it probably would.


We could call it Whitebook.



