> Do you believe the success or failure of these moderating features comes down to how accurate they are? People actually like Community Notes; they're part of the discourse on Twitter (even if most of them are pretty bad, some of them are timely and sharp). Meanwhile: Facebook's fact-checking features really do work sort of like PSAs for trolls. All the while, fact-checks barely scratch the surface of the conversations happening on the platform.
You're making a whole host of assumptions here, with little in the way of data (I get it, you don't work at FB; how much data could you have?), and offering blanket statements like "People hate Fact Checks" and "People actually like Community Notes" as if they were established facts.
I use Facebook a lot (again: all the politics in my town happens there), and almost nothing is fact-checked; I see maybe one fact-check notice for every 1,000 bad posts. I feel like I'm on pretty solid ground saying that what they're doing today isn't working.
Meanwhile: Community Notes have become part of the discourse on Twitter; getting Noted is the new Ratio'd.
Accuracy has nothing to do with any of this. I don't think either Notes or Warnings actually solves "misinformation". I'm saying one is good product design and the other is not.
Not seeing fact checks likely means it's working: "Once third-party fact-checkers have fact-checked a piece of Meta content and found it to be misleading or false, Meta reduces the content's distribution 'so that fewer people see it.'"
The issue with Community Notes is that if enough people believe a lie, it will not be noted. This lends further credence to a certain set of "official" lies.