I wonder about the service used for the test. I've never heard of Rapidata, but if it's like Amazon's Mechanical Turk or other such services, there might be a problem where the respondents simply didn't care about reading the question. If the objective for the respondents was simply "answer this question and get your benefit" vs "answer this question correctly to get your benefit", I have no problem accepting the 71.5% success rate. If getting it right had benefits and getting it wrong had none, then I'm (slightly) worried.
He went backwards and started by just collecting an absurd amount of data. Later, while talking to a researcher, he was able to confirm years of research with a "simple" search in his database.
Not sure what the selection bias for this report is; perhaps that we care about code and believe in the value of static code analysis. Some interesting results in there either way.
> We do not need more ways for people to be convinced of suicide.
I am convinced (no evidence, though) that current LLMs have prevented suicides, possibly lots of them. I don't know if anyone has even tried to investigate or estimate those numbers. We should still strive to make them "safer", but as with most tech, there are positives and negatives. How many people, for example, have calmed their nerves by getting in a car and driving for an hour alone, and thus not committed suicide or murder?
That said, there's the reverse for some pharmaceutical drugs. Take statins for cholesterol: lots of studies on how many deaths they prevent, few if any on comorbidity.
Interesting how counterintuitive it felt to scroll up from the "landing spot". Even with the instructions right there on the screen, I tried scrolling down at first.