Google's algorithms generating made-up facts on its search page has been a problem for years, since well before ChatGPT. Numerous defamation cases have come to light, including one where Google mistakenly labeled someone a serial killer [1].
People around the world place an enormous amount of trust in Google search results, which makes mistakes like this all the more harmful. Why haven't they stopped and reevaluated how they do things?
The funny thing is, Google products tend to have short lifespans. Yet this misinformation-riddled info box has, for some reason, survived for years.
[1]: https://news.ycombinator.com/item?id=27622100