But if I say "I object to AI because <list of harms> and its water use", why would you assume that I don't also object to alfalfa farming in Arizona?
Similarly, if I say "I object to the genocide in Gaza", would you assume that I don't also object to the Uyghur genocide?
This is nothing but whataboutism.
People are allowed to talk about the bad things AI does without adding a 3-page disclaimer explaining that they understand all the other bad things happening in the world at the same time.
Because your argument is more persuasive to more people if you don't expand your criticism to encompass things that are already normalized. Focus on the unique harms IMO.
If you take a strong argument and throw in an extra weak point, that just makes the whole argument less persuasive (even if that's not rational, it's how people think).
You wouldn't say "the Uyghur genocide is bad because of ... and also the disposable plastic crap those slave factories produce is terrible for the environment."
Plastic waste is bad but it's on such a different level from genocide that it's a terrible argument to make.
Adding a weak argument is a red flag for BS detectors. It's what prosecutors do when they stack charges over a single underlying crime to hoodwink a jury.