Would you happen to have a source for that story? My workplace has really swallowed the AI Kool-Aid lately, so I would like to have some cautionary counterexamples to demonstrate potential pitfalls of the technology.
It's got a lot of interesting applications for our field which I am excited about, but there seems to be a tendency among non-experts to consider it a magic bullet that can solve any sort of problem. In particular, I am concerned about applications where conventional approaches have already converged on an optimal solution that's used operationally, but somebody wants to throw AI at it because they thought it might be cool without first understanding the implications.
This study’s findings suggest that skin markings significantly interfered with the CNN’s correct diagnosis of nevi by increasing the melanoma probability scores and consequently the false-positive rate. A predominance of skin markings in melanoma training images may have induced the CNN’s association of markings with a melanoma diagnosis. Accordingly, these findings suggest that skin markings should be avoided in dermoscopic images intended for analysis by a CNN.
Stolen from another HN comment about using AI on medical records.
"Finasteride is a compound that is used in two drugs. Proscar is used for prostate enlargement. It is old, out-of-patent, and has cheap generics. Propecia is used for hair loss. It is newer and was (at the time) very expensive. The only difference is that Propecia is a lower-dose formulation.
What people did was to ask their doctors to prescribe generic Proscar, and then break the pills up to take for hair loss. Doctors would then justify the prescription by "diagnosing" enlarged prostate. This would enter the patient's health records.
If you apply deep learning without being aware of this "trick", you would learn that a lot of young men have enlarged prostates, and that Proscar is an effective, well-tolerated treatment for it.
Health records are often political-economic documents rather than medical."
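To make the pitfall concrete, here is a minimal sketch using entirely synthetic data (the records, ages, and diagnosis strings are all made up for illustration). Anything that learns from the recorded diagnosis labels will "discover" that young men commonly have enlarged prostates, because the billing incentive behind the label is invisible in the data:

```python
# Toy illustration (synthetic data): why naively learning from
# prescription records "discovers" enlarged prostates in young men.
from collections import Counter

# Hypothetical records: (age, recorded_diagnosis, drug). The diagnosis
# is the billing justification, not necessarily the real complaint.
records = [
    (28, "enlarged_prostate", "finasteride"),  # actually hair loss
    (31, "enlarged_prostate", "finasteride"),  # actually hair loss
    (34, "enlarged_prostate", "finasteride"),  # actually hair loss
    (67, "enlarged_prostate", "finasteride"),  # genuine case
    (72, "enlarged_prostate", "finasteride"),  # genuine case
]

young = [r for r in records if r[0] < 40]
diagnoses = Counter(d for _, d, _ in young)
# A model trained on these labels concludes enlarged prostate is common
# in young men -- the coding trick never appears in the dataset itself.
print(diagnoses)  # Counter({'enlarged_prostate': 3})
```

The point is not the counting itself but that no amount of model sophistication recovers the true complaint when the label generation process is political-economic rather than medical.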
What a great resource! The first example is perfect:
> Aircraft landing
> Evolved algorithm for landing aircraft exploited overflow errors in the physics simulator by creating large forces that were estimated to be zero, resulting in a perfect score
> In an artificial life simulation where survival required energy but giving birth had no energy cost, one species evolved a sedentary lifestyle that consisted mostly of mating in order to produce new children which could be eaten (or used as mates to produce more edible children).
> (Tetris) Agent pauses the game indefinitely to avoid losing
> Agent kills itself at the end of level 1 to avoid losing in level 2

> Robot hand pretending to grasp an object by moving between the camera and the object
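The Tetris example has a simple mechanical explanation: if losing carries a penalty and pausing carries none, pausing forever is the reward-maximizing policy. A minimal sketch (the reward values and `episode_return` helper are hypothetical, just to show the shape of the failure):

```python
# Sketch of the "pause forever" exploit: a naive reward design where
# the game-over penalty makes never playing the optimal strategy.

def episode_return(actions, max_steps=10):
    """Score a fixed action sequence under a naive reward design."""
    total = 0
    for step, action in enumerate(actions[:max_steps]):
        if action == "pause":
            total += 0       # pausing is free: no reward, no penalty
        elif action == "play":
            total += 1       # playing earns a little reward...
            if step == len(actions) - 1:
                total -= 100  # ...but the game eventually ends in a loss
    return total

print(episode_return(["play"] * 10))   # -90: plays, then loses
print(episode_return(["pause"] * 10))  # 0: "wins" by never finishing
```

The agent is not being clever in any interesting sense; it is faithfully optimizing exactly the objective it was given, which is the whole cautionary point.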