With respect to adversarial examples for random forests, that link states:

   Myth: Deep learning is more vulnerable to adversarial examples than other kinds of machine learning.

   Fact: So far we have been able to generate adversarial examples for every model we have tested, including simple traditional machine learning models like nearest neighbor. Deep learning with adversarial training is the most resistant technique we have studied so far.

But the larger point is that adversarial examples are just one demonstration that these algorithms are still quite primitive. Nobody should be killed on the basis of a statistical algorithm alone (and, to be fair, the article does not show that anyone has been).
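
As a concrete illustration of the nearest-neighbor claim, here is a minimal sketch (my own toy example, not code from the linked article) that flips a 1-nearest-neighbor classifier's prediction by nudging a point toward its closest training example of the other class. It assumes numpy and scikit-learn are available:

    # Toy sketch: adversarial example against a 1-nearest-neighbor classifier.
    # The data and the line-search attack are illustrative only.
    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(2.5, 1, (50, 2))])
    y = np.array([0] * 50 + [1] * 50)
    clf = KNeighborsClassifier(n_neighbors=1).fit(X, y)

    x = X[0]                        # a point the model classifies correctly
    orig = clf.predict([x])[0]
    enemies = X[y != orig]          # training points of the other class
    z = enemies[np.argmin(np.linalg.norm(enemies - x, axis=1))]

    # Walk from x toward its closest "enemy" point until the prediction
    # flips; the first flipping point is an adversarial example on this line.
    for t in np.linspace(0.0, 1.0, 201):
        x_adv = x + t * (z - x)
        if clf.predict([x_adv])[0] != orig:
            break

    print("perturbation L2 norm:", np.linalg.norm(x_adv - x))
    print("labels:", orig, "->", clf.predict([x_adv])[0])

The same idea, searching for the smallest perturbation that crosses the model's decision boundary, is what gradient-based attacks do more efficiently against differentiable models such as neural networks.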

