> (Apparently determining if knots are trivial is NP. This doesn’t bode well for neural networks.)
Indeed it doesn't, but what do I know (not much). Another annoying property of NNs is knowing when they are fully baked.
Edit: Yet they have clearly grown in popularity over the past year. Does that imply anything about the "determining if knots are trivial is NP" criticism? Or does it just mean they are popular for other reasons such as their appeal to people who love black boxes?
I don't really see the relevance. As the article itself says, you can just add more dimensions and separating the clusters becomes trivial. And empirically, underfitting and local optima do not seem to be big issues for large neural networks.
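To make the "just add more dimensions" point concrete, here's a toy sketch (my own, not the article's construction): two concentric rings in 2D can't be split by a straight line, but lifting each point into a third, hand-picked dimension (x² + y²) makes a flat threshold separate them perfectly. The radii, noise level, and threshold below are arbitrary illustration values.

```python
# Toy sketch: data that is tangled in 2D becomes linearly separable
# after adding one extra dimension (the hand-picked feature x^2 + y^2).
import numpy as np

rng = np.random.default_rng(0)

def ring(radius, n=200, noise=0.05):
    """Sample n noisy points from a circle of the given radius."""
    angles = rng.uniform(0, 2 * np.pi, n)
    pts = np.stack([radius * np.cos(angles), radius * np.sin(angles)], axis=1)
    return pts + rng.normal(scale=noise, size=pts.shape)

inner = ring(1.0)   # class 0: small ring
outer = ring(3.0)   # class 1: large ring surrounding it

def lift(points):
    """Append x^2 + y^2 as a third coordinate."""
    return np.column_stack([points, (points ** 2).sum(axis=1)])

# In 3D the two rings sit at different "heights", so a single threshold on
# the new coordinate separates them -- no tangled decision boundary needed.
threshold = 4.0  # any value between the two squared radii (~1 and ~9) works
print("inner correct:", (lift(inner)[:, 2] < threshold).mean())   # -> 1.0
print("outer correct:", (lift(outer)[:, 2] >= threshold).mean())  # -> 1.0
```

A network with enough hidden units can learn a lifting like this on its own, which is roughly why the topological obstruction is less scary in practice than it sounds.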
Because this usually turns into some kind of meta debate, I hope I'm not sidetracking by too much... but:
I, for one, am glad that sometimes good things get reposted, because I've been reading a lot about neural nets lately following Google's various recent posts (and studied topology in school), but I had missed this article the previous year (when I was probably distracted by some other fascination).
Reposting interesting articles about topics that are currently getting media attention (and thus public attention) is a form of sharing metadata about the field: reposts on a topic of particular interest probably have some sort of underlying metadata link. (After all, someone thought it was worth bringing up in the context of the current discussion.)
Today, I learned how to integrate something I studied long ago at school and something that's my current hobby. Fucking awesome!