
>> You're underappreciating how much more we are doing with neural networks now.

From your comment above, that "a handful of neuron-like components have probably been assembled here or there", I gather that you do not have any background in AI or machine learning. I am curious, then, where all this enthusiasm in the form of "we are doing X" statements comes from. In particular, I'm interested in your use of "we". Clearly you're not doing any of that work yourself, so where does the "we" come in? Is it really prudent to express such strong views without a good understanding of, or personal experience with, the subject matter? By asserting all these things so impetuously, are you adding anything to the conversation other than noise?

I imagine that your source for all this information is articles you've read in the lay press. Unfortunately, such articles cannot represent the real state of research in deep learning very well. The truth is that there has been an enormous increase in the volume of work on deep learning being published: probably thousands of articles are uploaded to arXiv, or even submitted to reputable venues, every month, and even researchers in the field have trouble keeping up. What is abundantly clear, however, is that the vast majority of this work is of very poor quality, and even the peer-reviewed work is not much better. It is also clear that most of it has no lasting impact and is superseded within weeks anyway. The truth is that deep learning research is in a deep crisis: despite appearances and breathless announcements by large companies, progress has stalled and nothing really new is being done. Many of the luminaries of the field, including Geoff Hinton and Yoshua Bengio, have said as much in various ways.


