
Wish there were a bit more information here. The article's a little breathless without telling us exactly how this practically improves on ANNs and the Turing model, and I couldn't find a more detailed account of the paper itself.

Hypercomputation models that depend on things like infinite-precision real numbers have been around for a while, including in Siegelmann's work, so I'm curious to know what specific advance is being reported here in "Neural computing".



It's not an article, but a copy-paste of a press release: http://www.umass.edu/newsoffice/newsreleases/articles/149986...


Edit: never mind, I thought she was talking about a real machine until I read the paper. Even the Turing machine model assumes infinite storage space. So there must be more to her "super-Turing" machine than just infinite storage.
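
For what it's worth, the unbounded storage is easy to see in any textbook Turing machine simulator: the tape just grows on demand, so "infinite storage" by itself buys nothing beyond ordinary computation. A toy Python sketch (my own code, nothing from the paper):

    from collections import defaultdict

    # The tape is unbounded but only ever finitely used: a defaultdict
    # grows on demand, so "infinite storage" costs nothing up front.
    def run(transitions, input_str, start="q0", halt="qH", blank="_"):
        tape = defaultdict(lambda: blank, enumerate(input_str))
        state, head = start, 0
        while state != halt:
            symbol = tape[head]
            state, tape[head], move = transitions[(state, symbol)]
            head += 1 if move == "R" else -1
        return "".join(tape[i] for i in sorted(tape))

    # Example: flip every bit of the input, halt at the first blank.
    flip = {
        ("q0", "0"): ("q0", "1", "R"),
        ("q0", "1"): ("q0", "0", "R"),
        ("q0", "_"): ("qH", "_", "R"),
    }
    print(run(flip, "1011"))  # -> "0100_"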

I wonder if you could just build an ARNN and "fudge" the infinite precision with a reasonably large finite precision? Or, with a big disk, you could compute an unreasonable amount of precision :) Or store the infinite precision in a lazy way that only calculates as much as you need for a particular answer.
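
The lazy idea is easy to sketch: keep a cached digit stream and only extend it when a step of the simulation actually consults more bits. A toy Python illustration (the names and the digit function are made up for the example, not Siegelmann's construction):

    class LazyReal:
        """A real number whose binary expansion is computed on demand."""
        def __init__(self, digit_fn):
            self.digit_fn = digit_fn   # digit_fn(i) -> i-th bit after the point
            self.digits = []           # cache of bits computed so far

        def prefix(self, n):
            """Return the first n bits, computing only the missing ones."""
            while len(self.digits) < n:
                self.digits.append(self.digit_fn(len(self.digits)))
            return self.digits[:n]

    # Toy "weight": any digit_fn you can actually write down is computable,
    # which is exactly the catch the reply below points out.
    w = LazyReal(lambda i: (i * i) % 2)

    print(w.prefix(8))   # computes 8 bits
    print(w.prefix(12))  # computes only 4 more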


The thing is that you could do all that on a Turing machine, so any model just "fudging" infinite precision would be equivalent in power to a Turing machine.
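
And that matches Siegelmann's own hierarchy as I understand it: ARNNs restricted to rational weights are exactly Turing-equivalent, and the super-Turing power only appears with genuinely non-computable real weights. One update step in exact rational arithmetic is plainly something a Turing machine can carry out; a rough sketch (illustrative values, saturated-linear activation as in the ARNN model):

    from fractions import Fraction

    def sigma(x):
        """Saturated-linear activation: clamp to [0, 1]."""
        return min(max(x, Fraction(0)), Fraction(1))

    def step(W, b, state, inp):
        """One recurrent update x' = sigma(W x + b + u), all exact rationals."""
        return [
            sigma(sum(wij * xj for wij, xj in zip(row, state)) + bi + ui)
            for row, bi, ui in zip(W, b, inp)
        ]

    # Tiny two-neuron example with rational weights (values are made up).
    W = [[Fraction(1, 2), Fraction(1, 4)],
         [Fraction(0),    Fraction(3, 4)]]
    b = [Fraction(1, 8), Fraction(0)]
    x = [Fraction(1, 2), Fraction(1, 3)]
    u = [Fraction(0),    Fraction(1, 2)]
    print(step(W, b, x, u))  # exact Fractions, nothing is ever "fudged"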



