Hacker News

That relies on the existence of systems too complex for humans to understand. We may eventually build machines that can in turn build other incomprehensible machines, but we haven't passed that point yet. Superintelligence can't surprise us until some point after computers become self-improving.


Aren't large neural networks already black boxes we don't understand, built by machines we do understand?




