
>could not generate conceptual ideas of their own

This is the most important part, IMO. A big goal should be an AI system coming up with its own discoveries and ideas. It's really unclear how we get from the current paradigm to a system that produces something like general relativity, the way Einstein did. Does it require embodiment?



Why should that be a big goal? It's difficult, it's not what these systems are good at, and they can get a lot better at assisting in other ways through incremental improvements. I'm happy to leave this part to the humans, at least for now, especially when there's so much improvement still possible in other directions.

It also seems like one of those things where we ought to ask whether we should, before asking whether we could. Why not focus on areas that are easier, more beneficial, and less problematic from a "should" perspective?


We don't know how to reliably produce humans who come up with GR-level ideas, so this might be biting off a lot more than we can chew.



