Hacker News

There are people convinced that if we throw a sufficient amount of training data and VC money at more hardware, we'll overcome the gap.

Technically, I can't prove that they're wrong; novel solutions sometimes happen, and I guess the calculus is that it's likely enough to justify a trillion dollars down the hole.



There's a guy, Ken Stanley, who wrote the NEAT[0]/HyperNEAT[1] algorithms.

His big idea is that evolution and advancement don't happen incrementally, but rather in unpredictable large leaps.

He wrote a whole book about it that's pretty solid IMO: "Why Greatness Cannot Be Planned: The Myth of the Objective."

[0] https://en.wikipedia.org/wiki/Neuroevolution_of_augmenting_t... [1] https://en.wikipedia.org/wiki/HyperNEAT
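The book grew out of Stanley and Lehman's novelty search algorithm: instead of rewarding progress toward an objective, you reward behaviors unlike anything seen so far. Here's a toy sketch of that idea in Python (the 1-D behavior space, the greedy hill-climb acceptance rule, and the `k` nearest-neighbor count are my own illustrative choices, not from the book or the original papers):

```python
import random

def novelty(behavior, archive, k=3):
    """Mean distance from a behavior to its k nearest neighbors in the archive.
    An empty archive means everything is maximally novel."""
    if not archive:
        return float("inf")
    dists = sorted(abs(behavior - b) for b in archive)
    return sum(dists[:k]) / min(k, len(dists))

def novelty_search(evaluate, mutate, seed, steps=200, rng=None):
    """Greedy search that keeps whichever genome *behaves* most unlike
    everything seen so far -- there is no objective fitness at all."""
    rng = rng or random.Random(0)
    archive = []
    genome = seed
    for _ in range(steps):
        candidate = mutate(genome, rng)
        b = evaluate(candidate)
        # Accept the candidate if its behavior is more novel than the
        # current genome's behavior, relative to the archive.
        if novelty(b, archive) > novelty(evaluate(genome), archive):
            genome = candidate
        archive.append(b)
    return genome, archive
```

The point of the sketch is the acceptance rule: nothing in it says "get closer to the goal," yet the archive still spreads out over the behavior space, which is the mechanism behind the "large leaps" claim.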



Neat (no pun intended), TIL there's a word for this


Whenever I try to tell people about the myth of the objective, they look at me like I'm insane. It's not very popular to tell people that their best-laid plans are actually part of the problem.


I would suspect that any next step comes with a novel implementation though, not just trying to scale the same shit to infinity.

I guess the Bitter Lesson is gospel now, which doesn't sit right with me given that we're past the point of Moore's Law being relevant, but I'm not the one with a trillion dollars, so I don't matter.


I’d say it was worth throwing down some cash for, because we get cool new things by full-assing new ideas. But… yeah… a TRILLION dollars is waaaay too far.



