
Intelligence isn't the same as "can exactly replicate text". I'm hopefully smarter than a calculator, but it's more reliable at maths than I am.

Also, there's a huge gulf between "some people claim it can do X" and "it's useful". Altman promising something new doesn't make the existing models any less useful.

What you are describing is "dead reasoning zones".[0]

    "This isn't how humans work. Einstein never saw ARC grids, but he'd solve them instantly. Not because of prior knowledge, but because humans have consistent reasoning that transfers across domains. A logical economist becomes a logical programmer when they learn to code. They don't suddenly forget how to be consistent or deduce.

    But LLMs have "dead reasoning zones" — areas in their weights where logic doesn't work. Humans have dead knowledge zones (things we don't know), but not dead reasoning zones. Asking questions outside the training distribution is almost like an adversarial attack on the model."

https://jeremyberman.substack.com/p/how-i-got-the-highest-sc...


Saddest goalpost move ever.