
This is the natural course of the hype cycle. Like kids with a new toy, infatuation and then boredom. But the underlying technology in this case is absolutely world changing. I think we are only a few orders of magnitude away from systems that are genuinely intelligent. GPT-5 or 6 will have your jaw on the floor and GPT-10 will probably be smarter than all of us.


"I think we are only a few orders of magnitude away from systems that are genuinely intelligent."

A few orders of magnitude is potentially very significant.

It's worth remembering that despite the hype and fake-clever systems playing chess and constructing plausible sentences, we still have nothing remotely approaching AGI at the moment, and no reason to think it is on the horizon.

But I agree that the AI tools we have today have made significant progress along the line of powerful un-intelligence, and will have a significant impact on automation.


"It's worth remembering that despite the hype and fake-clever systems playing chess and constructing plausible sentences". I completely disagree. I'm a connectionist and subscribe to the "Meaning is use" mantra from wittgenstein, when it comes to language. Language is the medium of thoughts, if GPT-x understands language, it is a general intelligence. If you read into the the output of GPT-3 you can see glimmers of genuine intelligence. An intelligent machine is not something you design, it is a machine that designs itself. All that is needed is to step out of the way of the machine. All you need is gradient descent and extraordinary processing power, there's no magic sauce left to find. Being clever and creating complex learning strategies just buys you more power at the expense of generality. Transformers and convolutional layers and the other tricks we have are probably now enough to get us within shooting distance of AGI. If you look at the progress of machine learning the best predictor of performance is processing power. I see no reason why it will not continue to scale. In the minds of many there is something very un-satisfying, unsettling even, in the idea progress without understanding. But the very definition of intelligence requires that you give up the responsibility of understanding to the machine. Now there is probably an overarching theory here that we don't understand - why is it that search within the space of algorithms described by a neural network is an effective strategy when solving the problems we tend to encounter in the real world? And that probably isn't a pure theory question, it's a question about physics - why do the emergent physics of the world permit easy solutions within the space of possible functions described by neural networks? Why are these solutions discoverable? No one has a handle on these questions yet, I suspect it's a very very deep question and unlikely to be solved within our lifetime. So everyone should "stop worrying and learn to love the neural network"


In other words, shut up and calculate? :)

That's an interesting POV, "if it looks like a duck... it doesn't matter that we don't understand why".

I agree with your point that (as with evolution) there is no reason why we can't just set the initial conditions and let it run, and that with luck/wise choices (though the latter implies some understanding, which we don't have) an AGI will emerge.

However, I don't see any evidence of that despite the impressive power of GPT etc.

One reason why is that while I agree that "meaning is use", the use in the human case is in our interaction with the world and our inner thoughts.

The use in the GPT case is more of a Chinese Room style affair, isolated, unthinking and without motivation or interest.

While typing this comment I've considered a myriad of things, some related, some not - for instance, going to get a coffee in a minute, or turning away to respond to a WhatsApp message. This motivated, conscious and subconscious context switching is just one example - let alone the myriad emotions, memories, feelings, inner life and experience that drive and inform it.

Intelligence as surface is only plausible in the philosophical zombie sense.

And we have nothing yet that comes near to mimicking even that general intelligence at surface level, just (massively impressive) powerful niche-explorers and domain entity generators like GPT.



