Is it actually unbelievable?

It's basically what every major AI lab head has been saying from the start. It's the peanut gallery that keeps saying they're lying to get funding.



Even as a layman and AI skeptic, this entirely matches my expectations, and something like this seemed basically inevitable as of the first demos of video rendering responding to user input (a year ago? maybe?).

Not to detract from what has been done here in any way, but it all seems entirely consistent with the types of progress we have seen.

It's also no surprise to me that it's from Google, who I suspect is better situated than any of its AI competitors, even if it is sometimes slow to show progress publicly.


https://worldmodels.github.io/

I think this was the first mention of world models I've seen, circa 2018.

This is based on VAEs though.


Google seems to have had the keys to changing the world years ago and decided not to.

Hard to fault them as the process towards ASI now appears to be runaway and uncontrollable.


>It's basically what every major AI lab head is saying from the start.

I suppose it depends on what you count as "the start". The idea of AI as a real research project has been around since at least the 1950s. And I'm not a programmer or computer scientist, but I'm a philosophy nerd, and I know debates about what computers can or can't do started around then. One side of the debate held that progress awaited new conceptual and architectural breakthroughs.

I also think you can look at, say, TED Talks on the topic, with guys like Jeff Hawkins presenting the problem as one of searching for conceptual breakthroughs, and I think similar ideas of such a search have been at the center of Douglas Hofstadter's career.

I think in all those cases, they would have treated "more is different" as an absence of nuance, because there was supposed to be a puzzle to solve (and in a sense there is, and there has been, in terms of vector spaces and backpropagation and so on, but it wasn't necessarily clear that physics could "pop out" emergently from such a foundation).


When they say "the start", I think they mean the start of the current LLM era (circa 2017). The main story of this time has been a rejection of the idea that major conceptual breakthroughs and complex architectures are needed to achieve intelligence. Instead, it's better to focus on simple, general-purpose methods that can scale to massive amounts of data and compute (i.e. the Bitter Lesson [1]).

[1] http://www.incompleteideas.net/IncIdeas/BitterLesson.html


Oof ... to call other people's decades of research into directed machine learning "a colossal waste of researchers' time" is indeed a rather toxic point of view, unsurprisingly causing a bitter reaction among scientists and researchers.

Even if his broader point might be valid (about the most fruitful directions in ML), calling something a "bitter lesson" while insulting a whole field of science is ... something.

Also, as someone involved in early RL, he should know better.


The start of deep neural networks, i.e. AlexNet.


It's akin to us sending a rocket to space and immediately discovering a wormhole. Sure, there's a lot of science about what's out there, but to discover all this in our first few trips to orbit ...


Joscha Bach postulates that what we call consciousness must be something rather simple, an emergent property present in all sufficiently complex biological organisms.

We don't inherit any software, so cognitive function must bootstrap itself from its underlying structure alone.

https://media.ccc.de/v/38c3-self-models-of-loving-grace


   > We don't inherit any software
I wonder, though. Many animal species just "know" how to perform certain complex actions without being taught the way humans have to be taught. Building a nest, for example.

If you say that this is emergent from the "underlying structure alone", doesn't this mean that it would still be "inherited" software (though in this case, maybe we think of it more like punch cards)?


We inherit ~2GB of digital data as DNA. Quite how that turns into nest-building how-tos is not yet known, but it must happen somehow.
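For what it's worth, the figure depends on the encoding you assume. A quick back-of-envelope in Python (the ~3.1 billion base pairs and 2-bits-per-base packing are my assumptions, not the parent's):

    # Rough information content of the human genome.
    # Assumptions (mine): ~3.1e9 base pairs per haploid genome,
    # 2 bits per base, since log2(4) = 2 for A/C/G/T.
    HAPLOID_BASE_PAIRS = 3.1e9
    BITS_PER_BASE = 2

    haploid_gb = HAPLOID_BASE_PAIRS * BITS_PER_BASE / 8 / 1e9
    diploid_gb = 2 * haploid_gb  # two copies of each chromosome

    print(f"haploid: {haploid_gb:.2f} GB")  # ~0.78 GB
    print(f"diploid: {diploid_gb:.2f} GB")  # ~1.55 GB, ballpark of the ~2GB above

Whether you land near ~0.8GB, ~2GB, or ~3GB mostly comes down to haploid vs. diploid and bits vs. bytes per base.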


I’ve seen different figures for information content of DNA but they’re all mostly misleading. What we actually inherit is much more. We are the result of an unpacking algorithm starting from a single cell over time, so our information content should at the very least include the entirety of the cell (which is probably impossible to calculate). Additionally, in a more general sense, arbitrarily complex behavior can be derived from very simple mathematics, e.g. cellular automata. With sufficient complex dynamics (which for us are given by the laws of physics), even very small information changes lead to vastly different “emergent behavior”, whatever that means. One could improperly say that part of the information is included in the laws of physics itself.
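To make the cellular automaton point concrete, here's a toy sketch (Rule 110 and every parameter here are my choice, nothing from the comment): one line of update arithmetic, yet the resulting dynamics are rich enough to be Turing-complete.

    # Elementary cellular automaton: each cell's next state depends
    # only on itself and its two neighbors, via an 8-entry lookup
    # table encoded in the bits of RULE.
    RULE = 110
    WIDTH, STEPS = 64, 32

    cells = [0] * WIDTH
    cells[WIDTH // 2] = 1  # a single seed cell

    for _ in range(STEPS):
        print("".join("#" if c else "." for c in cells))
        cells = [
            (RULE >> (4 * cells[(i - 1) % WIDTH]
                      + 2 * cells[i]
                      + cells[(i + 1) % WIDTH])) & 1
            for i in range(WIDTH)
        ]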

A biological example that I like: the neural structures for vision develop almost fully formed from the very beginning. The state of our network at initialization is effectively already functional. I'm not sure to what extent this is true for humans, but it is certainly true for simpler organisms like flies. The way cells achieve this is through some extremely simple growth rules as the structure is being formed for the first time. Different kinds of cells behave almost independently of each other, and it just so happens that the final structure is a perfectly functional eye. I've seen animations of this during a conference talk and it was one of the most fascinating things I've ever seen. It truly shows how the complexity of a biological organism is billions of times that of any human technology. And at the same time, it's a beautiful illustration of the lack of intelligent design. It's like watching a Lego set assemble itself just by shaking the pieces.


Problems like this will turn out to have simple solutions. Once we get past the idea of "inherited instinct" (obvious nonsense, and easily proved to be so), the solution will be easier to see.

An example that might be useful: dragonflies lay their eggs in water. Since a dragonfly has like a 4-bit CPU, you might be amazed at how it manages to fit all the processing required to identify a body of water from a distance into its tiny mind, and also marvel at what sort of JPEG+++ encoding must be used to convey what water looks like from generation to generation.

But they don't do that at all: instead they have eyes that are sensitive to polarized light. The surface of water polarizes reflected light. So do things like polished gravestones. So dragonflies will lay their eggs on gravestones too.

One I like to ponder is beavers building dams. Do they have an encoded algorithm that knows they need to dam the river to have a place to live, by gnawing on trees, carrying them to the right place on the riverbed, etc.? Nope, certainly they don't have that. Perhaps they have teeth that grow so long that they hurt, motivating the animal to gnaw on something solid to wear them down. The only solid thing they have available is a tree.


A similar phenomenon was demonstrated with deep neural networks nearly a decade ago: you optimize the architecture using randomized weights instead of optimizing the weights. You can still optimize the weights in a separate, additional step to improve performance.
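If I'm reading this right, it's the weight-agnostic neural networks idea. A minimal sketch of the scoring step (the toy task and every name here are mine, purely illustrative):

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy task: fit y = sin(x) with a 1 -> 8 -> 1 network.
    xs = np.linspace(-3, 3, 50).reshape(-1, 1)
    ys = np.sin(xs)

    def forward(x, mask, w):
        # `mask` holds two 0/1 connectivity matrices (the architecture
        # being searched); `w` is one shared weight for every connection.
        h = np.tanh(x @ (mask[0] * w))
        return h @ (mask[1] * w)

    def score(mask, n_draws=20):
        # Architecture quality = average fit across random shared weights,
        # so good scores come from structure, not from tuned parameters.
        errs = [np.mean((forward(xs, mask, rng.uniform(-2, 2)) - ys) ** 2)
                for _ in range(n_draws)]
        return -float(np.mean(errs))

    mask = (rng.integers(0, 2, (1, 8)), rng.integers(0, 2, (8, 1)))
    print(score(mask))

An outer search loop would then mutate masks and keep the best scorers; as the parent says, the weights can still be trained afterwards as a separate step.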


That's interesting indeed; or take spiders building webs. So there must be some 'microcode' that does get inherited, like physical features.

But then you have things like language or societal customs that are purely 'software'.


I've always said that animals have short-term and long-term memory via the hippocampus, and then there's supragenerational memory stored in DNA: behaviors that are learned over many generations and passed down via genetics.


The emergent property theory seems logical, but I'm also partial to the quantum-tunneling-miasma theory which basically posits that there could be something fairly complex going on, and we just lack the ability to observe/measure it in our current physics. (Although I have difficulty coherently separating this theory from faith-based beliefs)


>We don't inherit any software, so cognitive function must bootstrap itself from its underlying structure alone.

Hardware and software, as metaphors applied to biology, are I think better understood as a continuum than as a binary, and if we don't inherit any software (is that true?), we at least inherit assembly code.


> we don't inherit any software (is that true?), we at least inherit assembly code

To stay with the metaphor, DNA could rather be understood as firmware that runs on the cell. What I mean by software is the 'mind' that runs on a collection of cells. Things like language, thoughts and ideas.

There is also a second level of software that runs not on a single mind alone but on a collection of minds, forming cliques or societies. This one is encoded not in genes but in memes.


I think we have some notion of a proto-grammar or ability to linguistically conceptualize, probably at the level of some primordial conceptual units that are more fundamental than language, thoughts and ideas in the concrete forms we generally understand them to have.

I think it's like Chomsky said: we don't learn this infrastructure for understanding language any more than a bird "learns" its feathers. But I might be losing track of what you're suggesting is software in the metaphor. I think I'm broadly on board with your characterization of DNA, the mind and memes generally, though.


At the most fundamental level, is it even linguistic? Would Tarzan speak at all?


Children (who aren't alone) will invent languages to communicate with each other; see Nicaraguan Sign Language.


Don't know who this Bach dude is, but I've been postulating the same thing since the early 1980s. Only to my friends in the pub, but still...


> We don't inherit any software

How do you claim to know this?


Lemme start by saying this is objectively amazing. But I just really wouldn't call it a breakthrough.

We had one breakthrough a couple of years ago with GPT-3, where we found that neural networks / transformers + scale do wonders. Everything else has been smooth, continuous improvement. Compare today's announcement to the Genie 2 [1] release less than a year ago.

The speed is insane, but not surprising if you put it in the context of how fast AI is advancing. Again, nothing _new_. Just absurdly fast continuous progress.

[1] - https://deepmind.google/discover/blog/genie-2-a-large-scale-...


Wasn't the model winning gold at the IMO the result of a breakthrough? I doubt a stochastic parrot can solve math at IMO level...


Why wouldn't it? I have yet to hear one convincing argument for how our brain isn't working as a function of probable next-best actions. When you look at how amoebas work, and at animals that are somewhere between them and us in intelligence, and then at us, it is a very similar kind of progression we see with current LLMs, from almost no model of the world to a pretty solid one.


As far as we know, it was "just" scale on depth (model capability) and breadth (multiple agents working at the same time).



