
It’s indeed odd that current DNNs require massive amounts of energy to retrain and lack any kind of practical continuous adaptation and learning.


With computer-based intelligence we have the overhead of computing every bit through (probably) inefficient silicon and direct electric currents. The brain leverages the properties of chemistry, honed through millions of years of evolution.


The brain isn't a faster computer.

An infinitely fast computer wouldn't meaningfully change the "expensive training vs. fast, static inference" workflow that neural networks have always been developed around (except in the most brute-force "retrain on the entire world, every single nanosecond" sense).


I think we agree? I'm talking about the efficiency of the brain, not its processing speed. Efficiency at doing things advantageous to the selfish genes, I guess.

The brain is supremely efficient at what it has evolved to do. It's almost tautological: if it weren't, it wouldn't have evolved that way.

Silicon comes from an alien land and is merely emulating. Even with the best algorithms there has to be a limit on how efficient a computer-based intelligence can be without changing how the chips themselves work.

You could spin it around and say: well, computers are better at many things than humans, and there is no way you could get a biological brain to be as good for the same amount of power (e.g. a Raspberry Pi can do calculations our brain couldn't possibly do).


Really well said; I think this is an excellent way to frame the dichotomy (comparison?).

Many of these threads make the binary mistake of asking: can these systems be compared, or are they fundamentally different? A bit of both, almost certainly.


> The brain is supremely efficient at what it has evolved to do. It's almost tautological: if it weren't, it wouldn't have evolved that way.

This echoes an extremely naive view of evolution.

There are many phenotypes in the living world which have evolved but for which there is no reason to believe that the phenotype is either (a) supremely efficient or (b) under selection pressure (the two are obviously related).

Evolution has no tautology. Brains do not evolve to be supremely efficient, just like humans do not evolve to be supremely efficient.

What exists today is that which has survived, for whatever reason. It's not even possible to say something as apparently simplistic as "the only purpose evolution respects is leaving behind more copies" because that ignores (a) group selection (b) changing ecosystems that favor plasticity in the long run.


> There are many phenotypes in the living world which have evolved but for which there is no reason to believe that the phenotype is either (a) supremely efficient or (b) under selection pressure (the two are obviously related).

> Evolution has no tautology. Brains do not evolve to be supremely efficient, just like humans do not evolve to be supremely efficient.

> What exists today is that which has survived, for whatever reason. It's not even possible to say something as apparently simplistic as "the only purpose evolution respects is leaving behind more copies" because that ignores (a) group selection (b) changing ecosystems that favor plasticity in the long run.

A primary example of this is our legs; they would be much more efficient if the knees pointed backwards. They are not the most efficient design, but simply good enough.


> "our legs, they would be much more efficient if the knees pointed backwards. They are not the most efficient design, but simply good enough."

I don't think you can say one leg type is better than another without reference to the intended use of the leg: plantigrade legs have better "stability and weight-bearing ability"[0], whereas digitigrade legs (like those of cats and most birds, which, by the way, appear to have a reverse knee but don't: that joint is actually the ankle, working like a second, backwards knee) "move more quickly and quietly"[1].

Tying this back to the original point, the same is true for brains and computers - they are each better in very specialist cases within specific constraints.

[0] https://en.wikipedia.org/wiki/Plantigrade

[1] https://en.wikipedia.org/wiki/Digitigrade


> The brain is supremely efficient at what it has evolved to do. It's almost tautological: if it weren't, it wouldn't have evolved that way.

Not really; evolution doesn't guarantee the brain will be supremely efficient. It just guarantees that it will be efficient ENOUGH.


Again, it is efficient at what it does.


You're talking about something orthogonal: how efficient it is. He's talking about something different:

https://en.m.wikipedia.org/wiki/Catastrophic_interference

That practically requires full retraining at every step to integrate new knowledge. I think we have some partial solutions, like learning to select between fine-tunings, but those don't help if the task needs to cut across them.

The human brain doesn't seem to suffer from catastrophic interference to nearly the same degree, independent of its computational efficiency, though there are possibly related phenomena, such as developmental stages that may never take place if they are delayed.
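
If you want to see the effect for yourself, here's a deliberately minimal sketch in NumPy: a single logistic-regression "network" trained on one task and then on a second, unrelated one, with no rehearsal of the first task's data. The tasks, sizes, and hyperparameters are arbitrary illustrative choices, not from any particular paper:

    # Minimal catastrophic-interference demo: sequential training on two
    # unrelated linearly separable tasks, no rehearsal of task A's data.
    import numpy as np

    rng = np.random.default_rng(0)

    def make_task(dim=20, n=500):
        # A binary task defined by a random hyperplane.
        w_true = rng.normal(size=dim)
        X = rng.normal(size=(n, dim))
        return X, (X @ w_true > 0).astype(float)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def train(w, X, y, steps=2000, lr=0.5):
        # Plain gradient descent on the logistic loss, starting from w.
        for _ in range(steps):
            w = w - lr * X.T @ (sigmoid(X @ w) - y) / len(y)
        return w

    def acc(w, X, y):
        return ((X @ w > 0) == y.astype(bool)).mean()

    Xa, ya = make_task()
    Xb, yb = make_task()

    w = train(np.zeros(20), Xa, ya)
    print("task A, after training on A:", acc(w, Xa, ya))  # ~1.0
    w = train(w, Xb, yb)                  # continue training on B only
    print("task A, after training on B:", acc(w, Xa, ya))  # degrades toward chance
    print("task B, after training on B:", acc(w, Xb, yb))  # ~1.0

The single shared weight vector is the extreme case of shared parameters: everything learned for task B overwrites what was learned for task A.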


It's an apples-to-oranges comparison. They're both fruit that grow on trees, but that's where the similarities end.

The primary difference, and likely the reason that brains are unreasonably effective, is the specifics of the architecture and internal representations (in the rigorous, information-theoretic sense) of their computational systems. The brain is not quite analog, but it uses analog means; it is not quite digital, but it does process via abstractions.

You can still reasonably call the brain a "computer" if you decide it can shed the loaded history of that word and its close association with binary operations using transistors. You can do so because it uses internal structures to process inputs and emit outputs. But like I said above, it requires a generalized interpretation of the word to start to understand where and how the two fields of study may be unified.


Yes, it's odd that sled dogs make terrible housepets. /s

Neural networks fundamentally aren't designed to be otherwise. The workflow that has guided their entire development for over a decade is based around expensive training and static inference.


Why then all the talk about AGI, when the fundamentals don't even allow for it to emerge?


Because "AGI" is very poorly defined, and ChatGPT is very "general" (compared to everything before it) and matches some (but not all) definitions of "intelligent".


Because drumming up talk about AGI is a really great way to get funding for your startup. The tech industry sustains itself on hype.


Not only but also.


Because transformers et al. have gotten us the closest we've ever been to any system that can even claim to be AGI.


First make it work; then make it efficient.


Your scientists were so preoccupied with whether or not they should that they didn't stop to think if they could.


... seems potentially better than the other way around? Well, I suppose it depends.


When you say “massive amounts of energy”, are you comparing the energy requirements to a single human or to the billions of years of solar and geothermal energy that went into producing the human species?


I don’t think this is an apt comparison, but I do think the amount of energy it takes to grow a human to brain maturity in adulthood is an interesting one. Brains plus bodies over a 20-year development cycle is still probably much less than training even a low-quality LLM.


Let’s say a human needs an average of 2000 calories a day. A dietary calorie (kcal) is roughly equivalent to 1 watt-hour, so over 20 years it takes about 15 MWh to sustain a human.

Let’s say a single A100 has a peak power draw of 250 W, and you need 100 of them to train an LLM. The cluster then draws 25 kW, so each hour of training consumes 25 kWh of energy. 15 MWh / 25 kW = 600 hours, or 25 days, which is probably pretty close to the true training time.
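
A quick sanity check of that arithmetic in Python, using the more exact 1 kcal ≈ 1.16 Wh (the GPU count and power draw are the same assumptions as above, not measurements):

    # Back-of-the-envelope check of the numbers above. 1 kcal is ~1.16 Wh;
    # the text rounds this to 1 Wh, giving ~15 MWh and ~25 days.
    KCAL_TO_WH = 1.16
    human_wh = 2000 * KCAL_TO_WH * 365 * 20        # 20 years of food energy
    print(f"human, 20 years: {human_wh / 1e6:.1f} MWh")          # ~16.9 MWh

    cluster_w = 250 * 100                          # 100 A100s at 250 W each
    hours = human_wh / cluster_w                   # energy / power = time
    print(f"equivalent training: {hours:.0f} h = {hours / 24:.0f} days")  # ~677 h, ~28 days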

So the numbers are actually pretty close. But a human brain doesn’t start out as a set of random weights like an LLM. The human brain has predefined structure that’s the result of an extremely long evolutionary process.


It's probably taking the analogy too far, but perhaps the brain's predefined structure is akin to the original LLM training and our "life" is the fine tuning.

I wonder how many MWh the entirety of evolution represents.


By that token, the amount of energy attributed to artificial neural networks should, to some extent, include the development of the biosphere and of the people who created them.


Not really? The point is that most artificial neural networks start from essentially zero (random, noisy weights), whereas a human neural network is jump-started with an overall neural structure that has been shaped by millions of years of evolution. Sure, it's not fair to compare the overall energy required to get there, but the point is just that a biological neural network starts with a huge head start that is frequently forgotten when talking about efficiency.
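
To make the head-start point concrete, here's a toy sketch (linear regression in NumPy; every number is a hypothetical, illustrative choice): fitting a task that is a small perturbation of an already-learned one takes far fewer gradient steps than fitting it from random weights.

    # Toy "head start" demo: gradient descent from a pretrained init vs.
    # from random weights, on a task close to the pretrained one.
    import numpy as np

    rng = np.random.default_rng(1)
    d = 50
    X = rng.normal(size=(1000, d))

    w_old = rng.normal(size=d)                 # an earlier, related task
    w_new = w_old + 0.01 * rng.normal(size=d)  # new task: small perturbation
    y = X @ w_new

    def steps_to_fit(w, tol=1e-3, lr=0.01):
        # Count gradient steps until mean squared error drops below tol.
        for step in range(1, 100_000):
            err = X @ w - y
            if (err ** 2).mean() < tol:
                return step
            w = w - lr * X.T @ err / len(y)
        return None

    print("from random init:    ", steps_to_fit(rng.normal(size=d)))  # many hundreds of steps
    print("from pretrained init:", steps_to_fit(w_old.copy()))        # far fewer

The asymmetry only grows if you count the cost of producing w_old in the first place, which is exactly the evolutionary energy the thread is arguing about.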


See “The Last Question” for some sci-fi solutions to this.


>> It’s indeed odd that current DNNs require massive amounts of energy to retrain and lack any kind of practical continuous adaptation and learning.

To me that just means nobody has figured out how to do it effectively. The majority will simply make use of what's been done and proven, so we got a plateau at object recognition, and again at generative AI (with applications in several domains). One problem with continuous adaptation and learning is providing an "entity" and an "environment" for it to "live" in while doing the adaptive learning. Some researchers are doing that, either with robots or with simulations, but that's much harder to set up than a lot of cloud compute resources. I do agree with you that these aspects are missing, and things will be much more interesting when they get addressed.


In-context learning exists, though.



