
There is a physical limit on the efficiency of computation: the Landauer limit. Modern chips are already within 3 orders of magnitude of that limit.

So this exponential trend is not gonna last long term... A decade at most.



We're way further than 3 orders of magnitude away.

"Theoretically, room‑temperature computer memory operating at the Landauer limit could be changed at a rate of one billion bits per second with only 2.85 trillionths of a watt of power being expended in the memory media."

If we were really within 3 orders of magnitude, that would put a piece of memory changing at 1 Gbps at 2.85 billionths of a watt of power with current technology.
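For what it's worth, the quoted figure falls straight out of kT·ln 2. A quick sketch in Python (assuming "room temperature" here means 25 °C / 298.15 K, which is what reproduces the 2.85 number):

    import math

    # Sanity check on the quoted Landauer-limit figure.
    # k is Boltzmann's constant; T assumed to be 25 C (298.15 K).
    k = 1.380649e-23                 # J/K
    T = 298.15                       # K
    e_bit = k * T * math.log(2)      # ~2.85e-21 J to erase one bit
    print(e_bit * 1e9)               # ~2.85e-12 W for 1e9 bit changes/s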

Beyond that, modern CPUs are still horrendously inefficient in how much data they shuffle around to perform a computation, since they're optimised for getting maximum performance out of a single thread.

A chip operating near the Landauer limit and efficiently computing useful results on top of that makes our current CPUs look like abacuses.


From doi:10.1038/nature10872: "From a technological perspective, energy dissipation per logic operation in present-day silicon-based digital circuits is about a factor of 1000 greater than the ultimate Landauer limit".


I ran the numbers, and while you're much closer than I was, it still seems to me the factor is well over 1000.

1 billion bits per second at 2.85 trillionths of a watt

-> a 1-billion-transistor CPU uses 2.85 trillionths of a watt at 1 Hz

-> a 1-billion-transistor CPU uses 2.85 thousandths of a watt at 1 GHz

-> a 1-billion-transistor CPU uses ~1/100 watt at 3.5 GHz

So that puts us at a factor of 10,000 away (billion-transistor, 3.5 GHz CPUs use ~100 watts at peak), assuming all transistors switch every cycle. However, nowhere near all of them do: most are cache, or logic that isn't in use at any given moment, etc. Call it 5 orders of magnitude, or 100,000x.
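The same arithmetic as a rough sketch (round numbers only; the ~100 W peak draw is my assumption for a typical billion-transistor desktop CPU):

    # Gap between an actual CPU and the Landauer-ideal power draw.
    landauer_1e9_flips = 2.85e-12               # W: 1e9 bit flips/s at the limit
    ideal_at_3_5ghz = landauer_1e9_flips * 3.5e9  # ~1/100 W if every
                                                  # transistor flips each cycle
    actual_watts = 100.0                        # assumed peak draw of such a CPU
    print(actual_watts / ideal_at_3_5ghz)       # ~1e4: 4 orders of magnitude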

The paper you're referencing seems to ignore leakage current and the like, but I don't think you'd want to do that for a practical analysis of the field, since those other factors will obviously have to improve as well.

Add in how inefficient our modern computation is, in terms of how many bits we flip around for the result we're computing, and I stand by my comment: theoretically optimal computation devices are going to make our current CPUs look like abacuses. Though we'll have to break out of the x86 and even the CPU model to get there.


True, but right now programmers often aren't even using what we already have very efficiently. There are a few, a select few, who are making supercomputing systems more efficient, but by and large we have a long, long way to go.

Furthermore, the things that have 'changed everything' in the last few years are all user-interface related. As we get better at recognising and generating human speech, that is when things will truly change.

However, one thing I do have to ask those who know better than I do: does the Landauer limit imply that, at a certain point, we won't be able to optimize hardware capability any further? And would this include optimizing for size? In other words, if we hit that limit with something the size of Watson, could we reasonably still expect it to shrink to the size of a cell phone half a century later?


From the wiki entry: "Theoretically, room‑temperature computer memory operating at the Landauer limit could be changed at a rate of one billion bits per second with only 2.85 trillionths of a watt of power being expended in the memory media."

Do we really have chips that update a billion bits per second for only a few billionths of a watt?

Also, reversible computing would take things much further without violating Landauer.


You are right; reversible and quantum computing could take things further.


Even just a decade of this exponential trend might very well "change everything" as the title claims.



