
This graph is interesting:

https://www.quora.com/Why-is-Moores-law-no-longer-valid

In terms of how much processing a single CPU core can accomplish (single-thread performance), it plateaued about 15 years ago. So if your computation must be single threaded (audio effects processing, audio codecs, Photoshop, any app where one computation has to finish before the next can begin, some real-time apps), there has been little performance increase since then. However, if your computation can use multiple cores (video, graphics, user interfaces), the available CPU power is roughly multiplied by the number of cores. More cores is like using more computers, so power use tends to increase, but because the task can be broken into multiple independent parts it gets done faster overall (a rough sketch of that kind of split is below).

The number of transistors is no longer a good measure of overall computational capability, since added transistors are often powered down most of the time (they are only used for special tasks like cryptography), but even so the transistor count has also plateaued in the graph. Note the vertical scale is logarithmic, with each increment a factor of 10 over the line below it, so you need a LOT more transistors to keep that number-of-transistors curve going up the way it did in the past.
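Here is a minimal sketch (not from the post, with a made-up work function and frame count) of the distinction above: independent units of work can be farmed out to all cores, while a serial dependency chain is stuck at one core's speed.

  # Hypothetical illustration of single-thread vs. multi-core work.
  from multiprocessing import Pool, cpu_count

  def process_frame(frame_index: int) -> int:
      """Stand-in for an independent unit of work (e.g. one video frame)."""
      total = 0
      for i in range(100_000):
          total += (frame_index * i) % 7
      return total

  def run_single_threaded(n_frames: int) -> list[int]:
      # Each frame waits for the previous one: limited by one core's speed.
      return [process_frame(i) for i in range(n_frames)]

  def run_multi_core(n_frames: int) -> list[int]:
      # Independent frames are spread across all cores: roughly n-cores faster,
      # at the cost of higher total power use.
      with Pool(cpu_count()) as pool:
          return pool.map(process_frame, range(n_frames))

  if __name__ == "__main__":
      frames = 64
      assert run_single_threaded(frames) == run_multi_core(frames)

Audio effect chains are the opposite case: sample N+1 depends on sample N, so there is no independent work to hand to other cores.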

New technologies such as optical computation and quantum computing might help create even faster single-threaded processors, but so far they have had no effect on consumer devices. A lot of the performance limits on CPUs are related to how fast you can feed them data (amount of RAM, RAM bandwidth, long-term storage speed, bus bandwidths). Those bus speeds have also been improving over time, but they still keep CPUs from running at top speed. We do not yet have widespread consumer systems where all the apps and data live in persistent (non-volatile) RAM rather than on an SSD (though some server apps do run entirely from RAM), so it is not just single-threaded CPU performance that matters. If apps and data start residing in RAM rather than on disk drives, CPUs can run faster, but they will use a lot more power as well, so cooling may become a bigger problem even if the bus systems can be made faster.

Even if computation becomes really fast, you generally need to move the data to and from somewhere else for it to be useful (an SSD, a flash drive, network storage), so the speed of that transfer also limits how much computation can get done. Even if your processor could do a color transform on an entire video in a few microseconds, it may take an hour to transfer the video to your tablet, which means it takes an hour to free up space for the next computation. Only applications that do a lot of math with little data I/O (like computing the Mandelbrot set) can really make full use of fast CPUs today, so increasing your I/O speed will usually have a bigger impact than increasing your CPU speed. A back-of-envelope comparison is sketched below.
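A quick back-of-envelope sketch of that compute-vs-transfer gap (all numbers here are illustrative assumptions, not figures from the post):

  # Assumed: a 50 GB video, a CPU that transforms 5 GB/s of pixels,
  # and ~30 MB/s of effective transfer bandwidth to a tablet.
  video_size_bytes = 50e9
  compute_rate_bytes_per_s = 5e9
  transfer_rate_bytes_per_s = 30e6

  compute_time_s = video_size_bytes / compute_rate_bytes_per_s
  transfer_time_s = video_size_bytes / transfer_rate_bytes_per_s

  print(f"compute:  {compute_time_s:.0f} s")           # ~10 s of actual math
  print(f"transfer: {transfer_time_s / 60:.0f} min")    # ~28 min just moving data

Under those assumed rates the math finishes in seconds while the data movement takes most of an hour, which is why faster I/O usually buys you more than a faster CPU.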


