While we have achieved great things along the Von Neumann evolutionary line, now that the MHz free lunch is ending it's time to recognize that its sequential nature is holding us back. There are still many branches of Computer Science that researchers and industry have barely explored. We need to reevaluate these other branches, like Functional and Dataflow programming.
Ivan Sutherland explains it best when he says that concurrency is not fundamentally hard. The problem is that we keep constraining ourselves to the fundamentally sequential Von Neumann architecture.
https://www.youtube.com/watch?v=jR9pAaQlVRc#t=267
I feel the Mill architecture is sufficiently familiar, while still being quite different, that it could help us transition to more interesting/efficient/parallel/concurrent architectures.
On the contrary, I think we haven't exploited sequential machines to their full potential, and what's "holding us back" is all this theoreticism and overabstraction. There is likely a point where concurrency is truly necessary, but one only has to look at the demoscene, full of people who have never formally studied CS, to see something closer to what hardware can really do -- and then wonder why so many others working in software, with formal backgrounds and extensive education in CS, can't.
This crosses my mind every time I hear about the MHz free lunch.
https://www.youtube.com/watch?v=oBegD7k2wvo , 02:20. JS and WebGL, which seem to be all the rage nowadays, would smoke the shit out of my i7 doing that. In case younger chaps are watching, the C64 has a 1 MHz CPU and a whopping 64 KB of RAM. For comparison, minified jQuery 2.0.0 is about 80 KB.
I agree that concurrency isn't fundamentally hard, but the reason everyone prefers faster serial execution is that a lot of algorithms (extremely simple example: f^100000(x)) are fundamentally unparallelizable. Faster serial execution is just so much more straightforward.
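To make the dependency concrete, here's a minimal C sketch (the particular f is just an arbitrary placeholder, not anything from the discussion): each iteration consumes the previous result, so the loop is one long serial dependency chain that extra cores can't shorten.

    #include <stdio.h>

    /* Arbitrary placeholder step function; any f whose output feeds
       the next call gives the same dependency structure. */
    static double f(double x) { return x * x + 0.25; }

    int main(void) {
        double x = 0.1;
        /* Serial dependency chain: iteration i+1 cannot begin until
           iteration i has produced its x. Extra cores don't help. */
        for (int i = 0; i < 100000; i++)
            x = f(x);
        printf("f^100000(0.1) = %f\n", x);
        return 0;
    }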
So a common problem with concurrency tends to be not "How do I make these functions run in parallel?" but "Is there an algorithm that does the same thing I want without relying on repeated function composition?"
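For instance, under the illustrative assumption that the step happens to be affine, f(x) = a*x + b (which the general case certainly doesn't grant you), composition of steps is associative, so f^100000 collapses into O(log n) compositions by squaring. A sketch:

    #include <stdio.h>

    /* Composing two affine maps gives another affine map:
       (f . g)(x) = f(g(x)) = (f.a * g.a) * x + (f.a * g.b + f.b) */
    typedef struct { double a, b; } affine;

    static affine compose(affine f, affine g) {   /* f after g */
        return (affine){ f.a * g.a, f.a * g.b + f.b };
    }

    /* Exponentiation by squaring: n-fold composition in O(log n)
       compose() calls instead of n serial applications. */
    static affine power(affine f, unsigned n) {
        affine acc = { 1.0, 0.0 };                /* identity map */
        while (n) {
            if (n & 1) acc = compose(acc, f);
            f = compose(f, f);
            n >>= 1;
        }
        return acc;
    }

    int main(void) {
        affine f = { 0.999, 0.001 };              /* arbitrary example */
        affine g = power(f, 100000);              /* ~17 compositions */
        printf("f^100000(0.1) = %f\n", g.a * 0.1 + g.b);
        return 0;
    }

Of course most f have no such structure; the point is just that the win, when there is one, comes from the algebra of the problem, not from throwing cores at the original loop.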