Serious question. Do you browse the web with Internet Explorer?
V8 and the new Mozilla engine are optimized to hell already. And they compile to native code. http://kailaspatil.blogspot.com/2011/08/jaegermonkey-archite... And they are efficient. These engines have proven that you don't need pointers or explicit types to build an efficient JavaScript implementation.
The only reason I can see for features like explicit types and pointers is to support systems programming, where they are required for certain activities (probably fewer than you would think, though). For that, I would really love to see a CoffeeScript with those features available optionally. I think to do that you would want to find a way to compile directly to machine code, assembly, LLVM bitcode, or something like that.
There is still a lot of performance slippage out there in high-level languages (e.g. look at the need for assembly when people write raw codecs in C: x264, etc.). We are rapidly approaching the battery wall on mobile devices, and the clock wall is already here. ISAs may be extended to support dynamic languages more efficiently (tags, direct uop translation from user-specified instructions), but right now, at a given TDP, static execution is greener for many applications than dynamic, and history has shown us CPU vendors are unwilling to go down this path.
JavaScript is anything but efficient right now: some code paths are fast, but with a high startup cost. Look at the startup problem with Chrome/V8, for instance, or watch a Node.js application hit the GC wall with 800MB of live data.
There are times when static is faster than dynamic, and vice versa. Times when heavy upfront compilation is better than incremental run-time analysis, and vice versa. As the clock wall, TDP wall, and process-communication wall hit, we need all the tools available to exploit maximum performance.
I'm proposing to compile as much upfront as possible, more than the default for those engines. But for something like JavaScript, without the types specified or inferred in all cases, you can't necessarily compile everything upfront. Type inference would be much better than requiring manual specification of all types.
I think that it is just much better software engineering to improve the GC and JIT compilation rather than to code all of the memory management and types manually or use pointer tricks. If you are building a codec or critical part of an operating system then you may need assembly or well-defined types for static compilation.
It would be nice to have the memory management and other features available for when you need them but I don't think they should be the default.
Anyway, I think it could actually be useful to find ways to remove the separation between assembly-level coding and higher-level coding. For example, if I were writing a codec in CoffeeScript, I would probably write something like interleave.highBytesFromQuads rather than PUNPCKHBW.
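To make the idea concrete, here is a minimal sketch of what a name like that could mean. The function name is the hypothetical one from the comment above, not a real library API; it emulates in plain JavaScript what PUNPCKHBW does in hardware, namely interleaving the high-order bytes of two packed operands:

```javascript
// Hypothetical high-level spelling of PUNPCKHBW's behavior:
// take the high half of each operand and interleave them byte by byte.
function highBytesFromQuads(a, b) {
  const half = a.length / 2;            // high half of each operand
  const out = new Uint8Array(a.length);
  for (let i = 0; i < half; i++) {
    out[2 * i] = a[half + i];           // byte from the first operand
    out[2 * i + 1] = b[half + i];       // byte from the second operand
  }
  return out;
}

const a = Uint8Array.from([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]);
const b = Uint8Array.from([16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31]);
console.log(Array.from(highBytesFromQuads(a, b)));
// → [8, 24, 9, 25, 10, 26, 11, 27, 12, 28, 13, 29, 14, 30, 15, 31]
```

The point is that a compiler seeing this loop shape over typed arrays could, in principle, lower it to the single instruction.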
I agree totally about removing the separation between assembly level coding and higher level coding.
With current ISAs, no matter how much genius is thrown at GC/JIT, it's always going to yield a layer of overhead. The pipes are a fixed width, the caches a fixed size: the plumbing is static.
Below this floor, a thinner abstraction will yield greater performance. The thinner abstraction is useful for implementing the GC/JIT itself. Any language that won't let you bust outside of the GC heap is always going to hit a pain point.
Until a language can self-host with no significant efficiency loss, there will always be another language to wedge under it. Until a platform has a mechanism to expose its raw feature set up to a hosted language, we will forever be in a world of software rasterizers and sluggish Java UIs, where software developers re-implement functionality that highly tuned hardware pipes already provide.
Better to be able to write using interleave.highBytesFromQuads, and better to be able to include your implementation of it using a punch-through to PUNPCKHBW where available, or an emulation where not. I guess it's the old argument of high-level interfaces versus low-level ones. Useful high-level abstractions appear over time, but without access to the low level we cannot experiment and build them on a rapid cycle.
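The punch-through-or-emulate pattern can be sketched as a one-time dispatch: bind the fastest available implementation at load time rather than branching on every call. `nativeUnpackHigh` here is a placeholder for a platform-provided fast path (it does not exist in plain Node/browsers, so the portable emulation gets selected):

```javascript
// Portable fallback: interleave the high halves of two byte arrays.
function emulatedUnpackHigh(a, b) {
  const half = a.length / 2;
  const out = new Uint8Array(a.length);
  for (let i = 0; i < half; i++) {
    out[2 * i] = a[half + i];
    out[2 * i + 1] = b[half + i];
  }
  return out;
}

// Hypothetical punch-through: if the platform exposed a native
// implementation under this (made-up) name, bind it; otherwise emulate.
const unpackHigh =
  typeof globalThis.nativeUnpackHigh === "function"
    ? globalThis.nativeUnpackHigh   // fast path where available
    : emulatedUnpackHigh;           // emulation everywhere else
```

Callers only ever see `unpackHigh`; whether it is backed by hardware or by a loop is an implementation detail, which is exactly the separation the comment argues for.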
I'm sure some OS vendors would like to keep the browser crippled, because in the natural endgame their OS doesn't need to exist as an expensive product. It's good to see Mozilla pushing the boundaries.
JS JITs are wonders of modern engineering, for sure. But there's a weird paradox: as they get smarter, it doesn't get easier to write optimized code; it actually gets harder! Nobody really understands how to write optimizable code. Nor, in some sense, should they. But I think there's a real need for subset languages with more predictable performance models than JavaScript, with compilers tuned for the kinds of optimization done by modern JS engines.
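A rough illustration of what such a predictable subset would enforce, based on well-known engine heuristics (hidden classes and monomorphic call sites in V8/SpiderMonkey); this is a sketch of the discipline, not engine-specific guidance:

```javascript
// Deopt-prone style: fields appear conditionally, so objects built here
// can end up with different hidden classes, making call sites polymorphic.
function makePointSloppy(x, y) {
  const p = {};
  if (x !== undefined) p.x = x;  // shape depends on the arguments
  p.y = y;
  return p;
}

// JIT-friendly style: one constructor, one shape, fields always numbers,
// so every Point shares a hidden class and the JIT can specialize.
function Point(x, y) {
  this.x = +x;  // coerce so the field is always numeric
  this.y = +y;
}

function dot(p, q) {
  // A monomorphic call site: if this only ever sees Points,
  // the engine can compile it down to straight arithmetic.
  return p.x * q.x + p.y * q.y;
}
```

A subset language could reject the first style outright, which is what would make its performance model predictable where full JavaScript's is not.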
One nice description of this general problem was given by Slava Egorov.