But would they have done this without Mozilla pushing performance with asm.js, or done it this fast? Competition is great, and it also makes me wonder: if they keep this up, maybe NaCl won't be needed either?
Value types (64-bit integers, DFP, etc.) are possibly on the table for ES7. Brendan has a strawman he shows in talks; he discusses it in his JSConf.EU video.
That's not quite true -- it is entirely possible to write a 64-bit integer class entirely in JavaScript and have a JIT / asm.js compiler recognize it and swap in a native version (see ecmascript_simd [1], which I believe was written with this idea in mind).
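To make that concrete, here is a minimal, hypothetical sketch of such a class (the name `UInt64` and its shape are my own illustration, not ecmascript_simd's API). An engine could in principle recognize this pattern and lower it to native 64-bit arithmetic:

```javascript
// Hypothetical sketch: a 64-bit unsigned integer stored as two 32-bit
// halves in plain JavaScript. Nothing engine-specific here; the point
// is that the representation is regular enough for a compiler to spot.
class UInt64 {
  constructor(hi, lo) {
    this.hi = hi >>> 0; // upper 32 bits, forced to uint32
    this.lo = lo >>> 0; // lower 32 bits, forced to uint32
  }
  add(other) {
    const lo = (this.lo + other.lo) >>> 0;
    const carry = lo < this.lo ? 1 : 0; // low-word overflow detection
    const hi = (this.hi + other.hi + carry) >>> 0;
    return new UInt64(hi, lo);
  }
  toString() {
    const hex = (n) => n.toString(16).padStart(8, "0");
    return "0x" + hex(this.hi) + hex(this.lo);
  }
}
```

A sufficiently smart compiler could replace the carry dance above with a single 64-bit add instruction, which is exactly the kind of pattern-matching being described.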
Do you have any pointers to how value types can be implemented efficiently without adding type tags to the language? I can see how that would work in simple cases (say within one function), but it seems that there would be many cases where the vm cannot assume the type of a variable.
Sorry I meant type "tags" in the sense of modifying the language syntax to have required type annotations.
I understand nan-boxing and similar techniques, but they seem to imply at least some runtime overhead to test the type of the value in some cases. Also, AFAIK, 64 bit integers cannot be represented with nan-boxing, as there are only 51 bits available for the payload.
> Sorry I meant type "tags" in the sense of modifying the language syntax to have required type annotations.
There are already multiple types in JavaScript, and this already affects performance; for example, "+" is defined for both numbers and strings. All JavaScript engines handle this in more or less the same way. You start out with a "baseline JIT" that makes no assumptions about the types of values and performs type-check-and-dispatch on operations like "+". These type checks record the types of the values that have flowed through each point. Once enough values have flowed through that we can reasonably predict the types, we recompile the function assuming the same type (for example, number) keeps flowing through, which enables greater optimizations. Type guards are then hoisted up (or eliminated where they can be proven never taken); if a guard ever fails, the engine deoptimizes and bails out to the baseline code.
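A tiny sketch of the situation the type feedback has to deal with (illustrative only; real engines track far more than this):

```javascript
// The same "+" site can see numbers or strings, so a baseline JIT must
// check the operand types on every call.
function add(a, b) {
  return a + b; // polymorphic: number + number, or string + string
}

// After many calls like these, recorded feedback says "only numbers
// flow through here", so the engine can recompile `add` down to a raw
// machine add, guarded by a cheap type check.
for (let i = 0; i < 10000; i++) add(i, i + 1);

// A call like this would fail the guard and trigger deoptimization
// back to the generic baseline version.
add("foo", "bar");
```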
Value objects don't change this overall picture, they just add more types.
All of this, however, is irrelevant to asm.js. With an ahead-of-time optimizing compiler, we know what the types are beforehand via the asm.js spec. So there are no type guards inserted at all, and none of this is an issue. For example, NaN-boxing is not used in the Firefox asm.js compiler (OdinMonkey). We know which values are doubles and which are integers, so we need no runtime guards or type tests at all.
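For illustration, a minimal asm.js-style module (a sketch, not OdinMonkey's actual output): the `|0` and `+()` coercions declare every type ahead of time, which is what lets an AOT compiler emit native integer and double instructions with no runtime guards. It also runs as ordinary JavaScript in any engine.

```javascript
function AsmModule(stdlib) {
  "use asm";
  function intAdd(a, b) {
    a = a | 0;          // parameter declared as int32
    b = b | 0;
    return (a + b) | 0; // result coerced back to int32
  }
  function dblMul(x, y) {
    x = +x;             // parameter declared as double
    y = +y;
    return +(x * y);    // result declared as double
  }
  return { intAdd: intAdd, dblMul: dblMul };
}
```

Every operation in the subset maps directly onto machine instructions because the types are pinned down before the code ever runs.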
I was responding specifically to the comment that if the V8 team continues optimizing in response to asm.js as they have been, perhaps there is no need for NaCl (or asm.js). I don't think that will ever be true.
I think that part of the value people see in things like NaCl is not having to rely on JIT magic to figure out when to optimize 64-bit math. You get direct control over what is going on. It may be true that in certain edge cases the JIT can actually out-perform clean native code, but people are willing to sacrifice that for direct control over what the machine does.
GC is a related example of a place where VM designers told us not to worry ourselves, but it turns out that GC is inherently hard and you are going to pay for that convenience - either in performance or in memory. See: http://sealedabstract.com/rants/why-mobile-web-apps-are-slow....
> I think that part of the value people see in things like NaCl is not having to rely on JIT magic to figure out when to optimize 64-bit math. You get direct control over what is going on. It may be true that in certain edge cases the JIT can actually out-perform clean native code, but people are willing to sacrifice that for direct control over what the machine does.
This is precisely what "use asm" is for. It gives you direct control: each allowable operation in the subset has a direct analogue to the appropriate machine instruction(s). If you fall off the happy path and stray into "JIT magic" territory, you get a message in the developer console telling you to fix your code.
Unfortunately V8 is opposed to "use asm" in favor of the "JIT magic".
That post you linked to is actually full of some fundamental misunderstandings about GC and JS VMs :) But yes, it is the case that GC isn't a free lunch - it comes with costs and you have to design your applications to avoid the weaknesses of a given GC. It's rough.
I agree that not having to rely on the JIT to figure out how to optimize your code with 'magic' is preferable. I tend to lean towards that where possible in most of the JIT/GC-based environments I use (C#, JavaScript, etc), and it tends to pay off.
Competition is great, and projects like pepper.js (http://trypepperjs.appspot.com/) demonstrate that coexistence is possible as well. What's exciting is that all of this is very positive for developers: they can write faster code to run on the client.
You still have the pthreads issue. NaCl can leverage SMP in a way that single-threaded JS can't, and the idea of introducing concurrency into JS sends shivers down my spine.
That model is one of the easiest and safest ways of doing concurrency of which I'm aware, unless you want to enforce immutability on shared memory records.
We're not talking about what's safer, we're talking about NaCl vs. asm.js, and the primary purpose of those is to run native C code in your browser without significant rewriting (e.g. via emscripten).
The target for most of this is porting C games to the Web, and many modern C games use all kinds of multithreading and blocking I/O. Game programmers are not interested in safety, they are interested in performance.
Yes, message passing leads to fewer bugs. It also leaves a lot of performance on the table if you are trying to maximize usage of CPU resources on an SMP architecture.
It seems to me that most modern game engines have settled on some sort of parallel task system where the main thread pushes asynchronous tasks into a thread pool. Building such a generalized parallel task system on top of pthreads or WebWorkers in C/C++ is easier than trying to emulate something like pthreads in emscripten (you can't do that anyway, because there are no shared memory resources in JS). One has to be aware of the overhead of getting data in and out of workers, though.
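The task-pool shape being described can be sketched with the worker backend abstracted away, so the same structure fits WebWorkers in a browser or pthreads natively. `TaskPool` and `FakeWorker` are illustrative names of my own; a real port would post messages to actual workers instead of calling them synchronously:

```javascript
// Minimal task-pool sketch: idle workers pull tasks from a queue and
// return themselves to the pool when done.
class TaskPool {
  constructor(workers) {
    this.idle = workers.slice(); // workers waiting for a task
    this.queue = [];             // tasks waiting for a worker
  }
  submit(task, onDone) {
    this.queue.push({ task, onDone });
    this.drain();
  }
  drain() {
    while (this.idle.length > 0 && this.queue.length > 0) {
      const worker = this.idle.pop();
      const { task, onDone } = this.queue.shift();
      worker.run(task, (result) => {
        this.idle.push(worker); // hand the worker back to the pool
        onDone(result);
        this.drain();           // pick up any queued work
      });
    }
  }
}

// Stand-in worker that runs the task synchronously, for illustration.
class FakeWorker {
  run(task, done) { done(task()); }
}
```

With real WebWorkers, the data-marshalling overhead mentioned above shows up in the `run` step, where arguments and results cross the postMessage boundary.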
The PS3 SPUs also didn't have access to system memory, and SPU code had to be compiled into small "executables", so in a twisted way this is a similar model to WebWorkers ;)
Shared TypedArrays would be nice, but only if this also comes with a complete set of atomic operations. But in my opinion, vector datatypes would be more important in the short term for JS (so it can make use of SSE under the hood).
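For what it's worth, this is exactly the shape that later landed in JavaScript as SharedArrayBuffer plus the Atomics object. A short sketch of the atomic operations being asked for:

```javascript
// A typed array backed by shared memory can be updated race-free from
// any thread via Atomics; here everything runs on one thread purely to
// show the operations.
const sab = new SharedArrayBuffer(4 * Int32Array.BYTES_PER_ELEMENT);
const counts = new Int32Array(sab);

Atomics.store(counts, 0, 10);     // atomic write
Atomics.add(counts, 0, 5);        // atomic read-modify-write: 10 -> 15
const old = Atomics.compareExchange(counts, 0, 15, 99); // CAS: 15 -> 99
```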
"modern C games use all kinds of multithreading and blocking I/O."
The funny thing here is that game devs are still kind of wary of threading bugs, and blocking I/O was probably dealt with earliest by the people who had to stream content off spinning plastic discs.
It's true that gamedevs worry about this, and Tim Sweeney has even given prezos on how great functional languages would be for game programming, but the reality is, hardcore game devs crave two things from what I can tell: 1) determinism and 2) maximizing utilization of resources
A VM or high level compiler abstraction, especially with GC, tends to interfere with #1, and sandboxed environments with abstract APIs for accessing hardware tend to interfere with #2.
Yes, this is not to say that games won't use scripting, like Lua or UnrealScript, but those are not part of the rendering loop.
Asm.js provides predictability in terms of GC, but it does not ensure predictability in terms of performance (because of differing JIT implementations), nor does it give the direct access to hardware resources one would like (e.g. SMP).
I don't really see it as a target for next-gen games, regardless of the cool UnrealEngine demos. Mobile casual games maybe.
> A VM or high level compiler abstraction, especially with GC, tends to interfere with #1, and sandboxed environments with abstract APIs for accessing hardware tend to interfere with #2.
For #1, asm.js doesn't have GC, by and large (there are a couple of places where the GC gets used when interacting with Web APIs, but those are fixable and don't matter much in practice). For #2, "a sandboxed environment with abstract APIs for accessing hardware" precisely describes Pepper!
> Asm.js provides predictability in terms of GC, but it does not ensure predictability in terms of performance (because of differing JIT implementations), nor does it give the direct access to hardware resources one would like (e.g. SMP).
I don't understand what you mean. The performance predictability is fixed by "use asm" and AOT compilation. V8 is opposed, but your point (rightly, IMHO!) argues against that choice. If you're just arguing that there will be multiple implementations of asm.js and that's bad for predictability I don't see that being fixed with any solution outside of a browser monopoly. If PNaCl is to become a cross-browser solution, then you must be open to alternative implementations.
As for threads, we can continue to evolve JavaScript so that it is supported, at least in asm.js mode. I don't see this as some fundamental obstacle.
The variance in JIT optimizations will be much wider than the variance in compiling C code to NaCl-compliant x86 code. In one case you have varying implementations of a runtime compiler; in the other, you have a single compiler known to the game dev ahead of time, and what varies is the sandbox hit.
Yes, Pepper is an abstraction, but you get concurrency primitives, which are lower-level than what asm.js offers.
The reality is, neither asm.js nor NaCl is going to succeed in getting the gamedev community onto the web platform. The two most popular platforms, iOS and Android, have native SDKs, and so do the consoles and desktops, so any developer who wants to maximize revenue is going to target those. The additional headache of a browser port onto a heavily fragmented platform with unpredictable performance, plus an audience that has shown little appetite for paying for web apps, pretty much guarantees that this whole asm.js vs. NaCl debate is moot.
Developers just don't care about runtime portability as far as games are concerned. They will do the work to do a source port for optimal performance if the market is there.
Does Electronic Arts or Valve care about getting Battlefield 4 or Portal 2 running in the browser in a portable way? I highly doubt it.
I think the proper question under the formulation you provided is, "Where can I find one that does anything other than produce bugs," since you asserted that bugs are the only result. To which the answer is: any of them.
Nothing of this complexity is bug-free, but the fact is that some of the most reliable software everybody uses every day -- OS kernels, DBMSes, etc. -- is not just multithreaded, but heavily multithreaded.
Perhaps locking the DOM to the main thread is a good trait. If you can communicate between the main thread and the web workers, you can request that the main thread change the DOM and then synchronize; without this, you might get unexpected behavior anyway.
> NaCl can leverage SMP in a way that single-threaded JS can't, and the idea of introducing concurrency into JS sends shivers down my spine.
I agree that we don't want to break the single-threaded JS model. But we have options, the easiest of which is to restrict shared mutable state to ArrayBuffers and asm.js, or asm.js-like code.
They've been doing it for years. Remember that Chrome/V8 is the one that forced all the others into really caring about JS speed in the first place, and it kept the race going for years by always being a few steps ahead.
In this case Mozilla is the one that took the lead with asm.js, and healthy competition is always better, but I have no doubt that the V8 team would have worked at improving their JS speed as much as possible even without that pinch from Mozilla.
"remember that Chrome/V8 is the one that forced all the others into really caring about JS speed in the first place"
That's not accurate. I recommend you go back and look at the actual order of events here.
Apple started the JS perf battle with SquirrelFish Extreme and Firefox answered with TraceMonkey, all before Chrome was even announced.
I am not discounting the impact V8 has had on the industry, but it is simply wrong to claim that Google kicked this thing off when Apple and Mozilla were both publicly squaring off in JS benchmarks well before you, and just about everyone else on the planet, even knew Chrome was a thing.
This does not change the fact that Chrome crushed the competition, just like Gmail crushed the others. It took the others a very long time to catch up, and they are still behind in most cases. IMHO V8 deserves all the credit for advancing the JavaScript performance race.