I love the fact that Mozilla keeps pushing JavaScript to its limits. They seem to be doing a lot of things really well lately, and this is awesome!! Plus you have Firefox OS, which pushes web standards to native mobile. Now I just wish the devtools would get up to par with those in Chrome/Chromium and I'd switch back to FF in a heartbeat.
I wish Google would stay on board with JS, since they have the engineering power to do a lot in this area, but it seems to me that after V8 they've kind of abandoned JS in favor of Dart (as opposed to supporting asm.js). For instance, I just did the "try anyway" in Chrome 26.0 on Linux and everything crashed. Did anyone get it working in Chrome?
Chromium has been a little dodgy for me lately too. Lots of memory leaks and whatnot. I've had to kill its parent process on a number of occasions (but even then, there's no reason a browser should take out the whole OS).
Basically, each font at each size uses some handles in a renderer. At 10,000 handles, the tab's renderer dies. They have a font cache but never clear it.
When enough render processes together use too many GDI handles, the whole Windows desktop breaks down.
I've had problems with fglrx where it crashes because you resized the window too quickly, and then it's done something really funky, so when you hit CTRL+ALT+F_N it doesn't actually give you a shell.
I've never once had an application on Linux crash the whole OS - or at least not as long as I've had physical access to the machine (I've had rogue database requests brick a server before because they took down sshd - but that's a different story).
In fact, on the laptop I'm on now, the parent link crashed Firefox. But it OOM'ed and got killed before I even noticed there was a problem (and that's in a beefy desktop environment with compositing enabled, too).
My bet: the OS and all the code running on it is written correctly. And you can see it gave you a warning about a tainted kernel when you loaded that piece-of-crap nvidia/ati proprietary driver.
And yours is very polite and contributes a lot to the discussion ;-)
But playing devil's advocate here, neither parent of your comment is that far off.
Ubuntu is more prone to crashes because, by design, it invites the user to install much more closed-source and proprietary code. So they are not being that idiotic or flamebait-y.
You would think, but I've noticed that on HN specifically, the comment pointing out that a stupid argument is forming--if it gets in early enough--tends to be both highly upvoted and the end of the argument. One reason I like this community.
Apparently, they are still trying to make V8 faster. V8's graph isn't flat and there are also very recent bumps.
> As opposed to supporting asm.js
Asm.js is about JavaScript the compiler target, not JavaScript itself. Using asm.js means writing code in C/C++ (or something similar), which means it's more comparable to PNaCl than Dart.
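To make that concrete, here's a rough sketch of what asm.js-style code looks like (the module and function names are invented; this is just the flavor of it, not code from the demo). Nobody writes this by hand; a compiler like Emscripten emits it from C/C++:

    function MyAsmModule(stdlib, foreign, heap) {
      "use asm";  // opts into asm.js validation where the engine supports it

      function add(x, y) {
        x = x | 0;           // parameter "type annotation": 32-bit integer
        y = y | 0;
        return (x + y) | 0;  // result coerced to integer as well
      }

      return { add: add };
    }

    // Still plain JavaScript, so it runs in any engine:
    var mod = MyAsmModule(window, null, new ArrayBuffer(0x10000));
    console.log(mod.add(2, 3));  // 5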
The original team working on V8 in Aarhus switched to work on Dart, so V8 transferred ownership to a team in Munich. It is still very much alive and well :)
The only amazing stuff is the distribution mechanism, really. The other stuff is just plain old OpenGL, but in your browser this time. It's not like it has some crazy support for raytraced voxels or something.
> The only amazing stuff is the distribution mechanism really.
That's the main thing, but not the only thing.
WebGL is also very portable, more than other flavors of GL. Huge effort has gone into that. As a result there is a much better chance a WebGL app will give the same output on different browsers/OSes/GPUs (and if it does not, that's a bug that should be fixed).
WebGL is also more secure than OpenGL, since it was designed from the start to run in a web browser, which executes untrusted code straight off the web.
I personally love that OpenGL is 'plain old' haha. Not to criticise your comment at all - but as someone who's been forced to work with OpenGL 3.3-4 quite a lot recently ... it's pretty incredible at times.
Also - as I understand it, what's impressive about all this isn't just the graphics. Once you've passed vertices etc. off to the graphics pipeline, then yes, JavaScript doesn't handle much of the load... but as we know, a lot of games are CPU-bound, and the fact that JavaScript is increasingly able to offer acceptable performance relative to the natively compiled experience of C/C++ is pretty nuts.
This is so awesome that I never want to see another WebGL demo again. This one proves it; you can make awesome games in WebGL. From now on I only want to read about non-demo WebGL games that are in development with a real release date.
I hate how DRM is being thrown around as a "turn-key" solution against piracy, because it's simply wrong. What's stopping anyone from selling copies of heavily DRM'ed - but cracked - games right now? Nothing.
Actually, WebGL games would be a lot more secure from piracy, because they would be online, and you could easily stop 99% of the piracy by requiring a login for the game. It's basically the Diablo 3 model, only better, because Diablo 3 should be easier to crack and play on private servers (not sure if even that has happened yet).
Making an "online game" instead of a native "PC game" is the best way to stop most of the piracy.
Diablo 3 is definitely not easier to crack. You need to know x86 assembly, and on top of that beat Blizzard's reflection-ish protection that checks for modified code. JavaScript is much easier to deal with.
In addition to that, just cracking the Diablo 3 client isn't enough, because a lot of the game logic runs exclusively on the server. For example, even with tens of thousands of test runs of a specific monster, you still won't know the correct item drop probability table, because that table is never sent to the client. Only the result of the server-side dice roll checked against the server-side loot table.
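As a minimal sketch of that server-authoritative idea (the table, weights, and item names here are entirely made up, not anything from Blizzard's actual code):

    // Server-side only: this table never leaves the server.
    var LOOT_TABLE = [
      { item: "rare_sword", weight: 1 },
      { item: "gold_pile",  weight: 20 },
      { item: "nothing",    weight: 79 }
    ];

    function rollLoot() {
      var total = LOOT_TABLE.reduce(function (sum, entry) {
        return sum + entry.weight;
      }, 0);
      var roll = Math.random() * total;
      for (var i = 0; i < LOOT_TABLE.length; i++) {
        roll -= LOOT_TABLE[i].weight;
        if (roll < 0) {
          return LOOT_TABLE[i].item;  // only this result is sent to the client
        }
      }
    }

The client can sample rollLoot() results forever and still never see the weights.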
Moot point. Any client software can and will be cracked, regardless of how difficult it is to crack. Which is mtgx's point - online games are easier to secure because you can put logic and validation on a remote server. No DRM needed. Your example of Diablo 3 only proves his point. It's not the DRM that makes Diablo 3 hard to crack - it's the server side logic and validation.
Unlike the current situation with piracy, where pirated copies are out for everyone to see and grab only hours before the official release.
Piracy, while a legitimate problem, is a relatively minor one. The first problem is visibility, the second is quality. Getting to the point where someone bothers to pirate your stuff, and the majority of the people who download it actually install it and play through it, is a mark of success - of doing something right.
And with a sizable proportion of the gaming community now in their 30s, you'd be surprised how many people have a desire to support the studios. We learned our lessons with Shadow of the Colossus, Psychonauts, and the fate of Black Isle.
This is actually a very old demo ported to the browser. The original Epic Citadel was/is an iOS demo released in September 2010. It runs well on an original iPad: http://en.wikipedia.org/wiki/Epic_Citadel
Comparatively, this runs a little sluggishly on a MacBookPro8,2 (early 2011).
It's funny: when I saw this on HN I thought they meant the intro to the original Unreal. I was actually more excited to see that than what was presented :(
This is Unreal Engine, with physics, cloth simulation, particles, and light & glare effects. It's not C++ on Native Client or a plugin; it's running in JavaScript. Just yesterday you couldn't draw a circle on a 2D canvas at 30fps.
[some negative rant below to offset the hype - sorry]
> "This is Unreal Engine"
- a subset of it
> "with physics"
- did not notice any falling kickable boxes and such
> "cloth simulation, particles, light & glare effects"
- impressive, but there will be half as much of it as via native code
> "It's not C++ on Native Client or a plugin, it's running in javascript"
- what's the difference between requiring one or two specific browsers and shipping a build of each with a Flash player bootstrapped in?
> "Just yesterday you couldn't draw a circle on a 2d canvas at 30fps"
- so instead of pushing to make a universal VM, they decided to use a dynamic prototype-OOP language just because it happened to be the most common one - not very impressive.
asm.js is a universal VM with a bytecode that just happens to be similar to a subset of JS, but it doesn't have dynamism, doesn't have OOP, doesn't have garbage collection.
Firefox has a separate ahead-of-time compiler for asm.js that isn't a JS VM.
And it only works in Firefox, and even then it only works really well in Firefox Nightly.
Those points aside, though, this is pretty amazing. I fully expect multiple engines to target HTML5 in the same way Unity/Unreal/etc were cross-compiling for the Flash runtime. It's just not quite there yet...
(And it's good that demos like this exist to put pressure on browsers to fully support them).
As Brendan said, it's UE3, all of it. See the video at http://www.youtube.com/watch?v=BV32Cs_CMqo (second half of video) for the Sanctuary UT3 map, complete with bots, shooting, etc. Once the engine was ported, we threw random UE maps at it, and it worked fine.
It works in any browser that has solid JavaScript and WebGL support. Audio works via Web Audio, and it gracefully drops the audio effects if that's not available. There's no Firefox-specific magic here. If you're on Aurora/Nightly, then you get the asm.js/OdinMonkey optimizations and it runs even faster; that's it.
Edit: UE3 was chosen because it's known, stable, optimized tech. This isn't the pinnacle of what can be done, far from it. But neither we (Mozilla) nor Epic wanted to be working with code that was still under active development for a next-generation engine while simultaneously trying to port it to a new, experimental platform. One step at a time! :p
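(For the curious, the graceful audio fallback mentioned above boils down to a feature check, roughly like this sketch; the comments describe where the game's OpenAL-mapped audio would plug in:)

    // Feature-detect Web Audio; run without audio effects if it's absent.
    var AudioCtx = window.AudioContext || window.webkitAudioContext;
    var audioCtx = null;
    if (AudioCtx) {
      audioCtx = new AudioCtx();
      // route the OpenAL-mapped game audio through audioCtx here
    } else {
      // no Web Audio: the game still runs, just without audio effects
    }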
No, not a subset. It's UE3 cross-compiled by Emscripten from ~1MLOC of primary C++ source. Some configury to use OpenAL (mapped to WebAudio), and of course OpenGL-ES (mapped to WebGL).
It's really not. It's the UE3 mobile engine, which, while compiled from the same C++ source, has enough things stripped out during compilation that it can't really be considered 'full UE3'; custom shaders, for instance, are not supported.
And on top of missing some things from UE3, Citadel is designed specifically for limited devices.
While still a very impressive feat, let's not confuse this with running GoW or The Samaritan demo in the browser.
As mentioned in other comments, we also ran other UE3 games, like Sanctuary. We demoed that in a booth at GDC last month where people could play it. That's a full UE3 desktop game with bots, AI, normal FPS mouse control (not tablet-like), etc. etc. You can see it in action in the 2nd half of this video
Ran this on a laptop on Firefox 20 (stable) with Bumblebee and it's surprisingly impressive and smooth.
I have a gaming desktop at home that runs all new games at 50+ FPS, but for some reason looking at graphics that a late-era PS2/early PS3 game would have _in a browser_ is more impressive than running a game like Crysis 3 for the first time.
What's the point of having these massive javascript virtual machines & webgl systems in our browsers? How does having a separate implementation of a specification help us build a safer, more open web?
My theory is that Mozilla and Google are in a race to develop the most advanced html5 engines because they know that if a killer html game or app gets developed in their browser, everyone who wants to use the app will switch to them while the other implementations tries to catch up.
It will also prevent new browsers from popping up, unless they simply fork one of the two major engines and try to keep up. Google and Mozilla know they can outpace any competitor trying to innovate in the web space by always having the "latest" HTML5 features integrated. For example, unless Microsoft pours a bunch of money into it, or forks Mozilla's or Google's engine for Internet Explorer, they can pretty much count that competitor out.
Is this supporting an open web?
What's the problem with focusing on open source plugin virtual machines that can work in every browser regardless of that browser's version?
> What's the point of having these massive javascript virtual machines & webgl systems in our browsers?
Because the safety model of JavaScript has proven sound over the past few years.
> How does having a separate implementation of a specification help us build a safer, more open web?
What separate implementation of what specification? If you're referring to asm.js, asm.js is just JavaScript; any browser can run it. The "specification" is a gift to other engine implementers to supercharge evaluation of compliant code.
This ran surprisingly well on my 3+ year old Asus laptop with integrated graphics. By surprisingly well, I mean I doubt I ever got over 30 fps (probably 15 fps on average) but there were little to no hitches when loading new areas.
I wouldn't be able to play a fast paced shooter like this, but I could see it being more than tolerable for playing a slow paced role playing game or something in the browser, with the added ease of connecting with other players, possibly MMORPG-style.
On a side note, it'd be nice if it captured the mouse instead of making you drag it.
Controlling the spectator is remarkably bad. I hope this is not how I am supposed to play video games. Even Dead Space 1 is better, and that was so bad I could not stand it for more than 30 minutes.
Well, unusually, this demo isn't using the Pointer Lock API to do normal FPS-style controls. It's fully possible to do them, but this demo doesn't for some reason.
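For reference, FPS-style mouse look with the Pointer Lock API looks roughly like this (the element id, camera object, and sensitivity value are made up for illustration; in 2013-era browsers the calls were vendor-prefixed, e.g. mozRequestPointerLock):

    var canvas = document.getElementById("game");  // hypothetical canvas id
    var SENSITIVITY = 0.002;                       // made-up tuning constant
    var camera = { yaw: 0, pitch: 0 };             // stand-in for the real camera

    canvas.addEventListener("click", function () {
      canvas.requestPointerLock();                 // hide and capture the cursor
    });

    document.addEventListener("mousemove", function (e) {
      if (document.pointerLockElement === canvas) {
        // movementX/movementY are relative deltas while the pointer is locked
        camera.yaw   += e.movementX * SENSITIVITY;
        camera.pitch += e.movementY * SENSITIVITY;
      }
    });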
The "Compiling Javascript" loader takes way too long (at least on my system) to not give any indication as to how long I will wait or whether it's still doing anything. At least a "this might take up to X minutes".
Demo itself ran halfway ok, clicking on 'Benchmark' (which I had to guess was in the upper pulldown) immediately froze everything for me (Firefox on Kubuntu).
Still: I do appreciate a demo that even kinda-sorta works on gnu/linux.
I've found it works almost flawlessly on Arch Linux 64-bit with the standard Firefox package. Only one minor issue with the flag in the wind going all over the place, but that's probably the fault of the open-source radeon driver.
It is very refreshing to see something cutting-edge work on Linux.
What continues to impress me about these demos is that JavaScript has gone from "not useful for much more than form validation" to "can compile and execute a 52 MB block of what's basically assembly code".
52 MB of JavaScript is... a lot (even if half of it looks like data tables of some kind)
I'm on a pretty old laptop and this runs extremely well on Windows 7 (using Nightly). I also tried it on Xubuntu 13.04, but I couldn't get it to run (I tried it on Chromium and Firefox).
Definitely awesome; still a while before this is usable in the market (across many browsers), and load times seem longer than Unity/Flash. I can't wait until browser support for emscripten/asm.js is better.
Some stats from the benchmark running Windows 8 and Ubuntu 13.04 on Asus Zenbook UX21A (Intel HD Graphics 4000, Intel Core i7 3517U, 1920x1080):
    OS            Browser     Avg. FPS  Min FPS
    Win8          Fx 20       26        16
    Win8          Fx Nightly  46        34
    Ubuntu 13.04  Fx 20       17        16
For me those numbers say that if asm.js does not catch on in other browsers, browser-based games just will not fly. Also, graphics drivers on Linux still suck, and that's why I still need to dual-boot.
It doesn't work for me. I downloaded Nightly using the provided link, but when I open the URL it still says unsupported browser. It also doesn't have the option to 'try anyway' like Chrome does.
On Firefox 20 on Windows, on my 2500K with a 7970, it seems CPU-bound and only reaches 43 fps in the benchmark at 1920x1200, while showing clear artifacts from missing (or too little) texture anisotropy on the floor, plus antialiasing issues.
Also, the textures are not fully detailed when looking directly at the floor and walls.
If this is supposed to be a full-quality benchmark (à la Unigine Heaven when it came out), it needs to improve.
This is awesome! I wish I could run, because it's slow to walk around, but then I discovered you can Alt-Tab to another window and the key event for W stays active.
I take issue with calling this JavaScript/HTML5. This is powered by asm.js. The source is emphatically NOT JavaScript; it's a subset that's meant to be a target for compilation (like bytecode in .NET and the JVM) and is treated entirely differently from JavaScript by the JIT. We don't call everything that runs on .NET C#, nor everything that runs on the JVM Java.
Additionally, asm.js is not part of HTML5. It's interesting, and this demo is cool, but it's not JavaScript/HTML5.
> I take issue with calling this JavaScript/HTML5. This is powered by asm.js.
It is faster with asm.js optimizations, but as other comments mention, it also runs well (depending on CPU/GPU) even without such optimizations, in browsers that have no special asm.js optimizations whatsoever.
All the demo needs to work is WebGL and JavaScript.
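(And feature-detecting those two requirements is tiny; a sketch of the usual check, with an illustrative fallback message:)

    // The usual WebGL capability check demo pages do.
    var canvas = document.createElement("canvas");
    var gl = canvas.getContext("webgl") ||
             canvas.getContext("experimental-webgl");  // older, prefixed name
    if (!gl) {
      console.log("Unsupported browser");  // illustrative fallback
    }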
> The source is emphatically NOT JavaScript, it's a subset that's meant to be a target for compilation
It is not 'typical' JavaScript, but it literally is JavaScript since it's a subset.
> (like a bytecode in .NET and the JVM) and is treated entirely differently from JavaScript by the JIT.
Only in some cases (Firefox 22 and 23) is it treated in a special way, and as mentioned above, the demo works great in other cases as well, that treat it like normal JavaScript (Firefox 20 stable, for example).
Even when it is treated in a special way, it still uses the same parser and same backend and optimizations (IonMonkey) as the Firefox JS engine uses for all JS.
> We don't call everything that runs on .NET C# nor everything that runs on the JVM Java.
The main difference is that .NET and the JVM have bytecodes. C# is not .NET bytecode and Java is not JVM bytecode. But, JavaScript does not have a universal bytecode, there is just the language itself.
So asm.js is literally JavaScript, not some lower-level bytecode. It runs like normal JavaScript, the only difference is that some JS engines might optimize it a little better, since it's easy to optimize.
Note that asm.js-like code is nothing new. It's been generated for years now by compilers like Emscripten and Mandreel, and Firefox and Chrome (and likely others) have been optimizing for it; for example, Google added a Mandreel benchmark to Octane. (The only new thing with asm.js is that there is a formal type system, which makes it simple to ensure you emit proper code and simple to verify you are receiving proper code; also, while developing the type system, some bugs in how Emscripten generates code were found and resolved.)
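To make "asm.js-like" concrete: this style of compiler output leans on one big typed-array "heap" plus integer coercions, roughly like the following sketch (the function, sizes, and values are invented for illustration). It's exactly the typed-array operations and basic math that engines already optimize well:

    // One big linear memory, viewed through a typed array (Emscripten-style).
    var buffer = new ArrayBuffer(16 * 1024 * 1024);
    var HEAP32 = new Int32Array(buffer);

    // Sum n 32-bit ints starting at byte offset ptr.
    function sum(ptr, n) {
      ptr = ptr | 0;
      n = n | 0;
      var acc = 0, i = 0;
      for (i = 0; (i | 0) < (n | 0); i = (i + 1) | 0) {
        acc = (acc + (HEAP32[(ptr + (i << 2)) >> 2] | 0)) | 0;
      }
      return acc | 0;
    }

    HEAP32[0] = 10;
    HEAP32[1] = 20;
    console.log(sum(0, 2));  // 30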
"It is faster with asm.js optimizations, but as other comments mention, it also runs well (depending on CPU/GPU) even without such optimizations, in browsers that have no special asm.js optimizations whatsoever."
Actually, that's not true at all. That's the plan for asm.js, but this demo doesn't run in anything other than very new Firefox builds, AFAICT. It crashes Chrome, and IE doesn't support WebGL (yet).
Even though it's using JavaScript as a kind of bytecode, it is just JavaScript/HTML5 (and some CSS, but that's pretty anecdotal, I guess). asm.js is a subset of JavaScript. As soon as Chrome fixes the crash, it'll run in a browser that supports just HTML5 (for presenting the page / the DOM / the canvas) and JavaScript with no special asm.js optimizations, and we'll see how fast or slow it is then.
To correct your analogy: it'd be like using a subset of C# as the bytecode rather than a language. It's still actual C#, it's not just .NET bytecode, but it's being used differently.
I have a lot of respect for Egorov, and I agree with his views on this matter pretty much entirely.
The sole benefit of embedding a bytecode in a language is the side effect that it can run anywhere that language runs. But in general, this implies performance penalties, especially when the language is relatively slow to begin with.
But in this case, the demo is made usable (i.e., the impressive part) by writing a compiler specifically for the bytecode itself. So it works well in spite of the fact that it's a bytecode embedded in JS, not because of it. The really cool things about this demo are precisely these two:
1. asm.js makes running C++ code that would render to OpenGL in the browser a possibility, and with relatively good performance. Kudos to Mozilla.
2. It runs on browsers that don't know about asm.js, but it's effectively an emulated machine, and it's slow.
I just take issue with calling this an example of what JavaScript/HTML5 can do. I see an application written in JS using WebGL or Canvas as something much more impressive, and there are Q3 renderers that are natively JS and run with amazing performance in the browser. This is just dressing up the fact that Turing completeness is a thing and calling attention to it. The real thing to take from this demo is that there are enormous benefits to having a bytecode for the web over trying to optimize a fully dynamic language like JS. I feel that point is rather lost in the title.
> 1. asm.js makes running C++ code that would render to OpenGL in the browser a possibility, and with relatively good performance. Kudos to Mozilla.
> 2. It runs on browsers that don't know about asm.js, but it's effectively an emulated machine, and it's slow.
Do you have benchmark numbers to support that? In my experience, asm.js code is quite fast even without special asm.js optimizations. It depends on the benchmark obviously, but look at
Many of those benchmarks are very fast in browsers without special asm.js optimizations.
All they need to do to be fast on asm.js code is to optimize typed array operations and basic math, and those are things browsers have been doing for a long time. Google even added a Mandreel benchmark to Octane for this reason.
Emscripten and Mandreel output, with or without asm.js, tends to be quite fast, generally faster than handwritten JS. asm.js is often faster still, because it's easier to optimize, even without special optimizations for it. Those special optimizations can help even more, but they are not necessary for it to run, nor are things "slow/emulated" without them.
That link is misleading in the context you present it in. Those are microbenchmarks, which a JIT can optimize relatively well. But if you look at the very next slide, at http://kripken.github.io/mloc_emscripten_talk/#/28, you will see that for a larger application, non-optimized asm.js performs abysmally, as does JS in general. A performance penalty of ~1000% versus native is what you should expect for a nontrivial JS application, and asm.js does not do much (if anything) to alleviate that unless you run OdinMonkey.
I was purposefully linking to that slide + the one after it.
Yes, the next slide shows a factor of around 4x slowdown between having the asm.js optimizations and not having them. But even 4x slower than optimized asm.js is quite fast; it's enough to run the Epic Citadel demo, for example (try it on a version of Firefox without those optimizations, like the stable release; as others reported in the comments, it runs OK). A lot of the work in Epic Citadel is on the GPU anyhow, say about half, so the difference is then only something like 2x.
2x is not that much, we have similar differences on the web anyhow because of CPU differences, JIT differences, etc. That's within the expected range of variance.
It's faster than "handwritten JS" because it's using a subset of JS that's already tuned to near the peak level of performance you can achieve when JITting JS: tightly looped arithmetic, with everything inlined, operating on SMIs (small integers). In V8, that's ~2.5x slower than the equivalent C code with optimizations. That's basically the best case.
The problem is that it's still dynamic, and at any point a null can come along and force you to throw away a JITted function, so you have to have these checks everywhere just in case. Further, ECMA compliance does not require JIT compilation, which means expecting this kind of performance, especially on the web, is dubious at best. I can see merit in expecting certain performance in something like Node.js, since it assumes V8, but the web does not have one JS engine, and the standard does not require such performance.
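A tiny, contrived sketch of that dynamism cost (the function name is made up, and the exact behavior varies by engine):

    function strLen(s) {
      return s.length;  // a JIT will specialize this for string inputs
    }

    strLen("hello");    // warm-up: engine compiles a string-specialized version
    strLen("world");

    try {
      strLen(null);     // the type guard fails here: the engine has to bail
                        // out of the optimized code, and a TypeError is thrown
    } catch (e) {
      // every call site needs guards like this "just in case"
    }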
The "benefits" of building a bytecode in that's represented in some subset of JS are essentially red herrings since they're effectively lies and the downsides are a lot more significant, IMO. Instead, we should be focusing on actually building a bytecode for the web such that all implementations are expected to have certain performance metrics.
As someone who grew up in a time where you had to tweak your config.sys and autoexec.bat files so that you could get a few extra FPS on Duke Nukem 3D, I can tell you: this is pretty freaking amazing.
>One concern for this type of product is that there's no concept of "installing" a webapp. The website will need to stream all game resources to the client for each user. I would wager that's one reason the graphics are poor: all of what you see was probably generated from <100MB of content.
As the demo says, it's recommended to try it in Firefox Nightly. The demo starts up and runs much faster there thanks to optimizations that are not yet in the stable release (which is what you're running, I think).
So it basically means that C++ has gained another reasonable compilation target. That doesn't have much to do with applying JavaScript to large-scale games. Also, it is not HTML5; it is HTML FF23.
Factually, all code emitted by Emscripten is JavaScript. It may look funny with all the bitwise operators that web developers barely ever use, but it's just JavaScript. It doesn't run correctly in Chrome right now because Chrome's JavaScript environment has issues supporting large numbers of variables in a given context. It doesn't run in IE because IE doesn't support WebGL. Both of these will be fixed in the future.