JDK7 to have escape analysis on by default...also back ported to JDK6 (java.net)
84 points by suprgeek on Oct 7, 2009 | hide | past | favorite | 21 comments


Interestingly, this could really boost performance for languages on the JVM like Scala as they tend to create an awful lot of temporary objects.


Now, the only really big thing missing from the JVM seems to be proper tail call optimisation. Does anyone know the status of that?


From what I've heard, it ain't gonna happen soon. It wasn't on the list of things to include for Java 7. Perhaps this is an opportunity for one of the open-source JVMs to gain usage share.


I'm confused here: isn't that something that is better handled at compile time than in the JVM?


tl;dr: No, it has to be built into the JVM.

First of all, if a JVM were to attempt this optimization for tail calls using the ordinary method call bytecodes, it would break existing programs that are written to rely on stack frame inspection.

So the only solution is to add a new bytecode for tail calling methods, so that this bytecode can be used from non-Java languages without impacting existing code; however there's no such bytecode yet.

Self-tail calls (where a function calls itself in tail position) can be compiled as a goto; however, this is not as general as full tail call elimination (you cannot implement a state machine consisting of several mutually recursive functions this way, for instance).
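To make the self-tail-call case concrete, here is a sketch in Java source terms (method names are my own, purely illustrative): the loop form is essentially what the goto transformation produces from the recursive form.

```java
public class Gcd {
    // A self-tail call: gcdRecursive calls itself in tail position.
    static int gcdRecursive(int a, int b) {
        if (b == 0) return a;
        return gcdRecursive(b, a % b); // tail call to self
    }

    // The same function after the goto transformation: the tail call
    // becomes a re-binding of the parameters plus a jump back to the
    // top, which in Java source is just a while loop.
    static int gcdLoop(int a, int b) {
        while (b != 0) {
            int t = a % b;
            a = b;
            b = t;
        }
        return a;
    }
}
```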

Also, an arbitrary set of tail-recursive functions can be compiled into something that fakes a call stack using a heap-allocated data structure, with a top-level trampoline that drives an interpreter loop. However, this incurs a significant performance penalty and requires a whole-program transform. When you hear about Scheme implementations on the JVM that implement tail calls and call/cc, they generally use this trick, and as a result are too slow to be useful.
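A minimal sketch of such a trampoline in Java (all names here are illustrative, not taken from any real Scheme implementation): each function returns either a finished result or a heap-allocated thunk for the next call, and a top-level loop drives the computation, so the real call stack never grows.

```java
import java.util.function.Supplier;

public class Trampoline {
    // Each "call" is either a final result or a thunk for the next step.
    static final class Bounce<T> {
        final T result;                 // set when finished
        final Supplier<Bounce<T>> next; // set when more work remains
        Bounce(T result, Supplier<Bounce<T>> next) {
            this.result = result;
            this.next = next;
        }
    }

    // Mutually recursive functions: instead of calling each other
    // directly, they return a thunk describing the next call.
    static Bounce<Boolean> isEven(int n) {
        if (n == 0) return new Bounce<>(Boolean.TRUE, null);
        return new Bounce<>(null, () -> isOdd(n - 1));
    }

    static Bounce<Boolean> isOdd(int n) {
        if (n == 0) return new Bounce<>(Boolean.FALSE, null);
        return new Bounce<>(null, () -> isEven(n - 1));
    }

    // Top-level trampoline: drives the loop at constant stack depth.
    static <T> T run(Bounce<T> b) {
        while (b.next != null) b = b.next.get();
        return b.result;
    }
}
```

A direct mutually recursive version would overflow the stack on large inputs; `run(isEven(100000))` does not, at the cost of allocating one small object per "call".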

So, right now there's no good solution for tail calls on the JVM, and it would require changes to the JVM to implement in full generality.


> First of all, if a JVM were to attempt this optimization for tail calls using the ordinary method call bytecodes, it would break existing programs that are written to rely on stack frame inspection.

That is not a bug, that's a feature. Too bad the feature request for hanging people who write programs that rely on stack frame inspection didn't go through.


> Self-tail calls (where a function calls itself in tail position) can be compiled as a goto

Right, that's the example that I had in mind.

I can see that for more complicated situations something more would be required, but I always figured that if the JVM is Turing-complete, it should be possible to target any language to it without having to invent special bytecodes. Probably this is a direct consequence of the JVM and Java being developed in tandem; it is not as general as I thought it was.

Thanks!


This is why Clojure has a special form for tail calls, where you can "recur" a loop by re-binding the variables. When proper tail calls come to the JVM, it'll be possible to write the loop as a function call instead of hiding it inside a function.

  (def factorial
    (fn [n]
      (loop [cnt n acc 1]
        (if (zero? cnt)
          acc
          (recur (dec cnt) (* acc cnt))))))


In Java, you could use the "while" keyword for that.

Anyway, tail calls aren't necessarily loops. If I'm not mistaken, tail-call optimisation for looping constructs only (which the poster a few levels above probably had in mind) wouldn't require a special bytecode.
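For comparison, a hand-rolled Java version of the Clojure factorial above, using "while": the cnt/acc bindings of the loop become plain mutable locals.

```java
public class Fact {
    static long factorial(long n) {
        long cnt = n, acc = 1;
        while (cnt != 0) {        // mirrors the (zero? cnt) test
            acc = acc * cnt;      // mirrors (* acc cnt)
            cnt = cnt - 1;        // mirrors (dec cnt)
        }
        return acc;
    }
}
```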


With the added benefit that it will warn you if you're not really in tail position. The only place where recur falls flat is with mutual recursion in which case you need to use trampoline. I love me some Clojure.


No, because you can't optimize tail-calls made to virtual methods.


There's no theoretical difficulty with compiling tail calls to virtual methods; of course perhaps you meant that the JVM doesn't expose a bytecode for doing so, which is absolutely correct.


Why not? The TCO is done on the caller end and is completely transparent to the callee. You just don't bother pushing your return address when calling a function, so it'll return to the previous one instead.


Not to mention that, if necessary, languages like Scala can optimize for maximum use of this feature.

Now... If only my job would move from 1.4.2 to 7 :)


So, what exactly is escape analysis? Neither the article nor the comments here even touch on it.


http://en.wikipedia.org/wiki/Escape_analysis

In a nutshell, figuring out whether a variable or object is going to be entirely confined to a single thread and scope and using that information to perform optimizations (like sticking things into registers) or avoid unnecessary locking.


afair, the most important optimization there is that something can be allocated on the stack instead of the heap, which is quite an optimization. EDIT: it does not actually have anything to do with what you can stick in registers--at least nothing comes to my mind w/o thinking too hard about it...

relevant papers directly from the hotspot-jvm are:

-kotzmann, moessenboeck, "escape analysis in the context of dynamic compilation and deoptimization" http://www.usenix.org/events/vee05/full_papers/p111-kotzmann...

-kotzmann, moessenboeck: "run-time support for optimizations based on escape analysis" http://ftp.ssw.uni-linz.ac.at/Research/Papers/Ko07/Ko07.pdf


Actually, once you've figured out that you can stack allocate an object, you can stick its instance variables in registers. This can be a huge win.
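A sketch of the kind of code that benefits (the class and method here are my own illustration, not from the linked papers): assuming escape analysis proves that neither Point instance below escapes the method, the JIT can drop both heap allocations and keep the four int fields in registers (scalar replacement).

```java
public class Dist {
    // A small value-like class with no identity requirements.
    static final class Point {
        final int x, y;
        Point(int x, int y) { this.x = x; this.y = y; }
    }

    static int distSquared(int x1, int y1, int x2, int y2) {
        Point a = new Point(x1, y1); // never stored, returned, or
        Point b = new Point(x2, y2); // passed out: does not escape
        int dx = a.x - b.x, dy = a.y - b.y;
        return dx * dx + dy * dy;
    }
}
```

Semantically nothing changes; the optimization only removes the allocations, which is why it is safe for the JVM to apply it silently.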


"The Java language does not offer any way to explicitly allocate an object on the stack, but this fact doesn't prevent JVMs from still using stack allocation where appropriate. JVMs can use a technique called escape analysis, by which they can tell that certain objects remain confined to a single thread for their entire lifetime, and that lifetime is bounded by the lifetime of a given stack frame. Such objects can be safely allocated on the stack instead of the heap. Even better, for small objects, the JVM can optimize away the allocation entirely and simply hoist the object's fields into registers."

Taken from an IBM developerworks article here: http://www.ibm.com/developerworks/java/library/j-jtp09275.ht...


Does this mean that simple accessors (or at least getters) effectively have no method call overhead?

Also, the linked article's linked article mentions (in 2005) that Java 6 would have had this. What happened?


> Does this mean that simple accessors (or at least getters) effectively have no method call overhead?

Method inlining has nothing to do with escape analysis (other than enabling it in more cases) and in any case getters and setters have had zero overhead since HotSpot was first introduced in Java 1.3.



