
You can do pretty well for analyses where n is the number of variables instead of the number of program points / expressions. In those cases, the typical trick is to bound the height of the tracked lattice, since you only get in trouble in a few places.

To make that concrete: control-flow analysis (which answers "which functions could be called through this variable?") is cubic in general. But if you cap it at tracking N distinct functions and give up to ANY once you hit that N, you cut off the few bad cases (e.g., all of the bindings to the function passed to map, which you will probably just inline later anyway) that generate the cubic behavior, and the running time stays well-behaved even for whole-program compilers, at very small loss of precision in practice.
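
As a minimal sketch of the kind of bounded lattice I mean (OCaml, with callees identified by illustrative integer labels; max_tracked stands in for N), the abstract value at a call site is either a small set of known callees or ANY, and the join widens to ANY as soon as the set grows past the cutoff:

    (* Bounded-set lattice of tracked callees, keyed by integer labels. *)
    module IntSet = Set.Make (Int)

    type flow =
      | Funs of IntSet.t   (* at most [max_tracked] known callees *)
      | Any                (* widened: could be any function *)

    let max_tracked = 4    (* the cutoff N; chosen arbitrarily here *)

    (* Join: union the callee sets, but give up to Any once the union
       exceeds the cutoff, so the lattice height stays bounded. *)
    let join a b =
      match a, b with
      | Any, _ | _, Any -> Any
      | Funs s1, Funs s2 ->
          let s = IntSet.union s1 s2 in
          if IntSet.cardinal s > max_tracked then Any else Funs s

Because that lattice has constant height, each tracked fact can only change a bounded number of times before it stabilizes, which keeps the fixpoint iteration cheap.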



If your language has pointers or aliases, precise call graphs are exceedingly expensive for large programs, because you need a precise pointer or alias analysis; a toy example of that dependence is sketched below. Do you have a link to a publication about the technique you describe?
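
As a toy OCaml illustration (names are purely illustrative), resolving the indirect call on the last line requires knowing every function the mutable cell may hold, which in turn requires knowing that g aliases f:

    let double x = 2 * x
    let negate x = -x

    let f = ref double    (* the cell initially points to double *)
    let g = f             (* alias: g and f are the same cell *)
    let () = g := negate  (* a write through the alias...            *)
    let _ = !f 3          (* ...changes the callee at this call site *)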


There are meatier technical presentations of the formal bounds out there, but I always recommend Manuel Serrano's early work, which is much more accessible than more modern presentations (my own included):

http://dl.acm.org/citation.cfm?id=315891.315934

A good survey of many of the tricks used in practice (and of the formal techniques as well) is available in Jan Midtgaard's tome, which was finally published in the last year or so:

http://dl.acm.org/citation.cfm?id=2187671.2187672


Cool, thanks. It wasn't clear to me whether you were talking about functional programs, but I can believe that it works for them.



