
Julia has similar capabilities, probably because it was heavily inspired by Lisps. You can even modify Julia's compiler from within Julia. I often write code at a Python level of abstraction and then use Julia's introspection to check the machine code that was generated.
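A minimal sketch of that workflow (the function name here is hypothetical; `@code_native` is the standard tool from the InteractiveUtils standard library):

```julia
using InteractiveUtils  # provides @code_native and code_native

# A small function written at a high level of abstraction
square_plus_one(x) = x^2 + 1

# Print the native machine code the compiler generated for the Int method
@code_native square_plus_one(3)
```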


So if I understand correctly, in Julia you can programmatically look at the generated machine code? Is there a way to modify it, or is it just for making sure some optimizations were applied?


We can modify the code at a few different levels. The easiest is our untyped intermediate representation. The next easiest is the LLVM IR, which is basically one step above assembly and almost always nicer to work with than raw machine code (machine code can also be embedded in LLVM IR if you need to). You can also use https://github.com/YingboMa/AsmMacro.jl if you like.
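A small sketch of two of those levels, assuming nothing beyond the standard library: `@code_lowered` exposes the untyped IR as an inspectable object, and `Base.llvmcall` splices LLVM IR directly into a Julia function (the string is the body of an LLVM function, with `%0` naming the first argument):

```julia
using InteractiveUtils  # provides @code_lowered

f(x) = 2x + 1

# The untyped (lowered) IR is a Core.CodeInfo you can work on programmatically
ir = @code_lowered f(3)

# Embedding LLVM IR directly in a Julia function
add_one(x::Int32) = Base.llvmcall(
    "%r = add i32 %0, 1\nret i32 %r",
    Int32, Tuple{Int32}, x)
```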

We are also working out interfaces to make it easier to programmatically work on our typed IR through a technique and set of interfaces known as "abstract interpretation".
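Even without those new interfaces, the typed IR is already accessible today via `code_typed` in Base; a minimal example:

```julia
g(x) = x + 1

# code_typed returns pairs of (typed SSA IR, inferred return type)
(ci, rettype) = only(code_typed(g, (Float64,)))
```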


@code_native just lets you look at the generated code, but Julia also uses macros frequently to give the compiler hints about how to compile your code. Some examples: @inbounds disables bounds checks; @fastmath is the local version of C/Fortran's --math-mode=fast; @simd lets the compiler assume it can reorder loop iterations (it will do so anyway if it can prove you won't notice). If you need more fine-grained control (which is very rare), you can also emit LLVM IR (or direct assembly) yourself.
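A sketch of how those hint macros look in practice (the function name is made up; `@inbounds` and `@simd` are both in Base):

```julia
function fastsum(v::Vector{Float64})
    s = 0.0
    # @inbounds skips bounds checks; @simd permits reordering the reduction
    @inbounds @simd for i in eachindex(v)
        s += v[i]
    end
    return s
end
```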


Some abstractions are costly if the compiler doesn't optimize them away, so one use is to check whether that happened. Mostly you iterate by changing the Julia code, not the machine code.
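One way to sketch that check without reading assembly at all (the helper names here are illustrative): if type inference sees through a higher-order abstraction, `code_typed` reports a concrete return type, which is a good sign the abstraction will compile away.

```julia
double(x) = 2x
apply(f, x) = f(x)  # a small higher-order abstraction

# A concrete inferred return type suggests the indirection was resolved
(ci, rettype) = only(code_typed(apply, (typeof(double), Int)))
```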



