Hacker News

I know that part of the reason I like FP is that a compiler is usually a pretty perfect pure function: source code -> machine code. So people researching languages will often find that FP is the best tool for their job and will be most familiar with it. Then when they come to research something new, they do it in the context they're most familiar with.


Symbol table, multiple passes, iterative processing....

The state is not event-based in nature but does exist.


Having state does not mean a function/algorithm is not pure.



Pure functions handle state without any issues (quite elegantly, actually). They simply can't have mutable state that is globally observable. For instance, you could have a struct, and every time you 'mutate' the struct, you actually clone it and simply set the relevant field to the new value when you're instantiating the new struct.
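In Haskell, this copy-on-update style is just record update syntax. A minimal sketch (the Counter type and field names are made up for illustration):

  data Counter = Counter { count :: Int, label :: String }

  -- 'increment' builds a fresh Counter; the original is untouched, so
  -- anything still holding a reference to it keeps seeing the old count.
  increment :: Counter -> Counter
  increment c = c { count = count c + 1 }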

Functional programming languages have mechanisms that make this process pure, simple, and efficient and you get the benefits of never mutating an existing value (which is important when several different functions may hold a reference to it, among other reasons).

See also: the Haskell State monad or lenses. These are pure mechanisms that Haskell provides for updating state without mutating any data.

Edit: In the event that you do need shared mutable state, the Haskell STM (Software Transactional Memory) monad is your answer.
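For example, a sketch using the standard stm package (the counter here is made up for illustration):

  import Control.Concurrent.STM

  main :: IO ()
  main = do
    counter <- newTVarIO (0 :: Int)
    -- Each 'atomically' block is a transaction: it commits as a whole
    -- or retries, so concurrent increments never lose updates.
    atomically $ modifyTVar' counter (+ 1)
    n <- readTVarIO counter
    print n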


The hashtables package [1] now provides efficient mutable hash tables in the ST monad; implementing hash tables as pure functional data structures is well known to be slow.

I would hope one wouldn't need to resort to STM for a compiler, especially since retry is fairly undefined! Still, that says nothing about iterative computation; I wonder how the ST monad would handle a Y combinator.

[1] http://hackage.haskell.org/package/hashtables
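To illustrate, a mutable hash table can live entirely inside runST, so the surrounding function stays observably pure. A sketch against the hashtables package (exact module layout may vary by version):

  import Control.Monad (forM_)
  import Control.Monad.ST (runST)
  import qualified Data.HashTable.ST.Basic as H

  -- Count occurrences using in-place mutation that never escapes runST.
  wordCounts :: [String] -> [(String, Int)]
  wordCounts ws = runST $ do
    ht <- H.new
    forM_ ws $ \w -> do
      mv <- H.lookup ht w
      H.insert ht w (maybe 1 (+ 1) mv)
    H.foldM (\acc kv -> return (kv : acc)) [] ht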


Pure functions model state nicely.

  compile :: Source -> Core
  optimizer :: Core -> Core
  nativeCode :: Core -> Assembly
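And since the output of each pass matches the input of the next, the pipeline is just composition (the name 'backend' is illustrative):

  backend :: Source -> Assembly
  backend = nativeCode . optimizer . compile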


Those are just signatures; what is going on inside the boxes?

Also, how does one encode iterative computation, like a data-flow analysis, in Haskell? And symbol tables? Is it sufficient that the symbol table is encapsulated within compile even if it involves dictionary read/writes? I'm genuinely curious.


Inside the boxes, you might use a (State SymbolTable) monad to carry around the symbol table:

  type SymbolTable = Map String Symbol
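Fleshed out a little (illustrative; Symbol is a placeholder for whatever the compiler tracks):

  import qualified Data.Map as Map
  import Control.Monad.State

  data Symbol = Symbol

  type SymbolTable = Map.Map String Symbol

  -- Record a binding in the table threaded by the State monad.
  declare :: String -> Symbol -> State SymbolTable ()
  declare name sym = modify (Map.insert name sym)

  -- Look a name up in the current table without any mutation.
  lookupSym :: String -> State SymbolTable (Maybe Symbol)
  lookupSym name = gets (Map.lookup name)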


A co-developer and I are working on a type system in Haskell, where our current code runs an iterative computation until it reaches a fixpoint.

Here's the iterative code:

https://github.com/Peaker/lamdu/blob/master/Lamdu/Data/Expre...

If you need to internally carry state, you just use a State monad to thread around the state purely.
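The fixpoint iteration mentioned above can be sketched generically: keep applying a pass until the result stops changing (illustrative; assumes an Eq instance on the analysis state):

  fixpoint :: Eq a => (a -> a) -> a -> a
  fixpoint step x
    | x' == x   = x
    | otherwise = fixpoint step x'
    where x' = step x

A data-flow analysis then becomes something like `fixpoint transfer initialFacts` (names illustrative), with no mutation anywhere.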



