Don't be. The fundamental observation that monads' usability is tied closely to Haskell's features is important, and still hasn't been taken on board elsewhere. There's always someone trying to do monads in Clojure...
Yup. If I had read this article when it was first written instead of now, I probably would have produced a fair bit less unnecessarily complicated and difficult-to-maintain code over the past several years.
Instead, I had to figure out why I was having such a hard time coming up with a satisfactory implementation of the pattern in C# - and why I maybe shouldn't have wanted one in the first place - the hard way.
Just curious, why would you argue that you shouldn't have wanted it in the first place? Monads are one thing, which you can approximate somewhat via LINQ, but there are other, simpler patterns such as Monoid that really make sense as programming constructs, yet somehow never work in C#.
Because it seems like there always turns out to be some other approach that works better and is more idiomatic in C#. Usually something really forehead-smackingly obvious like mutation or a callback.
Yes, but doesn't it also have the three requirements mentioned by the article: ML-style function foo, something akin to type classes, and polymorphic return types?
Don't be embarrassed about this. I think it's a very important point. I've heard monads getting buzz outside the Haskell community and it almost always misses the point. I gave a presentation at the NYC Haskell meetup that made essentially the same point as you, but came at it from a different angle. You wrote this post quite a while earlier, but it's nice to know that we independently came to the same conclusions.
> It appears that in the presence of mutable state, a lot of the advantages of monads become moot.
In the simpler cases where use of monads can be replaced by simple mutable state -- you still lose out on the explicit types of the mutating vs. pure code.
For example, STM is possible because mutating effects are typed, and so can be ruled out of STM transactions.
And in the more complex monads (e.g: transformer stacks), mutable state is just not good enough. You can compose transformers to build things that mutable state simply cannot express, and you'd have to CPS transform your code and avoid mutable state to express those things.
For example, these two monads:
ListT (ParsecT m)
ParsecT (ListT m)
have no corresponding "mutable state" representations.
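(Not the poster, but a smaller pair of stacks makes the ordering point concrete. This is just a sketch against the mtl/transformers packages; the function names are mine.)

```haskell
import Control.Monad.State
import Control.Monad.Trans.Maybe

-- StateT over Maybe: failure discards the state change entirely
failAfterPut1 :: StateT Int Maybe ()
failAfterPut1 = put 1 >> lift Nothing

-- MaybeT over State: the state change survives the failure
failAfterPut2 :: MaybeT (State Int) ()
failAfterPut2 = put 1 >> MaybeT (return Nothing)
```

Running them shows the difference: `runStateT failAfterPut1 0` is `Nothing`, while `runState (runMaybeT failAfterPut2) 0` is `(Nothing, 1)`. Swapping two layers changes the semantics, which is exactly what a single mutable cell can't express directly.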
That's a really great point that I rarely see come up. Yes, Haskell makes you "recreate" the RWST monad stack that imitates impure languages... but the same tools also let you build other "language domains" which are fiendishly difficult to implement in non-pure settings (on par with CPS transforming yourself into the mother monad and then working backwards from there).
On Lisp has a chapter devoted to making a leaky macro-based CPS transformer for Common Lisp. ContT adds CPS to any monad stack as quickly as
newtype ContT r m a = ContT { runContT :: (a -> m r) -> m r }
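To make that concrete, here's a tiny early-exit example via callCC, using the mtl Cont wrapper over that newtype (safeDiv is my name, not anything standard):

```haskell
import Control.Monad (when)
import Control.Monad.Cont

-- `exit` is the captured continuation: calling it aborts the rest of
-- the do-block and returns its argument straight to runCont
safeDiv :: Int -> Int -> Cont r (Maybe Int)
safeDiv x y = callCC $ \exit -> do
  when (y == 0) (exit Nothing)
  return (Just (x `div` y))
```

So `runCont (safeDiv 10 2) id` gives `Just 5`, and `runCont (safeDiv 1 0) id` gives `Nothing` without ever reaching the division.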
>Yes, Haskell makes you "recreate" the RWST monad stack that imitates impure languages
This is really a point that seems to get lost in most of the learning materials on the web. LYAH doesn't even acknowledge transformers. I am pretty new to Haskell, but it seems to me that transformer stack decisions are a pretty significant design decision for your application. What has happened to me is that almost all the code in my application has wound up inside this stack; the only pure functions are trivial helpers. I'm not aware of what has been written on this topic in terms of guidance or advice. But it seems like it's definitely one of the later-stage humps for a new Haskell developer like me.
That's definitely a stage of a Haskell program design. The stack helps to separate out layers of effects, but it can be easy to fix that stack concretely at an early stage and then feel the weight of it later on with overly specialized functions.
The solution is usually to use the mtl-style transformer classes which let you write generalized functions which depend upon capabilities of the transformer stack instead of the concrete stack itself. You end up with a sort of natural dependency injection framework. For instance, here's a function which works on any RW-style monad stack
augment :: (MonadReader a m, MonadWriter a m) => a -> m ()
augment a = ask >>= \a' -> tell (a' <> a)
where the Monoid constraint allowing us to use (<>) comes from the superclass constraint on the MonadWriter class.
You can then use augment in ANY monad transformer stack which involves Reader and Writer over the same state type.
Note that this defers the extremely important decision about your monad transformer layer order---the caller of augment decides whether to call it with "ReaderT w (Writer w) a" or "WriterT w (Reader w) a". Reader/Writer commute so it's not a problem, but this changes things in the op's example using "ParsecT (LogicT m)" versus "LogicT (ParsecT m)". If your function depends on a particular ordering of the effects then it must have a less general type.
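A quick check that the same augment really does run in either ordering (the runner names here are mine):

```haskell
import Control.Monad.Reader
import Control.Monad.Writer

augment :: (MonadReader a m, MonadWriter a m) => a -> m ()
augment a = ask >>= \a' -> tell (a' <> a)

-- ReaderT on the outside...
viaRW :: ((), String)
viaRW = runWriter (runReaderT (augment "!") "hi")

-- ...or WriterT on the outside: same answer, since Reader/Writer commute
viaWR :: ((), String)
viaWR = runReader (runWriterT (augment "!")) "hi"
```

Both evaluate to `((), "hi!")` because the caller, not augment, fixes the concrete stack.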
For maximum modularity, you can combine this with functions like "zoom" and "magnify" of module Control.Lens.Zoom. They let you convert stateful operations that work on a fragment of the global state/environment into operations that work with the whole state/environment (provided that you have the appropriate lenses.) Very useful in complex monad stacks.
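A minimal sketch of the idea, assuming the lens and mtl packages (bumpFirst is my name): zoom lifts an action on one field into an action on the whole state.

```haskell
import Control.Lens
import Control.Monad.State

-- a stateful action on Int, lifted to act on the first slot of a pair
bumpFirst :: State (Int, String) ()
bumpFirst = zoom _1 (modify (+ 1))
```

Here `execState bumpFirst (0, "s")` gives `(1, "s")`: the inner `modify (+ 1)` only ever sees the Int, and the lens `_1` does the plumbing.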
Ah, sorry, I was accidentally writing in the context of another comment I wrote in this thread where I talked about the typeclass resolution machinery. Without that context, it is pretty out of place.
getAny :: (Random a) => State StdGen a
getAny = do g <- get
(x,g') <- return $ random g
put g'
return x
I was like... how on Earth does Haskell know which "get" it should call? I think this is a point that should be stressed more in the tutorials (I read Learn You a Haskell and glanced at a couple of others...).
It's easy for a Haskell expert to sweep typeclass resolution under the rug, but it's definitely some pretty black magic at first no matter how "simple" its implementation is. The reality is that the compiler works really hard to ensure that it can guess the right "get" and it's pretty possible for it to fail.
Of course, when it fails you know immediately and can remedy it by type annotations, but that can still be challenging.
To answer the question, `get` has its principal type resolved by the type inference engine. In this case, `get`'s most general type is `MonadState s m => m s`, and the compiler can resolve what `m` is by unifying it with the type annotation of `getAny`, so we know that we need `get :: MonadState s (State StdGen) => State StdGen s`. From here, the compiler searches through the typeclass instances in a Prolog-like style to find that `MonadState s (State StdGen)` holds when `s ~ StdGen`. This resolves more type information and also tells us which definition of `get` is needed: the one that was defined in `instance MonadState s (State s) where`!
The end result is that get ends up with the type `get :: State StdGen StdGen` and is the function defined as `get = State $ \s -> (s, s)`.
It's also quite possible for functions like this to get a polymorphic type, in which case the compiler generates code that essentially passes along a virtual table object for the particular instance at the call site. Hardly black magic.
I agree that the implementation is quite obvious. It's the inference—the collusion between the HM type constraints and the Prolog-like typeclass constraints—that's magical.
Thanks. I sometimes wonder: is there a way in Haskell to display the type signature of a function for a particular instance? For example, is there a way to display the type of get when it's used at the State instance? I know I can derive it, but I would sometimes like to see whether I am correct...
There's a hack you can use until Type Holes lands in production GHCi. If you turn on ImplicitParams you can write
getAny :: (Random a) => State StdGen a
getAny = do g <- ?whatAmI get
(x,g') <- return $ random g
put g'
return x
which will fail the type checker because your type didn't include the information about what ?whatAmI is---the error will be something like "could not find definition of ?whatAmI :: State StdGen StdGen -> State StdGen StdGen", although you may only get partial information as the compiler may complain about your undefined function before it's resolved what you're looking for.
You can also leave off the type annotation and GHCi will infer the type of ?whatAmI. Then if you check the type of "getAny" you'll see
getAny
:: (?whatAmI::m s -> n s', RandomGen s', Random b,
MonadState s m, MonadState s' n) =>
m b
which provides all of the information the typechecker knows all in one go. Take note that the type it derives is somewhat more general than the one we want since it allows ?whatAmI to transform the involved monad. Obviously "get" doesn't do this.
Some vim and emacs plugins can give you the type of subexpressions like this. Namely, you want the ghc-mod plugin since it provides the hooks into the compiler to query type information.
* Try to quickly write the whole thing as a single recursive descent parser. Note the exploding amount of ugliness. Give up.
* Separate out the tokenizer (novel idea, huh?) to keep parser complexity down. Parser is still a mess. Ugh!
* Play around with some CL parser frameworks. This helps a bit, but none of the systems I tried produce errors with enough information.
* Remember the breeze it was to write a parser with the Haskell Parsec library. Mess around with monads for a while, learn a few things, but not how to write elegant parsers in Common Lisp.
I've been going through these exact steps but in JavaScript. Writing languages is fun! And makes me want to kill things!
I have no idea what you're working on, but I spent a few weeks this summer writing parsers and trying to figure out how to tackle the ugliness that seems to be inherent in hand-written recursive descent parsers. While I learned a lot, I am not an expert by a long shot. However I did find an interesting technique that is not well covered elsewhere.
I came across this* article, which uses JavaScript as the implementation language, and found it very interesting, in large part because this approach (Top down operator precedence) sort of pulls the precedence hierarchy out of the call graph of a recursive descent parser and into a table, but also because it's an approach that OOP (and JavaScript in particular) is well suited to. I've used it (in combination with traditional recursive descent) in a functional setting (Standard ML) as well, and would use it again, especially for parsing infix expressions (arithmetic expressions, type annotation expressions).
This is a bit of a tangent, but I thought you might be interested given the intersection of parsing and JavaScript. I've been meaning to write this up in a short blog post...
I like the precedence climbing algorithm since you can easily fit it into a normal recursive descent parser and it allows you to easily add new operators just by putting them in a table. See: http://eli.thegreenplace.net/2012/08/02/parsing-expressions-...
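For the curious, precedence climbing fits in a handful of lines. A rough sketch in Haskell (since that's this thread's lingua franca; all names are mine), working over a pre-tokenized stream and folding left-associatively:

```haskell
-- minimal precedence-climbing evaluator over a pre-tokenized input
data Tok = TNum Int | TOp Char deriving (Eq, Show)

-- the operator table that replaces one grammar rule per precedence level
prec :: Char -> Int
prec '+' = 1
prec '-' = 1
prec '*' = 2
prec '/' = 2
prec _   = 0

apply :: Char -> Int -> Int -> Int
apply '+' = (+)
apply '-' = (-)
apply '*' = (*)
apply '/' = div
apply _   = error "unknown operator"

-- parse (and evaluate) an expression whose operators bind at least minPrec;
-- returns the value plus the unconsumed tokens
parseExpr :: Int -> [Tok] -> (Int, [Tok])
parseExpr minPrec (TNum n : rest) = go n rest
  where
    go acc (TOp op : more)
      | prec op >= minPrec =
          -- recurse with a higher threshold, making each op left-associative
          let (rhs, rest') = parseExpr (prec op + 1) more
          in go (apply op acc rhs) rest'
    go acc toks = (acc, toks)
parseExpr _ toks = error ("expected number: " ++ show toks)
```

With this, `fst (parseExpr 1 [TNum 2, TOp '+', TNum 3, TOp '*', TNum 4])` is `14` (the `*` binds tighter, straight from the table), and adding an operator is one new line in `prec` and `apply` instead of a new grammar rule.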
The parser is a bastard mix of recursive descent and parser combinators borne out of looking at the Dragon Book and thinking "fuck me, 2000 pages? I'm sure I can remember some CS325...". The language will eventually have macros I'll use for the built-in operators. I guess it's a Lisp? The syntax is sweet-expressions, so infixes are {a op b}, which is just (op a b).
If your needs fall towards JS as the target vs. the source language, then it's worth having a look at Fay[1] and GHCJS[2]. Both allow calls from regular Javascript, but Fay currently has a much better interface for that vs. GHCJS' still very low-level support.
I came here to make a joke that the real reason is that no one understands them... but apparently everyone here does. Off to google monads for idiots...
As a connoisseur of monad explanations (if not of monad understanding), I'd say Crockford's talk sounds rushed. It spends a fair amount of time on setup, but when it gets to the meaty bits it has the feeling of glossing over the details. It could be that all the essential elements are there, but they're covered too quickly to impart any comprehension.
I think the proper presentation of this material is the way Learn You a Haskell[1] does it: functors, applicatives, monoids... then monads. But I don't think I've seen this done for other languages. Each of those structures should be learned as a set of properties rather than as something concrete (container, wrapper, sequencer, programmable semicolon), and then you see how important the applicative-vs-monadic style decision is for library design in Haskell.
Nah, I think it's just because they result in really nested types, which are hard to keep track of without a really good type system. Using a single monad by itself, like error or continuation, is straightforward in Clojure; there are smart and vocal people who do this. But the super awesome OMFG of monads is that they can be combined (e.g. parser = error + state), and this more or less requires abstraction in the type system to keep track of all the lambdas.
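For what it's worth, the "parser = error + state" combination is a one-liner in Haskell; the types track the nesting for you. A sketch (type and function names are mine):

```haskell
import Control.Monad.State

-- "parser = state + error" as a single transformer application:
-- the remaining input is the state, Either carries the error
type P a = StateT String (Either String) a

-- consume one character or fail with a message
item :: P Char
item = do
  s <- get
  case s of
    []       -> lift (Left "unexpected end of input")
    (c : cs) -> put cs >> return c
```

So `runStateT item "ab"` gives `Right ('a', "b")`, and on empty input it gives `Left "unexpected end of input"`. Doing the same in an untyped setting means threading that nested `String -> Either String (a, String)` shape by hand.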
There is a point made here about how Haskell's namespace handling makes monads easy, while CL's makes them ugly. I don't know enough about CL's namespaces to understand this. Can someone explain?