I think for newbies there are two separate aspects to explain: first an intro to algebraic structures perhaps using groups as an example, then monads in particular.
It’s important to emphasize that algebraic structures are abstractions or “interfaces” that let you reason from a small set of axioms: you can prove things about all groups at once, and write functions that are polymorphic over all monads.
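For example, here is a minimal sketch (in Haskell syntax, with hypothetical names) of a group as an interface plus laws. The laws only live in comments here, but anything written against the interface works for every instance:

class Group g where
  identity :: g
  combine  :: g -> g -> g
  inverse  :: g -> g
  -- Laws (stated, not compiler-checked):
  --   combine x (combine y z) == combine (combine x y) z
  --   combine identity x == x, and combine x identity == x
  --   combine x (inverse x) == identity

-- Integers under addition form a group.
instance Group Int where
  identity = 0
  combine  = (+)
  inverse  = negate

-- Polymorphic over every group, by appeal to the interface alone:
conjugate :: Group g => g -> g -> g
conjugate a x = a `combine` x `combine` inverse a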
With monads in particular I think the pure/map/join presentation is great. First explain pure taking “a” to “m a”, then map lifting “a -> b” to “m a -> m b”, and finally join collapsing “m (m a)” to “m a”. The examples of IO, Maybe, and [a] are great.
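As a minimal sketch (Haskell syntax, with primed hypothetical names to avoid clashing with the real class methods), here are those three operations written out for Maybe:

pureMaybe :: a -> Maybe a
pureMaybe = Just

mapMaybe' :: (a -> b) -> Maybe a -> Maybe b
mapMaybe' _ Nothing  = Nothing
mapMaybe' f (Just x) = Just (f x)

-- Collapse one layer of nesting: m (m a) -> m a
joinMaybe :: Maybe (Maybe a) -> Maybe a
joinMaybe Nothing  = Nothing
joinMaybe (Just m) = m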
You can also mention how JavaScript promises don’t quite work as monads: as a practical compromise they join implicitly, so a Promise<Promise<T>> automatically collapses to a Promise<T> and .then acts as both map and bind.
You really don't need to introduce groups or other algebraic structures to understand monads, and if your goal is to teach monads I believe it is harmful to do this.
The average programmer is much more likely to encounter monads (e.g. error handling, promises) than they are to encounter groups in an abstract context. Unnecessary maths will drive people away. Making a big deal of axioms, reasoning, and all this stuff that functional programmers love (including myself) is the approach that has been tried for the last 20 years, and it has failed to reach the mainstream. If you want to reach the average programmer you need to solve problems they care about in a language (both programming and natural) they understand.
Elitist functional programmers are probably just average programmers that didn't run away screaming at the first sign of math, but instead just confronted what was presented to them and took the time and effort to properly grok the material.
Why not go all the way and teach functors and applicatives before monads? Then the student can see that monads are just a small addition built on top of the other two. Functors, in particular, are very easy to grasp despite their intimidating name. They just generalize map over arbitrary data structures:
l_double : List Int -> List Int
l_double xs = map (* 2) xs
f_double : Functor f => f Int -> f Int
f_double xs = map (* 2) xs
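To make the payoff concrete, the same f_double then works at any functor (a hypothetical REPL session, assuming the definitions above):

f_double (Just 21)    -- Just 42
f_double [1, 2, 3]    -- [2, 4, 6]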
Applicatives are a little bit trickier but once you get them, there's only a tiny jump to get to monads. Taught this way, people will realize that they don't need the full power of monads for everything. Then, when people learn about idiom brackets [1], they start to get really excited! Instead of writing this:
m_add : Maybe Int -> Maybe Int -> Maybe Int
m_add x y = case x of
    Nothing => Nothing
    Just x' => case y of
        Nothing => Nothing
        Just y' => Just (x' + y')
You can write this:
m_add' : Maybe Int -> Maybe Int -> Maybe Int
m_add' x y = [| x + y |]
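(The bracket is pure notational sugar: in Idris, [| x + y |] elaborates to the ordinary applicative combinators, so in Haskell, which lacks idiom brackets, you would write the same function as:)

m_add'' :: Maybe Int -> Maybe Int -> Maybe Int
m_add'' x y = pure (+) <*> x <*> y   -- equivalently: (+) <$> x <*> y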
Functors are not useful for much on their own, so they are difficult to motivate.
The Haskell formulation of applicatives doesn't make much sense outside of languages where currying is idiomatic, and that rules out most languages. In those languages you tend to see the product / semigroupal formulation instead, and there applicatives become a bit trickier to explain because you need more machinery.
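For reference, that product formulation looks roughly like this (a sketch in Haskell syntax; Monoidal is a hypothetical class, not the one in base):

class Functor f => Monoidal f where
  unit :: f ()
  pair :: f a -> f b -> f (a, b)

-- The curried (<*>) is recoverable from the paired form:
ap' :: Monoidal f => f (a -> b) -> f a -> f b
ap' ff fa = fmap (\(f, a) -> f a) (pair ff fa)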
Functors enable the Fix functor and, from there, the whole universe of recursion schemes, so I'm not sure that I agree that they're not useful on their own.
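A minimal sketch of what that unlocks (in Haskell, which accepts this definition directly):

{-# LANGUAGE DeriveFunctor #-}

-- Fix ties the recursive knot for any functor f.
newtype Fix f = In { out :: f (Fix f) }

-- A catamorphism: fold any Fix'd structure, given an algebra for one layer.
cata :: Functor f => (f a -> a) -> Fix f -> a
cata alg = alg . fmap (cata alg) . out

-- Example: natural numbers as the fixpoint of a non-recursive base functor.
data NatF r = ZeroF | SuccF r deriving Functor

toInt :: Fix NatF -> Int
toInt = cata alg
  where alg ZeroF     = 0
        alg (SuccF n) = n + 1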
Sure but virtually nobody cares about how to finitize a recursive function when they are trying to learn a new programming paradigm. Recursion seems to work just fine in languages that don't have any of these bells and whistles.
"Hey you can implement Fix" is like saying "now you can program in green" for most readers.
Oh, certainly. My assumption was that the GP meant that functors aren't useful on their own _in general_, rather than in the particular context of someone just getting into the typed functional paradigm. And, of course, recursion does work just fine in other languages, but (and I'm saying this more for posterity than as a retort since I assume you're well aware of this point) recursion schemes offer a layer of abstraction over direct recursion that eliminates the need to manually implement various recursive operations for each of the data structures you have at hand. As with a lot of the higher-level constructs in languages like Haskell, that level of abstraction may not have any practical benefits in most domains, but it's nice to have when it does offer a benefit in a particular domain.
It is pretty hilarious. Map is such a commonplace concept, and people are familiar with map and filter over arrays, but the OP makes the exact same mistake 90% of people explaining this stuff do: he used Haskell / Idris syntax.
If someone can already read Haskell syntax intuitively, they've likely already looked at a bunch of these tutorials and read about functors and map.
For those you're targeting who don't read Haskell syntax naturally, explaining something in a language they don't speak is of zero use.
The above comment managed to make something people already know confusing.
Monoids are nothing but a design pattern generalizing string append and the empty string.
Functors are simply a design pattern generalizing "map()" over arrays.
Monads are simply a design pattern generalizing "SelectMany()" over arrays.
It turns out that these patterns are significantly more powerful than most programmers realize, and by learning the underlying design pattern you'll be able to recognize novel situations where you can apply them and write much better/simpler code and APIs.
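A small illustration of that last point: once you recognize the pattern, the same (>>=) that is SelectMany on lists also threads failure through Maybe (Haskell syntax, hypothetical helper names):

-- Lists: (>>=) is concatMap, i.e. SelectMany.
pairs :: [(Int, Char)]
pairs = [1, 2] >>= \n -> ['a', 'b'] >>= \c -> pure (n, c)
-- [(1,'a'),(1,'b'),(2,'a'),(2,'b')]

-- Maybe: the very same operator propagates failure for you.
safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv n d = Just (n `div` d)

calc :: Maybe Int
calc = safeDiv 100 5 >>= \x -> safeDiv x 2   -- Just 10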