I still wonder to this day why Haskell programmers want their language to be loved so much.
At times it feels like that kid at the playground that spends half his time telling everyone how he's the best thing since sliced bread and cries himself to sleep at night wondering why no one will play with him and his monads.
Don't get me wrong, Haskell looks like a great language with obvious qualities and I don't knock anyone for using it, to each his own; it's just the never-ending publicity and proselytism that really rubs me the wrong way. People adopt new languages, not the other way around.
I'd say Haskell does one percent of the proselytising that Ruby does.
You hear about it a lot on HN because people write interesting articles about it. Remember, most articles that are submitted aren't being upvoted. There's probably someone writing about how their ImmutableSet implementation for Java is the best thing ever, but it's not being upvoted because the community doesn't consider it interesting.
I am a Ruby programmer and let it be clear, the holier-than-thou attitude can indeed be pervasive in the Ruby community.
Yet, most of the proselytism in the Ruby community comes in different shades of "Programming with this language makes me very happy, I'd like everyone to be happy as well". Of course it's not always that clear cut, and there is sometimes much to be desired in terms of behavior. But that's where Haskell, IN MY PERSONAL OPINION (so take this with a pinch of salt), is slightly different: it seems to be variations of "Haskell is a superior language, everyone that doesn't use it has not reached enlightenment"
TL;DR: Ruby people can be annoying like hippies can be annoying, Haskell enthusiasts I've found to be more akin to the Jehovah's witnesses of programming.
And just so we're clear, I am going on a slightly provocative/trollish bender here, one that I hope will not offend too much, it's all in good fun.
I think you're imagining the holier-than-thou attitude. While people are certainly excited to have figured it out, nobody is asking them to be excited about it.
If you disagree, how about citing some sources? You're good at coming up with analogies, but coming up with analogies is like being Hitler. What?
Rubyists aren't like hippies (hippies have no taste and they dislike drama), more like hipster art critics if anything.
Haskellers being compared to Jehovah's Witnesses is even more wrong. They're more like scientists, maybe of the climate-change variety: most of them keep their heads down with a dedication to improving the state of the art, but when they go public it's with good reason, and people should listen rather than rejecting out of hand anything that challenges their ingrained worldview.
I love how easily us nerds get sucked into debating analogies and whether they're right or not and which analogy is the best one, even when it has no bearing on anything. It's a really funny phenomenon to watch.
Haskell has not uncovered an impending disaster which everyone ignores at their own peril. It is a programming language on the same basic level as many other popular and useful programming languages.
as a rubyist who has dabbled a bit in haskell, i think the two camps are exactly the same in terms of proselytising. it's just frustrating to see people using "less-capable" languages and imagining how much happier/more productive/safer they'd be if they only adopted yours.
The difference there is that Haskell actually does offer some functionality and concepts that aren't really present in most other programming languages.
Ruby, on the other hand, is pretty unremarkable. It doesn't really offer anything beyond what older languages like Perl and Python, for example, offer.
I can mostly agree; Haskell's additions aren't even in the same ballpark. I also prefer Python overall to Ruby. But Ruby has one huge advantage: anonymous code block parameters.
That facilitates DSLs, and I believe it's important enough to note as significant.
in my experience, ruby has a lot of small improvements over perl and python that add up into a far nicer programming experience. it's subjective, perhaps, but i've used a lot of languages and i find that ruby hits the sweet spot for developer productivity.
What prompted this is not that I'm not fine with proselytizing; I just think that you should be even-handed when you do so. False claims serve neither the languages nor their communities and potential users.
1. It is very terse/expressive. I can look at code I haven't touched in 6 months and still understand what it's doing. Perhaps because my code is newbie code and thus somewhat simple.
2. It is the only language that brings me back to the days of Turbo Pascal when the joy of coding came from coding itself, not from building cool things. A small but vital distinction.
That said, I rarely get to use Haskell. For most of my tasks javascript is better suited, but haskell has affected how I write javascript to a great extent.
Maybe it is because I'm not a Haskell guru, but although I love Haskell's terseness, it tends to make it very hard for me to read code I wrote a while ago. I get excited about cool Haskell features like Arrows, use them whenever I can, then forget about them, and when I read my code 6 months later, it's like gibberish.
When I find myself using arrows, usually I realise that I got lazy and used tuples because my data model wasn’t good enough. Good code flows from good data.
But really, you should use the “tricks” when they make your code more expressive, not just because they’re cool. It’s the same in any language.
You have expressed something I have been worrying about with all these new APIs and frameworks - there is a joy to coding not necessarily connected to delivering business value.
Joy is relative; what gives one person joy might not give another joy. There might well be people who derive joy from using frameworks to deliver business value, and who wouldn't find joy in coding just for coding's sake. For some, programming is their hobby and doubles as work, so they derive joy from coding for its own sake. For others, coding is just a means to an end in their work, and they have other hobbies.
So to each his own joy, and no need to worry about those using APIs and frameworks to deliver value and pay the bills.
When you find a tool that in many ways is far better than every other tool used by the mainstream (and of course in some ways worse) you might get excited about it so you want to share the knowledge you found.
Or maybe you prefer the tool you like to get more adoption so you could use it in more situations.
I advocate Haskell because of those two reasons. I think that, like me, many others can get enormous educational and practical benefits from learning Haskell, and I'd really rather be using Haskell than e.g. Python.
Tons of other languages and programming styles can provide the same benefits. It doesn't have to be Haskell against Python or C++ against Erlang. We can pick, mix and match and let the best of the bunch naturally emerge.
There aren't any other languages that can provide the exact same group of benefits.
What you do is you weigh the benefits and disadvantages of the languages you are considering.
Haskellers just want the benefits of their language known so that it isn't overlooked due to being too difficult/unfamiliar/uncommon etc.
This article missed the point of the original article it was addressing, which is that haskell is often fast enough for the job with a minimal amount of optimisation.
The problem is that the original article author chose a shootout example, where speed was the objective. Idiomatic Haskell is rarely going to win on speed.
The C code in this article is, by the author's admission, not very robust against invalid input. A robust Haskell implementation would likely take less effort to produce than a robust C implementation. Which is better depends on the goal.
I don't think other languages have the same benefits as Haskell. They have other benefits, but not Haskell's.
Python/Ruby: very easy to learn and be productive quickly.
C: high level of control over resources and easy to get good performance.
Haskell: very good static guarantees about correctness. Relatively easy to get decent and good performance. Extremely educational and mind expanding, far more than say Lisp. Allows very high levels of abstraction. Great concurrency and parallelism support.
Lisp: easy to extend syntax with macros and manipulate programs programmatically.
All these languages have benefits, but not quite the same ones.
What about Objective Caml? Seems to fit the bill for all those metrics as well, yet for a reason that eludes me to this day, it never quite reached the kind of "street rep" that Haskell now enjoys.
I don't know OCaml very well, but I do know that without laziness, it does not support the same degree of high-level reuse[1].
Also, it lacks much of the mind-expanding stuff in Haskell (The class hierarchy explained by the Typeclassopedia).
AFAIK, GHC has surpassed OCaml's compilers in performance, concurrency support, etc. Compiler rewrite rules are also a very nice feature that other languages cannot imitate due to lack of purity.
Many of the parallelism benefits are much a result of purity, which OCaml lacks. Also stuff like software-transactional-memory rely on purity for their guarantees, which OCaml cannot provide. Reasoning about code is also much easier with purity.
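A concrete look at the rewrite rules mentioned above, sketched as a standalone snippet. The rule restated here mirrors the map/map fusion rule that actually ships with GHC's base library; the function name doubleThenInc is invented for illustration:

```haskell
-- Purity makes rewrite rules sound: GHC may replace the left-hand
-- side with the right-hand side anywhere, since neither side can
-- have side effects. (An equivalent map/map rule ships with base.)
{-# RULES
"map/map/example" forall f g xs.
    map f (map g xs) = map (f . g) xs
  #-}

-- The pipeline's meaning is the same whether or not the rule fires,
-- so the compiler is free to fuse the two traversals into one.
doubleThenInc :: [Int] -> [Int]
doubleThenInc = map (+ 1) . map (* 2)
```

In an impure language such a rule would be unsound, because the two sides could differ in how often, and in what order, effects run.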
The only thing Caml lacks that Haskell has is typeclasses, so doing non-integer math is ugly, and non-int/float math is REALLY ugly, like something out of Java, since there is _no_ operator overloading.
I suspect the main barriers to Caml gaining wider acceptance had more to do with a cultural barrier - both the documentation and error messages in English were (and mostly continue to be) fairly poor.
Caml actually did attain a measure of popularity for a while in the early 2000s: the winning ICFP contest entry was in OCaml for something like 4 years running at one point.
Unfortunately, OCaml doesn't have enforced purity outside the IO monad. This removes one of the main benefits of Haskell, which is to force people who are terrible at functional programming to actually write code in a functional style.
I actually think the main thing OCaml lacks is the separation of pure from non-pure code via the type system. I have really become addicted to the way this ends up affecting the architecture of the code, and the guarantees it provides when using code I did not write.
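What that separation of pure from non-pure code looks like in the types, as a minimal sketch (both function names here are made up for illustration):

```haskell
-- The effect separation shows up directly in the signatures: a
-- caller can see at a glance that 'render' cannot perform IO,
-- while 'fetchAndRender' can.
render :: Int -> String
render n = "count = " ++ show n

fetchAndRender :: IO String
fetchAndRender = do
  line <- getLine            -- the effect is visible in the type
  return (render (length line))
```

Code you did not write carries the same guarantee: a library function with a pure type simply cannot touch the outside world behind your back.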
You aren't speaking to the principal point in the post Peaker links, which Harper accepts in the notes: "As you know, in the eager world we tend to write out our own recursive functions, rather than use combinators", which is of course all anyone cares about; with explicit recursion the user's IQ falls 50 points immediately. The whole discussion presupposes a mechanism for 'opting out' of default strictness or default laziness, which exists in many languages.
Optional laziness is not good enough -- read the blog post I linked to. You can't re-use and compose existing functions from the standard library if they all tend to be strict.
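A small sketch of that point: with pervasive laziness, the standard combinators compose over an infinite list, which is exactly the kind of reuse that strict-by-default libraries rule out (the function name is invented for illustration):

```haskell
-- Because the standard list functions are lazy, they compose over an
-- infinite list and only the demanded prefix is ever computed. With
-- strict-by-default lists, reusing map and filter this way would
-- diverge, so people write the loop by hand instead.
firstEvenSquares :: Int -> [Integer]
firstEvenSquares n = take n (filter even (map (^ 2) [1 ..]))
```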
OCaml isn't purely functional, so it misses some of the most important benefits of Haskell. It's also not entirely honest (in the fact that you can do things that the type system doesn't tell you about.)
That's certainly true and will always be true, given the mutability of underlying architectures and operating systems. However, in Haskell, unsafePerformIO is normally only used when it can be shown that a function is referentially transparent. When this is not the case, a function ought to return an IO value.
The general rule is: don't use unsafePerformIO unless your name is Simon ;).
Haskell succeeded in isolating side effects to a great extent. This has its advantages: referential transparency leads to predictable and understandable code. And its disadvantages: you have to keep state via argument passing (which can be hidden nicely using the State monad), and when implementing inherently mutable algorithms you usually end up using the ST monad. Still, on the outside, the function will be pure and pretty :).
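The ST pattern described above, as a toy example (sumTo is a made-up name; the point is that the mutation is sealed inside runST):

```haskell
import Control.Monad.ST
import Data.STRef

-- An "inherently mutable" loop implemented with ST: a mutable
-- accumulator is created, updated, and read, but runST seals the
-- mutation in, so sumTo presents a pure type to the outside.
sumTo :: Int -> Int
sumTo n = runST $ do
  acc <- newSTRef 0
  mapM_ (\i -> modifySTRef' acc (+ i)) [1 .. n]
  readSTRef acc
```

Callers cannot tell this apart from `sumTo n = n * (n + 1) `div` 2`; the type system guarantees the mutability never leaks.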
This is all correct, and I do think it's an important difference between Haskell and OCaml, I just didn't like the way it was phrased ("lies") because that's not really the difference. The difference is that Haskell has a place for you to document (in the type system) whether functions have certain types of effects (and if you cooperate just a little, the compiler will make sure this documentation is correct). The cost is the (conceptual) overhead of actually doing this - which is small when you know what you're building but can require some re-plumbing of a chunk of your code when things change.
This difference isn't inherently a win for Haskell - that OCaml doesn't bear the cost of significant restructuring because you find you need to do some IO based on results internal to a function several layers down is a big win in the short term. Whether it's a win in the long term, and how much the short matters versus the long, would seem to depend very much on the particular problems you're solving.
Having come to Haskell from C by way of OCaml, my personal preference is to get as much out of the type system as one can - but one should always be aware of the trade-offs.
unsafePerformIO for an IO action that is not actually pure is considered a bug, and the compiler makes sure you can't abuse unsafePerformIO without being bitten very very hard.
The core of Objective Caml is awesome, but there is a severe lack of libraries, which I guess is in good part due to the limited number of users, and also to the limited advocacy/marketing done by them (compared to, say, Ruby or Haskell).
OCaml led to F#, which is getting some level of adoption, but perhaps limited to folks OK running Windows, as not everyone knows/likes Mono. The fact Microsoft ships a very complete set of tools for F# is pretty cool.
Wrong. There are three main things I want from a programming language: first class functions, strong static types, and purity. Haskell is literally the only language out there today that has these three characteristics and is mature enough for production systems.
It's different. It's like you were listening to one music genre all your life and there's suddenly something new.
If, say, Python was the only language with "for(each)" loops, you'd see many blog posts about that too. After I saw this, the old style of iterating by index feels so antiquated. Haskell gives the same feeling many times. It has unique features when it comes to abstraction and I feel they are the right way to program.
strangely enough, I think one of the interesting features (type classes) only ended up reappearing in Go with its interfaces (granted, only in a very limited fashion).
I think one of the most important features of typeclasses is the ability to be polymorphic on just the return type of an expression. For example, there is a typeclass called Read which comes with a function called read:
read :: Read r => String -> r
That is, you get a function from a String to whatever type is in the typeclass. It's the opposite of toString. This is also used in a whole bunch of other contexts like numbers--numeric literals are polymorphic, letting you add any numeric type you want and still use literals for it.
As far as I know, Go cannot do anything of the sort.
There are some other nice features of typeclasses, like the ability to have multiple types. That is, you could have a typeclass for multiplication that allowed you to multiply two numbers, two matrices or a number with a matrix. You can even have typeclasses that are recursive, allowing you to define them for an infinite amount of types: you could have a typeclass for functions of any number of arguments, for example.
I think Go interfaces do none of that as well.
Now Rust, on the other hand, actually* has typeclasses.
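For the curious, the multi-parameter classes described above can be sketched like this. The class Mult and its instances are invented for illustration (no such class ships with the standard library), and the sketch needs a couple of GHC extensions:

```haskell
{-# LANGUAGE MultiParamTypeClasses #-}
{-# LANGUAGE FlexibleInstances #-}

-- A made-up multi-parameter class: the result type c can depend on
-- both argument types, so one name covers number*number and
-- number*vector multiplication.
class Mult a b c where
  mult :: a -> b -> c

instance Mult Int Int Int where
  mult = (*)

instance Mult Int [Int] [Int] where
  mult k = map (k *)
```

Without a functional dependency (saying c is determined by a and b), call sites need result-type annotations, e.g. `mult (2 :: Int) [1, 2, 3] :: [Int]`; in practice you would add one.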
I'm personally on a quest to "get" Haskell right now, but I really don't see what's so special about that or unique to Haskell. Languages have long accomplished what `read` does by simply casting/coercing the string into the desired type.
In python, for example, I'd call int() on the input strings I want to turn into integers.
The point is that you use a single function for any type. So in Python you'd have to do int() or float() or customType()... In Haskell, all of these would be just `read`. The type system can figure out which instance to use for you, without your having to specify it. Moreover, it's also trivial to write a function that works on any readable type, something hard (although not entirely impossible) to do in Python.
This makes it much easier to use: whenever you want to get any type from a string, you just read it. This is just like being able to print values of any type, except for parsing.
This can also be used with constants rather than functions. So maxBound is the maximum value for any bounded type. In Python, the closest you can get to that is something like float.maxBound. (Except, apparently, it's actually sys.float_info.max.)
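A minimal sketch of such a "works on any readable type" function (readBoth is a made-up name):

```haskell
-- Generic over any readable type: the caller's context picks the
-- parser. One definition serves Int, Double, custom types, and so
-- on, with no casting and full static type safety.
readBoth :: Read a => String -> String -> (a, a)
readBoth s t = (read s, read t)
```

Used as `readBoth "1" "2" :: (Int, Int)`, or with the type inferred from how the result is consumed.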
As I mentioned, this also lets you define new numeric types that can still use the same literals. For my most recent project, I needed 18-bit words. I could do this and still write expressions like `x + 1` using the Word18 type. Moreover, it would be very easy to make my code generic over the exact type of number used--this would make it possible to use numbers of different sizes or even something more exotic like random variables. (It happens to be tricky because some of the semantics I was working with rely on having exactly 18 bits, but that's an issue with the domain and not with Haskell.)
In another language, I would either have to use the normal int type and make sure to always keep track of the overflow myself or I would have to wrap every literal in a function that turned into an 18-bit word.
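A rough reconstruction of what such a Word18 type could look like (this is my sketch, not the commenter's actual code): fromInteger is the hook that lets a bare literal like the 1 in `x + 1` mean "wrap this into 18 bits".

```haskell
-- Hypothetical 18-bit word: all arithmetic wraps modulo 2^18.
newtype Word18 = Word18 Integer deriving (Eq, Show)

wrap18 :: Integer -> Word18
wrap18 n = Word18 (n `mod` (2 ^ 18))

instance Num Word18 where
  Word18 a + Word18 b = wrap18 (a + b)
  Word18 a - Word18 b = wrap18 (a - b)
  Word18 a * Word18 b = wrap18 (a * b)
  abs               = id
  signum (Word18 0) = Word18 0
  signum _          = Word18 1
  fromInteger       = wrap18   -- this is what makes literals work
```

With this instance, `x + 1 :: Word18` type-checks unchanged, and overflow is handled once in wrap18 rather than at every call site.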
So the special quality is being able to dispatch on the return type of an expression rather than on the types of the arguments. I think this is very special indeed and extremely useful. I hope this clarifies everything.
That does make more sense, and I appreciate the more thorough explanation. It seems a bit ironic though, that Python makes you more specific and certain about the output type than Haskell!
How so? Inferring which implementation of "read" to use based on the types is exactly the same spirit of inferring which implementation of "length" to use. Haskell can maintain consistency in both, but dynamically typed languages must revert to explicitly choosing the implementation in the former.
Yes, it goes nicely with that spirit. But it doesn't go nicely with "have a well-specified return type for all functions that you know in advance", as "read" can return anything. It's great for languages to pick the right implementation of "length" on the fly, but the point is, they all return integers. "read"? Who knows what type you'll get back in Haskell, the "you must specify [or at least plan out the] output type" language? There, it seems to break the trend.
By that broad a definition of "a well defined polymorphic type", so does every function in every language, at least those that can say "everything's an object/function/data/etc!"
show :: Show a => a -> String
read :: Read a => String -> a
And there's also a law that connects their behavior (for well-behaved instances):
read . show = id
(The converse, show . read, holds only up to formatting, since many different strings parse to the same value.)
There's nothing special about the position of the polymorphic value, it can be the argument or result. In either case, it is fully statically typed, and you never need to "downcast" it to actually use it. There is complete type safety.
Note that this is based on correcting a mistake of OO languages: it separates the passing of the function-table parameter from the passing of the values. This allows passing a function table for any position in the type, and not just for the first argument's type.
Having something like:
Object read(String x) { .. }
Is entirely different for two reasons:
* The implementation isn't chosen by the type
* You will need to eventually convert the "Object" type to the specific type you need.
In Haskell, when you write:
read :: Read a => String -> a
It is actually a shorthand form for:
read :: forall a. Read a => String -> a
The "forall" is called "universal quantification" and here it is like an implicit parameter "a" (think, template <a> parameter). Caller gets to pass in/choose any "a" they want, and as long as there is a valid Read instance for that type, it will work.
However, you could also write (in pseudo-syntax):
read :: exists a. Read a => String -> a
This is called an "existential quantification", and it is not like an extra parameter "a", but like a tuple: (a, Read a => String -> a). It means: "There exists a type 'a' such that this will be the result type of read". i.e: The caller does not get to choose which type "a" it is, but instead the caller gets a value of an arbitrary, unknown type that happens to have a Read instance.
The only way to make any use of a value whose type is existentially quantified like that is to unsafely "cast" it to the type you want, and hope this is the correct type actually returned by "read" here.
This is of course nonsense in Haskell, and nobody would ever do that. However, it is very typical OO/Java code.
Whenever a Java function returns an Object, it is basically returning an existentially quantified variable, and the caller needs to correctly "cast it down" to the correct type.
Instead of repeating the same thing over again with more words, perhaps you can explain why it's "in the spirit" of Haskell not to know the return type (except with the vagueness of "Object") of a function in this one case.
There's a difference between a type like Object and a polymorphic type. This is easy to see with generics in a Java-like pseudocode:
public foo(Object o) { ... }
is very different from
public foo<A>(A o) { ... }
In Haskell, you always use the second style of function--there is no sub-typing, so there is no real equivalent to the Object type in Java.
We can imagine something similar for read. If we didn't know anything about the type, it would look something like this:
public Object read(String s) { ... }
instead, it's actually something like this:
public A read<A>(String s) { ... }
So whenever you use read, you would be specifying the type it returns. I imagine it would look something like this:
read<Integer>("10") + read<Integer>("11")
This is exactly how read works in Haskell. The important difference, however, is that the type system can infer what type read is supposed to be. So the above code snippet would look like this:
read "10" + read "11"
If you made the types explicit, it would look like this:
(read "10" :: Integer) + (read "11" :: Integer)
So you always know what type read has when you use it. But what is the type of read itself? The Javaish version looked something like A read<A>(String str). The important part is the generic A: it's a polymorphic type variable. In Haskell, the type is similarly polymorphic: String -> a.
Of course, the type isn't quite this general: you can only read things that you have a parser for. In the Java-like language, it would probably look roughly like:
public A read<A extends Read>(String str) { ... }
In Haskell, we do not have any concept of "extending" a type: there is no sub-typing of any sort. Instead, we have typeclasses which serve the same role, so the type ultimately looks like this: read :: Read a => String -> a.
Hopefully this clarifies how you know what type read has. Really, it's no different from any other class like Show. There is a very clear parallel between show and read:
read :: Read a => String -> a
show :: Show a => a -> String
Being able to take advantage of this sort of symmetry in the language is extremely useful. My favorite example is with numeric literals, which are polymorphic:
1 :: Num n => n
In my previous Java pseudocode, this would look something like:
1<A>
and would be used like:
1<Integer> + 1<Integer>
2<Double> * 3<Double>
Of course, this is hideous, which is why type inference is so important.
I haven't used Rust very much yet, but my impression is that they can dispatch on the return type at the very least. So they can have a Num typeclass. Perhaps they're limited in some other ways, but it sounds like a good start.
What do you mean add two of the same things together exactly? If I'm not mistaken, what you are talking about is possible but I don't totally understand what you are saying. Could you provide an example?
The types of polymorphism are different. Haskell's parametric polymorphism is compile-time polymorphism, Go's interfaces are run-time polymorphism.
If we draw a parallel to C++, Haskell's polymorphism is like templates, Go's interfaces are like abstract base classes (except that they use compile-time duck typing).
For example, consider the (+) function in Haskell:
(+) :: Num n => n -> n -> n
If you use an Integer as the first argument, the second argument must also be an Integer, as well as the return type. You can observe this by currying, binding only the first argument.
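A sketch of that currying (addTen is a made-up name):

```haskell
-- Partially applying (+) to an Integer pins down the rest of the
-- chain: once the first operand is an Integer, the remaining
-- argument and the result must be Integer too.
addTen :: Integer -> Integer
addTen = (+) 10
```

Trying `addTen 1.5` is a compile-time error, because the single type variable n in (+)'s signature has already been fixed to Integer.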
In Go, you could write an interface Num and define a function:
func foo(a Num, b Num) Num
However, if float64 and int both implement the Num interface, you could also pass foo a float64 and an int. And it would return something that conforms to the Num interface, but what type is actually constructed depends on the implementation of foo.
In other words, type classes have relatively little to do with Go interfaces, aside that they both implement a (different) form of polymorphism.
It's possible to make something akin to Go's polymorphism in Haskell, but you'd need to hide the type parameter of the typeclass using existential quantification.
data NumBox = forall n. Num n => MkNumBox n
You can now use the NumBox for runtime polymorphism:
Prelude> :type [1::Int, 4.4::Double]
[...]
Couldn't match expected type `Int' with actual type `Double'
[...]
Prelude> :type [MkNumBox (32 :: Int), MkNumBox (22.32 :: Double)]
[MkNumBox (32 :: Int), MkNumBox (22.32 :: Double)] :: [NumBox]
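To actually use the boxed value, only the class methods are available, dispatched at runtime through the stored dictionary, much like a call through a Go interface value. A variant that also carries Show makes this concrete (ShowNumBox and describeDoubled are invented names, extending the NumBox idea above so results can be displayed):

```haskell
{-# LANGUAGE ExistentialQuantification #-}

-- Like NumBox, but the constraint also carries Show so we can
-- render the result after operating on the hidden type.
data ShowNumBox = forall n. (Num n, Show n) => MkSNB n

-- All we may do with the boxed value is call Num/Show methods:
-- the concrete type is chosen at the MkSNB construction site.
describeDoubled :: ShowNumBox -> String
describeDoubled (MkSNB n) = show (n + n)
```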
> If we draw a parallel to C++, Haskell's polymorphism is like templates
It's important to note that even if Haskell's classes feel like C++ templates they are actually implemented more like abstract classes. In C++ you essentially create a new function for each template instantiation. In Haskell the function is passed an additional pointer telling the function how to implement the class. Kind of like a vtable pointer except not bound to the actual object. This has performance implications, so it's an important point to remember when comparing the languages.
GHC is pretty good at specializing type-class-using functions for particular instances, e.g. if you have
fac :: (Num a, Eq a) => a -> a
fac 0 = 1
fac n = n*fac (n-1)
main = do
  x <- readLn :: IO Int
  print (fac x)
it will create a specialization (without any indirect calls)
fac_Int :: Int -> Int
and call that (in fact, in this case fac_Int will even be implemented by a worker function of type Int# -> Int# (unboxed ints)).
If you don't want to rely on this automatic optimization, you can always add a pragma {-# SPECIALIZE fac :: Int -> Int #-}.
(I used a recursive example because a non-recursive function would usually simply be inlined, avoiding any indirect calls too, assuming the call site is monomorphic).
That's a good point; however, the important aspect of typeclasses is that they're type-checked, not that they're compiled via monomorphization. Also, I believe the way typeclasses are compiled is at the discretion of the compiler, so typeclasses may actually be compiled in the same way as C++ templates. Another point is that monomorphization isn't always possible, because a function could be run with an infinite number of types.
He's talking about the inability to specify an interface that specifies functions that take two arguments of the same type. Go can't do that.
You can sometimes work around it. For instance consider sorting. The natural approach is to specify a type that allows comparisons. Go can't do that. But in go you can have a collection which has a function Less(i, j int) bool that takes in two integers and compares the objects at those positions (which in turn just happen to be of the same type).
But this is a limited work around, and there are plenty of cases where you can want something more flexible.
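For contrast, here is how a Haskell signature states the "two arguments of the same type" requirement directly (maxOf3 is a toy example):

```haskell
-- 'Ord a => a -> a -> a -> a' promises that all three arguments and
-- the result are the SAME type a. This is the guarantee the Go
-- interface above cannot express, which is why its sort API falls
-- back to comparing by index pairs.
maxOf3 :: Ord a => a -> a -> a -> a
maxOf3 x y z = max x (max y z)
```

Mixing types, e.g. `maxOf3 (1 :: Int) 2.0 3`, is rejected at compile time rather than producing a value of unknown dynamic type.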
I'd say a healthy job market is also something anyone would want for their platform, and the more teams pick it up the more likely it would be for you to find a good job working with some stack you love.
Many programmers find it easier to understand something by trying to explain it. Haskell has a lot of foreign concepts, so you see a lot more articles from people trying to convey their "aha!" moment. Many of these are erroneous or overly exuberant but it's just because people get excited when they first understand something new. It probably does come across as proselytization--that may even be the intent--but if it helps you cope with it, know that the majority of these articles are first impressions from beginners getting their minds blown, and most of them don't stick it out.
The number of people present in the IRC channel is more of a sign that they have a very engaged and active community, which is a good sign for the language itself, for sure, but doesn't necessarily correlate to the general adoption of Haskell as a whole.
A measure like http://www.tiobe.com/index.php/content/paperinfo/tpci/index.... (by no means a perfect one, quite like speed benchmarks ironically) seems to indicate that Haskell still has a long way to go before reaching "mainstream adoption". But you're right about that statement, it was intended more as a humorous jab than a statement of absolute truth :)
I've seen far more people advocate Python than Haskell, and look at it now!
Of course, since Python is already popular and widely used, this has died down a bit since. That said, even on HN, where it seems almost everybody is using Python anyhow, I still see at least as many comments promoting Python as Haskell.
I still didn't take the time to learn CT (after 4-5 years of Haskell). It might be time to do so, though, because I keep seeing the awesome things people with a CT background apply it to.
However, you can get very very far in Haskell without ever doing or touching any CT.
>I still wonder to this day why Haskell programmers want their language to be loved so much.
I don't. I just talk about it so that some of the brighter people out there will try it out. This increases the pool of people for me to hire as haskell programmers that can get started right away instead of having to learn first.