Hacker News

I don't buy this argument. Erlang does have a steep learning curve, but it isn't the semantics of the language; it's understanding how to build and deploy OTP applications. Understanding apps, releases, clustering, Mnesia, process registries, circuit breaking, etc. all takes time. It will also cost you hours, and hours, and hours down the road when you have a large sprawling app with very little abstraction and no -specs, because Dialyzer is frustrating.

I was able to learn Rust in about a week by reading the O'Reilly book. Compared to C++ it's almost tiny (I have Bjarne's C++ tome and the Meyers books right next to my Rust books, actually), and unlike C++, the compiler will essentially teach you the language if you didn't learn enough from the book to write code that compiles.

Once you've put in the initial investment, Rust code is way easier to maintain, read, and scale. It's also easier to onboard people. Onboarding people to Erlang is hard, and it's hard to hire for. Rust, on the other hand, is familiar to all of our FP people who like Haskell and Erlang, and to our Java and C++ devs.

I've also found it extremely suitable for writing services. We have many Rust services running in production right now. We have a service toolkit for our cross-cutting concerns. We can run our IDL through a compiler and get a client and server that are on the order of 20x faster than their Erlang counterparts.



For the record, I would prefer not to get into a programming language flame war. Having said that, I will add the following: it's much harder to reason in Rust, as there are many concepts one needs to keep in their head (borrowing, lifetimes, pointers, etc.). Plus Rust is much more difficult to read. Whereas Elixir, and I emphasize Elixir over Erlang here, is much easier to reason in, concise, and simpler to read.

At the end of the day, use whatever language you prefer, just keep in mind that software needs to be (1) shipped, (2) maintained (by multiple people who read each other's code) and (3) evolved. I would also add (0) experimentation; before one even ships any code, one ought to easily experiment with various ideas.


I contend Erlang can be very hard to reason about. I've seen some extremely gnarly Erlang where the author didn't write a -spec and it was almost impossible to tell what shape a tuple parameter would take. In Rust the compiler enforces all of this for you. Rust gives you:

- Maintainability: Rust signatures not only tell you the types of arguments, but also their lifetimes. A signature in Rust is an extremely strong abstraction. Traits can further constrain types, making it essentially a game of plugging the right blocks into the right holes. Erlang is like having the blocks, but all the holes are under a tablecloth and you just have to guess how to fit them in.

- Correctness: Without a static type system, Erlang does very little for correctness. Rust also has pattern matching and enforces exhaustiveness. Not even Haskell does that. Lifetimes guarantee safety even for shared pieces of data.

- Speed: Obviously, Rust is a compiled, manually memory-managed language with near-C++ performance.
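To make the exhaustiveness point concrete, here is a minimal sketch (the enum and names are invented for illustration, not from the comment above): if a new variant were added to the enum, every match that doesn't handle it stops compiling.

```rust
// Hypothetical enum for illustration. If a `Retrying` variant were added
// later, the `match` below would fail to compile (error E0004) until it
// handled the new case: the compiler enforces exhaustiveness.
#[derive(Debug)]
enum ConnState {
    Connecting,
    Established { peer: String },
    Closed,
}

fn describe(state: &ConnState) -> String {
    // Every variant must be covered here, or compilation fails.
    match state {
        ConnState::Connecting => "connecting".to_string(),
        ConnState::Established { peer } => format!("established with {}", peer),
        ConnState::Closed => "closed".to_string(),
    }
}

fn main() {
    let s = ConnState::Established { peer: "10.0.0.1".to_string() };
    println!("{}", describe(&s));
}
```

This is the contrast with a -spec-less Erlang tuple: the shape of the data and the cases a function must handle are checked by the compiler, not guessed from usage.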

On every metric, Rust offers a lot more than Erlang. As someone who has spent years with Erlang, Rust is simply better for professional development.


Rust, however, does not offer nearly the same level of runtime introspection power that the BEAM does.

Coming from a company that uses Elixir heavily and has made a significant investment in Rust, I don't think we would ever use solely Rust for our distributed systems. However, we have rewritten some code that Elixir was too slow at in Rust and exposed it as a NIF on the BEAM, and that has worked well. (Blog post on that soon, hopefully.)

I do admit we are also going to be ditching Mnesia on one of our clusters for our own simpler in-house system (ETS replication with different consistency/netsplit guarantees for our use case), we've had to write our own cross-node process-monitoring solution (at peak we see 200M+ cross-node monitors on our cluster), and we've also had to overcome the limits of message fan-out on distribution (https://github.com/discordapp/manifold).

However, for operating at our scale (peak 9M ccu @ 5M events/sec fanned out to clients), we run a surprisingly small number of servers for our real-time system (~120).

EDIT: I can't reply to your post below, but I think the runtime introspection we rely on is not about OS-level metrics, but application-level introspection. Introspecting the state of processes, writing code in the REPL to debug issues within the cluster, benchmarking to find hot functions or where specific processes are spending a lot of their time, capturing traffic to replay on a test cluster to simulate production load: all of this becomes very trivial with the BEAM.


Yes, we can do that through flamegraphs. For any container running on our system, we can get the call stack of the running process, time taken, resources used, etc. We can attach to containers, proxy traffic, and so on, all through Kubernetes.

I said this earlier, but we've essentially separated operational concerns from our applications, and that opens us up to relying more on knowledge of Linux, which is easier to hire for, and we can reuse that knowledge and all our tooling with any other languages we want to use.


We run between 200k and 2M processes per BEAM VM. I don't know how it'd be possible to get metrics as precise as we need just from relying on Linux utilities. And in the same cluster, some processes, although identical in code, have dramatically different workloads.


We couldn't get enough introspection. Now we use Rust and we can use a much larger toolkit: BCC, kprobes, flamegraphs... When you use Rust, the OS is your machine. With Elixir you need to understand the BEAM, a slightly esoteric platform with a rather small community working on it.


> and enforces exhaustiveness. Not even Haskell does that.

Maybe it doesn't throw an error, but GHC does warn about it when you use the -Wall flag. I believe you can get the behaviour you want by turning on only that specific warning (I forget what it's called) and using -Werror.


Seems like you don't get it. In the comment above there is a suggestion that Erlang, being well researched and based on the right principles of FP, is good enough to build almost half of the world's 4G platforms despite having crappy syntax, a crappy VM, crappy everything but OTP itself. That is the point, and it is a big deal.

Rust is a mess of amateurish, overcomplicated, poorly understood, hype-driven crap. Tokio is utter bullshit (look at how Erlang or Go solve the same problems with an order of magnitude fewer lines of code), etc.

Rust, it seems, is repeating the story of Ruby, where a crowd of overly excited (for no reason) amateurs quickly (without understanding) piles up "solutions" to really hard problems, which have been researched by the best minds for the last 5 decades or so.

For example, all the concurrency bullshit could be boiled down to the well-understood concept of a software interrupt, hardware-assisted to be a lightweight isolated process. No sharing, no threading, no cooperative multitasking, no bullshit.

On the other hand, there are Streams and Futures, which have also been well understood and researched.

Finally, the Actor Model defines how to build distributed systems the right way, the way Mother Nature does (isolated entities communicating by message passing), which is at the core of Erlang and things like Akka.

Erlang and Go are the best examples of how small, uniform, and simple systems can be when based on the right principles and proper abstractions. Rust is the opposite of this.


> Rust is a mess of amateurish, overcomplicated, poorly understood, hype-driven crap.

Please be more specific about the amateurism.

> Tokio is utter bullshit (look at how Erlang or Go solve the same problems with an order of magnitude fewer lines of code),

No, they work in fundamentally different spaces.


Piling up features instead of reduction and unification is a clear sign of amateurism.

For example, ML's (and Erlang's) unification of bindings via unified pattern matching everywhere is a major achievement and a canonical example. Haskell's unified approach to typing is another. Scheme's "everything is an expression", and even Lisp's unified representation of code and data, are great discoveries of old times.

PL design is hard; design of good runtimes (OTP, Go) is even harder. Ignoring almost everything that was good and true in the PL field is definitely amateurism.

I don't even want to start on what kind of nonsense Tokio is. Universal event-driven frameworks are the same madness as J2EE. On the other hand, ports, typed channels, futures, or pattern-matching on receive, i.e. support for fundamental concepts in the language itself, is the right way.


Rust is a simpler language than Haskell. You can't write an OS in Haskell. Rust is a replacement for C and C++, and it's well suited for that. Rust embraces zero-cost abstractions, which none of the languages you mention do.

You need to wait a bit. Tokio is a low-level abstraction. People will build on top of it. Let's be real: it's the most promising language of late.

How long did it take for Erlang to mature? Rust has come insanely far in the little time it has had. Give it five more years; you'll be surprised.


You are comparing the learning curve of a complete ecosystem (Erlang with clustering) to a bare-bones programming language (Rust). Isn't this totally unfair? ;) Learning another framework for Rust (if it exists) to solve the same problems as Erlang (transparent clustering of everything) will add many hours too.


That is the language, though. Erlang is a DSL for writing massively concurrent, distributed programs. Rust is a general-purpose language that enables systems programming. Learning how to do either one of those activities with their respective language is all factored into my assessment.


Erlang's syntax can be learned in a few hours. The language is very small. How is that a "steep learning curve"?


Erlang the language is a small part of Erlang the platform. Erlang the language is simple; Erlang + OTP + BEAM VM, which is what Erlang actually is, less so.


I have been following this thread with interest. Wouldn't Go (over Erlang/Rust) be the ideal language here for scalable microservices? It comes with built-in concurrency, all the aforementioned container orchestrators are written in Go, and it has first-class support for gRPC. Very stable, very easy to learn, a terrific concurrency model, great libraries for network programming, productive right off the bat, and it compiles to container-friendly binaries.


Would you say that learning Erlang/OTP has given you transferable knowledge about building concurrent and distributed systems? Did you transfer any of that knowledge to your Rust designs?

Do you have an opinion about Scala/Akka?


I don't have experience with Scala/Akka. Erlang is a good language to learn clustering/concurrency, SMP, queuing theory, etc. Absolutely. If you learn about concurrency through Erlang, you'll have the right mindset. But you should also read Simon Marlow's book about concurrency in Haskell. You should learn pthreads and futexes so you can understand the building blocks of mailboxes and channels and higher-level patterns.
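As a hedged sketch of how those building blocks transfer to Rust (names and structure are mine, purely for illustration): a standard-library mpsc channel can play the role of an Erlang-style mailbox, with worker threads as the sending "processes".

```rust
use std::sync::mpsc;
use std::thread;

// Illustrative fan-in: N worker threads each send one message to a
// single receiver, the way Erlang processes send to a mailbox.
fn fan_in(workers: usize) -> Vec<String> {
    let (tx, rx) = mpsc::channel();

    let handles: Vec<_> = (0..workers)
        .map(|id| {
            let tx = tx.clone();
            thread::spawn(move || {
                tx.send(format!("hello from worker {}", id)).unwrap();
            })
        })
        .collect();
    drop(tx); // drop the original sender so `rx.iter()` can terminate

    for h in handles {
        h.join().unwrap();
    }

    // Draining the channel is the moral equivalent of a `receive` loop.
    let mut msgs: Vec<String> = rx.iter().collect();
    msgs.sort(); // arrival order is nondeterministic, so sort for display
    msgs
}

fn main() {
    for m in fan_in(4) {
        println!("{}", m);
    }
}
```

The difference from the BEAM, of course, is that these are OS threads with no preemptive per-process scheduler or supervision tree; the channel is only the lowest-level building block.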

I am a big believer that you should continually invest in learning and mastering new languages. Each language gives you a different perspective on how to solve problems.

But as far as the right tool to build professional software, there are caveats. Safety, correctness, speed, and maintainability are all important factors in choosing the right tools.

Edit: I should also say I've had plenty of interviewees who couldn't explain the difference between concurrency and parallelism, and whose knowledge of these concepts was limited to spawning a pthread and locking shared data. Needless to say, these people tend to do really poorly at explaining how to scale a distributed system.

Contrast that with the Erlang developers we interviewed, who think in terms of scale. Their answer is almost always something that could accept 10 requests/s or 10 million. Same thing with our Haskell interviewees. They'll answer our algorithms questions by writing out the types and deriving an answer with a single expression. I had an interviewer who was extremely confused by this. We hired the guy that confused him; not only does he understand concurrency, he also understands laziness, and we love lazy developers :)


> I've had plenty of interviewees who couldn't explain the difference between concurrency vs parallelism

Is there a good answer when it seems people don't even agree on what those terms mean and apply to in CS?


In computer science, concurrency refers to the ability of different parts or units of a program, algorithm, or problem to be executed out-of-order or in partial order, without affecting the final outcome.

Parallel computing is a type of computation in which many calculations or the execution of processes are carried out simultaneously.
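A small sketch of the distinction in Rust (the numbers and names are invented for illustration): the structure below is concurrent, because the two units of work are independent and their relative order doesn't affect the result; whether they actually run in parallel depends on the scheduler and the number of cores.

```rust
use std::thread;

// Two independent partial sums: a concurrent *structure*. On a
// multi-core machine the OS may run them in parallel, but the answer
// is the same under any interleaving.
fn concurrent_sums() -> u64 {
    let a = thread::spawn(|| (0..1000u64).sum::<u64>());
    let b = thread::spawn(|| (1000..2000u64).sum::<u64>());
    // Join order doesn't matter either; that order-independence is
    // exactly what the concurrency definition above describes.
    a.join().unwrap() + b.join().unwrap()
}

fn main() {
    println!("{}", concurrent_sums());
}
```

In short: concurrency is a property of the program's structure, parallelism a property of its execution.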


So much this. I don't see myself deploying anything big in Erlang anytime soon, but learning it (and building one or two small projects) gave me intuition about message-passing concurrency. This knowledge transferred very nicely to Android programming with Kotlin coroutines. My colleague was astounded to learn how easy it is to model concurrency with actors and channels and how much of it just works.


Very happy with Akka. It is fast (thanks to the JVM), has lots of built-in building blocks/modules, the documentation covers everything, and the community is very helpful.

https://akka.io/docs/


The JVM can't be really fast, since it does lots of context switching when doing I/O, with locking and busy waiting, JNI string conversions, useless buffer copying, etc.




