
> I keep reading in Rust surveys that Rustaceans just don't care that much about compile times enough to prioritize improving them.

I am interested in how you got that impression, because at least in our official surveys, it's often one of the most-requested improvements to Rust, and it's something that we're constantly working on improving. Still a ton of work to do though!

For what it's worth, my workflow is closer to yours than "compile twice a day." And if you're using rust-analyzer, by default, it compiles the code every save!



Here's exactly what I checked when I came to this conclusion a few weeks back, the most up-to-date state of the compiler roadmap that I could find: https://rust-lang.github.io/compiler-team/minutes/design-mee... I see 16 top-level goals on there for next year (under the Goals section). The only one that seems related to compile speed is continuing to work on incremental compilation (no new initiatives?), and even there, the only action item for the entire year is to create a working group.

I love rust-analyzer! But I'm not sure what you mean by "it compiles the code every save." I love how fast it can type-check my code, but I wasn't aware it could actually compile my code into something I can run; I thought it was just an LSP provider.


Ahh, I see. Yes, those are meeting minutes, so they're very inside baseball; it makes total sense that you'd get this impression from them.

Before we get into that, to answer the other question:

> But I'm not sure what you mean by "it compiles the code every save."

Rust-analyzer (by default) runs "cargo check" on save. cargo check does everything except codegen, so it won't give you something you can run, but it does invoke the full set of compiler analyses.

Now, what it does with the type-check stuff is the key to understanding what you're missing from the compiler team roadmap, funny enough. So rust-analyzer is actually going to end up merging with the rustc codebase eventually, if all things go to plan. Basically, rust-analyzer is slowly re-implementing the compiler from the outside in. It's doing this because the best way to get the largest win on compile times is to completely re-architect the compiler. This comment is too long, so I won't get into too many details, other than to say that it's going to be less like the Dragon Book and more like C#'s Roslyn, if that means anything to you.
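To make the Roslyn-style idea a bit more concrete: the re-architected compiler is demand-driven, answering memoized "queries" about the code instead of running one big batch pipeline. Here's a toy sketch of that shape (rust-analyzer's real machinery, the salsa crate, is far more sophisticated; all names here are invented for illustration):

```rust
use std::collections::HashMap;

// A toy demand-driven query cache: a derived fact about a file is
// computed at most once and then served from the cache.
struct QueryCache {
    line_counts: HashMap<String, usize>,
    computations: usize, // how many times we actually did the work
}

impl QueryCache {
    fn new() -> Self {
        QueryCache { line_counts: HashMap::new(), computations: 0 }
    }

    // "Query" the line count of a file's text.
    fn line_count(&mut self, name: &str, text: &str) -> usize {
        if let Some(&n) = self.line_counts.get(name) {
            return n; // cache hit: no recomputation
        }
        self.computations += 1;
        let n = text.lines().count();
        self.line_counts.insert(name.to_string(), n);
        n
    }
}

fn main() {
    let mut cache = QueryCache::new();
    let src = "fn main() {\n    println!(\"hi\");\n}\n";
    cache.line_count("main.rs", src);
    cache.line_count("main.rs", src); // answered from cache
    assert_eq!(cache.computations, 1);
}
```

The payoff of this style is that when only one file changes, only the queries that depend on it need to be recomputed, which is exactly the property an IDE (and a faster incremental compiler) wants.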

It's doing this by taking advantage of a process called "librarification," which is extracting stuff from the compiler into re-usable libraries. You can see this on the notes when they talk about chalk; chalk is the next-generation trait resolution system. This is integrated into rust-analyzer, and into the compiler. So slowly, bit by bit, things are being re-architected and integrated in a way that will be much, much better in the future.

So this is a massive, massive project that touches everything, and so there's nothing spelled out in the minutes that says "this is for compile times" because the folks involved already have this context, basically.

And so yeah, that's the big project. There are also contributors who, while this larger work is going on, are working on individual PRs to make things faster. See https://blog.mozilla.org/nnethercote/, where one of the largest contributors in this regard writes about the work he's doing. But that doesn't appear in the minutes either; there's no need for a plan or coordination there.

... does that all make sense? I am thinking maybe it would be good for us (to be clear, I mean the Rust project, I am not on the compiler team and am not doing this work) to like, actually write a blog post on all of this...


Mostly, that makes sense. I think the thing that doesn't make sense to me is that rust-analyzer seems to me to be an LSP server; my mental model of it has nothing to do with codegen. And I thought codegen and linking were the slow parts in Rust (cargo check runs so fast!), so how could those gains be brought back to rustc? I'd bet other people have this misconception too; maybe it should be highlighted in the rust-analyzer README on GitHub.

If rust-analyzer will eventually introduce those gains to rustc, that is fantastic news, and I'll be watching it with great interest! The improvement from RLS to rust-analyzer is exactly the type of improvement that brought Rust from the "fascinating tech demo" to "I could actually see myself using this day-to-day."

And yeah, I'd love to read a blog that went into more detail.

> working on individual PRs to make things faster. See https://blog.mozilla.org/nnethercote/ for one of the largest contributors in this regard

Yeah, I read through that too. He's my hero! :-) But the feeling I got when reading it was: "fast compile times" really needs to be part of the DNA of a language. Let me explain what I mean. Some languages, like Go and TypeScript, have this in their DNA, which means every design decision and every addition to the language is considered seriously through the lens of compile-time speed, and vetoed if too costly. But with Rust, the fact that there's one guy writing a blog post about some incremental wins he managed to chalk up just... doesn't seem like it's part of the DNA. If it is, why is it just one guy, and why do a lot of his changes seem more like incremental wins than the big sweeping changes I'd expect to be necessary? I could definitely be wrong here (it sounds like I am, and rust-analyzer is that big sweeping change).

Thanks for all your great responses, by the way. I really appreciate it!


> Thanks for all your great responses, by the way. I really appreciate it!

Any time. :D

A couple more brief comments:

> rust-analyzer seems to me to be an LSP

Remember that LSP is just a protocol; something (or someone) has to actually produce the answers. The protocol can carry "please draw a squiggle here," but something has to actually compute "line 1, column 10." Doing that requires semantic analysis, which is what a compiler does.
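Concretely, the squiggle reaches the editor as a "textDocument/publishDiagnostics" notification, and every number in it had to be computed by the server beforehand (a hand-written sketch, not taken from any real session; note LSP positions are zero-based):

```json
{
  "jsonrpc": "2.0",
  "method": "textDocument/publishDiagnostics",
  "params": {
    "uri": "file:///project/src/main.rs",
    "diagnostics": [
      {
        "range": {
          "start": { "line": 0, "character": 9 },
          "end": { "line": 0, "character": 12 }
        },
        "severity": 1,
        "message": "mismatched types"
      }
    ]
  }
}
```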

> And I thought that codegen and linking was the slow part in Rust

It is the slowest part of the current architecture, but that doesn't mean that the current architecture is the best possible one.

The RLS invoked the compiler, and then examined its output to say "line 1, column 10, please". And you said you saw the improvement with the architectural switch to rust-analyzer. Same thing.

> "fast compile times" really need to be part of the DNA of a language.

You are correct that, when push comes to shove, if there's a tension between, say, runtime speed and compile-time speed, Rust will choose runtime speed. Rust will not have compile speed as high up on the list of concerns as Go does. But it is important enough that we don't let major regressions happen, and we actively pursue improvements where possible. https://perf.rust-lang.org/, for example, tracks compiler performance over time for exactly this reason.

> why do a lot of his changes seem more like incremental wins than the big sweeping changes I'd expect to be necessary?

Well, again, it's not always either or. He's doing the incremental thing, and others are doing the big sweeping changes thing. His improvements land nearly every release, but the bigger projects take a lot longer. They work in tandem to make things better than they were before.


Maybe with some heuristics, you could hide some latency by beginning to compile before saving.


Maybe. At the rate I hit :w, probably not :)


One of the first computer systems I used frequently ran Windows ME. It used to crash (BSOD) extremely frequently (depending on what you were doing, as often as every 20 minutes), and any unsaved work would be lost.

I have therefore developed a reflex of hitting Cmd-S every time I finish typing. I actually have to put in conscious effort to stop doing this when using software where saving is slow.


I used to work on some 3D printing related software, and so I had to type “Objet” a bunch and it was impossible to not type “object.”


How do you have Vim (or neovim) set up to do that and report errors inline?


I use VS Code with the vim plugin.

I hear you can get this working with vim/neovim but I haven’t done it myself.


I use ALE, which I believe includes Rust by default.

https://github.com/dense-analysis/ale
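For anyone curious, a minimal vimrc sketch of that setup (the 'analyzer' and 'rustfmt' names are ALE's; this assumes rust-analyzer and rustfmt are installed on your PATH):

```vim
" Use rust-analyzer for diagnostics and rustfmt for formatting on save
let g:ale_linters = {'rust': ['analyzer']}
let g:ale_fixers = {'rust': ['rustfmt']}
let g:ale_fix_on_save = 1
```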


How I did it: I installed vim-plug, then CoC, then the Rust extension.
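A rough sketch of what that looks like in a vimrc (assuming vim-plug is already installed; the Rust extension is added afterward from inside Vim):

```vim
" Pull in coc.nvim via vim-plug
call plug#begin()
Plug 'neoclide/coc.nvim', {'branch': 'release'}
call plug#end()

" Then, once inside Vim, install the rust-analyzer extension:
"   :CocInstall coc-rust-analyzer
```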



