
Such a boring release. That's why I love Rust.


For me the best releases will be when it matches D/Delphi/Ada/Eiffel/.NET Native compile times and cargo supports binary dependencies.

The language itself is already quite good, despite some possible improvements to borrow checker ergonomics (e.g. callbacks).


Binary deps are on the Cargo team’s plans for the year, and compile times are always a focus. Working on it!


Looking forward to them. :)


> For me the best releases will be when it matches D/Delphi/Ada/Eiffel/.NET Native compile times [..]

Any ideas on when this will be? There has been talk about faster compile times for years now without that much apparent progress.


> There has been talk about faster compile times for years now without that much apparent progress.

Really? What makes you say that? Have you tried it?

    $ git clone git://github.com/BurntSushi/ripgrep
    $ cd ripgrep
    $ git checkout 0.4.0
    $ time cargo +1.12.0 build --release
    
    real    1:09.13
    user    2:06.08
    sys     2.839
    maxmem  359 MB
    faults  1292
    $ cargo clean
    $ time cargo +1.34.0 build --release
    
    real    22.484
    user    2:32.66
    sys     3.380
    maxmem  702 MB
    faults  0
That's a >3x wall-clock speedup over the past 2.5 years on a cold start. That's pretty good.


A 3x speedup is a definite improvement. I don't use Rust regularly, so I have been relying on community news for updates, and while it has mentioned planned work to improve compilation speed a few times, it hasn't really covered any of that work landing (that I saw, anyway).

Also, cold-start speed is not really what I care about. I am more concerned with compiler speed while working and running tests: I find that if a compiler is too slow, it breaks my flow while testing changes. I haven't seen much mention of the improvements from incremental compilation in a while. The last thing I read was the 2017 blog post [1] during the beta, which showed only modest improvements; more recently I only see talk of how it still needs a lot of work [2].

[1] https://internals.rust-lang.org/t/incremental-compilation-be...

[2] https://nicoburns.com/blog/rust-2019/#compile-times-especial...


Incremental compilation was a big improvement over the status quo.

I don't follow compiler performance development closely. I'm just responding to clarify that there have been performance improvements; they have likely just built up over time. I don't think any one specific change dramatically improved things.


The memory doubled?


I don't know about that specific measurement, but memory usage of the Rust compiler has not doubled in general. If it did, we'd notice immediately, as the script crate in Servo would probably OOM.


I wouldn't be surprised if it was due to parallelism, e.g., compiling multiple codegen units in parallel, or even better parallelism at the Cargo level. My stat is just the maximum memory usage reported at any point in time, so that's just a guess. ripgrep itself uses more memory when using parallelism, simply because it allocates more buffers.


Not sure if it’s the case here, but decreasing time complexity can often require a corresponding increase in space complexity.


We can only speculate about the lower bounds of compile times, but making things faster takes a lot of work. Rustc is a huge codebase, and making general leaps requires large architectural changes. We’ll just keep chipping away. It’ll take some time.

There has been a lot of progress, it’s just been slow and steady. For example, since the first of 2018: https://perf.rust-lang.org/compare.html?start=2018-01-01&end...


This will probably be a big win if/when it lands [0] (adding the ability to replace LLVM with Cranelift).

[0]: https://github.com/bjorn3/rustc_codegen_cranelift/issues/381


This is only for debug builds, though. Cranelift isn't going to offer good enough performance for a release-mode binary anytime soon (and probably never will).


If Cranelift makes debug builds much faster, then that's still a huge win. In my line of work, I do tend to compile things in release mode a lot purely because I often need to debug performance related problems, and for that, debug mode doesn't work. However, most of the test suites in my crates are built and run in debug mode. For example, the test suite for regex-syntax is fairly large, and it can take several seconds to build after making a change. Incremental compilation helped a lot with this, but there's still an annoying waiting period to run the tests. I'd be very happy to see Cranelift reducing the time it takes to run tests.
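For debug-mode test binaries that run too slowly, one partial mitigation available today is raising the optimization level for dev (and therefore test) builds. This is just a sketch of a Cargo.toml tweak; whether the compile-time/run-time trade-off pays off depends on the crate:

```toml
# Cargo.toml: apply light optimization to dev/test builds.
# opt-level = 1 keeps compile times reasonable while making
# debug-mode test binaries noticeably faster to run.
[profile.dev]
opt-level = 1
```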


I 100% agree with you that Cranelift is a big deal (last year I had to upgrade my desktop CPU just to be able to build, in a reasonable amount of time in debug mode, the Rust project I'm working on − granted, my CPU was still a Core 2 Duo at that point).

Even for things where debug mode is too slow, Cranelift could be a game changer, since it promises to produce faster binaries than LLVM does in debug mode (I don't know if it will be fast enough for your use cases, though).

I just wanted to point out that Cranelift won't solve the compile-time issue all by itself.


We will soon have async/await syntax, which will replace some of the use cases of callbacks and (I assume) reduce the borrow bookkeeping tedium.


I don't think it covers GUI callbacks like those in gtk-rs, which access internal widget data.


I think that's likely. async/await doesn't appear to map directly onto traditional OO GUI framework callbacks, just as Rust doesn't tend to map well onto traditional OO in general. I believe async/await will open up new architectural patterns that should be as ergonomic as traditional OO callbacks for GUIs, but these won't map well onto existing code or existing frameworks.


There have been attempts, like relm, to map futures onto GTK. That means it would work with async/await too, as they’re fundamentally sugar for futures.


Even those are basically Relm components that happen to render via GTK. You can't really import a plain `GObject` and implement a traditional `fn on_click()` without having to worry about internal mutability and the associated borrow checking complexities.


Why binary dependencies, what for?


Compile speed: not recompiling the same dependencies all the time.

Many commercial use cases also require distributing libraries in binary form. The Rust community can either try to win those customers, or let them go and leave them to keep using the languages that fulfil such use cases.


> Compile speed: not recompiling the same dependencies all the time.

Sounds like caching these would be a simpler solution.

> Many commercial use cases require distribution of binary libraries [...]

Good use case, and in this case another cache would solve the issue as well. Defining a protocol for binary caches, with the ability to plug in your own, could address this nicely; the same solution could help with the previous case too.

BTW, if you are in either case, have you looked into Nixpkgs[1]? It might be able to do both; the basic capabilities are there, though I'm not sure if the Rust infrastructure[2] already provides it.

[1] https://nixos.org/nixpkgs [2] https://nixos.org/nixpkgs/manual/#users-guide-to-the-rust-in...
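For the compile-speed half of this, one tool that exists today is sccache, which caches compiled artifacts (locally or in shared storage like S3) by wrapping rustc. A minimal sketch, assuming sccache is installed:

```toml
# ~/.cargo/config: route rustc invocations through sccache so
# repeated builds of the same dependencies hit the cache.
[build]
rustc-wrapper = "sccache"
```

This doesn't address distributing closed-source binary libraries, but it does avoid recompiling identical dependencies across projects.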


And if you have to drop down to looking at the disassembly, distributing binaries helps ensure everyone's looking at the same disassembly. Reasons for this include investigating codegen bugs, figuring out optimization issues, ensuring functions involved in cryptography aren't leaking information through timing side channels, etc...


Personally I am very excited about TryFrom, I have been waiting for that forever it seems.


Agreed! I definitely have a handful of hand-rolled `try_from` definitions that I'm excited to erase from existence.


For the most part, that's true. But I am so excited that custom Cargo registries have finally landed!

