I've been writing a parser generator in Rust as a way to learn the language, and it's been a mixed journey of being very impressed, and very frustrated. A large part of my frustration comes from what seems like a negative feedback loop between algebraic data types and the borrow checker. I often find myself with a piece of code that looks clean and simple, using the Option and Result monads to deal with errors in an elegant way, but then I can't compile it because of issues with borrowing. Ultimately, I usually have to break apart the nice structure that feels natural and reimplement it as either a sequence of match or "if x.is_ok()" statements, which just feels... wrong somehow. It becomes tempting to just punt on the error handling and call unwrap() on everything just to get it working for now, which leads to problems later on. It's so close to being incredibly expressive and great for systems programming, but it isn't quite all the way there (at least for me, at this point).
I think that higher-kinded types will probably help a lot with this if they make it into the language, and it's likely that there's a way to get around the errors and get the full benefit from the algebraic data types and monadic functions that I just haven't found yet. The language is enough of a level-up from my C++ days that I'm sure I'll stick to it, even if those things aren't true.
I thought this quote from the article really nailed it regarding my recent experience:
"Expressiveness or elegance is not a goal of Rust. It’s certainly not bad in this regard, just not as wonderful as you may wish if you care it a lot."
I guess it depends where you come from; coming from C, Rust feels very elegant. I miss Option a lot when I go back to writing C, for instance.
Maybe if you write your code first and then modify it to get through the borrow checker, it just means you're not yet familiar enough with the language's patterns (and given how young and unstable it is at the moment, I doubt anybody can pretend to know idiomatic Rust at this point). Maybe if you took ownership constraints into account while designing your application, you would end up with more elegant code? It definitely adds a cognitive load though, that's for sure.
As for the "match", "is_ok" and "unwrap" noise, it used to be a problem in my code as well but since refutable lets have been added to the language it's not really a problem anymore. For instance in my code I used to have:
    match map::in_range(addr, map::ZERO_PAGE) {
        Some(off) => {
            // Do stuff with off
        }
        _ => (),
    }
Or:
    let off = map::in_range(addr, map::ZERO_PAGE);
    if off.is_some() {
        let off = off.unwrap();
        // Do stuff with off
    }
Neither of which is elegant or nice looking. With refutable lets I can write:
    if let Some(off) = map::in_range(addr, map::ZERO_PAGE) {
        // Do stuff with off
    }
For is_ok I use the try! macro which is admittedly a bit hackish but works well in practice. In the end I never use unwrap()/is_ok() outside of test code where I don't want to do proper error handling and let the runtime panic if something goes wrong.
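To illustrate, here's a minimal sketch (parse_pair is made up for the example; each try! replaces an is_ok()/unwrap() pair, and newer Rust spells the same thing with the ? operator):

    use std::num::ParseIntError;

    // try!(expr) evaluates to the Ok value, or returns early from the
    // enclosing function with the Err value.
    fn parse_pair(a: &str, b: &str) -> Result<(i32, i32), ParseIntError> {
        let x = try!(a.parse::<i32>());
        let y = try!(b.parse::<i32>());
        Ok((x, y))
    }

    fn main() {
        println!("{:?}", parse_pair("1", "2"));
        println!("{:?}", parse_pair("1", "nope"));
    }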
I just rewrote one of my parser functions using that, and it literally eliminated 90% of the code and improved error reporting at the same time. Very cool, thanks for the tip.
Definitely one of the things that frustrates me about Rust is that I feel like I can't write "pretty" code. Multiple times it was even the reason I gave up learning Rust. Clarification: I think my definition of pretty comes from a certain feel/comfort you have when reading Ruby code (which I have written/read for only a couple of months).
Meanwhile, to continue learning Rust, I've done myself a favor. I just told myself: "Rust is worth writing but it isn't for pretty code. Sorry!".
The other, related, mental block I haven't been able to overcome yet: in Java/Python/Ruby/(even C++), the runtime "seems so heavy" that I don't care when I "waste" allocations or memory. However, the fact that it says on Rust's home page "featuring zero cost abstractions" makes me feel guilty/incompetent every time I introduce a costly abstraction. Again a reason I stopped learning Rust a few times.
Sadly, this feeling of inadequacy/guilt extends even to applications, which is ridiculous, since if your application isn't doing expensive things, you probably have a very simple application. This definitely plays a part in why I can't write code as elegant as Rust will allow.
For the record, sorry, I don't usually like critiquing something I don't have a solution for. However, I thought this was worth bringing up. Mostly I needed to vent, but does anyone else feel similarly? Am I really the only one?
If other people are feeling the same way, I hope it doesn't become a habit for us to feel the only way to write good Rust is to allow it to be inelegant. It would be disheartening if the next systems programming language was memory safe but still no one enjoyed reading it. I'm curious what people's thoughts are.
Mostly joking, but maybe just a line on the website or guide saying: "We take care of the performance so you can write expensive applications"?
Yes, I think I know exactly where you're coming from. I've gone through the same cycle of starting and stopping with it, for the same reason. I will sometimes spend hours looking at a simple function thinking "There has to be a way to get this to work," when I would have long since put the sequence of if-else statements in a C program, or just allocated some heap memory and been done with it. It's like I want Rust to be Haskell, but without the GC, and I feel like I'm dumb for not being able to make it work that way. It seems like the abstractions Rust makes available should be composable in a way that is prevented by the memory model. The language features are mostly there, but it's just tantalizingly out of reach and it seems to trigger my "puzzle solving" response.
Hear! Hear! I often have to persuade myself: “It’s OK to use String/box, buddy. There are more wasteful programmers in the world. No need to feel guilty about not using references as much as you can,” etc. But still, you can’t stop thinking there may be a better way to do this...
Very interesting point. I also feel a similar need to "overoptimize" my code when I program something in C. This is especially notable when working with strings: in high-level languages I use string concatenation and splitting willy-nilly, but in C all those strcats and for loops "feel" suboptimal, so I have a tendency to write hairy code full of pointers and strtok.
Developers are a strange bunch. I've never heard someone say, "Wow, look at that optimized code! I can't wait to work with this dude!". However, I have heard many times, "Working with this person is great. They do everything from expressive code, to small logical commits to make my life easier."
It's almost as if many of us would rather be seen as an elite, rather than seen as a pragmatic developer.
I usually just tell myself "bad code gets run". Optimizing string concatenations and the like is something I can do after the code is done, doesn't leak, and has enough form that I can write function tests around it.
I think I've never had that issue since I started with C -> C++ -> Java -> JavaScript -> Python -> Scala -> Ruby -> ...
It was always more expensive (performance-wise) to be more expressive. And I suppose since the whole community is making that performance compromise, it never really affected me.
Meanwhile, in Rust, the majority of the community is looking for the best, most performant API possible, and it just seems overwhelming to always try to be a part of that. I'm sure I'll get over it soon. Most likely, as the community grows, the majority of Rust users will care less about optimal performance.
Actually, I suppose that is what happens to most languages. As the community shifts to users of the language rather than builders of the language, things might seem less overwhelming. Maybe I'm just coming to Rust too soon for ergonomics. For example in Java, there are the people that build the Netty Client / Server and they care very much about performance. Then there are most Java developers who don't really care.
> I think that higher-kinded types will probably help a lot with this if they make it into the language, and it's likely that there's a way to get around the errors and get the full benefit from the algebraic data types and monadic functions that I just haven't found yet.
How would HKT help with this? It sounds like a nonlexical borrow scope issue.
I don't find the lack of nonlexical borrow scopes to be a big pain anymore now that I'm used to how borrowing works (and I write thousands of lines of Rust code a week), but it can be annoying when getting started. Nonlexical scopes are definitely something that can be added backwards compatibly post 1.0; the semantics are not as trivial as they may seem, though (you have to do union and intersection of arbitrary regions of a control-flow graph, or loop-nesting tree).
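For anyone following along, here's a made-up sketch of the kind of code this bites:

    use std::collections::HashMap;

    fn get_or_insert(map: &mut HashMap<String, i32>) {
        match map.get("key") {
            Some(&v) => println!("already present: {}", v),
            None => {
                // Under lexical borrow scopes this line is rejected: the
                // immutable borrow from map.get() is considered live for
                // the whole match, so it conflicts with the mutable borrow
                // needed here. With nonlexical scopes it would be accepted.
                map.insert("key".to_string(), 0);
            }
        }
    }

    fn main() {
        let mut m = HashMap::new();
        get_or_insert(&mut m);
        get_or_insert(&mut m);
    }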
> It becomes tempting to just punt on the error handling and call unwrap() on everything just to get it working for now, which leads to problems later on.
If unwrap() works for you, why not use the try! macro to make your error handling correct? If it's truly borrow check errors you're having, try! is treated by the compiler exactly the same way as unwrap() is, but with try!, you will handle errors correctly.
> If unwrap() works for you, why not use the try! macro
Unwrap only works in a loose sense. Writing a parser generator, I want to have it give usable parse error messages. Sometimes this can be done easily with a try!, but many times I should be doing a bit more processing on the error in order to get things like line numbers, input samples, and expected alternatives in the mix.
> and I write thousands of lines of Rust code a week
This is what I think I need to be doing ;). I'm not convinced yet that what I'm trying to do can't be done elegantly in the language, it's just uncommonly hard for me to "get it." I've been writing code professionally for 14 years, and have used over 30 languages in various sized projects during that time, and Rust has given me the biggest skill-check of the bunch. I actually think that my experience with other languages that have similar abstractions is getting in my way more than helping.
Regarding HKTs, I'm not 100% certain they would solve my problem, but I frequently find myself wishing I could work in that space. I definitely miss Haskell's do-notation, which I think could become a language feature if Rust had true monads (I realize there's a macro for it currently, but I haven't played with that yet).
In my original comment I said that my problem was a feedback between algebraic data types and borrow checking, but I think on further reflection that it would be more accurate to say it's between closures and borrows. The and_then function of Option and Result allows me to use them as monads, but doing so means I make lots of one-shot closures. I think the interaction that keeps stalling me is actually there, frequently with the &mut self from the parent function being used inside the closure in a way that's not allowed, or with a non-copyable type being used in the closure. I can get around it with some matching, testing, and unwrapping, but it doesn't feel nearly as solid and clear as the and_then approach.
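To make it concrete, here's a stripped-down sketch of the shape I keep running into (Parser and its methods are invented for the example):

    struct Parser {
        input: String,
        pos: usize,
    }

    impl Parser {
        fn rest(&self) -> Option<&str> {
            self.input.get(self.pos..)
        }

        fn bump(&mut self) {
            self.pos += 1;
        }

        fn demo(&mut self) -> Option<usize> {
            // The monadic version is rejected: the &str keeps `self`
            // immutably borrowed while the closure also wants &mut self:
            //
            //     self.rest().and_then(|s| { self.bump(); Some(s.len()) })
            //
            // So it gets unrolled into a match that releases the borrow
            // before the mutation:
            let len = match self.rest() {
                Some(s) => s.len(),
                None => return None,
            };
            self.bump();
            Some(len)
        }
    }

    fn main() {
        let mut p = Parser { input: "abc".to_string(), pos: 0 };
        println!("{:?}", p.demo());
    }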
I think Rust is an awesome language, and I think its "shortcomings" are likely really my own shortcomings, but I have to say that it's been full of surprises so far. The advanced features I keep expecting to have seem to be locked away behind memory management barriers. I'll definitely keep working with it, and I suspect that I'll never turn back to C++ if I have the choice, so it's a triumph in that regard. It's just so tantalizingly close to being the "one true language" that I'm constantly expecting things of it that probably aren't quite realistic.
> If unwrap() works for you, why not use the try! macro [...]
I was under the impression that the whole `try!` situation would become nicer with HKT. It wouldn't solve the problem you've identified, but wouldn't it make the code nicer (in some people's eyes at least)?
What HKT would let you do is to write a generic "try" function that works for all "error-like things", instead of having to have a separate try! macro for every error type. Except that that isn't the case any longer, because we now have the FromError [1] trait, which means that in practice try! works for custom error types as well.
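Concretely, something like this (the error type here is invented, and I'm writing the conversion with the From spelling that later replaced FromError; try! routes the error through it when the types differ):

    use std::num::ParseIntError;

    // Invented error type for illustration.
    #[derive(Debug)]
    enum ConfigError {
        BadPort(ParseIntError),
    }

    // The conversion hook: try! silently lifts a ParseIntError into a
    // ConfigError via this impl when it returns early.
    impl From<ParseIntError> for ConfigError {
        fn from(e: ParseIntError) -> ConfigError {
            ConfigError::BadPort(e)
        }
    }

    fn parse_port(s: &str) -> Result<u16, ConfigError> {
        let port = try!(s.parse::<u16>());
        Ok(port)
    }

    fn main() {
        println!("{:?}", parse_port("8080"));
        println!("{:?}", parse_port("not a port"));
    }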
Instead, what HKTs would actually be useful for is something else entirely: generic algorithms that need to (for example) work with different kinds of collections' iterators and can be specialized for them at compile time, without paying the price of allocating the iterators on the heap. For example, consider a graph algorithm that's parameterized over the type of graph. (Note that I've never actually needed to write one of these algorithms in a generic way myself, but I can see it being useful in some circumstances.)
What HKT is not there for is Haskell-like monadic "do" notation. That simply isn't idiomatic Rust, and I don't foresee it becoming so in the future. The try! macro is the way you do this kind of thing, and it's already shipping today without HKT as a type system feature.
In that case it sounds like a difference of opinion. I'm not sure what is meant to be undesirable about do-notation when the alternative here is an unnecessary early return, which has always rubbed me the wrong way to be honest.
I don't think that full-fledged higher-kinded types will be necessary to alleviate the specific frustrations you're hitting there. There are plans in the works to make borrow scopes smarter (so-called "non-lexical borrows"), which should hopefully free us from having to split out a nice chain into temporaries merely to appease the borrow checker.
This is enough of a priority that I foresee it landing in the language sometime in 2015.
This is my experience as well. I was experimenting with a Rust implementation of the Whisper storage system, and I got completely stymied while trying to wire up the unit tests.
While I could freely pass a file handle into multiple functions, I could not do the same with any standard implementations of mock read/write objects, due to the underlying Vector objects implementing the `drop` method, which disallowed re-use of the object (including simply reading the contents after coming out of the function).
> I could not do the same with any standard implementations of mock read/write objects, due to the underlying Vector objects implementing the `drop` method, which disallowed re-use of the object (including simply reading the contents after coming out of the function).
That was simply the compiler preventing you from using memory after it was freed. It sounds like a case of not calling "clone" to copy the vector, instead of anything relating to nonlexical borrow scopes.
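In sketch form (consume here is a stand-in for the test helper):

    fn consume(buf: Vec<u8>) -> usize {
        buf.len()
    }

    fn main() {
        let data = vec![1u8, 2, 3];
        // Passing `data` itself would move it into consume(), making it
        // unusable afterwards; clone() hands over an independent copy.
        let n = consume(data.clone());
        println!("consumed {} bytes, still have {:?}", n, data);
    }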
I haven't touched C or C++ in over 10 years, as I've been living like a free spirit in the Perl world. I've always wanted to go closer to the metal, but Perl did everything I ever wanted. But I slowly started to feel that dynamic languages had kind of boxed me in, almost making me timid about memory management and resource allocation.
So I wanted to go deeper. I played with Go for a while, but kept an eye on Rust. However, over the past couple of months, Rust just felt like they got it right. It made me carefree about resources, just like how Perl took care of everything, while being a strongly typed, static language with generics, and without the problems that might get in my way that Go has (GC and a heavy runtime).
If you still haven't had a look but are interested, take a look at the Rust Guide:
"Rust" was an interesting but slightly confusing language a year-ish ago, when it had chans and tasks and libuv and M:N threading and three kinds of pointers with their own funny symbol. That language, as far as I can tell, basically stopped existing (for lots of small good reasons that added up). Rust 1.0 will be an interesting head-on competitor to C/C++ without the awfulness of C/C++, which seems like a very different thing, and a thing I personally have more of a use case for.
I think, to some extent, this is why Rust keeps being compared with Go. Last year's "Rust" was definitely something that merited comparisons with goroutines and the like. Rust 1.0 is going out of its way to make sure that it can be used for writing dynamic libraries that can be called from C or any other language with an FFI, and is possibly easier than C++ for that use case. This is completely not doable in Go (I suspect that this goal is fundamentally incompatible with having goroutines or a similar built-in concurrency story). Go seems to be a good language, but it's addressing a very different use case from what Rust is (now) addressing.
The "called from C" case isn't the interesting one, even though it's what the grandparent mentioned. It's the "called from a scripting language" case. You can call Haskell or Ocaml or even Java code from within CPython or Matz Ruby. However, if you do this, you bring along an entirely different runtime system and memory layout. Usually this means that you face some very odd memory management bugs and lose any speed advantage you gain as soon as you have to marshal objects across the language boundary.
Servo is doing some things to outsource GC to JavaScript, because the DOM (and therefore object lifetime tracking) needs to interact tightly with both JavaScript and native code:
What are the limitations here? I'm quite a bit surprised you're able to make this work properly with pthreads... I somehow thought the GHC runtime would automatically multithread pure functions that can be evaluated in parallel.
How much can you call back into C from a Haskell function and have things behave reasonably? I suppose there's actually a bit of advantage in that such a thing would need to return a type in the IO monad, but I'm super unfamiliar with the GHC runtime.
I am not aware of any limitations, except for the static linking caveat noted in the README. The build process is also a bit involved; see the Makefile for details.
My use case involves shipping Haskell plugins for C programs to end-users, without requiring any Haskell to be installed. Hence, statically linking all Haskell libraries into the plugin binary. On Linux, this is slightly inconvenient, as GHC packages are not usually compiled with -fPIC, so some recompilation is in order. On OS X, all code is always position-independent.
As for your other question: calling C from Haskell is very easy, and there are a lot of examples.
Sure, I've played with Haskell-to-C FFI before (a very tiny bit). I'm curious if there are some complications with C->Haskell->C, but maybe not.
I'm strongly thinking that Haskell has an advantage here because "well, what if the Haskell function fails" or "well, what if the C function fails" is well-defined by the type system....
Can you return a pointer/reference to a Haskell object to C on one thread, and then call Haskell functions on it from another C thread?
Yes. You can use stable pointers for this purpose.
A stable pointer is a reference to a Haskell expression that is guaranteed not to be affected by garbage collection, i.e., it will neither be deallocated nor will the value of the stable pointer itself change during garbage collection (ordinary references may be relocated during garbage collection). Consequently, stable pointers can be passed to foreign code, which can treat it as an opaque reference to a Haskell value.
This is a great comment, and very much in line with my experience. The tightening of Rust's focus over the last year has been really impressive, and I think it has become more of a great language for a few things than a good language for everything.
On living like a free spirit in Perl: I consider Perl, Python, and Ruby to be part of the broad category of dynamically typed, OO, interpreted languages. They are all different, but share a very similar basic concept. In my experience, for writing scalable code there is nothing better than a Python/Perl/Ruby developer who can and does also write C. When the programmer can make a mental translation from their P/P/R to C, they have a much better chance of writing fast code that scales well. This IMO is a killer combination of skills: you write fast code because you know how to avoid expensive operations, and you write code fast because you don't have to declare every struct and accessor method as you would in C.
Then again, there is fast and scalable code, and there is actually shipping. Just because the code is very well written, counting every implied malloc() and memset(), does not mean it's worth anything if it's just sitting on your harddrive. Write code in whatever you are good at, and ship.
> Write code in whatever you are good at, and ship.
Provided that you're not writing operating system code, or crypto code, or code with an inherently large attack surface (web browsers etc), or code which outright needs to be performant or it's useless (databases etc), or code which manages physical processes or lives, or code which deals with sensitive information (such as identifying details of real people).
If it can cost someone's job, or leak someone's identity attached to some data, or worse - lose a human life - if it breaks, writing code in "whatever you're good at" isn't good enough.
Well, I'd rather see a really well written crypto lib in C, than a half finished one in Haskell, but yes I see your point. My main thing is that "genius has its limits" and all that: you can make mistakes in any language and it's not your choice of language that automatically determines if you are going to write crappy code.
If you can't write a piece of code in Python that won't kill a person, I don't really trust you to write it in Rust either.
Actually I wouldn't want to see a crypto library in Haskell - crypto is quite special in that there is a requirement of deterministic timing, to prevent leaking secrets via timing attacks, which is rather impossible to guarantee in a high-level language, especially with a "smart" compiler like GHC that may optimize in non-obvious ways.
Hence it could actually be most secure to implement cryptographic primitives in assembler, however counter-intuitive that may sound.
(Also, how do you erase or overwrite secrets from memory after use in a purely functional language?)
Nim seems to be what I was wishing for Go to be. I'm pretty deep in Haskell right now, but Nim is definitely on my short list.
Of interest: it's homoiconic (IIRC) without being a Lisp, has very good performance, and seems to follow the Pythonesque "one way to do it" mantra.
I had a quick poke a few months ago, and found that a lot of the documentation was conflicting (one of the key resources everyone talked about was so out of date it was unusable) and unhelpful, because of the rate the language was evolving at.
I'll get started with it about 6 months after version 1.0 is formally released, I think. Hopefully, the docs will have stabilised by then.
A big problem with Rust right now is that the documentation is awful. This has two implications. The less important one is that the language is harder to learn. The more important one is that there's no architectural document that prevents Rust from becoming a collection of features in search of an architecture. If you have to clearly explain in writing how something works, and the explanation is overly complex, that's an indication that simplifying the design may be necessary.
The basic concept of ownership in Rust is simple. Everything has either a single owner, or a reference counted cell as owner. Things which cross thread boundaries have a locked reference counted cell as owner. Temporary references to single-owner objects are allowed, but must have a shorter lifetime than the primary ownership, and this must be demonstrated by simple static analysis. That's straightforward.
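In code, those three cases look roughly like this (a minimal sketch using today's std types):

    use std::rc::Rc;
    use std::sync::{Arc, Mutex};
    use std::thread;

    fn main() {
        // Single owner: `v` is freed exactly once, when it goes out of scope.
        let v = vec![1, 2, 3];

        // Shared ownership within one thread: a reference-counted cell.
        let shared = Rc::new(vec![4, 5, 6]);
        let also_shared = shared.clone();

        // Crossing a thread boundary: atomic reference counting plus a lock.
        let counter = Arc::new(Mutex::new(0));
        let handle = {
            let counter = counter.clone();
            thread::spawn(move || {
                *counter.lock().unwrap() += 1;
            })
        };
        handle.join().unwrap();

        // A temporary borrow of a single-owner value; must not outlive `v`.
        let first: &i32 = &v[0];
        println!("{} {} {}", first, also_shared[0], *counter.lock().unwrap());
    }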
Making those restrictions usable seems to require many features and much churn within the language. It's too soon to tell how this will come out. As I've mentioned before, I'm bothered by the need for "unsafe" code in pure Rust that's not calling hardware or C code.
> I'm bothered by the need for "unsafe" code in pure Rust that's not calling hardware or C code.
Why is this bothersome?
I'm curious, as to me the advantage is not in full-program safety, but in having a distinction between portions of a program that are guaranteed safe and portions that aren't.
Or, to turn it another way, Rust isn't giving you guarantees about 100% of the code you're looking at: it's vouching for the >0% (hopefully!) but <100% that's marked safe.
It seems an elegant, realistic engineering solution to a hard trade-off between performance and safety.
FWIW, I've found the Guide incredibly helpful, and in general found the docs very satisfying. (Although I do wish the pointers guide was more complete :) )
The API docs + search in the docs are incredible, IMO. I'm in absolute love with rustdoc. Having links to the source right there is the most valuable thing ever: both when trying to understand how to use a given API and when trying to write similar things, it's really useful to be able to look at how others have done things.
I personally usually find myself searching http://doc.rust-lang.org/std/. One problem I sometimes run into is that the search seems to do fuzzy matching and generates an often overwhelmingly long list of not-all-too-helpful items. E.g. search for "vector", a basic building block in Rust: the most relevant entry, "std::vec", is far down the list. It's also unclear to me what "experimental" (std::vec) and "stable" (std::vec::Vec) are supposed to mean. Also, during my very first foray into Rust, it was not evident to me how the various parts on the page belonged together. Newcomers often are blind to what is evident to the expert.
I also think there should be a consolidated guide, which users can download as a PDF and maybe print out (maybe I'm somewhat old-fashioned in this respect) -- an always up-to-date go-to reference that explains the ins and outs of the language. I don't want to browse 20 guides/tutorials that may or may not contain the info I'm looking for.
Yeah, we do a fuzzy search, so it can be weird. To answer your specific question, std::vec is the module, and std::vec::Vec is the actual struct itself... so the module is overall unstable, but the struct itself is stable. If that makes sense.
Have you seen "The Guide"? It's about 100 pages long, and we used to produce a PDF, at least...
I think people expect _more_ documentation. The guide can only get you so far IMHO.
That said, if I may say so, string handling should be more thoroughly detailed. It's one of the first things newcomers will stumble upon, especially those who come from a dynamic language. Heck, I think it might even be the perfect occasion to exemplify the borrowing system.
Edit: Just some further thoughts on the spot (feel free to ignore if you strongly disagree).
It feels at the beginning that the guide is kind of building things up. In that spirit I would put an expanded String chapter just after "7 Comments", since simple example programs can already be made without what follows (well, except maybe loops). Maybe something in the spirit of "14 Guessing game" but smaller, that is, with fewer, more recently read concepts to refer to.
What Rust code needs to be unsafe, other than the sharing primitives? Rust has really come together well over the past year, and no doubt with 1.0 providing a serious release, a lot of gaps will get filled in. That said, I've found it pretty easy to get running with Rust, and the IRC channel is one of the best I've been in.
This is exactly how I'd expect to see `unsafe` used: as an additional optimization in situations where you can't get better in any other way, and with a big ol' comment explaining exactly why. Well, one of the ways, at least.
That's exactly how unsafe ought NOT to be used. All they saved here were about two machine instructions per operation. On some superscalar CPUs, those might be overlapped anyway. Swap is a safe operation in a single-ownership world, and Rust used to have a swap operator. That was taken out because it could be done with a generic function. The penalty is that the compiler doesn't know you're doing a swap, and swap has some potential optimizations.
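(For reference, the generic library function that replaced the operator is std::mem::swap, which is safe at the call site:)

    use std::mem;

    fn main() {
        let mut a = String::from("first");
        let mut b = String::from("second");
        // Generic swap from the standard library: no unsafe needed here.
        mem::swap(&mut a, &mut b);
        println!("{} {}", a, b); // prints: second first
    }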
I am very suspicious of unsafe code for claimed "performance" reasons. That leads to exploits and backdoors.
Where did you get the two machine instructions number?
A swap could easily be an exchange of, e.g., 16 bytes (the size of a slice, &[T]/&str, on a 64-bit computer) or 24 bytes (Vec<...>/String), or larger, and there's no way to use single SIMD instructions for these in general.
Also, "per operation" is vague, since the swaps would have to happen repeatedly in a loop, so a single .insert call is likely to result in multiple swaps.
OK, I accept that. I know it couldn't have been that easy, given that it relies on the specifics of zeroing on drop, etc. No better coder than strcat to do it, though.
I would hope that it's possible to implement a memory allocator or garbage collector in Rust (given that it's marketed as a system programming language), and you definitely need something like "unsafe" for that.
I agree with respect to the quality of the documentation. I hope they put some effort into rewriting the docs before the 1.0 release. The problem is exacerbated by the load of out-of-date tutorials you come across when searching the internet for hints. IMHO they should let things cool down a bit before pushing out 1.0. (And no, having to hang around in IRC channels in search of an answer isn't really an option.)
The author's term "anti-sloppy programming" immediately brought to mind a former colleague's paper, aptly-titled "Sloppy Programming" [1]. Interestingly, Rust is a programming language, while this paper describes an editor/IDE enhancement to turn natural language style input into ASTs.
This suggests to me that the most anti-sloppy languages like Rust would benefit the most from sloppy input inference techniques. Has anyone been working on developer tools to make writing Rust code easier? (Lifetime management is the obvious new language feature to target.)
> (Lifetime management is the obvious new language feature to target.)
If you haven't toyed with Rust since late summer when Lifetime elision[1] landed, it's very worth revisiting. That feature made writing out lifetime notations unnecessary in 87% of the standard library. It's made things a lot more pleasant. There's still plenty of room for IDEs and editors to help out, of course, but the language itself is making great strides in usability as it matures.
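For a taste of what that saves, here's a small sketch; the two signatures below end up identical after elision fills in the lifetime:

    // Written out in full:
    fn first_word_explicit<'a>(s: &'a str) -> &'a str {
        s.split_whitespace().next().unwrap_or("")
    }

    // With elision, the compiler fills in the same lifetime:
    fn first_word(s: &str) -> &str {
        s.split_whitespace().next().unwrap_or("")
    }

    fn main() {
        assert_eq!(first_word("hello world"), "hello");
        assert_eq!(first_word_explicit("hello world"), "hello");
    }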
As someone who wrote a lot of code not in the standard library, I championed this RFC pretty heavily. Part of the reason is that I noticed that with the lifetime elision rules, code using abstractions (as opposed to the implementation of abstractions) tends to use even fewer lifetime annotations.
Incidentally, the 87% number is a bit misleading. Before the Lifetime Elision RFC, a limited kind of elision was already allowed (when a function borrowed a value but didn't return a borrowed value). The 87% number was the number of lifetime annotations that were still required even with that limited rule that could be removed with the new rule. When taking all kinds of elision into consideration, the number in the standard library is closer to 95%. (many of the remaining cases in the standard library involve the implementation of collections)
In practice, that means that the vast, vast majority of borrowed references don't require explicit lifetime annotations, and that is pretty close to 100% in "application" code built on top of abstractions.
Greg Little's work is usually in PL HCI, which is concerned primarily with languages that improve on usability, accessibility, lowering barriers for the common folk, and such. The other approach is to force one to eat their vegetables (because they are good for you), so to speak; we also call these bondage and discipline languages. Generally, languages of the latter style don't focus on tooling so much, since the accompanying culture is more of the style of "think heavily before you write anything" rather than "fool around in your IDE for awhile to find something that works." They often don't consider the IDE or IDE-related issues as an integral part of the language design.
I'm not sure if anyone has done much work to narrow this divide. There are some Coq IDEs out there now, but I'm not sure how effective they are in relieving cognitive burdens while programming.
You're right to point out the cultural divide w.r.t. tooling. It just seems to me that when two different people come up with pretty identical names for something and focus an essay/paper on that name, then there's likely an opportunity for some sort of cross-pollination of ideas.
One person's vice is another person's virtue; I'm not sure how that would create opportunities for collaboration :)
The irony of the situation is that languages like Rust and Haskell provide a lot of static type feedback that theoretically would make their IDEs very powerful. However, in order for that to really happen, IDE concerns have to be considered very early in the language design process. As a result, we see the tooling crowns going to languages like Dart, whose "type system" is basically designed to make tooling possible (and secondly, for the early detection of errors).
I tried to revive as much information as I could from my days in Visual Studio, but if you have additional feedback, I'm sure he'd be very interested to hear it! Obvious caveats about it being a long-term goal, no immediate plans to tackle it, larsberg is on Servo so what does he know, etc. :-)
Speaking out of complete ignorance of what Rust's situation actually is, I think the key is to have a gentle slope from "bad program" to "good program" that allows for the IDE to provide bits of feedback along the way to get to the "good program" state along with helping out with much of the mental computation that the programmer otherwise does in their head.
Rust's type system doesn't really seem to be there yet: there are just "wrong programs" and "right programs" with not much in between. Gradual typing on the other hand works very well here.
Since Rust is a true systems programming language, without a mandatory GC, I wonder if there will be any serious attempt to build a kernel entirely in Rust. It certainly sounds possible, and I suspect it would quickly be better than any FOSS kernel currently available, given the fact that Rust is so well designed against bad or sloppy code. Perhaps I'm just talking out of my ass here, but it sounds like a good idea.
I think Rust would be an interesting (and probably safer) language to write an OS or embedded code in. I asked that question a few months ago in the "let's write a kernel" thread: https://news.ycombinator.com/item?id=7588205.
It takes a while to write a stable OS, because a lot of the things we use in our helpful libs can't be used in kernel space. I'm thinking of taking some time and trying to help a Rust OS project.
"I suspect it would quickly be better than any FOSS kernel currently available"
It would take a very long time for that to even be possible. The hardest part of writing an OS today is more about the amount of hardware out there to support, and getting users. There are a couple of approaches to mitigate this, though.
1) Work well in a niche; this has worked well for L4, QNX, VxWorks, etc., but doesn't make the OS very visible to most people.
2) Run on top of an existing virtualisation platform to minimise the hardware support needed, but by doing that you also reduce the impact of working in a safer language.
I guess you could argue that "better" doesn't have to mean relevant to end users/devs, a bit of an ivory tower approach though. Otherwise you are looking at several years of work.
The reason very few people care is that "works" is not a binary, but obviously a continuum -- and people inherently understand diminishing returns...
Especially when the "works" they care about is getting shit done, by having access to software (including proprietary), drivers, a community, vendors, etc.
The "is formally verified" they could not care less, and rightly so, if their other demands are not satisfied...
> Especially when the "works" they care about is getting shit done, by having access to software (including proprietary), drivers, a community, vendors, etc.
Yes, and we know what the quality there is.
If that's your attitude, you probably don't work in or near the industries that would have the budget or requirements to care about seL4. I have, and it would be considered, without a doubt.
That's however orthogonal to what I wrote, I think.
It's not like it has the software ecosystem, community, drivers, etc., needed to be adopted over any major OS as something that, "all else being equal", is also "formally verified".
And yet your post can be summarized as "I am not aware of the usage of L4 microkernels".
>The "is formally verified" they could not care less, and rightly so, if their other demands are not satisfied...
Except their "other demands" are satisfied. It implements L4, just like every other L4 implementation does. It also happens to be the fastest L4 implementation ever tested. So rather than vague dismissive "calling BS", how about you point out the specific fault in SEL4 that causes people to need to use other L4 implementations instead?
>how about you point out the specific fault in SEL4 that causes people to need to use other L4 implementations instead?
I'm talking about people in general, preferring their mainstream OS of choice over some platform based on L4.
Not about L4 users opting for this or that implementation.
The parent talked about people in general not caring for software quality, while those L4 users are highly specialized enterprises and businesses making a choice.
My post said nothing of the sort. Trying to justify your shitposting by saying I am to blame for your assumptions is ridiculous. If you don't know about the subject, don't post about it as if you are an authority.
> The reason very few people care is that "works" is not a binary, but obviously a continuum
Not necessarily. If there is a formal model of a specification, and a piece of software provably implements that specification, then that is an example of software that works in a very clear, not-a-continuum sense.
Or perhaps very few people are in a position to use a very recently Open Sourced OS that has no file system or network stack, or write these things for it?
Presumably you'd want formally verified versions of at least the network device driver... otherwise process isolation might be violated.
Tons of people use L4 for their systems; there are billions of devices running some L4 kernel. The question is why they are not switching to SEL4, given that it provides the same thing they are already using, but faster and proven correct.
I've only briefly heard of that kernel, and I suspect the same is true for many people on HN. It's also been open-sourced pretty recently, so maybe it needs more time for people to become aware of it. It may catch on if we give it more time. But even so, it's still mostly written in C and likely has many of the same pitfalls that all kernels written in C will have. I was thinking of a kernel written entirely in Rust, which would be not only much more reliable, because it would have far fewer segfaults and the like, but also more immune to the types of highly publicized security flaws lately (Heartbleed, etc.). Of course, the whole OS might have to be written in Rust + memory-safe languages, so we might be a ways off from the ideal goal, but clearly I was thinking that we can start with a Rust kernel and build on top of that.
SEL4 is provably free of segfaults, without any runtime overhead at all. Unlike Rust. Though it took considerable development time overhead to achieve that.
(Moreover, Rust itself may have bugs, or there could be bugs in unsafe code in the standard library. The amount of trusted infrastructure is much smaller for SEL4; if you compile it with CompCert, I imagine it's quite small.)
The worst-case latency is also part of the proof for SEL4, which is a kind of correctness you cannot achieve via Rust (even ignoring overhead).
There are many elements of program correctness beyond memory safety. The developers of Rust generally eschewed concerns about other kinds of correctness. For example, your integer types may overflow and your algorithms could fail to work as a result. SEL4's proofs show that the operation of the software matches its specification completely (given some assumptions); it's really far, far beyond the memory safety promises of Rust.
If it took a huge development effort for SEL4 to get to the state it is currently in, doesn't that make it unlikely anyone can easily go in and change things or add features? Not knowing too much about SEL4, but it just seems like a very limited kernel, meant primarily for embedded devices, and that's about it. It doesn't seem to have the features you expect from kernels these days.
A Rust-based kernel could be a *nix, and do everything from desktops to servers. It would be developer friendly in the same way Linux or Windows is. Plus, the baseline safety would be much higher with a Rust-based kernel, without having to formally prove everything.
It's a microkernel: it is intentionally minimal. Microkernel operating system design dates from the 1980s and is different from the monolithic, bulkier, older operating system designs you are used to (famously including Linux; see the Tanenbaum-Torvalds debate). It already has all the features it is likely to ever need.
Instead of having all the services required for an operating system in the kernel, the kernel just implements the bare minimum including fast IPC. Things that traditionally are implemented in the kernel (filesystems, device drivers, other modules, etc) are all implemented as services living in userland and are assigned the minimum necessary additional permissions for them to perform their task.
You can build a *nix like system on top of seL4. You can even run Linux itself on top of seL4, with L4 acting as a hypervisor.
An operating system doesn't consist of just the kernel.
>If it took a huge development effort for SEL4 to get to the state it is currently in, doesn't that make it unlikely anyone can easily go in and change things or add features?
No. Most of the huge development was in developing proofs for C. The developers now find that development is easier than just using C directly. The process is to write it in a high level specification language, then when you're done messing around and know you have it how you want it, translate it to C by hand and the proofs will tell you that you did it right (or wrong).
>It doesn't seem to have the features you expect from kernels these days.
It has exactly the features we expect from kernels these days. The features we do not expect from kernels these days but did expect from them 30 years ago are available in a library where they belong.
>A Rust-based kernel could be a *nix
So can any L4. Why would "start from absolutely nothing" be easier than "start from an existing microkernel"?
Perhaps not, but in a kernel you'd have plenty of your own unsafe code.
SEL4 doesn't.
There are, of course, people thinking about bridging the worlds of formally provable correctness and Rust, e.g. https://github.com/rust-lang/rust/issues/18496, though extraction-based approaches are likely to not produce the most elegant or efficient code... and I assume the along-the-side proofs like SEL4 uses would be harder for Rust, because the language is more complex than C.
>But even so, it's still mostly written in C and likely has many of the same pitfalls that all kernels written in C will have
No it won't; that's the point. It is a much stronger guarantee of being without those bugs than using Rust would provide.
>because it would have far fewer segfaults and the like
SEL4 cannot segfault; that's the point. It isn't fewer, it is zero.
>but clearly I was thinking that we can start with a Rust kernel and build on top of that.
Yes, and I am saying if people actually wanted that, they could start with a formally verified kernel that already exists and build on top of that. But they don't.
The quickest comparison I'd say between choosing Rust and Go is:
- if you like C or Python, you will probably prefer Go
- if you like C++ with templates or D or Ruby (especially Rails), you will probably prefer Rust
We will see how the languages evolve, but so far I really enjoy Go having native channels, having a stable language definition, and a simple yet stable and powerful standard language API. Also the public/private access system based on capitalization is genius, and the package management is very intuitive.
If you "like C", then Go is a really poor substitute, because Go doesn't allow one to juggle with memory like what people are normally doing in C. Rust is not comparable with Go, because Go relies on a non-optional GC, a difference that changes everything, Go being in the same league as the JVM, except that the JVM is better at almost everything, like garbage collectors, tools and libraries available, multiple languages etc. The JVM is even better at concurrency.
Go in my opinion is the best proof that our industry is fashion driven.
>Go in my opinion is the best proof that our industry is fashion driven.
It's not, though. What is fashionable about Go? Most testimonies I've seen state solid reasons for choosing it. Nobody claims it to be a marvelous language like Haskell. It's a simple language that's easy to pick up, implement an idea in, and deploy. Much like Python, but with much greater performance.
OCaml offered the same proposition for 10+ years. No one was interested. Oddly enough, now that Go and Rust are getting attention, people seem to also be taking a second look at OCaml. What's that, if not fashion?
While OCaml is a really nice language, it doesn't have a runtime well suited for fine-grained concurrency or multiple processors, at least at the moment. If you compare standard libraries, commercially backed languages also have far more of the tedious groundwork for "real world" applications done for you. Of course, depending on your project, this might be relevant or not.
"OCaml offered the same proposition for 10+ years."
No, it didn't. In addition to the already-observed point that Go's concurrency story has always been better than OCaml's, Go also offers it to you in an Algol-esque syntax. It isn't C, but if you know C the differences are dialectal, whereas OCaml is from the ML tradition and is radically different. ML is the language family probably most distant from C in current medium-scale usage.
I don't like or celebrate this fact, but it seems that broad-scale success right now requires an Algol-descended syntax.
You can still "like C" without needing to juggle memory (similarly, you can need to juggle memory and hate C). The core Go team is likely evidence of that: they created a language that, in some sense, retains the spirit of C and yet they felt that manual memory management was unnecessary for the problems they're now solving.
> Go in my opinion is the best proof that our industry is fashion driven.
Go itself isn't flashy or trendy or fashionable. It doesn't introduce exciting new concepts. It's as notable for what it doesn't include as what it does include. It is, in my experience, an extremely practical and pragmatic language. Saying it's proof of a fashion-driven industry is like saying the Honda Accord is proof that the automobile industry is fashion driven.
I believe some of the initial interest in Go was due to its newness and its affiliation with Google. But traffic-baiting bloggers have moved on to the next thing, as have programmers who are more focused on the process of programming (i.e., "I want the act of programming to be fun") than the results. I think it's settling into a comfortable role as a practical choice for teams who want to get work done.
I don't see how affinity for Python on one hand or Ruby on the other would make you prefer (a) channels hardwired into the language (which neither Python, Ruby, nor Rust have); (b) a batteries-included standard library (Python and Ruby are pretty batteries included; Rust is not as much); (c) public/private based on capitalization (which neither Python, Ruby, nor Rust have); or (d) Go's package management system (Python's, Ruby's, and Rust's are fairly similar, while Go does something different).
Not the OP, but I suspect he's making this categorization based on fundamental philosophical principles of the language. Python and C are "New Jersey School" languages. C++, D, and Ruby are "MIT school" languages. (For references, Google [worse is better].) The OP is saying that Go is firmly in the New Jersey School (understandable, since the developers worked at Bell Labs on C and UNIX), and I assume that Rust is in the MIT school. I haven't written enough Rust to have an informed opinion, but a quick read of the Rust tutorial makes me think the OP's assessment is correct.
The fundamental difference between the two schools is in how they view corner cases. The New Jersey school suggests punting entirely on corner cases to keep the design simple; the complexity is either offloaded to the developer, or the language designer says that the language is simply not suited for that problem domain. So you have the Go devs saying that you just don't need generics, or Guido van Rossum saying you don't need execution speed. The MIT school says "We should solve the problem and make things easy for the programmer, goshdarnit", and so you get things like the STL (where every algorithm is built for maximum generality at no performance cost) or Rails's auto-pluralization of database tables. Or another way to look at it is that the New Jersey school asks "Why?", while the MIT school asks "Why not?"
You can extend the analogy out to other languages: C, C++, Java, Python (borderline), Objective-C, Go, PHP, Javascript, and Perl are New Jersey school languages, while Scheme, Common Lisp, C#, Ruby, Dylan, Erlang, and Haskell are MIT school languages. Note that they very often occur in pairs (C <=> Scheme, C++ <=> Common Lisp, Java <=> C#, Python <=> Ruby, Objective-C <=> Dylan) with a similar purpose and timing in the market, but generally the New Jersey school language does better commercially. It is not universal, though - Python and Ruby are both commercially successful but in slightly different niches, as are Java and C#. Usually when the MIT-school language is successful, it's because they manage to differentiate themselves with a strong killer app or framework (e.g. Rails for Ruby, .NET for C#).
I don't see Python vs. Ruby specifically as worse-is-better versus better-is-better though. If anything, Ruby is a worse-is-better Smalltalk, while Python gained a lot of its initial community from its perception as a better-is-better Perl (based on popular articles by ESR, etc. at the time). Dynamic features like automatic pluralization of tables in Rails isn't "better is better"—it's just highly dynamic (or "magic"), and many people think of magic as a worse-is-better thing to do (for example, many functional programmers would consider any sort of reflection to be a worse-is-better feature, since it breaks parametricity and so forth).
I also don't think it's right to think of Rust as a better-is-better language in philosophy. Many folks coming from languages like Haskell find Rust full of design choices that seem at odds with its "functional programming" reputation: lack of higher-kinded type parameters, for example, lack of an IO monad (or the use of the term "monad" at all), a willingness to have subtyping in the language, heavy use of macros, etc. etc. The reason is that we actually have a heavy bias toward pragmatism and simplicity wherever possible.
It's our constraints that required the language to have the concepts that it does (performance-competitive with C++; no global garbage collection; data-race free; runtimeless/embeddable). We simply couldn't have copied another language and said "here's our language, now let's write a browser engine in it". It would not have the performance characteristics or the safety characteristics we need. It's not a difference in philosophy so much as in requirements.
I think the point may be that once you embrace those constraints, there's nowhere to go but a language with a fair bit of complexity. While Go chose simply to not embrace those constraints, and tell developers "You don't actually need C++ performance, and you really want a GC, and you should ditch Python entirely instead of trying to embed Go in it." Perhaps this is why Go found that their userbase didn't actually consist of the expected C++ developers and instead ended up being mostly ex-Pythonistas who are willing to ditch Python entirely for better performance.
FWIW, I'm glad that Rust didn't choose that path, as I've tried Go out and found that it really didn't work for me as a Python substitute, and am now dreading the day when I (hopefully) will have to rewrite parts of my startup in C++ for performance. I'm actually seriously considering Rust as a substitute here, but would like to see more stability in the core language before investing serious time learning it.
But I'm also someone who has never shied away from complexity in a language before, which probably puts me more in the "likes C++, Ruby, and D" camp even though I chose Python as my main scripting language. One of my main concerns with Rust is whether it will be possible to hire & manage developers who can work effectively in it, since I know that not everyone is as willing to learn new languages and language features as I am.
I actually think it will be easier to find folks willing to dive into Rust than folks willing to dive into C++. The reduction in complexity and corner cases, together with memory safety, has allowed many developers with a background in dynamic languages to feel comfortable approaching Rust.
I would actually consider Rust's philosophy to be 'better is better', at least within the core ideas: no nullable values, borrow checking, trait oriented work. Although, perhaps, that was how it started and not how it's being released?
> The reason is that we actually have a heavy bias toward pragmatism and simplicity wherever possible.
And functional programming languages are just arbitrarily complex and obtuse because they hate their users? No, they simply have different constraints, like the following...
> It's our constraints that required the language to have the concepts that it does
And it's Haskell's constraints that lead to it having to be a pure language, and so on. Rust is just opinionated in a different way, and because of different constraints. Do you want functional programming in Rust? - it's probably not going to be as accommodating in that regard as a functional language. Do you want to have zero-cost abstractions in Haskell? - the language is not made with that in mind, so you will probably have to wrestle a lot with extensions and custom abstractions in order to achieve that, if it is even possible.
I'd revised the post several times before posting it; initially I wanted to label the categories as "OAOOW" (One and Only One Obvious Way) vs. "TMTOWTDI" (There's More than One Way To Do It), then I figured that using the labels from "Worse is Better" more accurately captured the intent of the grandparent post, then I went back and re-read "Worse is Better" again to make sure I had a fair characterization of Dick Gabriel's argument. In the process C++ switched categories, but I forgot to update the first mention.
...now that I'm rereading it again, I wonder if my original phrasing was better. The idea that I want to capture is that there's a difference in how these languages approach corner cases: the C/Python approach is to punt and say "We're not designed for that", while the C++/D/Ruby approach is to add a feature to the language to solve that one case. C++ is basically what happens when you extend a New Jersey school language to cover the corner cases that it originally chose to ignore; that doesn't actually make it MIT school (which is more a philosophy on how to design systems, not how they end up), but it does put it in a different category from C and Python.
(Disclaimer: this is merely nitpicking, I get the gist of your original post).
I think your split of "Worse is Better"/"Better is Better" languages is a bit arbitrary. Everyone agrees that C is a Worse is Better language, but the rest? In particular, I cannot decide whether you equate OAOOW with Worse is Better or not -- Python is OAOOW, but I wouldn't say it's a Worse is Better language (and claiming it is because "you don't need speed" is like saying Haskell is a Worse is Better language because "you don't need the ability to easily reason about runtime performance". Every language claims "you don't need X"; that's clearly not enough to classify it as WiB).
Great posts. I'd like to add that 'OWLs' (one-way languages) and 'MTOWLs' (more-than-one-way languages) made the most sense to me. WiB and BiB make sense too, because I knew what you were getting at. Having used many programming languages, I've settled on 'WiB' languages as my preference. Though I consider WiB to actually just be better. :D
Complexity creeps in too quickly and too easily. While I don't like the WiB moniker, I definitely prefer OWLs. :)
What are you basing that on? C++ is a "let's add everything and let the user sort it out" language; how is that in line with the principles of the "MIT school"? Ruby is literally perl--; how is that MIT school? Because Rails does incredibly dumb stuff like incorrectly pluralizing table names by default? That's not MIT school, it's just dumb.
>and I assume that Rust is in the MIT school.
I can't imagine any way to rationalize that idea. Rust is at its core all about trade-offs and "good enough"s. It is supposed to be a better systems language, not an ideal language. I can see claiming scheme, or SML or haskell or clean are "MIT school". Basically any language people spread FUD about being "not practical". But rust is explicitly intended to appeal to the people who pretend good languages are not practical.
I would like to know that as well. The strictness and cleanliness of Rust reminds me more of Python than of Ruby. Not to say that Go is messy or anything, Ruby on the other hand...
Go will be a shorter journey from JS. Rust may someday be a better complement, partially for the exact same reason that it's more distant from JS, but Go's 1.0+ today and production ready, whereas Rust is still a ways away from that, so consider your target timescales, too.
Then I'm afraid neither will suit your taste. Go has hideous variable initialization (make covers just two types, and there's no uniform syntax for initializing structs in general) and clumsy basic data-structure manipulation (I still have to check the docs whenever I want to do anything with an array, or anything non-trivial with a dictionary), and from what I've read, Rust will drive you mad with lifetime scopes.
These "if you like Foo you will like Bar" claims are really silly. I've never seen one that was accurate at all, and I don't think they can be. Different people like the same things for different reasons. For example, I like C and despise go, and I despise C++ and ruby, and like rust.
One thing the dynamic language (and Go, to some extent) comparison misses is speed: in general, Rust code is likely to be faster to execute than the equivalent Perl/Python/Ruby/Go, due to performance being a strong design choice of the language, and the use of the industrial strength LLVM optimiser.
This is a general statement and not always true, but the control Rust provides means one can, in theory, always wrangle the Rust code to be as fast as or faster than most other languages.
(In fact, the embeddability of Rust means it is pretty well suited to writing the parts of a Python/etc application that need performance, since Rust can easily make dynamic libraries which can be loaded as extension modules.)
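As a rough illustration of that embedding story (everything here is invented for the example, not taken from the article): you export a C-ABI function from Rust and load the resulting shared library from Python.

// lib.rs -- built as a shared library (crate-type = ["cdylib"] in
// current Cargo terms); crate and function names are illustrative.
#[no_mangle]
pub extern "C" fn sum_to(n: u64) -> u64 {
    // Stand-in for the hot loop you'd move out of Python.
    (1..=n).sum()
}

From Python it's then something like ctypes.CDLL("target/release/libhotloop.so").sum_to(1000000), with no interpreter-level glue needed for simple signatures.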
Rust is probably the best programming language I will never use.
Most of the chances I get to experiment with new languages are in small and/or personal projects. None of those projects ever benefit from manual memory management, so the main selling point of Rust (safe memory management without GC) is actually just a hindrance.
I'm sure Rust will be adapted to various boards for enthusiasts as soon as a stable version appears, and it will be suitable for small personal projects. There have even been some attempts[1], and Zinc still looks pretty active[2].
Good post. I haven't coded C++ for like 3 years so Rust is a very new concept to me (I am a Python programmer).
This weekend I spent some hours building a hangman game: https://github.com/mauricioabreu/hangman
I've been wanting to mess with Rust for webapps, but I've been wondering how it performs with a lot of async operations. Is the idea that you write CPU-heavy code in Rust and then call the compiled blob from, say, a node.js app, or is the end goal to one day compete with node.js?
Async I/O is still an open question in Rust. Over the next year I expect third-party libraries to experiment heavily in this space, which will hopefully blaze a trail for a blessed solution to be uplifted into the stdlib at some point (or if not a complete solution, at least the infrastructure to make it reasonable).
Did anyone else find Rust's package/module management kind of, well, sloppy? Perhaps coming from Rubygems and `go get` has made me spoiled. (And npm, and Cocoapods, and pip...)
Have you used cargo and crates.io? It is effectively the systems-language successor of Ruby's bundler (designed and (initially) built by the same people), but also paying attention to lessons learnt in npm and Go etc.
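For the unfamiliar, a Cargo manifest is about as small as these things get; a minimal sketch (project and dependency names are just examples):

[package]
name = "myparser"
version = "0.1.0"

[dependencies]
regex = "1"   # fetched from crates.io and built automatically

cargo build then resolves, downloads, and compiles the whole dependency graph in one step.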
Yes, that's what I used—however, I just realized I may have been conflating the difficulty of doing package management in Rust with some packages that I used that were enormously difficult to use or were outdated (speaks to the pre-1.0 thing, I suppose).
Can you be more specific? Cargo was designed and implemented by domain experts (the folks behind Bundler), and I haven't personally felt any longing for any other ecosystem's package manager.
I think I probably don't understand the Cargo Way yet? Here's a thing that always confuses me -- is there a way to specify a rust version that my project intends to target in my Cargo.toml, or am I stuck with a global rust install?
In the current scenario, where the definition of 'Rust' changes on a daily basis, that's a major pain-point, but once Rust hits 1.0 then all future changes will be backwards-compatible (until version 2.0, in the distant future if ever), so it should be much less of an issue.
Since the ownership system and borrow checker are touted as new, wouldn't a few papers have been appropriate? It would certainly be fitting to document their development for readers outside the community.
Does Rust have anything like Python generators or coroutines? I don't even know if it's possible in a strongly typed language, but it's a killer feature for me.
It has 'iterators' which are like Python generator objects (and are used in for-loops, just like Python's), but it doesn't have generator comprehensions or generator functions.
It's definitely possible in a strongly, statically-typed language (I believe C#'s async/await is an example), but Rust doesn't do it... yet: https://github.com/rust-lang/rfcs/issues/388
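For a concrete feel, here's roughly what a Python generator expression looks like as a chain of Rust's lazy iterator adaptors (plain standard-library API, nothing hypothetical except the toy numbers):

fn main() {
    // Python: squares = (x * x for x in range(10) if x % 2 == 0)
    let squares = (0..10).filter(|x| x % 2 == 0).map(|x| x * x);
    // Nothing executes until the loop pulls values, much like
    // iterating a Python generator object.
    for s in squares {
        println!("{}", s);
    }
}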
It can be difficult to implement well on *nix systems. async/await works so well in C# because IOCP in Windows is completion-based, whereas I/O on current *nix OSes is readiness-based (epoll/kqueue).
I implemented a completion-based wrapper on top of libev and libaio in C but could never quite nail it.
It's worth mentioning that IOCP can be used on any fd in Windows, whereas the Linux epoll loop can only be used for sockets. There is a caveat, however: you can use the epoll loop if you pair eventfd with some effectively undocumented behavior of the aio subsystem that allows you to register an eventfd with your aio context.
There are drawbacks to this, though: you have to use O_DIRECT and aligned writes. In Windows you can use IOCP to do async buffered writes...
Thankfully, much smarter people work on Rust, and hopefully they'll manage to make a completion-based I/O system that is portable across operating systems.
I don't see why the IO model would be a fundamental issue here. Fundamentally, async/await is about automating inversion of control, which is a mechanical program transformation.
I don't see that as a problem either. There are two separate issues:
a) Being able to await the results of Future/Task objects instead of attaching callbacks to them. This is done by source-code rewriting and has no dependency on any I/O system.
b) Having I/O libraries that return future objects. You can do that with IOCP as well as with Unix poll-based architectures. It might be a little easier with IOCP, but it's no deal-breaker. I already did an implementation for epoll once: you simply spin up an event loop in any thread; any I/O read/write returns a future object and posts the work to the event loop, and as soon as the work completes, the event loop completes the future.
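To make the shape of (b) concrete, here is a toy sketch in current Rust, with channels standing in for both the event loop's queue and the future: no real epoll, just the control flow (all names invented for the example).

use std::sync::mpsc::{channel, Sender};
use std::thread;

// A "future" is just the receiving end of a one-shot channel that the
// event-loop thread completes when the (pretend) I/O finishes.
type Work = (Vec<u8>, Sender<usize>);

fn spawn_event_loop() -> Sender<Work> {
    let (tx, rx) = channel::<Work>();
    thread::spawn(move || {
        for (buf, done) in rx {
            // Real code would hand `buf` to epoll-driven I/O here.
            let _ = done.send(buf.len()); // complete the future
        }
    });
    tx
}

fn main() {
    let event_loop = spawn_event_loop();
    let (done, future) = channel();
    event_loop.send((b"hello".to_vec(), done)).unwrap();
    // "Awaiting" degenerates to a blocking recv in this toy version.
    println!("wrote {} bytes", future.recv().unwrap());
}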
The parents are saying that a transformation target that supports IOCP is easier to support than one that supports epoll.
Think about this the way that you might think about GCC backends -- some architectures make certain things easier than others.
Or how an abstraction layer like MonoGame papers over the fundamental differences between OpenGL and DirectX -- some abstractions perform better with one or the other subsystem.
Unfortunately, most of what you want to do with async/await is I/O-based, and the I/O model matters a lot if you want it to be scalable. Without kernel support, trying to build a completion-based API is not very performant; trying to emulate it in a library proved painful, especially when doing I/O on files.
I'm not sure that's true. The same kind of mechanical transformation that gives await/async also leads to Python-style generator functions, and I wind up using them all the time for all kinds of iteration tasks, not just I/O (or at least, not just async I/O).
Certainly, code rewriting to support continuations is a factor in both C#-style await and generator functions, but they are not the same thing.
Much of the motivation behind async I/O is to reduce the number of active threads blocked on I/O. Blocked threads mean stack space consumed, kernel transitions, and potentially CPU contention if too many threads become runnable at once and then start fighting over mutexes or cache.
In this way of looking at the world, you don't want to block for any I/O. That's what the GP is getting at.
I would be surprised if it does anytime soon.
async/await would conflict quite a bit with the whole lifetime system.
After an await, your program's state might be completely different. Things on the stack have to be saved for the duration of the await and restored afterwards, yet destructors may not run in between, and you even have to account for the possibility that the await never finishes.
I think the original Rust didn't even need async/await, because it used "lightweight" tasks for that purpose. But those have since been removed.
You'll have to work hard to create a cycle in the first place; regular ownership through a value or a Box<T> won't allow it. You can create cycles (that need to be broken manually) with the reference-counted smart pointer Rc<T>, or you can use its weak pointers to break them automatically.
Oh, and data structures that need cycles (libstd's doubly linked list, for example) can use raw pointers to manage them manually, encapsulated in the implementation.
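A small sketch of the Rc<T>/Weak<T> case, with a made-up parent/child pair (current std API):

use std::cell::RefCell;
use std::rc::{Rc, Weak};

struct Parent { child: RefCell<Option<Rc<Child>>> }
struct Child { parent: Weak<Parent> } // weak: doesn't keep Parent alive

fn main() {
    let parent = Rc::new(Parent { child: RefCell::new(None) });
    let child = Rc::new(Child { parent: Rc::downgrade(&parent) });
    *parent.child.borrow_mut() = Some(child);
    // If Child held an Rc<Parent> instead, this pair would leak;
    // with Weak, dropping `parent` at the end of main frees both.
}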
Rust has roughly the feature set and memory model of C++. The difference is that in Rust, the safety issues C++ sweeps under the rug are addressed. As I point out occasionally, the three big problems in C/C++ are "who owns it", "how big is it", and "who locks it". C fails to deal with any of those issues. C++ sometimes deals with "how big is it". Rust deals with all of them.
In Rust you have to deal with most of the memory management headaches you have in C++, but they're detected at compile time, not after the product ships.
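A minimal example of the "who owns it" question being settled by the compiler rather than in a shipped product:

fn main() {
    let v = vec![1, 2, 3];
    let w = v; // ownership of the heap buffer moves to `w`
    // println!("{:?}", v); // uncommenting this fails to compile:
    //                      // use of moved value: `v`
    println!("{:?}", w); // `w` is the sole owner; freed exactly once
}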
It's impossible to address these 100% in C++ without breaking backwards compatibility. "Modern C++" is significantly safer, but still isn't completely safe.
Very unlikely. I tried, a decade ago, to get the C++ standards committee interested in safety issues. They were fixated on new template features. I once suggested, around 2002, when there was much public worry about attacks, that failure to address the safety issues in C++ constituted material support of terrorism. That really upset some people on the C++ committee.