For those skipping to the comments based on the title alone: there isn't really much discussion of a trade-off here. It starts that way and then goes "but with my favorite language you can have both!" and spends 3/4 of the article telling you about it.
Most of the value in this article is the clickbait title yielding some interesting discussion from people on that question, outside of the actual article contents.
This article isn't bad, but the title is terrible and a bit clickbaity.
Ha, funny, because I thought: "Both, of course. Like, for example, Scheme." And while Racket is not a Scheme according to the maintainers, it certainly shows safety and power similar to the other languages in that family.
I would say more convenient than Scheme. I can't think of any Scheme implementations that come with as much stuff as a standard Racket installation (before adding in extra third party libraries). Gauche, maybe, or Guile.
The biggest difference is that in Racket, cons cells, and thus lists, are immutable (the Scheme #langs like r5rs use a different, incompatible type for their conses). There are other, more subtle changes, but that's the one most likely to bite people used to Scheme. Well, that and `if` taking exactly three arguments.
The literature on programming languages contains an abundance of informal claims on the relative expressive power of programming languages, but there is no framework for formalizing such statements nor for deriving interesting consequences. As a first step in this direction, we develop a formal notion of expressiveness and investigate its properties. To validate the theory, we analyze some widely held beliefs about the expressive power of several extensions of functional languages. Based on these results, we believe that our system correctly captures many of the informal ideas on expressiveness, and that it constitutes a foundation for further research in this direction.
in practice, turing complete doesn't necessarily mean powerful or equivalent. brainfuck and minecraft are turing complete, but no one is trying to write business software in them. ergonomics matter
If I were to interpret that "strictly more powerful" statement, I would take it to be about how Racket's macro system works and what information its syntax objects carry. Or interpret it to mean the different languages it provides, like Typed Racket for example.
On the other hand I found using multiple cores much easier in GNU Guile.
I agree the title is a bit clickbaity but I don't think it's justified to dismiss the entire article on that basis. The author makes some very good points and while I had heard of Racket, I wasn't aware of how its macro system differed from Lisp & Scheme. Moreover, discovering you can implement type systems as macros[0] alone made reading the blog post worth my while. But as always YMMV.
Haha, yeah it’s a little clickbaity. Less clickbaity might have been “Safety and power needn’t be a trade-off”.
I wrote this out of some frustrations I had from a conversation with a user on this site about macros, actually. This is why I spend so much time on Racket. I’m also a PL researcher and I work with Racket a bunch.
It’s not just Racket where the trade-off shouldn’t be necessary, but Racket is a good example because it’s like one big exercise in making some powerful language features safe.
The thesis that safety and power are incompatible seems mostly a straw man, because there are very few cases when a tradeoff between them may be necessary.
For example, adding APL-style array operations to a programming language greatly increases its power while simultaneously increasing its safety, by eliminating all the bugs that can occur from mishandling the indices (a.k.a. subscripts) that would otherwise be needed to access array elements.
There are a lot of other such examples where adding high-level operations increases the power of a programming language while also eliminating some classes of bugs.
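As a small illustration of the parent's point (in Rust rather than APL; the function name is my own), whole-array iterator operations remove the index arithmetic where off-by-one bugs live:

```rust
// Element-wise sum of two slices without any explicit indexing.
// zip() pairs elements and stops at the shorter input, so there is
// no subscript to get wrong and no out-of-bounds access possible.
fn add_arrays(a: &[i32], b: &[i32]) -> Vec<i32> {
    a.iter().zip(b.iter()).map(|(x, y)| x + y).collect()
}

fn main() {
    let a = [1, 2, 3];
    let b = [10, 20, 30];
    assert_eq!(add_arrays(&a, &b), vec![11, 22, 33]);
}
```

The higher-level operation is both more powerful (one expression instead of a loop) and safer (a whole class of index bugs cannot be expressed).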
The only kind of "power" that can conflict with safety is the necessity of implementing various kinds of low-level operations whose functionality is not provided by any high-level operation of that programming language, e.g. some that may need to access various data types as bit strings or to use directly certain features of the CPU that are normally ignored by that programming language.
In this case it is not actually "power" that is in conflict with safety, but the desire for portability, i.e. independence from the architecture of the target CPU, which prevents adding to the programming language features corresponding to everything that a CPU can do.
For most features that are unsafe in practice, it might have been possible to define safer higher-level methods to use them, but the obstacle has been the lack of standardization, i.e. the fact that they may be implemented differently in every computing system, so it may be hard to design an equivalent feature included in a general-purpose programming language that could be mapped in an efficient way to what a certain hardware does.
Much of the lack of safety in some modern programming languages is caused not by too much power, but by too little. They have inherited the limitations of the minicomputers and microcomputers of 50 years ago, whose restricted programming languages abandoned many of the powerful and safe features of the older languages from 60 to 70 years ago. In those older languages it was inconceivable to have a program where a computation error, like integer overflow, would not generate an exception, or where memory could be accessed through indices or pointers into unintended areas, also without generating an exception.
The lack of runtime error checks in a program is a lack of power, not a sign of power (while I frequently use languages like C or C++, I always compile them the right way, enabling all runtime checks).
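A hedged sketch of what "enabling the checks" buys you, in Rust: integer overflow is an interceptable event rather than a silent wraparound. Plain `+` panics on overflow in debug builds, and `checked_add` makes the check explicit in every build mode:

```rust
fn main() {
    // checked_add returns None on overflow instead of silently
    // wrapping, in both debug and release builds.
    assert_eq!(1_i32.checked_add(2), Some(3));
    assert_eq!(i32::MAX.checked_add(1), None);
    println!("overflow was intercepted, not ignored");
}
```

The point being: a diagnosed error at the site of the overflow is strictly more informative than a wrong number propagating through the rest of the computation.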
Many people care more about shipping (perhaps buggy) software rather than the theoretics of programming languages. I used to despise that, these days I think I understand it.
Still, I'm sad the industry is stuck with languages such as Python and Java, rather than say Racket and Idris...
Regardless of the safety constructs or lack thereof in a programming language/environment, our job as software engineers is to construct programs that perform as per their dataflow requirements. That is the only kind of safety I'm concerned with.
That can be accomplished for most software projects in most programming environments and their languages, to a greater or lesser extent within their limitations, given a proper amount of hard graft to choose the proper constructs and then brutally test them.
Power, on the other hand, must be dependent on what is required to wring safety out of the limitations, weaknesses, and pitfalls of the chosen implementation tooling. Power can include speed of development, speed of execution, amount of concurrency, fault tolerance, and so on. Regardless, power must be built upon safe software constructs, safe in the sense that the resulting system performs as intended.
A civil engineer will know that a specific kind of bridge constructed out of certain materials using certain techniques will be able to withstand certain events like an earthquake of a certain magnitude and will be able to perform at a certain level, such as tons of traffic at a time or somesuch. As well, they will have a pretty good idea of how long it will take to build it. Such is the maturity (and relative simplicity!) of bridge building.
We have no such framework in our industry. Focusing on programming languages shows the immaturity of our industry, which is engaged in the most difficult endeavor we have yet undertaken as a technological civilization. The proof of that statement is that all our technologies depend on software systems.
It doesn’t help that the customer asks us for a bridge for a bicyclist, uses it as a highway for semi trucks, and refuses to perform scheduled maintenance on it while attackers shoot rockets at the foundation.
There are many parts of our discipline that are immature, in addition to the implementation bits.
> For one thing, our specifications, themselves, are sorely lacking in specifications!
you cannot ask a customer of a building to specify what strength concrete needs to be used.
They specify the color of their kitchen, how much sunlight they want, and how many rooms. And as a demanding customer, it makes sense to want to change their minds.
I don't see the lack of specification, nor the lack of adherence to it, as a problem. You, as an engineer, are responsible for solving these problems, including telling them they've forgotten to think about XYZ. You're not a human-language-to-code translator.
That would indeed be the way to do it if they would pay for all of that; often it also has to be cheap and fast. It happens with buildings and bridges too, but somewhat less, as broken software feels less urgent (it normally doesn't cause immediate death).
> you cannot ask a customer of a building to specify what strength concrete needs to be used.
Indirectly, you can. They'll specify how many floors there needs to be, and combined with automatically-specified things like gravity and local weather phenomena, you can derive what building materials you can safely use.
If only I could upvote this comment more. The software engineering industry has a loooong way to go, and we'd be much better served if more people actually got it and thought like you instead of wasting time on things like language wars. That's not to say we shouldn't continually strive for better languages, just as we continually strive for better materials to build bridges with, but proselytizing for a language is a complete waste of time when half of the so-called engineers in this so-called engineering discipline do not even realize that engineering is not mere programming. It'd be analogous to the civil engineering discipline spending thousands of hours debating which materials are better before even figuring out how to actually construct a bridge, out of any material consistently and within boundary conditions.
The fact that we call software engineering "engineering" is still a complete joke in the majority of cases (there are exceptions of course) at this point in time.
Well, to a point. It feels like some engineers are proposing to start building bridges out of wood, and others want to keep building them out of spoons and toothpaste, just like their pappy did.
I have a hard time understanding why programmers so much want to be called software engineers. Lawyers don't want to be called contract engineers. Writers don't want to be called text engineers. Film directors don't want to be called movie engineers. The industry has a long way to go for sure, but maybe it's not to become better engineers but to better understand that they aren't engineers after all.
Why do civil engineers want to be called engineers? They don't work with engines.
Computing is full of metaphors. There are also software architects, who don't design buildings, but "software designer" is not the term the industry settled on. Nobody would care if it did, it just didn't.
Some software is engineered but not all of it by a long shot, IMO. I'm not an engineer, I'm a programmer.
Engineering is all about certifications and contracts and laws and codes of conduct. Engineers don't build the bridges themselves; there are many layers of delegation down to the guy who actually operates the crane or puts in the rivets.
Most programming done today is craftsmanship not engineering. We have the freedom to implement the same thing in multiple wildly different ways, there are famous "masters" with lots of opinions, on the job programming is still taught in an apprenticeship kind of way, there are consultancies that act effectively like studios or workshops, I type my own code with my own hands, etc. Software is basically still in the artisan era.
I for one much prefer the current state and would probably do something else if software was fully professionalized. I would hate to just design some high level spec out of well-known reusable parts, with massive legal and contractual requirements, to be implemented by massive teams of contractors.
This perspective is pretty pervasive from what I have seen. However, I think the analogy should be taken to its logical conclusion. There are many, many examples of long lived structures built by carpenters and other craftspeople (in England there are residential dwellings dating back 1,000 years). The materials and maintenance have proved themselves and show that it does not take the large legal framework of engineering to design a structure fit for purpose. However, craftspeople and apprentice style training are not acceptable for the design and certification needed for modern skyscrapers, large format factories, bridges, or transportation infrastructure.
I am more than willing to use, support, and even participate in software that is not critical and think a more-bright line separation should exist to delimit the arena for which craft scale software is appropriate. The other side of that is I am willing to acknowledge the existence of arenas not suited to less than formal and legal requirements for the design and ‘construction’ of software artifacts.
Controversially, I think societally we use software in too integrated and too many areas that are better left sequestered and siloed. There really is no reason that a general computing device should be used for both untrusted internet connectivity and storing and accessing banking or medical information. The ubiquity of protected purpose built hardware should have been a goal, with a more general open computing hardware environment used by informed choice.
I'm getting pretty tired of hearing about "safe" rust. And I love Rust. The language's promoters are pushing a vision of a world that's just not true. And while memory bugs are an important class of issues, logic bugs are even larger.
No argument there. Rust is the first language that checks all the boxes for me — memory safe, compiled to machine code, open source, widely used, ergonomic — but if you told me you’ve been doing all that in Haskell, righteous! Rust didn’t invent any of those.
Sadly, Ada never got the kind of love Rust does. People say it's hard to write code in, but I don't see how it's worse than Rust. Rust just came at a time when the requirements and stakes for writing highly performant and correct software became high enough that prioritizing them in the language design became valued. Ada was a little too early for its time.
I think one of the reasons why someone might think it's hard to write code in Ada is its Pascal-style syntax and verbosity, which might give the false impression that it's an "outdated legacy language". I've written a few hobbyist projects in Ada (SPARK to be specific) during my spare time and now I greatly enjoy the language, but my very first impression of it wasn't enthusiastic (coming from a C background). I don't fully understand why Ada lacks popularity compared to Rust, outside of Rust having a more vibrant community.
Nim (https://nim-lang.org also mentioned elsethread) was largely inspired by trying to make a more concise Ada with Python-like lexical sensibilities/ergonomics and support for first class meta-programming. Compilers for Nim are sadly not as mature as things like say the gnat Ada compiler.
I cut my teeth in the industry on the test side at MS; it was a bit adversarial with the dev side. And there was a third side to get us both on documentation.
It was frustrating, but I think we made Good Stuff (NT4, Win2k).
Yes, this was another boring shill article for Haskell/Rust. The hard part of software engineering is correctly understanding and modeling the domain. Most other issues are trivially resolved by good practices. This only solves issues for codebases that have had a train run on them by interns and junior devs, which is great if you're in that situation, but is unbelievably frustrating for people who don't have that issue.
Programming languages should be safe by default, except in clearly delimited sections of code (as small as a single expression) where the programmer requests less safety.
Safety is the property that erroneous situations are intercepted and diagnosed, rather than running off with incorrect results or behaving unpredictably. It can be a continuum based on the number and kinds of situations that are treated safely.
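Rust is one existing language with roughly this shape: safe by default, with anything that bypasses the checks confined to an explicitly delimited `unsafe` block. A minimal sketch (the raw-pointer use here is deliberately trivial):

```rust
fn main() {
    let values = [10, 20, 30];

    // Safe by default: indexing is bounds-checked at runtime.
    let first = values[0];

    // Less safety must be requested in a clearly delimited section;
    // inside it, the programmer vouches that the offset is in bounds.
    let ptr = values.as_ptr();
    let second = unsafe { *ptr.add(1) };

    assert_eq!(first + second, 30);
}
```

The delimitation matters for auditing: when something misbehaves, the suspect surface is the handful of `unsafe` blocks, not the whole program.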
I bet we disagree on what unsafe means. Many here think that shipping a non-final/open/inheritable class is unsafe. I think doing so stunts your code base’s and ecosystem’s evolution.
And when push comes to shove and you have to address an issue in a 3rd party library, and you are unable to override a single thing that would fix your problem because everything is closed by default, then you will tell your manager it can’t be done, and serious money or opportunity will be wasted and customers will suffer for longer, because you are coding in a really safe language. Is this progress for humanity? I think not!
That's a slightly different issue. I don't think people consider visibility attributes under the rubric of safety, usually, except in sandboxed contexts.
That's more a problem of build tooling. It should be possible to modify visibility/finality levels in libraries as you download them. Gradle can do something like this with some work, but someone should make a plugin to automate it. That way the upstream can set things private/final/sealed as correct for their API guarantees and design, and downstream can apply hacks to open things up if that's easier than forking the source code, and there's a single place in the build system that collects all the places where there's a mismatch.
I've made it explicit what I believe safe means. I could discuss it in more detail, but I won't be changing my definition.
I would never do such a barbaric thing as adding a non-inheritable class to an object system. Or, if I were convinced to, it would be purely for performance, not as a way of restricting the programmer; i.e. for speeding up things due to the compiler knowing that the class is final.
I disagree. Programming languages should teach you how to program safely.
It's the same with construction, here's your brick. You can either use cement or not. Both will build a tower but one will topple and one will not.
Programming is the same. Here's your compiler, program your tower without garbage collection and watch it topple.
If you don't wish to learn how to program safely then choose a language that does it for you. But don't moan because you refuse to learn how to program safely.
Most languages are derived from C anyway, so you're coding in C, just in the way the developer wants you to code in C.
> > Programming languages should be safe by default
> I disagree. Programming languages should teach you how to program safely.
One of the best things about safe-by-default languages is that they do teach you to program safely! If your program fails to compile you then have to go and find out why, and that's a fantastic learning opportunity.
In languages where there are no guard rails, it's much more difficult to work out what you don't know that you don't know.
> In languages where there are no guard rails, it's much more difficult to work out what you don't know that you don't know.
I don't disagree. But the above is what's not taught anymore. It's what computer science used to teach you; a subject that has now been watered down.
It's great to have languages that have guard rails. And it sure helps you, shaving time off your project/life.
However, if you're using syntax you have no idea about that can cause system damage, then the fault is on the programmer, not the language. It exists to be used, but use it wrong and it becomes unsafe.
I know i'm not right, but i'm not wrong either. It's a double edged sword.
> It's great to have languages that have guard rails. And it sure helps you, shaving time off your project/life.
It also means that beginners can reasonably use the language for production projects. Including those who have never had the opportunity to study computer science.
To me, it reflects poorly on a language if pretty much the only reasonable way to learn it such that you can use it safely is via computer science course (or another similarly structured environment - perhaps a work place with senior mentors).
Of commonly used languages, that's pretty much true only of C and C++.
In electronics, we have parts which need their SOA (safe operating area) respected by the design, or they will catastrophically fail (melt, vaporize, explode, catch fire). We also have safe parts that have features like thermal shutdown, or output current limiting.
That's kind of similar to safe versus unsafe language features.
You still have to respect the operating parameters, because if the part goes into thermal shutdown or current limiting, then your device doesn't perform properly.
When the device doesn't work properly, even though there is no smoke coming out of it, it can still take time to track down the problem.
When using a safe language, we similarly don't necessarily want the user to be exposed to the raw safety mechanism going off, which could have consequences like terminating the image, with the loss of unsaved data, and interrupted workflows.
Typically, in modern safe languages we have some decent choices for handling and recovering from situations like that, so we don't have to be cluttering every operation with conditionals that check for exceptional situations.
Let's compare to building (this is in the UK, but I think most countries have similar rules).
If someone asks you to build a house, you need to make a plan, and get it signed off. There are lots of rules on what you can, and can't do.
If you build a house without cement, by just piling bricks up, you will get fined and be told to take it down and build it again. If that house you badly built falls down and kills someone, you go to jail.
On the software front - there may finally be some progress on prosecuting the people responsible for the Horizon system, whose faults directly led to the Post Office scandal? https://www.bbc.co.uk/news/articles/cvgr19lwgv0o
You are right that there are terrible issues in building (Grenfell being an obvious, terrible example).
But I'd say, in general, building is in a much better state. There are fairly standard rules, which are mostly followed, and most buildings stay standing, year after year, without the need for constant maintenance. Software, on the other hand, is still really pre-Great-Fire-of-London (at best) in terms of quality and standards.
You can't program safely in an unsafe language. You can program correctly so nothing bad happens, but it's still unsafe code. Just like a tightrope walk between two buildings without a safety net is unsafe even if you're very good at it and never fall down.
A programming language that has multiple safety modes and is safe by default could be used as a tool for teaching a range of concepts.
Of course you can. Rust is an unsafe language, but people tend to use the safe subset of it. Are you telling me that is impossible, and that they are writing unsafe code?
I'm telling you you've not read the entire thread, which I started with:
"Programming languages should be safe by default, except in clearly delimited sections of code (as small as a single expression) where the programmer requests less safety."
In terms of the word semantics of the situation, people writing in a safe subset of a language are writing safe code. Once they use the unsafe bits, then they aren't.
If nothing bad happens when something unsafe happens then code is safe. If you don't use the feature set that declares the code as unsafe then it's safe.
A knife is unsafe, but when a chef uses it correctly it's safe. As is code. It's only when you use syntax incorrectly does it become unsafe.
A tightrope is still unsafe and now you're relying on the safety net to catch your fall. What declares the safety of the safety net?
However, both standards should apply. But programming is now more rapid than a dedicated skill, with prototype code landing in production.
We tend to program rapidly, ignoring the correct way to program, and now rely on the language to do the checkup. That itself is still just as unsafe, and it passes the buck for lousy code onto the language itself.
Safety doesn't come from the language. It can, mainly it comes from the code. Use syntax incorrectly and you have unsafe code in any language.
We need to accept that humans are not capable of writing safe code in unsafe languages. Brilliant, dedicated, diligent programmers still write busted C code all the time. If the OpenBSD team still makes critical mistakes today, what hope do the rest of us have?
Almost all code in an unsafe language will misbehave on bad inputs.
The program is put together in a way such that it avoids stepping on those cases, for instance by validating and diagnosing inputs so as to avoid executing that code.
You would think that a program which has been deemed "correct" cannot misuse any of its own code with undefined results. Alas, that is not the case. Such a program is called a "robust" program. A correct program is one that meets all of its functional requirements; it does everything right with correct inputs. Whether robustness is required for correctness depends entirely on how the program is specified: does the specification explicitly call for robustness (grace on bad inputs) or not?
For instance, the memcpy() function in your C library is a correct program (hopefully). When you use it correctly, it does its specified job. Most implementations of it are not robust, though. Most memcpy functions cheerfully accept overlapping objects and do something, and are easily crashed by bad pointers.
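For contrast, a safe language's equivalent operation is typically robust by construction. A sketch in Rust (not the commenter's example): the safe copy API checks its length precondition and panics with a diagnostic rather than scribbling past the end of the destination:

```rust
use std::panic;

fn main() {
    let src = [1u8, 2, 3, 4];
    let mut dst = [0u8; 4];

    // Safe, memcpy-like copy: the lengths are checked at runtime.
    dst.copy_from_slice(&src);
    assert_eq!(dst, src);

    // A length mismatch is intercepted and diagnosed (a panic),
    // not silently turned into an out-of-bounds write.
    let result = panic::catch_unwind(|| {
        let mut short = [0u8; 2];
        short.copy_from_slice(&src); // 2 vs 4: panics here
    });
    assert!(result.is_err());
}
```

The misuse still has to be dealt with, but it surfaces as a defined, catchable event instead of an exploitable surface.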
Robustness is related to security. It is often in the areas of the software that lack robustness that attackers find exploitable surfaces!
Safe languages contribute robustness to programs. It's as if the requirements were added to the spec of every program. The programmers working with these languages, and their users, get accustomed to this and then expect robustness out of programs as a tacit requirement.
Note that an incorrect program can be robust! A program that due to some coding mistake or misunderstanding calculates the wrong thing for correct inputs (not meeting a functional requirement) can still be impervious to misuse.
So in a nutshell, safety is more concerned with robustness than correctness. That's why we can talk about languages and libraries being safe, in the absence of a program being specified. Those tools are still safe when they are used to put together a program that doesn't meet its functional requirements.
The safety mechanisms in safe languages may not be enough for some desirable level of robustness. Users don't want programs to stop with a diagnostic message because they gave bad inputs. Modern safe languages have tools like exception handling for this.
Some wrongheaded programmers think that just because the default robustness provided by safe languages is not always good enough for many kinds of professional or industrial applications, those robustness features are therefore not worth it. ("If I'm going to have to check for bad inputs and take special action anyway, what's the use of the language doing the checking?") The safe language can make it much easier to do that, though. For instance, we can often let an entire module execute freely and just catch an exception in one place. Say, inside a CAD program, some calculation over a model could divide by zero. We just catch that somewhere, put up a dialog (or log something in a log window), and the CAD program keeps running.
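That "one recovery point" pattern can be sketched in Rust, where integer division by zero is a defined panic rather than undefined behavior (the function name is my own stand-in for "some calculation over a model"):

```rust
use std::panic;

// A stand-in for a calculation deep inside a module that may fail:
// integer division by zero panics in Rust.
fn risky_calculation(model_value: i32, divisor: i32) -> i32 {
    model_value / divisor
}

fn main() {
    // One recovery point for the whole computation, instead of a
    // conditional check wrapped around every single operation.
    match panic::catch_unwind(|| risky_calculation(100, 0)) {
        Ok(v) => println!("result: {v}"),
        Err(_) => println!("calculation failed; log it and keep the app running"),
    }
}
```

(In idiomatic Rust you would usually thread a `Result` out with `?` instead; `catch_unwind` just makes the "catch it in one place" structure of the comment concrete.)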
Isn't a language being "powerful" mostly about having comprehensive libraries ("import antigravity" and such)?
And then of course "safe" means you can't express things you shouldn't want to express, like buffer overflows or arithmetic overflows or SQL injections, or these days prompt injections.
Ime, language “power” is usually used to describe the expressivity and flexibility of the language itself.
As an extreme example, Starlark isn’t a powerful language, it isn’t even Turing complete afaik. This is advertised as a positive attribute for a build description language (I don’t have an opinion, never used it myself).
Conversely, C would classify as a powerful language even considering how barebones it is from a “feature” perspective. Being able to poke any memory however you like is a pretty powerful power.
Both. It should be an inverted pyramid, starting at the likes of Lua, condensing to something like C#, then the training wheels come off for deep low-level access and stranger high-level concepts.
All within one engine, your elders giving and taking your freedom.
    def foo():
        """use strict; use no_func_calls; use idempotent; use no_loops; ..."""
        ...checked python here...
...or being able to make assertions within a body of a for loop or something.
I feel that programming languages need to evolve to deal with a hostile dependency ecosystem, eg:
    import leftpad as lp with {
        network: false,
        filesystem: false,
        environment: false,
        max_cycles: 100_000,
        max_mem: 10_000,
        ...etc...
    }
...this is purely arbitrary and hypothetical, of course, but programs become safer when made from safer, more-structured components.
We're long past the "single programmer write all the code" phase, and in to the spiral of exponential code+complexity, but no programming languages give you the ability to wrestle with that complexity.
for me personally, the trade off between me writing in memory safe rust vs comparably "powerful" C++ is coding speed. I think C++ is a mess and have been using rust for my personal projects for the past 2 years but I still feel like I code faster in C++ because I don't lose time wrestling with the borrow checker. Rust has some fantastic design decisions and features (despite it being a relatively immature language) so I've always thought it was weird how people discuss the trade-off as safety vs powerfulness
I wonder how it will pan out in aggregate when you amortize the productivity of sprinting to produce something vs crawling to fix the problems. In C++, although you are not wrestling with a borrow checker, you do find yourself going fast down deep dark tunnels that take ages to get out of. In Rust you might move slower moment to moment but faster on average; it would take time to gather that data, though.
For example, I once had to help someone debug some C++ memory leaks in a vast codebase, in jobs that took hours for each valgrind run. It took two of us two weeks to eventually find and fix the problem, and the fix was maybe (from memory) 10 lines tops. Not exactly productive. But the person writing the couple of hundred lines that created the problem probably felt quite productive while doing so.
I have met a style of programmer who thinks, on some level, that generally reusable code is unsafe -- like the `sum` function (stupid reductive example) is not okay unless it has a bunch of business logic restricting the range of the numbers and takes a logger just in case, and the function name is eight words describing exactly what's happening, and the base and the mantissa have different types
and you end up with no general code anywhere because reusability is not even an afterthought, it is counter to the project
if you've never met this person I sound insane but I swear they exist
As mentioned by others, the article does not really tackle the question.
But it is an interesting question, and I think it would be great to have a language that you could adjust the safety without major rewrites. I.e. various degrees of strict modes.
In the age of LLMs, I wonder if anyone has looked into some sort of sliding-scale transformation of code from safer to easier-to-write (although I am not sure there are languages expressive enough for that).
You would likely be interested in Jamie Brandon's work on the Zest language [1] which has planned "lax", "strict", and "kernel" modes for varying levels of performance/safety
C# is the first thing to come to mind. If you so choose you can more or less inline C and trample all over memory, or you can write very strict, statically typed, safe code.
C# has `unsafe`, which is extremely powerful. See ravens repo for heavy usage of that.
Otherwise Python is also a good example of that. First start out in Python. Not fast enough? Bring in Cython, which compiles down to C. If that's not enough, drop to raw C.
I mostly use python, but switching from python to C is IMO too much work. What would be great if within one language, there was an access to much stricter/faster language. I.e. imagine writing a python function that is strictly typed, and does not use any of the python-isms with __methods that would be more verifiable and faster. I guess cython has a bit of that approach, but it's not that compatible with regular python.
Is it really programming if you can't do pointer arithmetic and poke memory addresses? I suppose so, but you're not really programming for a machine; you're playing in some language developer's bubble-wrapped sandbox.
In my experience it's uncommon to find yourself thinking "I wish this language was less powerful" - far more common to find yourself thinking "I wish this language was less safe."
Less safe more powerful gives you more options.
You can always make things less safe - it’s hard to make them more powerful.
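Rust makes that asymmetry concrete: you can opt out of safety locally with an `unsafe` block, but there's no local annotation you can add to an unsafe language to get borrow checking back. A minimal sketch:

```rust
fn main() {
    let xs = [10u8, 20, 30];

    // Opting out of safety is a local, greppable decision:
    let second = unsafe { *xs.as_ptr().add(1) };
    assert_eq!(second, 20);

    // There is no equivalent local opt-in that would add
    // borrow checking or lifetime tracking to an unsafe language.
}
```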
> Rust doesn’t have null like in Java, but you know when you get a pointer, you can safely dereference it and you don’t have to do all the null checking that you have to do in Java.
thread 'main' panicked at 'called `Option::unwrap()` on a `None` value'
When you call `unwrap` you know that you're opting in to unhandled panics at that specific point. At no time do you have to assume something is null—the type system prohibits consuming a value of type Option<T> as if it were a naked T.
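A minimal sketch of the difference (`find_user` is a made-up function for illustration):

```rust
// Hypothetical lookup: absence is encoded in the type, not as null.
fn find_user(id: u32) -> Option<&'static str> {
    if id == 1 { Some("alice") } else { None }
}

fn main() {
    // The compiler forces the None case to be handled...
    match find_user(2) {
        Some(name) => println!("found {name}"),
        None => println!("no such user"),
    }

    // ...unless you opt in to a panic at this exact line:
    let name = find_user(1).unwrap();
    assert_eq!(name, "alice");
}
```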
grep for .unwrap() and you now have a handy list of every (potential) "null ptr" deref in your Rust codebase; try doing the same in Java. You'd need to grep for ".", and I'm sure that'll be a nice short list.
But the great thing about Rust is that the panic traces back to the exact place where I thought something couldn't be None, but it was. In Java, I frequently found it mysterious why some variable or parameter was unexpectedly null.
Kotlin fixes that, fwiw. Nullability is checked and cast away at the earliest possible time (modulo some ergonomic constraints like a.b.c, but modern JVMs give better errors for cases like that).
This article is mostly about macro systems. Those are usually troublesome. But they are independent of run-time safety issues.
Macros tend to be hard to read. LISP macros and C++ templates are the usual bad examples. As the article points out, Rust procedural macros are not all that great. They do, though, power Rust's "derive" facility and the serialize/deserialize functions. Those have a simple user interface which hides the internal complexity.
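For example, a sketch using only std derives (serde's `Serialize`/`Deserialize` work the same way, just via a third-party procedural macro):

```rust
// One attribute line; the procedural macro generates the trait impls
// at compile time, and the user never reads its internals.
#[derive(Debug, Clone, PartialEq)]
struct Point {
    x: i32,
    y: i32,
}

fn main() {
    let a = Point { x: 1, y: 2 };
    let b = a.clone();
    assert_eq!(a, b);
    assert_eq!(format!("{a:?}"), "Point { x: 1, y: 2 }");
}
```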
I know, based on your very informative and large post history, that your point is surely more nuanced. But this just reads as any macros that are not Rust macros are wrong, even when they are very similar.
I just do not understand the implied difference (and I have seen comments similar to this sentiment more than just this once). Rust has macros, both of the procedural and 'syntax-case' style. Semantically, AFAIK, they are true macros like in Racket or any other language with full meta-syntactic programming options. They are not better typed nor more resilient to errors, nor are they used less pervasively than in other languages with macros. Why is it that Rust gets a pass, when other languages receive the 'macros are bad' treatment?
Alright, I'll assume you know better; what should we use? Preferably on a variety of OSs; ex. I imagine you can write Windows or Haiku drivers in C++, but I'm only aware of C being an option in any BSD, and Linux support for anything but C is spotty (obviously Rust is making headway, but last I heard you could only write certain kinds of drivers using it as the underlying support is built out).
Oh nice - I'd heard they were using it internally but I didn't know they were working to make it an (official?) option for 3rd parties. I'll also say that Windows has an advantage in the form of a stable driver ABI, which AIUI should make it easier for people to write drivers in arbitrary languages.
Whatever else your platform supports. Literally anything else would be better, other than raw assembly.
Usually there are some options. Rust is supported (to a degree) on both Linux and Windows. Both macOS and Windows support C++, which if used correctly is at least safer than C.
If your platform is supported by LLVM, Zig is worth serious consideration. It is able to target any of the operating systems you mention, from any of the operating systems you mention. Linking to the syscall interface / libc / C libraries is straightforward, exporting functions which respect the local ABI is quite simple.
It's true that, as a pre-1.0 language, choosing Zig means taking on the implementation burden of changing code to keep up with language changes. But it's already being used in production in a surprising number of places.
Wanting to wait until 1.0 to get started is a respectable choice. I prefer to get a head start, compile against master every few weeks, and change what needs changing.
At some point it becomes a bit ridiculous to be pre 1.0 for nearly a decade. It is symptomatic of a level of complexity and confusion about the core language use cases* that is a nonstarter for me.
* In other words, I think it's trying to do way too much and would have benefitted from aggressive scope reduction years ago.
I've wanted to try writing a Linux driver in Nim for fun.
I've ported and used Nim in a couple of RTOSes and it's pretty straightforward. Of course there wouldn't be official support or anything, but it wouldn't be too hard. Mapping pointers to Nim's RC refs would be a bit tricky, but doable.
Couldn't that work? Upload by chunks via static buffer (which I kinda thought is how most web servers actually do uploads?), save to a file, then likewise load+process+store in 1k chunks or whatever. Depending on what processing you're doing and the workload characteristics (most obviously, number of clients) it might be painful, but I think it could work fine with a fixed memory footprint?
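A sketch of such a fixed-footprint loop in Rust (the function name and the 1 KiB chunk size are my own choices, not from the thread):

```rust
use std::io::{Read, Write};

// Copies from any reader to any writer through a fixed 1 KiB buffer,
// so memory use stays constant regardless of payload size.
fn copy_chunked<R: Read, W: Write>(mut src: R, mut dst: W) -> std::io::Result<u64> {
    let mut buf = [0u8; 1024];
    let mut total = 0u64;
    loop {
        let n = src.read(&mut buf)?;
        if n == 0 {
            return Ok(total); // EOF
        }
        dst.write_all(&buf[..n])?;
        total += n as u64;
    }
}

fn main() -> std::io::Result<()> {
    let data = vec![7u8; 5000]; // larger than one chunk
    let mut out = Vec::new();
    let copied = copy_chunked(&data[..], &mut out)?;
    assert_eq!(copied, 5000);
    assert_eq!(out, data);
    Ok(())
}
```

Whether this is painful in practice mostly comes down to the processing step: anything that needs the whole payload at once (say, a parse of the full document) forces you back to spilling to disk between passes.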
If you don't want to be banned, you're welcome to email hn@ycombinator.com and give us reason to believe that you'll follow the rules in the future. They're here: https://news.ycombinator.com/newsguidelines.html.
Strong disagree. Unless you're talking about using an LLM to help transform C into e.g. Rust. I don't see formal methods being unseated by probabilistic techniques (LLMs included) any time soon.
Funny, I would expect the opposite, or at least there would be a different class of problems with LLM assisted C. What guardrails would prevent this, and if there are no tradeoffs with the power of C, why haven't we put those guardrails on for human crafted code?
Generated C code can be safer than hand written C code if you can prove properties of the generator or carefully test and review the generator itself.
2 examples that come to mind are wuffs and wasm2c. Both produce C code that can be shown to be safe due to the properties of the code generator and its input language.
I am not sure LLM output is amenable to same kind of analysis, but you could conceive of an LLM trained on exclusively safe code.
I don't understand why people downvote this comment. It is perfectly reasonable, and we can safely assume AI-based analyzers will become much better in the future at analyzing large code bases and discovering unhandled states, deviations from specifications, contradictions, or specification gaps.
C is unsafe by design, especially once you think about things like your code calling and being called by other code which you don’t develop. An IDE can only help so much unless what we’re saying is that you’re using it to convert logic into an actually-safe language like Rust.
A lot of things in our daily lives are "unsafe by design"; just go to a kitchen and grab a knife; our common solution is to keep those things out of the hands of children until they have learned to safely use it.
Concerning an IDE: your vision apparently doesn't reach far enough; imagine AI tools which are able to understand both the requirements and the code, and are able to recognize logic deviations and other unmitigated risks. Btw, Rust is far from "actually safe"; it helps avoid some memory issues (not all), but there is still a wide range of possibilities to create safety issues, even in Rust.
C isn’t unsafe like a kitchen knife, it’s unsafe like a folding knife with a hinge which collapses while you’re using it. We’ve been saying programmers need to be better for decades but it’s not just skill but also the undefined behavior, unsafe APIs, and backwards compatibility which have made that much less successful than hoped.
Now, the techno-mysticism will eventually become true when we get the AGI you describe but that’s both far beyond what we have now and would immediately raise the question of whether it’d be effort better spent migrating to a less poorly-designed language.