
One big advantage is regression testing - which is why you don't just test it in a REPL and forget about it.


Totally agree here. Manually verifying a chunk of code once, for example early in the dev lifecycle, and then letting it sit inside a big system forever is really the "EE" way of doing things.

But the reality in software is that you have to dig deep and change stuff; after all, how often does an engineer dive in and change how a transistor works (laws of physics be damned!)?

I feel like overall software quality has been harmed by the emphasis on mathematical proof of code. The problem is that it's not currently possible (for a variety of reasons: underspecified languages, the state of the math and the research, and many others) to formally verify, end-to-end, a real-world system (e.g. Oracle Database, Salesforce.com CRM, etc.).

Over-promising on software proof and under-delivering has diverted attention away from relatively simple, "boring" practices that increase code quality.

While people in this thread are mocking TDD, the reality is that most code I have seen in a production setting is really bad: unit tests that aren't, test runs that take 4 hours, a complete lack of tests, opaque tests that are hard to debug and hard to fix. A lot of the TDD/agile/clean-code advice boils down to: make the code easier to read, write real unit tests for everything, and refactor away cruft as often as possible. These are TOOLS and judicious application matters, but in a production code scenario there is often no justification, other than ignorance (always fixable!), to disregard good advice.

To summarize the good advice:

- make code easy to read at a glance

- code should be self-documenting (refinement of the above)

- units should be testable independently (a small sketch follows after this list)

- tests should exist at higher levels, such as integration and end-to-end tests

Just like the 10 commandments, you don't have to believe in Agile or XP to agree with these basic things.
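
To make the "units should be testable independently" point concrete, here is a minimal sketch in OCaml (the function and its name are made up purely for illustration): a small pure function plus a self-contained test, with no database, network, or framework in sight.

    (* Hypothetical example: a tiny pure function and a self-contained test. *)
    let slugify s =
      String.map (fun c -> if c = ' ' then '-' else Char.lowercase_ascii c) s

    let () =
      (* Each assertion exercises the unit in isolation and runs in milliseconds. *)
      assert (slugify "Hello World" = "hello-world");
      assert (slugify "" = "");
      print_endline "slugify: all assertions passed"

Nothing fancy, but when every unit can be poked like this, the 4-hour test run problem largely disappears.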


I'm sorry, but I don't believe this point has any merit with regard to this discussion. Formal verification was seen as the future for large systems for some short time in the latter half of the 20th century, and since then no one has seriously suggested verifying programs of millions of lines of code. The point that is made here (quite sensibly) is that in many cases, by verifying small parts of your codebase (a few hundred lines, perhaps), you can rule out whole classes of bugs.

To give some useful real-life examples: in OCaml (or at least in Standard ML), if a program compiles one can know that it will never crash (i.e. escape the control of the runtime; it might still abort) unless the compiler has a bug (and there's now a verified ML compiler for x64). Other useful properties might be to prove that a driver terminates (Microsoft SLAM/T2) and so will never hang the system, or that a compiler produces code semantically equivalent to the spec of the language.
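
To make the "rule out whole classes of bugs" point concrete with a toy example of my own (not taken from any of the systems above): in OCaml the possible absence of a value is part of the type, so forgetting to handle the missing case is caught by the compiler rather than discovered as a crash in production.

    (* Toy example: List.assoc_opt returns an option, so the absent case has to
       be handled; deleting the None branch below triggers a
       non-exhaustive-match warning at compile time. *)
    let find_port (config : (string * int) list) : int =
      match List.assoc_opt "port" config with
      | Some p -> p
      | None -> 8080

    let () =
      Printf.printf "port = %d\n" (find_port [ ("host", 80); ("port", 9000) ])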

You say that software quality has been harmed by the emphasis on mathematical proof of code; I just don't see how this can be the case. In a typical education, a student will see proofs about code in two places: in an algorithms class, where it is required in order to show that the algorithm solves the problem (and is taught totally separately from actually building software systems), and in a formal verification/model checking class, which I doubt is taught at many universities and is almost certainly optional.

If someone enters the workforce without sufficient knowledge of test-driven development, isn't taught it at their place of work, and writes bad code, there's a nice long list of people they can blame (their bosses, the people reviewing their code, their style book, their examiners, their lecturers, etc.), but it's not valid to blame formal verification for this, any more than it is valid to blame the hypothetical course on FPGA programming they took at some point.

Yes, test driven development is great, but it is totally complementary to this sort of thing.


> if a program compiles one can know that it will never crash

That is not true: you need much stronger guarantees than typechecking alone gives you. You also have a large wodge of native code in the runtime that is not written in the strongly typed language.
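
As a deliberately contrived sketch of my own (a different mechanism from a runtime bug, but the same moral): unchecked primitives are reachable from well-typed code, so a clean compile alone doesn't rule out a crash.

    (* Contrived illustration: this compiles without warnings, but
       Array.unsafe_get skips the bounds check, so the read below is
       undefined behaviour and may well segfault. *)
    let () =
      let a = [| 1; 2; 3 |] in
      Printf.printf "%d\n" (Array.unsafe_get a 1_000_000)

The type system is sound for the safe subset of the language; the point is that the safe subset is not all that ends up in the process.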


Ok, so in retrospect I didn't mean that for OCaml. Certainly it is true for Standard ML [1, 2, 3], or rather, the parts of the implementation that rely on underlying native code (IO etc.) are generally small pieces of code whose correctness properties are hopefully more easily tested and verified. In any case, I believe my point still stands on the whole :)

[1] http://www.cl.cam.ac.uk/teaching/1314/FoundsCS/fcs-notes.pdf [2] http://gallium.inria.fr/~fpottier/publis/fpottier-regis-gian... [3] https://en.wikipedia.org/wiki/Type_safety#Standard_ML


Friends of mine have still segfaulted MLton-compiled SML programs. As I understand it, you need a bisimulation-based proof that shows the equivalence of states between the ML program and the x86 code, and as far as I know that just hasn't been done. Until then you have the risk of a bug in some aspect of the compiler (which has happened) emitting code that produces a crash or something else bad.


My first opportunity to whip this one out :) https://cakeml.org/popl14.pdf


nice!


It has been done: https://cakeml.org/.


Does Safe Haskell guarantee enough, or do we need more?


I don't think so; there have been bugs in GHC before. The problem would be a bug in GHC that emits code that crashes.



