saghm's comments

Yeah, I'm not sure I've ever seen NaN called out as an example to be emulated before, rather than something people complain about.

Holy shit, I'd love to see NaN as a proper sum type. That's the way to do it. That would fix everything.

I suspect that this would result in a lot of .unwrap() calls or equivalent, and people would treat them as line noise and find them annoying.

An approach that I think would have most of the same correctness benefits as a proper sum type while being more ergonomic: Have two float types, one that can represent any float and one that can represent only finite floats. Floating-point operations return a finite float if all operands are of finite-float type, or an arbitrary float if any operand is of arbitrary-float type. If all operands are of finite-float type but the return value is infinity or NaN, the program panics or equivalent.
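
A minimal sketch of the finite-float idea in Rust (`Finite` is a hypothetical type invented for illustration, not anything in the standard library): the only place a check is needed is on the result of each operation.

    // Hypothetical: a float type that can only hold finite values.
    #[derive(Debug, Clone, Copy)]
    struct Finite(f64);

    impl Finite {
        fn new(x: f64) -> Option<Finite> {
            if x.is_finite() { Some(Finite(x)) } else { None }
        }
    }

    impl std::ops::Add for Finite {
        type Output = Finite;
        fn add(self, rhs: Finite) -> Finite {
            let sum = self.0 + rhs.0;
            // Finite + Finite can still overflow to infinity, so check the result.
            assert!(sum.is_finite(), "finite-float arithmetic left the finite range");
            Finite(sum)
        }
    }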

(A slightly more out-there extension of this idea: The finite-float type also can't represent negative zero. Any operation on finite-float-typed operands that would return negative zero returns positive zero instead. This means that finite floats obey the substitution property, and (as a minor added bonus) can be compared for equality by a simple bitwise comparison. It's possible that this idea is too weird, though, and there might be footguns in the case where you convert a finite float to an arbitrary one.)
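
Bolting the negative-zero rule onto the same hypothetical sketch: normalize -0.0 at the single construction point, after which bitwise equality coincides with numeric equality.

    impl Finite {
        // Hypothetical constructor that folds -0.0 into +0.0, giving every
        // representable value exactly one bit pattern.
        fn normalized(x: f64) -> Option<Finite> {
            Finite::new(if x == 0.0 { 0.0 } else { x })
        }
    }

    impl PartialEq for Finite {
        fn eq(&self, other: &Finite) -> bool {
            // With NaN and -0.0 both excluded, bitwise comparison is exact.
            self.0.to_bits() == other.0.to_bits()
        }
    }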


> I suspect that this would result in a lot of .unwrap() calls or equivalent, and people would treat them as line noise and find them annoying.

I was thinking about this the other day for integer overflow specifically, given that it's not checked in release mode in Rust (by default at least; I think there's a way to override that?). I suspect that it's also influenced by the fact that people kinda expect to be able to use operators for arithmetic, and it's not really clear how to deal with something like `a + b + c` in a way where each step is fallible. You could have errors propagate and then just write `(a + b + c)?`, but I'm not sure that would be immediately intuitive to people; or you could require it to be explicit at each step, e.g. `((a + b)? + c)?`, but that would be fairly verbose. The best I could come up with is a macro that does the first thing, which I imagine someone has probably already written, where you could do something like `checked!(a + b + c)` and get back a single result. I could almost imagine a language with more special syntax having a built-in operator for that, like wrapping the expression in double backticks or something rather than `checked!(...)`.
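
A minimal sketch of what such a macro might look like (the `checked!` name comes from my hypothetical above; this toy matcher only handles `+` and requires each operand to be a single token):

    // Hypothetical checked! macro: folds `a + b + c` into chained checked_add
    // calls, yielding Option<T> (None if any step overflows).
    macro_rules! checked {
        ($first:tt $(+ $rest:tt)*) => {{
            let acc = Some($first);
            $( let acc = acc.and_then(|x| x.checked_add($rest)); )*
            acc
        }};
    }

    fn main() {
        let (a, b, c): (u8, u8, u8) = (100, 100, 100);
        assert_eq!(checked!(a + b), Some(200));
        assert_eq!(checked!(a + b + c), None); // 300 overflows a u8
    }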


> Have two float types, one that can represent any float and one that can represent only finite floats. Floating-point operations return a finite float if all operands are of finite-float type, or an arbitrary float if any operand is of arbitrary-float type. If all operands are of finite-float type but the return value is infinity or NaN, the program panics or equivalent.

I suppose there's precedent of sorts in signaling NaNs (and NaNs in general, since FPUs need to account for payloads), but I don't know how much software actually makes use of sNaNs/payloads, nor how those features work in GPUs/super-performance-sensitive code.
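
For what it's worth, the payload bits are at least easy to poke at from ordinary code; a small Rust illustration (the payload value 0x2a is arbitrary, and bit-exact NaN round-trips aren't guaranteed on every platform):

    // Construct a quiet NaN carrying a payload in its low mantissa bits.
    let nan = f64::from_bits(0x7ff8_0000_0000_002a);
    assert!(nan.is_nan());
    // The payload (the mantissa bits below the quiet bit) is preserved:
    assert_eq!(nan.to_bits() & 0x0007_ffff_ffff_ffff, 0x2a);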

I also feel that as far as Rust goes, the NonZero<T> types would seem to point away from the described finite/arbitrary float scheme, since the NonZero<T> types don't implement "regular" arithmetic operations that can result in 0 (there are unsafe unchecked operations and explicit checked operations, but no +/-/etc.).
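
Concretely, this is the shape of the API being described (checked_add here is the actual standard-library method on the unsigned NonZero types):

    use std::num::NonZeroU32;

    let five = NonZeroU32::new(5).unwrap();
    // There is no `five + 3`: arithmetic that could reach 0 via overflow
    // simply isn't implemented as an operator. Instead:
    assert_eq!(five.checked_add(3), NonZeroU32::new(8));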


Rust's NonZero basically exists only to enable layout optimizations (e.g., Option<NonZero<usize>> takes up only one word of memory, because the all-zero bit pattern represents None). It's not particularly aiming to be used pervasively to improve correctness.
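
That layout guarantee is directly observable:

    use std::mem::size_of;
    use std::num::NonZeroUsize;

    // The all-zero bit pattern is free to represent None, so no tag is needed.
    assert_eq!(size_of::<Option<NonZeroUsize>>(), size_of::<usize>());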

The key disanalogy between NonZero and the "finite float" idea is that zero comes up all the time in basically every kind of math, so you can't just use NonZero everywhere in your code; you have to constantly deal with the seam converting between the two types, which is the most unwieldy part of the scheme. By contrast, in many programs infinity and NaN are never expected to come up, and if they do it's a bug, so if you're in that situation you can just use the finite-float type throughout.


> By contrast, in many programs infinity and NaN are never expected to come up, and if they do it's a bug, so if you're in that situation you can just use the finite-float type throughout.

I suppose that's a fair point. I guess a better analogy might be to operations on normal integer types, where overflow is considered an error but that is not reflected in default operator function signatures.
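
In other words, the default operators hide the fallibility that the checked methods expose:

    let x = i32::MAX;
    // `x + 1` panics in debug builds and wraps to i32::MIN in release builds;
    // nothing in the signature of Add admits that overflow can happen.
    assert_eq!(x.checked_add(1), None);
    assert_eq!(x.wrapping_add(1), i32::MIN);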

I do want to circle back a bit and say that my mention of signaling NaNs would probably have been better served by a discussion of floating-point exceptions more generally. In particular, I feel like existing IEEE floating point technically supports something like what you propose via hardware floating-point exceptions and/or sNaNs, but I don't know how well those capabilities are actually supported (e.g., from what I remember, the C++ interface for dealing with that kind of thing was clunky at best). I want to say that lifting those semantics into programming languages might interfere with normally desirable optimizations as well (e.g., effectively adding a branch after each floating-point operation might inhibit vectorization), though I suppose Rust could always pull what it did with integer overflow and turn off the checks in release mode, as much as I dislike that decision.


> Get rid of treating things differently, make everything the same and make everything work with everything, code should just be data.

"Code should just be data" doesn't imply the converse, though; there's arguably utility in having data that isn't code, even with the premise that code should be data.


Regardless of shoulds or should-nots, data is always code.

> Look at the farms that have the houses of that era standing on them and you'll soon notice that they are all mansions.

> There are no plantations around here.

FWIW you haven't really stated where "here" is for you. It's not necessarily going to be the same for everyone, and based on the parent comments, the potential area under discussion could include the entirety of the US and Europe (although the initial comment only mentioned the UK specifically, it doesn't seem clear to me that it's explicitly only talking about that). I'm not sure you can categorically state that no one in this conversation could be talking about areas that have plantations.


Yeah, this is very confusing to me. I don't see how someone can conflate Go implicitly deciding whether to promote a value to the heap based on escape analysis (with no way for the programmer to tell other than replicating the compiler's logic themselves) with needing to explicitly use one of the APIs that literally exist for the sole purpose of allocating on the heap, without either fundamentally misunderstanding something or intentionally being misleading.

When it comes to our ability to write bug-free code, I feel like humans actually aren't that good at it. We just don't have any better way of producing software, and software is useful. That doesn't mean we're particularly good at it, though; it's just hard to motivate people to spend effort up front avoiding bugs when their cost is easy to ignore in the short term. I feel like the mindset that languages that try to surface bugs up front (which I honestly would not include Go among) are somehow getting in our way is pretty much exactly the opposite of what's needed, especially in the systems programming space (which also doesn't really include Go in my mind).

If you're willing to do what you're saying in Go, exposing the errors from anyhow would basically be the same thing. The only difference is that Rust also gives you all the other options you mention. The point about other people saying not to do it doesn't really seem like something you need to be super concerned with; for all we know, people might say the same thing about Go if it had the ability to express similar APIs, but it doesn't.
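
For illustration, this is roughly what "exposing the errors from anyhow" looks like (anyhow::Result and the Context trait are the real crate API; the function itself is a made-up example):

    use anyhow::{Context, Result};

    // Comparable to a Go function returning (string, error): the caller gets
    // an opaque error value it can propagate or render as a string.
    fn read_config(path: &str) -> Result<String> {
        std::fs::read_to_string(path)
            .with_context(|| format!("failed to read config at {path}"))
    }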

OT, but I learned Lua this year in order to write a mod for a game, and maybe this is due to it being a while since I last used a dynamic language regularly, but Lua really feels like it's basically what JavaScript was intended to be. Both use a map-like data structure for basically everything, with integer keys to make them act like arrays and function values to make them act as objects, but Lua using an explicit function call in `for ... in` loops avoided needing a separate construct to be added later on for iterating over arrays in order (or having to resort to manually iterating over the indices rather than the array itself).

Lua's module system reminds me a lot of how Node's `exports` works (although nowadays I understand there are other ways of importing/exporting stuff in JavaScript), and it's not obvious to me that the power of prototypes in JavaScript is worth the extra complexity over using the module system for the pre-ES6 model of OO that JavaScript used. I feel like Lua basically already solved most of the stuff that JS has needed a lot of new features for in recent years.

I imagine a lot of people were already aware of this, but at least personally, even being cognizant of the flaws that JS had been trying to fix, I hadn't realized an already well-established language had a design that solved most of them without a lot of additional scope beyond what JS was trying to do (e.g. Python having full-fledged class-based OO) or at least without superficially looking a lot different (e.g. some form of Lisp, which I know had been at least talked about in the early web days as a potential option but might have faced more of an uphill battle for adoption).

Based on the link someone put in a different comment about them suing Deno, at least in Oracle's case the answer is presumably "being able to sue people and get money from them".

Even if that weren't the case, though, I think part of the problem is that even if the trademarks literally never bring any value, it potentially costs nothing to retain them (unless someone tries to get them invalidated, at which point there's some cost to defending them). Arguably the cost to establish the trademark in the first place is also low enough that companies at that scale don't have much incentive not to establish them to begin with; they already have lawyers, and trademarking things isn't out of the ordinary for them, so the marginal cost of having them file one more isn't very high.

It's worth considering whether the point you make about there not being much realistic concern around someone else attempting to copy the name is something that would be obvious to non-developers. Sometimes what's obvious to a developer isn't obvious to a lawyer, and at the end of the day, the legal team is probably in charge of deciding things like this at these companies, so in the absence of pressure from someone who understands this point and has enough influence to make it happen (like maybe a C-level exec), it might not matter whether the concern is realistic as long as it's theoretically plausible.


Oracle's sole interest is extracting money from its assets through whatever tactics are most effective, regardless of technical merit (not that this is specific to JavaScript, I guess).

I had no idea this was a thing! I'm surprised this didn't attract more attention.

My bad; after reading more, it seems Deno is trying to get Oracle's trademark revoked. I also found out that the "Rust for JavaScript" devs received a cease and desist from Oracle regarding the JS trademark, which may have been what triggered Deno to go after Oracle.
