
PMunch and summarity both already said this, but because maybe code speaks louder than words (like pictures?)... This works:

    from strutils as su import nil
    echo su.split "hi there"
(You can put some parens () in there if you like, but that compiles.) So, you can do Python-style terse renames of imports with forced qualification. You just won't be able to say "hi there".(su.split) or .`su.split` or the like.

You can revive that, though, with a

    template suSplit(x): untyped = su.split x
    echo "hi there".suSplit`
That most Nim code you see does not do this is more a cultural/popularity thing - kind of a copy-paste survey of dev tastes. It's much like people using "np" as the ident in `import numpy as np`. I was doing this renaming import before it was even widely popular, but I used a capital `N` for `numpy` and have had people freak out at me for that (and yet no one freaking out at Travis for not just calling it `np` in the first place).

So, it matters a little more in that this impacts how you design/demo library code/lib symbol sets and so on, but it is less of a big deal than people make it out to be. This itself is much like people pretending they are arguing about "fundamental language things", when a great deal of what they actually argue about are "common practices" or conventions. Programming language designers have precious little control over such practices.


The only way to really test out a programming language is by trying it out or reading how someone else approached a problem that you're interested in/know about.

There are over 2200 nimble packages now. Maybe not an eye-popping number, but there's still a good chance that somewhere in the json at https://github.com/nim-lang/packages you will find something interesting. There is also RosettaCode.org which has a lot of Nim example code.

This, of course, does not speak to the main point of this subthread about the founder but just to some "side ideas".


In this case, it compiles & runs fine with floats (if you just delete the type constraint "Fibable") because the literal "1" can be implicitly converted into float(1) { or 1.0 or 1f64 or float64(1) or 1'f64 or ..? }. You can think of the "1" and "2" as carrying an implicit "T(1)", "T(2)" - which would also resolve your "doesn't work for me" if you prefer the explicitness. You don't have to trust me, either. You can try it with `echo fib(7.0)`.
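
For reference, a minimal sketch of that kind of generic `fib` (my paraphrase of the idea, not Araq's exact code, with the `Fibable` constraint dropped as described):

    proc fib[T](n: T): T =
      # The literals act as if written T(1), T(2); implicit conversion
      # makes the same body work for ints and floats alike.
      if n < T(2): n
      else: fib(n - 1) + fib(n - 2)

    echo fib(7)     # 13
    echo fib(7.0)   # 13.0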

Nim is Choice in many dimensions where other PLangs are insistently one-way/prescriptive - GC or not (and what kind), many kinds of spelling, new operator vs. overloaded old one, etc., etc., etc. Some people actually dislike choice because it allows others to choose differently and the ensuing entropy creates cognitive dissonance. Code formatters are maybe a good example of this? They may not phrase opposition as being "against choice" as explicitly as I am framing it, but I think the "My choices only, please!" sentiment is in there if they are self-aware.


But given the definition of Fibable, it could be anything that supports + and - operators. That could be broader than numbers. You could define it for sets for example. How do you add the number 1 to the set of strings containing (“dog”, “cat”, and “bear”)? So I suppose I do have a complaint about Fibable, which is that it’s underconstrained.

Granted, I don’t know Nim. Maybe you can’t define + and - operators for non-numbers?


Araq was probably trying to keep `Fibable` short for the point he was trying to make. So, your qualm might more be with his example than anything else.

You could add a `SomeNumber` predicate to the `concept` to address that concern. `SomeNumber` is a built-in typeclass (well, in `system.nim` anyway, but there are ways to use the Nim compiler without that, do a `from system import nil`, and so on).
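
For instance, a hypothetical sketch using Nim's original concept syntax (not Araq's actual definition, and details may vary by Nim version):

    type Fibable = concept x
      x is SomeNumber        # the extra constraint suggested above
      x + x is typeof(x)     # supports `+`
      x - x is typeof(x)     # supports `-`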

Unmentioned in the article is a very rare compiler/PLang superpower (available at least in Nim 1, Nim 2) - `compiles`. So, the below will print out two lines - "2\n1\n":

    when compiles SomeNumber "hi": echo 1 else: echo 2
    when compiles SomeNumber 1.0: echo 1 else: echo 2
Last I knew, "concept refinement" for new-style concepts was still a work in progress. Anyway, I'm not sure what the most elegant way to incorporate this extra constraint is, but I think it's a mistake to think it is unincorporatable.

To address your question about '+': you can define it for non-SomeNumber types, but you can also define many new operators like `.+.` or `>>>` or whatever. So, it's up to your choice/judgement whether the situation calls for `+` vs. something else.
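
For example, a sketch of minting such a new operator (`>>>` as defined here is made up for illustration, not stdlib):

    # A made-up element-wise addition operator for seqs; `+` itself
    # could be overloaded instead, if judged appropriate.
    proc `>>>`(a, b: seq[int]): seq[int] =
      for i in 0 ..< min(a.len, b.len):
        result.add a[i] + b[i]

    echo(@[1, 2, 3] >>> @[10, 20, 30])   # @[11, 22, 33]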


That’s fair. Sounds like the example was composed in haste and may not do the language justice.

I think the example was chosen only for familiarity and is otherwise not great. Though it was the familiarity itself that probably helped you to so easily criticize it. So, what do I know? :-)

FWIW, the "catenation operator" in the Nim stdlib is ampersand `&`, not `+`, which actually makes it better than most PLangs at visually disambiguating things like string (or other dynamic array, `seq[T]` in Nim) concatenation from arithmetic. So, `a&b` means `b` concatenated onto the end of `a`, while `a+b` is the more usual commutative operation (i.e. the same as `b+a`). Commutativity is not enforced by the basic dispatch on `+`, though such a check might be add-able as a compiler plugin.
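
A quick illustration of the distinction:

    echo "ab" & "cd"         # "abcd" - catenation is visibly `&`
    echo @[1, 2] & @[3, 4]   # @[1, 2, 3, 4] - same for seq[T]
    echo 1 + 2               # 3 - `+` stays arithmetic/commutative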

Mostly, it's just a very flexible compiler/system - like a static Lisp with a standard surface syntax closer to Python, with a lot of parentheses made optional (but I think much more flexible and fluid than Python). Nim is far from perfect, but it makes programming feel like so much less boilerplate ceremony than most alternatives, and it also responds very well to speed/memory optimization effort.


Thanks for the discussion! I know a lot more about nim than I did this morning.

There is no direct argument/guidance that I saw for "when to use them", but masked arrays { https://numpy.org/doc/stable/reference/maskedarray.html } (an alternative to sentinels in array-processing sub-languages) have been in NumPy (following its antecedents) from its start. I'm guessing you could do a code-search for its imports and find arguments pro & con in various places surrounding that.

From memory, I have heard "infecting all downstream" as both "a feature" and "a problem". Experience with numpy programs did lead to sentinels in the https://github.com/c-blake/nio Nim package, though.

Another way to try to investigate popularity here is to see how much code uses signaling NaN vs. quiet NaN and/or arguments pro/con those things / floating point exceptions in general.

I imagine all of it comes down to questions of how locally can/should code be forced to confront problems, much like arguments about try/except/catch kinds of exception handling systems vs. other alternatives. In the age of SIMD there can be performance angles to these questions and essentially "batching factors" for error handling that relate to all the other batching factors going on.

Today's version of this wiki page also includes a discussion of integer NaN: https://en.wikipedia.org/wiki/NaN . It notes that the R language uses the minimal signed value of integers (i.e. 0x80000000) for NA.
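
A sketch of that R-style convention (in Nim, for consistency with the other snippets here; the names are mine):

    const NA = low(int32)   # -2147483648, bit pattern 0x80000000, as R uses for NA

    proc isNA(x: int32): bool = x == NA

    let v = @[1'i32, NA, 3'i32]
    for x in v:
      echo (if x.isNA: "NA" else: $x)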

There is also the whole database NULL question: https://en.wikipedia.org/wiki/Null_(SQL)

To be clear, I am not taking some specific position, but I think all these topics inform answers to your question. I think it's something with trade-offs that people have a tendency to over-simplify based on a limited view.


>To be clear, I am not taking some specific position, but I think all these topics inform answers to your question. I think it's something with trade-offs that people have a tendency to over-simplify based on a limited view.

That's fair, I wasn't dismissing the practice but rather just commenting that it's a shame the author didn't clarify their preference.

I don't think the popularity angle is a good proxy for the usefulness/correctness of the practice. Many factors can influence popularity.

Performance is a very fair point; I don't know enough to understand the details, but I could see it being a strong argument. It is counterintuitive to move forward with calculations known to be useless, but maybe the cost of checking all calculations for validity is larger than the savings of skipping the invalid ones early.

There is a catch, though. NumPy and R are very oriented to calculation pipelines, which is a very different use case from general programming, where the side effects of undetected 'corrupt' values can be more serious.


The conversation around Nim for the past 20 years has been rather fragmented - IRC channels, Discord channels (dozens, I think), later the Forum, Github issue threads, pull request comment threads, RFCs, etc. Araq has a tendency to defend his ideas in one venue (sometimes quite cogently) and leave it to questioners to dig up where those trade-off conversations might be. I've disliked the fractured nature of the conversation for the 10 years I've known about it, but assigned it to a kind of "kids these days, whachagonnado" status. Many conversations (and life!) are just like that - you kind of have to "meet people where they are".

Anyway, this topic of "error handling scoping/locality" may be the single most cross-cutting topic across CPUs, PLangs, databases, and operating systems (I would bin NumPy/R under PLangs+Databases as they are kind of "data languages"). Consequently, opinions can be very strong (often having this sense of "Everything hinges on this!") in all directions, but rarely take a "complete" view.

If you are interested in "fundamental, not just popularity" discussions, and it sounds like you are, I feel like the database community discussions are probably the most "refined/complete" in terms of trade-offs, but that could simply be my personal exposure, and DB people tend to ignore CPU SIMD because it's such a "recent" innovation (hahaha, Seymour Cray was doing it back in 1976 with the Cray-1 vector supercomputer). Anyway, just trying to help. That link to the DB Null page I gave is probably a good starting point.


There is also https://en.wikipedia.org/wiki/HumancentiPad (which is almost surely an homage to the movie) which was 2011 and tied in all kinds of tech-aspects like licensing and iPads.

Indeed. One of the main reasons the STL defaults to a non-moving hash table is that "deleting in the middle of an iteration" was viewed as an important property. Like it or not (I do not), the entire container part of the original 1990s Stepanov STL was oriented around "iteration/iterators" as the high-priority abstraction.
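
By way of contrast, a sketch of the usual workaround in a moving, open-addressing table - Nim's std `Table` here - where deleting mid-iteration is not safe (illustrative code, not from the thread):

    import std/tables

    var t = {"a": 1, "b": 2, "c": 3}.toTable
    var doomed: seq[string]
    for k, v in t:            # deleting here could invalidate the iteration,
      if v mod 2 == 1:        # so collect keys first...
        doomed.add k
    for k in doomed:          # ...and delete after the scan completes
      t.del k
    echo t                    # only the even-valued entries remain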

Trying to think like a DB engineer is probably helpful here. Point or range query workloads absolutely afford different optimization/specializations than full table scan iterations.

Any mistake here is more in thinking "one size fits all workloads". That may as much be a mistake of user communities as of language designers. (E.g. 'expert' advice like "just always use X" instead of "well, it depends...".) So, the charitable reading of tialaramex's "most people" is a weakly articulated assertion about workloads (though I absolutely agree with your specific point).


I'm not at all convinced I need to be read "charitably". I think it's just straightforwardly true that most people writing C++ with a std::unordered_map do not use this property of std::unordered_map.

In particular the "deleting arbitrary other entries while walking the unordered map" is a weird thing to be doing. It's not necessarily wrong but it's weird to need precisely this (which it so happens the std::unordered_map can do) but not e.g. inserting arbitrary stuff into the map (which might grow and thus invalidate your iterator)

Note that "Delete yourself if you want" is an operation we can afford in the otherwise more optimal containers, so if that is the only deletion you want then you again over-paid by choosing this weird container type with pointer stability.

I do not believe that Stepanov chose this particular hash table because this one had properties he specifically wanted, maybe you have a reference which can prove me wrong? I think this was the type he was most familiar with, and the STL is illustrating generic programming - an important idea - not showing off the most optimal data structures and algorithms.


The primary reason you might want reference stability is that you might have another pointer to the element. Reference stability is generally important for the STL and carefully documented; even for vectors, stability is guaranteed across many modifying operations. I have seen plenty of code that uses reserve+push_back specifically to preserve stability, for example.

The primary reason for unordered_map to preserve stability is that std::map does and the ability of unordered_map to be a drop-in replacement for std::map for code that doesn't need ordering was considered at that time a better tradeoff than the absolute optimal implementation.

You can of course build a pointer-stable map on top of an unstable one by heap-allocating every object and storing unique_ptrs to the objects in the map, but when hash_map was initially added to the STL (remember that unordered_map is just standardizing the non-standard hash_map), there was no unique_ptr (we do not speak of auto_ptr).


I tried to defend you against a hard-to-support "most people didn't want this" by recasting it as a workload (either static or dynamic) assertion. You come back saying you don't need such charitability while simultaneously clarifying in terms of a static workload! Seemingly, "most written C++" (again charitably moving from "people writing" to "written code", because the latter is probably more relevant and surely more auditable/studyable - source code files being easier to survey than people).

I think you should be more careful about claiming you know what people want or know. Many people I know don't seem to know what they want - it's some melange of desiderata with various eternal tensions. As just one example, I doubt most people even want performance above all else. I really doubt most people understand well how their various static and dynamic workloads relate to their performance. And I doubt I'm alone in having known many C++ programmers who, from spoken conversation, revealed they thought STL map was a hash table.

I only said "one of the main" and almost added "with pointer stability being the bigger", but that had already been observed by rurban. Delete-while-iterating has close ties to cell motion (though convoluted workarounds may exist, it sounds like they may well cost more than you personally would like). How best to explain all these things is always questionable, but I was trying to supplement. It's generally not easy without visual aids and definitely not if people are being combative. Python has always, since the late 80s, used open addressing, but only more recently tried to `raise` on dict edits during iterations.

It seems like flip-flopping that you are now calling separately chained hash tables a "weird container type" when 3 months ago it seemed to be news to you that they were NOT the main strategy for decades before open addressing, in a Nim-ish comment on a Carbon thread: https://news.ycombinator.com/item?id=44754821

Anything else I would say, such as Stepanov not even choosing a hash table in the first place, gpderetta said very well in a sibling. For the record, I still very much disagree with the early (& later) design of the STL and am no fan of WG21. Have a good weekend.


Rather than evidence of flip-flopping, what I'm talking about there is that I consider this to be weird today, in 2025. Things change; there's innovation in data structures and algorithms, and the hardware we're likely targeting also changes to favour some structures and algorithms over others.

In 1975 this wouldn't be weird, but also C++ doesn't exist then.


For a while now, I've been summarizing the ease with which everything turns into a "Humanity Complete" problem via: "Delegation affords so much, but trust sure is tricky."

This has been observed forever in various forms/contexts. Planning & policy people call them "Wicked Problems" (https://en.wikipedia.org/wiki/Wicked_problem). The Philosophy of Science one goes by the Demarcation Problem (https://en.wikipedia.org/wiki/Demarcation_problem) { roughly, in the sense that the really hard nugget connects to "trust" }.

At least one aspect of all of it is that trust is a little like money/capital and "faking it" is a bit like "stealing". The game theory of it is that since faking is virtually always vastly cheaper there are (eventually) huge incentives to do so, at some point by someone(s). So, almost any kind of trust/delegation structure has a strong pull toward "decay", from knock-off brands to whatever. It just takes a sadly small fraction of Prisoner's Dilemma defectors to ruin things/systems thereof. 2nd law of thermo makes order cost energy and this decay feels like almost an isomorphic (maybe even the same..?) thing. It's not just product/tech enshittification, but that might be yet another special case/example.

Anyway, I have no great answers or as some responder to me a while back said, if I did, I'd "have a Nobel and possibly be the first president of the united planet".


As a slight refinement of your point, C does have storage-map-based N-D arrays/tensors like Fortran, just with the old column-major/row-major difference and a clunky "multiple [][]" syntax. There was just a restriction early on to need compile-time known dimensions for the arrays (all but the first dimension, anyway) because it was a somewhat half-done/half-supported thing - and because that also fit the linear data model well. So, it is also common to see `char *argv[]`-like arrays of pointers, or, in numerics, libraries which do their own storage map equations from passed dimensions.
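
To make "storage map equation" concrete, here is a sketch in Nim (to match the other snippets on this page; the `Matrix` type and names are illustrative) of the row-major index arithmetic such libraries compute from passed dimensions:

    type Matrix = object      # a 2-D view over flat, linear storage
      nrows, ncols: int
      data: seq[float]

    proc `[]`(m: Matrix; i, j: int): float =
      m.data[i*m.ncols + j]   # the row-major storage map: i*ncols + j

    proc `[]=`(m: var Matrix; i, j: int; v: float) =
      m.data[i*m.ncols + j] = v

    var m = Matrix(nrows: 2, ncols: 3, data: newSeq[float](6))
    m[1, 2] = 42.0
    echo m[1, 2]              # 42.0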

Also, the linear memory model itself is not really only because of Algol/Turing machines/theoretical CS/"early" hardware and mechanical sympathy. DRAM has rows & columns internally, but byte addressability leads to hiding that from HW client systems (unless someone is doing a rowhammer attack or something). More random access than tape rewind/fast-forward is indeed a huge deal, but I think the actual popularity of linearity comes from its simplicity as an interface more than anything else. E.g., segmented x86 memory with near/far pointers was considered ugly relative to a big 32-bit address space, and disk files and other allocation arenas internally have large linear address/seek spaces. People just want to defer using >1 number until they really need to. People learn univariate-X before they learn multivariate-X, where X could be calculus, statistics, etc., etc.


Like Python, R is a 2 (+...) language system. C/Fortran backends are needed for performance as problems scale up.

Julia and Nim [1] are dynamic and static approaches (respectively) to 1-language systems. Both have user-defined operators and macros. Personally, I find the surface syntax of Julia rather distasteful, and I also don't live in PLang REPLs / emacs all day long. Of course, neither Julia nor Nim is impractical enough to make calling C/Fortran all that hard, but the communities do tend to implement in the new language without much prompting.

[1] https://nim-lang.org/


You might enjoy https://nim-lang.org/ which has a Python-like syntax with even more flexibility really (UFCS, command-like calls, `fooTemplate: stuff` like user-defined "statements", user-defined operators, term-rewriting macros, and more). With ARC it's really just about as safe as Rust, and most of the stdlib is fast by default. "High quality" is kind of subjective, but they are often very welcoming of PRs.
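
A small sketch of a few of those features (the proc/template names here are illustrative):

    import std/times   # for cpuTime, used in the template below

    proc double(x: int): int = x * 2

    echo double(21)    # ordinary call
    echo 21.double     # UFCS: dot syntax on any proc
    echo double 21     # command-like call, parens optional

    template timed(body: untyped) =   # a user-defined "statement"
      let t0 = cpuTime()
      body
      echo "took ", cpuTime() - t0, " s"

    timed:
      echo "inside a user-defined block"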

Anyway, to your point, I think a newbie could pick up the basics quickly and later learn more advanced things. In terms of speed, like 3 different times I've compared some Nim impl to a Rust impl and the Nim was faster (though "at the extreme" speed is always more a measure of how much optimization effort has been applied, esp. if the language supports inline assembly).

https://cython.org/ , which is a gradually typed variant of Python that compiles to C, is another decent possibility.

