
"Popularity" probably has more to do with Apple moving to Zsh than anything else. Zsh has been more powerful than Bash for literally the entirety of the existence of both. It surely was back in like 1993 when I first looked at them. The "emacs of shells" might not be the worst summary.

Fish is a more recent addition, but I hate its `for` loop syntax, seemingly copied from BSD C Shell, which this Ion shell seems to have copied in turn (or maybe from Matlab or Julia?). It baffles me to impose a need for `end` statements in 2025. In Zsh, for a simple single command, I need only say `for i in *;echo $i` - about as concise as Python or Nim. On the minimalism aesthetic, Plan 9 rc was nicer before POSIX even really got going (technically POSIX came out the year before Plan 9 rc), for its quoting rules if nothing else.

I think it's more insightful to introspect the origins of the "choosing something outside bash" rule you mentioned. I think that comes from generic "stick to POSIX" minimalism, where Bash was just the most commonly installed attempt to do only (mostly) POSIX shell... maybe with a dash of "crotchety sysadmins not wanting to install new shells for users".

Speaking of which, the dash shell has been the default /bin/sh on Debian for a long while. So I think the rule has really always been "something outside POSIX shell", and its origins are simply portability - all those bashisms are still kind of a PITA.


> "Popularity" probably has more to do with Apple moving to Zsh than anything else. Zsh has been more powerful than Bash for literally the entirety of the existence of both. It surely was back in like 1993 when I first looked at them. The "emacs of shells" might not be the worst summary.

It's my impression that Apple switched to zsh because it's permissively licensed, so they could replace the now-ancient last version of bash released under GPLv2 (before bash moved to v3). Obviously it helped that they could replace it with something even more feature-rich, but I expect they would have taken the exact same functionality under a more permissive license.


The fit diagnostics at the top of the plot are inadequate. This needs, at a minimum, error estimates on the estimated parameters (probably via bootstrap) and ideally some kind of "error envelope" on the plot.
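
For concreteness, something like this rough numpy sketch of a case-resampling bootstrap for a straight-line fit is the kind of thing I have in mind (toy data and names invented here, not the tool's actual code):

    # Rough sketch: case-resampling bootstrap standard errors for a line fit.
    # Toy data; a real tool would refit whatever curve family the user chose.
    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0, 10, 30)
    y = 2.0*x + 1.0 + rng.normal(0, 0.5, x.size)

    def fit(xs, ys):                    # least-squares [slope, intercept]
        return np.polyfit(xs, ys, 1)

    boots = np.empty((1000, 2))
    for b in range(boots.shape[0]):     # resample (x, y) pairs with replacement
        i = rng.integers(0, x.size, x.size)
        boots[b] = fit(x[i], y[i])

    slope, intercept = fit(x, y)
    se_slope, se_intercept = boots.std(axis=0, ddof=1)
    print(f"slope = {slope:.3f} +- {se_slope:.3f}")
    print(f"intercept = {intercept:.3f} +- {se_intercept:.3f}")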


I don’t think you can do anything sensible here without making much stronger modelling assumptions. A vanilla non-parametric bootstrap is only valid under a very specific generative story: IID sampling from a population. Many (most?) curve-fitting problems won't satisfy that.

For example, suppose you measure the decay of a radioactive source at fixed times t = 0,1,2,... and fit y = A e^{-kt}. The only randomness is small measurement error with, say, SD = 0.5. The bootstrap sees the huge spread in the y-values that comes from the deterministic decay curve itself, not from noise. It interprets that structural variation as sampling variability and you end up with absurdly wide bootstrap confidence intervals that have nothing to do with the actual uncertainty in the experiment.


These are all big topics, but any "parametric curve fitting" like this tool uses is parameter estimation (the parameters of the various curves). That already makes strong modeling assumptions (usually including IID, Gaussian noise, etc.) to get the parameter estimates in the first place. I agree it would be even better to have ways to input measurement errors (in both x & y!) per your example and to have non-bootstrap options (I only said "probably"), residual diagnostics, etc.

Maybe a residuals plot and IID tests of residuals (i.e. tests of some of the strong assumptions!) would be a better next step for the author than error estimates, but I stand by my original feedback. Right now even the simplest case of a straight line fit is reported with only exact slope & intercept (well, not exact, but to an almost surely meaningless 16 decimals!), though I guess he thought to truncate the goodness of fit measures at ~4 digits.
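
To sketch the kind of "IID tests of residuals" I mean, one crude option (among many; it assumes you already have resid = y - fitted(x), faked with random numbers below) is a runs test on the residual signs, alongside a plain residuals-vs-x plot:

    # Crude IID check: Wald-Wolfowitz runs test on the signs of the fit residuals.
    import numpy as np

    def runs_z(resid):
        s = np.sign(resid[resid != 0])              # +1/-1 pattern; needs both signs present
        runs = 1 + np.count_nonzero(s[1:] != s[:-1])
        n1, n2 = np.count_nonzero(s > 0), np.count_nonzero(s < 0)
        mu = 2.0*n1*n2/(n1 + n2) + 1.0              # expected runs under independence
        var = 2.0*n1*n2*(2.0*n1*n2 - n1 - n2)/((n1 + n2)**2*(n1 + n2 - 1.0))
        return (runs - mu)/np.sqrt(var)             # approx N(0,1); |z| >> 2 hints at structure

    resid = np.random.default_rng(0).normal(size=50)    # stand-in for y - yhat
    print(runs_z(resid))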


I think we are just coming at this from different angles. I do understand and agree that we are estimating the parameters of the fit curves.

> That already makes strong modeling assumptions (usually including IID, Gaussian noise, etc.,) to get the parameter estimates in the first place

You lose me here - I don't agree with "usually". I guess you're thinking of examples where you are sampling from a population and estimating features of that population. There's nothing wrong with that, but that is a much smaller domain than curve fitting in general.

If you give me a set of x and y, I can fit a parametric curve that minimises the average squared distance between fitted and observed values of y without making any assumptions whatsoever. This is a purely mechanical, non-stochastic procedure.

For example, if you give me the points {(0,0), (1,1), (2,4), (3,9)} and the curve y = a x^b, then I'm going to fit a=1, b=2, and I certainly don't need to assume anything about the data generating process to do so. However there is no concept of a confidence interval in this example - the estimates are the estimates, the residual error is 0, and that is pretty much all that can be said.
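
Mechanically, that fit is just this kind of thing (a scipy sketch purely for illustration; nothing in it assumes a noise model):

    # Purely mechanical least-squares fit of y = a*x^b to four points.
    import numpy as np
    from scipy.optimize import curve_fit

    x = np.array([0., 1., 2., 3.])
    y = np.array([0., 1., 4., 9.])
    (a, b), _ = curve_fit(lambda x, a, b: a*np.power(x, b), x, y, p0=(1.0, 1.0))
    print(a, b)    # ~1.0, ~2.0 and the residuals are ~0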

If you go further and tell me that each of these pairs (x,y) is randomly sampled, or maybe the x is fixed and the y is sampled, then I can do more. But that is often not the case.


What methods can you use to estimate the standard error in this case?


The radioactive decay example specifically? Fit A and k (e.g. by nonlinear least squares) and then use the Jacobian to obtain the approximate covariance matrix. The square roots of the diagonal elements of that matrix give you the standard error estimates.
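
In scipy terms, roughly (a sketch with made-up toy data; curve_fit builds that Jacobian-based covariance for you as pcov, under the usual independent, constant-variance error assumptions):

    # Standard errors for A, k in y = A*exp(-k*t) via the Jacobian-based covariance.
    import numpy as np
    from scipy.optimize import curve_fit

    rng = np.random.default_rng(1)
    t = np.arange(0., 20.)
    y = 10.0*np.exp(-0.3*t) + rng.normal(0, 0.5, t.size)   # toy decay data, SD = 0.5

    def model(t, A, k):
        return A*np.exp(-k*t)

    (A, k), pcov = curve_fit(model, t, y, p0=(5.0, 0.1))
    se_A, se_k = np.sqrt(np.diag(pcov))                    # sqrt of diagonal = std errors
    print(f"A = {A:.2f} +- {se_A:.2f}   k = {k:.3f} +- {se_k:.3f}")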


Any article about this topic that does not at least mention https://arcan-fe.com/about/ feels incomplete. (Also, Arcan is already available.)


Whoa, this looks fascinating, I've never heard of it before! Thank you for the link :)


To add to lproven's point:

An article called "A Spreadsheet and a Debugger walk into a Shell" [0] by Bjorn (letoram) is a good showcase of an alternative to cells in a Jupyter notebook (Excel-like cells!). Another alternative, a bit more similar to Jupyter, that also runs on Arcan is Pipeworld [1].

[0] https://arcan-fe.com/2024/09/16/a-spreadsheet-and-a-debugger...

[1] https://arcan-fe.com/2021/04/12/introducing-pipeworld/

PS: I hang out at Arcan's Discord server; you are welcome to join: https://discord.com/invite/sdNzrgXMn7


I was thinking of Arcan and the Lash#Cat9 setup by the end of your second paragraph. I'm very surprised you had not met it: it seems so directly aligned with your interests and goals, but all you seemed to talk about was Jupyter, a tool which I tried and TBH discarded after 10min.

It is very hard to explain Arcan but I tried:

https://www.theregister.com/2022/10/25/lashcat9_linux_ui/

I talked to Bjorn Stahl quite a bit before writing it, but he is so smart that he seems to find it hard to talk down to mere mortals. There's a pretty good interview with him on Lobsters:

https://lobste.rs/s/w3zkxx/lobsters_interview_with_bjorn_sta...

You really should talk to him. Together you two could do amazing things. But IMHO let Jupyter go. There's a lot more to life than Python. :-)


"Great" smells very subjective. I went to https://forum.nim-lang.org/ . Put "flask" in the search box. Second hit (https://forum.nim-lang.org/search?q=flask) is this: https://forum.nim-lang.org/t/11032 . That mentions not one but 2 projects (https://github.com/HapticX/happyx & https://github.com/planety/prologue).

If either/both are not "great enough" in some particulars you want, why not raise a GitHub issue? (Or, even better, look into adding said particulars yourself? This is really the main way Python grew its ecosystem.)


Besides your chart, another point along these lines is that the article cites Azhar claiming multiples are not in bubble territory while also mentioning Murati getting an essentially infinite price multiple. Hmmmm...


I can confirm that I just do it over ssh fine all the time -- gnuplot, img2sixel as an "image-cat", etc. (`st` with patches to add sixel support as discussed in various places in these comments.)


`st` used to have a patch set for sixel graphics on its web site. I use an old version all the time to do gnuplots in terminals with nice scrollback. It seems to have been retired in favor of the kitty graphics protocol.


This[1] is an up-to-date fork with sixel support, and a few other patches.

IMO it's unfair to compare barebones `st` with fully-featured terminals. The entire point of `st` is for users to apply their own patches to it, which does make it difficult to compare, since there's no standard version of it.

`st` is a pretty great terminal. I switched to `foot` when I migrated to Wayland a few months ago, but not for any practical reasons other than wanting to rely less on Xwayland.

[1]: https://github.com/veltza/st-sx


I 100% agree `st` is pretty great and comparing bare bones is unfair.

Thanks for that link! I suppose I should have provided a link to the variant I use, which is https://github.com/bakkeby/st-flexipatch - though I do have like 14 of my own private patches. :-) Because it really is a simple, hackable codebase.

I will say, though, that I doubt there are many unicode conformance patches floating about. I don't know though, and I haven't looked.


While I agree with everything you wrote, the "better ways" have a "cap" on how effective they can be. The root causes -- both in-group preference[1] and laziness/delegation to the "smart loudmouth contemporaries/smart-enough predecessors" -- will be with us for the foreseeable future. Some might even call them eternal/instinctive. Our whole civilization is based upon delegation/layering, but trust sure is tricky! Even the smartest humans fall prey to Gell-Mann amnesia[2] on topics beyond their expertise. Personally, I think most of what you wrote connects to the cluster of wicked problems[3] that I think of as "Humanity Complete" (after NP-Complete transformability).

[1] https://en.wikipedia.org/wiki/In-group_favoritism

[2] https://en.wikipedia.org/wiki/Gell-Mann_amnesia_effect

[3] https://en.wikipedia.org/wiki/Wicked_problem


Very nicely put. Thanks for that.

I'd offer solutions, except for the trivial implementation detail that I don't have any. But then, if I did, I'd have a Nobel and possibly be the first president of the united planet.


Since anything/0 = infinity, these kinds of things always depend upon what programs do and, as a sibling comment correctly observes, how much they interfere with SIMD autovectorization and several other things.

That said, as a rough guideline, `nim c -d=release` can certainly be almost the same speed as `-d=danger` and is often within a few (single-digit) percent. E.g.:

    .../bu(main)$ nim c -d=useMalloc --panics=on --cc=clang -d=release -o=/t/rel unfold.nim
    Hint: mm: orc; opt: speed; options: -d:release
    61608 lines; 0.976s; 140.723MiB peakmem; proj: .../bu/unfold.nim; out: /t/rel [SuccessX]
    .../bu(main)$ nim c -d=useMalloc --panics=on --cc=clang -d=danger -o=/t/dan unfold.nim
    Hint: mm: orc; opt: speed; options: -d:danger
    61608 lines; 2.705s; 141.629MiB peakmem; proj: .../bu/unfold.nim; out: /t/dan [SuccessX]
    .../bu(main)$ seq 1 100000 > /t/dat
    .../bu(main)$ /t
    /t$ re=(chrt 99 taskset -c 2 env -i HOME=$HOME PATH=$PATH)
    /t$ $re tim "./dan -n50 <dat>/n" "./rel -n50 <dat>/n"
    225.5 +- 1.2 μs (AlreadySubtracted)Overhead
    4177 +- 15 μs   ./dan -n50 <dat>/n
    4302 +- 17 μs   ./rel -n50 <dat>/n
    /t$ a (4302 +- 17)/(4177 +- 15)
    1.0299 +- 0.0055
    /t$ a 299./55
    5.43636... # kurtosis=>5.4 sigmas is not so significant
Of course, as per my first sentence, the best benchmarks are your own applications run against your own data and its idiosyncratic distributions.

EDIT: btw, /t -> /tmp which is a /dev/shm bind mount while /n -> /dev/null.


While its ecosystem probably does not even match Julia's, let alone Python's or the C/FORTRAN-verses, Nim has been around for almost 20 years (publicly since 2008), so there are still a lot of accumulated packages. https://github.com/ringabout/awesome-nim has a really large (and even so still incomplete!) list of things you might be interested in. Hard to say how well maintained they are. That said, you do probably have to be prepared to do a lot of work yourself and work around compiler limitations/bugs. Also, binding to C libs is very straightforward with a near-trivial FFI.

I suppose it very much depends on the programmer & setting, but like 3 times I've looked for Rust projects similar to Nim ones and found the performance of the Rust versions quite lacking. Of course, any language that allows you access to assembly makes things ultimately "only" a matter of programmer effort, but the effort to get performance out of Nim seems very competitive in my experience. I've seen at least one ancient 1990s C project be more flexible and much faster in Nim at like 6% the LOC (https://github.com/c-blake/procs for the curious).

