Actually, echo is not a statement - Nim's syntax is just much more flexible than Python's, so what looks like a statement in Python is actually just a UFCS/command-syntax "call" (of a macro/template/generic/procedure, aka "routine"). It is super easy to roll your own print function [1] and there is no penalty for doing so, except that the std lib does not provide a "common parlance". So, that wheel might get reinvented a lot.
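For instance, a minimal sketch of such a hand-rolled routine (just an illustration, not the API of the print.nim linked in [1]):

import std/strutils   # for join

proc print(args: varargs[string, `$`]) =
  ## Hypothetical minimal print: stringify each arg with `$`,
  ## join with spaces, end with a newline.
  stdout.writeLine args.join(" ")

print "x =", 42, "pi ~", 3.14   # command syntax; mixed types go through `$`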
A lot of things are like this in cligen because it is a leaf dependency (the literally 1..3-identifier CLI "API") and so many "drive-by" PLang tester-outers might want to roll a little CLI around some procs they're working on.
Also, beyond `echo x, y` being the same as `echo(x, y)` or `x.echo(y)` or `x.echo y`, the amount of syntax flexibility is dramatically greater than Python's. You can have user-defined operators like `>>>` or `!!!` or `.*`. There are also some experimental and probably buggy compiler features to do "term re-writing macros" so that your matrix/bignum library could in theory re-write some `bz*ax+y` expression into a more one-pass loop (or maybe conditionally, depending upon problem scale).
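A quick sketch of both ideas (the `>>>` definition here is just made up for illustration):

echo("a", "b")      # regular call syntax
echo "a", "b"       # command syntax
"a".echo("b")       # method/UFCS syntax
"a".echo "b"        # UFCS + command syntax

proc `>>>`(a, b: int): int = a shr b   # a user-defined operator
echo(32 >>> 2)                         # prints 8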
I sometimes summarize this as "Nim Is Choice". Some people don't like to have to/get to choose. To others it seems critical.
Someone even did some library to make `def` act like `proc`, but I forget its name. Nim has a lot more routine styles than Python, including a special iterator syntax whose "call" is a for-construct.
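The iterator style looks roughly like this (a toy example, not from any particular library):

iterator countTo(n: int): int =
  var i = 1
  while i <= n:
    yield i            # suspends here; resumes on the next loop iteration
    inc i

for x in countTo(3):   # the only way to "call" an iterator is a for loop
  echo x               # 1, 2, 3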
Nim compile+link times can be sub-100 milliseconds with tcc [1] (just saw 93 ms on my laptop) which can yield pretty reasonable edit-compile-test cycles.
The grandparent's scientific/Go interests suggest a need for a large, working ecosystem, and there are probably several places in Arraymancer which need some love for "fallback cases" to work with tcc (or elsewhere in the, as you note, much smaller Nim ecosystem).
EDIT: E.g., for Arraymancer you need to make stb_image.h work with tcc by adding a nim.cfg/config.nims `passC="-DSTBIW_NO_SIMD -DSTBI_NO_SIMD"` directive. And, of course, to optimize compile times generally speaking, you always want to import only exactly what you really need, which absolutely takes more time/thought.
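In config.nims form that directive would be roughly (assuming a per-project NimScript config; a one-line nim.cfg entry works the same way):

switch("passC", "-DSTBIW_NO_SIMD -DSTBI_NO_SIMD")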
I would say the situation is worse, as this "subscription-esque" model is "spreading" to areas beyond software. Exercise equipment like ellipticals and bicycles - whose software is/could be borderline trivial (basically +/- a resistance level) - has been moving to "only works with an online subscription" business models for a long time.
I mean, I have had instances that controlled resistance with like a manual knob, but these new devices won't let you set levels without some $30+/month subscription. It's like the planned obsolescence of the light bulb cartels of the 1920s on steroids.
Personally, I have a hard time believing markets support this kind of stuff past the first exposé. I guess when you don't have many choices, or the choices that you do have all bandwagon onto oligopoly/cartel-like activity, pretty depressing but stable patterns can emerge.
Heck, maybe someone who knows the history of retail could inform us that it came to software "from business segment XYZ". For example, in high finance it has long been not uncommon to negotiate charges that are a fraction of assets under management. Essentially a "percent tax", or in other words the metaphorical "charging Bill Gates a million dollars for a cheeseburger".
EDIT: @terminalshort elsethread is correct in his analysis that if you remove the ability to have a platform tax, the control issues will revert.
That planned obsolescence thing on light bulbs isn't the entire story. Light bulbs will last longer if driven less hard, due to the lower temperature. But that lower temperature also means much lower efficiency because the blackbody spectrum shifts even further into the infrared. So some compromise had to be picked between having a reasonable amount of light and a reasonable life span.
But yeah agree, this subscription thing is spreading like a cancer.
I'm not an expert on the case law, but supposedly United States v. General Electric Co. et al., 82 F.Supp. 753 (D.N.J. 1949) indicates that whatever design trade-offs might have existed, corporate policy makers were really just trying to screw consumers [1] (which is why they probably had to agree on short lifespans as a cartel rather than just market "this line of bulbs for these preferences" vs. "this other line for other people" -- either as a group or separate vendors). I keep waiting for the other shoe to drop where they figure out how to make LED bulbs crappy enough to need replacement.
LEDs are already awful. I already lost 4 of 10 LED light bulbs I bought last year. I hope they will be replaced. It's because every LED bulb has a small transformer inside, and it fails quite quickly.
I think it's a heat dissipation issue. I have some overhead LED lights that replaced some halogen bulbs; they have huge metal heat sinks on the back and have all lasted 10+ years. Unfortunately they are no longer sold, but I did buy a few spares just in case.
It depends a lot on the bulbs. When we moved into our current house 11 years ago, we replaced everything with LEDs. Many of those original bulbs are still going strong, including all of the 20 or so integrated pot lights we put in to replace the old-school halogen ones. Others died within a year, and replacements have been similarly hit and miss.
To some extent you get what you pay for; most of the random-Chinese-brand LEDs I've picked up off of Amazon have failed pretty quickly. Most of the Philips and similarly expensive ones have lasted. Also, the incandescent-looking ones that stuff all the electronics into the base of the bulb tend to fail quickly, as does anything installed in an enclosed overhead light fixture, due to heat buildup.
> as does anything installed in an enclosed overhead light fixture, due to heat buildup
This is my problem. My house has a lot of enclosed overhead light fixtures, and LEDs just do not last long in them. And renovating all of them to be more LED friendly would be quite expensive.
I got one of these free energy audit things which included swapping out up to 30 or so bulbs with LEDs. Whatever contractor did it seems to have gotten the cheapest bulbs they could, and the majority of them have failed by 4 or 5 years later. So far so good on the name brand ones I replaced them with.
Yes, but the compromise didn't have to be an industry-wide conspiracy with penalties for manufacturing light bulbs that were too long-lasting (and hence less efficient). But it was. Consumers could have freely chosen short-lived high-efficiency bulbs or long-lived low-efficiency ones.
In fact, they could have chosen the latter just by wiring two lightbulb sockets in series, or in later years putting one on a dimmer.
"That planned obsolescence thing on light bulbs isn't the entire story."
Whilst that's certainly true, the Phoebus cartel's most negative aspect was that it was a secret organisation; its second was that it was actually a cartel. These disadvantaged both light bulb consumers and any company that wasn't a member of the cartel: a new startup company that wasn't aware of, or a member of, the cartel would be forced out of business by the cartel's secret unfair competition.
Without the cartel, manufacturers could have competed by offering a range of bulbs trading longevity against brightness depending on consumers' needs. For example, offering a full-brightness/1000 h type for normal use and a 70%-brightness/2000 h one for, say, applications where bulbs are awkward to replace (such product differences could even be promoted in advertising).
Nowadays, planned obsolescence is at the heart and core of much manufacturing, and manufacturers are more secretive than ever about the techniques they've adopted to achieve their idea of the ideal service lives of their products - lives that optimize profits. This is now a very sophisticated business and takes into account many factors, including ensuring their competition's products do not gain a reputation for having a longer service life or better repairability than their own (likely still the corrupting factor that originally drove the formation of the Phoebus cartel).
Right, the philosophy's not changed since Phoebus, but the sophistication of its implementation has increased almost beyond recognition. There's not space to detail this adequately here, except to say I've some excellent examples from the manufacture of white goods and how production has changed over recent decades to manufacturers' advantage, often to the detriment of consumers.
In short, planned obsolescence and the secrecy that surrounds it have negative and very significant consequences for both consumers and the environment. When purchasing, consumers are thus unable to make informed decisions about whether to trade off the reduced initial cost of a product with a short service life against one with increased longevity and/or improved repairability. Similarly, short-lived products only add to environmental pollution; witness the enormous e-waste problem that currently exists.
As manufacturers won't willingly give up planned obsolescence or the secrecy that surrounds it, one solution would be to tax products with artificially shortened service lives. In the absence of manufacturing information, governments could statistically determine product tax rates based on observable service lives.
The reason subscriptions are spreading everywhere is that stock markets and private investors usually value recurring revenue at a much higher multiple than non-recurring revenue. The effect can be so large that it can be better to have less recurring revenue than more non-recurring revenue, at least if you are seeking investment or credit.
It creates a powerful incentive to seek recurring revenue wherever possible. Since it affects things like stock prices, and executives and sometimes even rank-and-file employees often have stock, it's an incentive throughout the organization. If something is incentivized, you're going to get more of it.
In the past it was structurally hard to do this, but now that everything is online it becomes possible to put a chip in anything and make it a subscription. We are only going to see more and more of this unless either consumers balk en masse or something is done to structurally change the incentives.
All very true, and "balk en masse" is what I meant by "first exposé". (Ancient wisdom, even, if you think about individuals and mortgages/car loans and having a steady job, etc. rather than just businesses.) Maybe we'll anyway see some market segments succeed with "pay 2x more for your screwdriver, but it will at least be your screwdriver" slogans, and then have screwdrivers to do with what we will, like the proverbial "pound sand". ;-)
I agree, but why do you buy it then? Everyone should be allowed to price how they want. If they priced at $1M + $100k/month they would sell much less. Therefore the price they charge is "reasonable" for the right customers.
Being truly unordered, like a set, is something you can do in a mathematical abstraction but not in a physical computer program. Anything stored is "ordered" in some way, either explicitly by virtual (or physical) memory addresses or implicitly by some kind of storage map equation (like a bitmap/bitvector or N-D C/Fortran array). It just might not be a useful ordering. (I.e., you may have to loop and compare.)
One might qualify such orderings as "system-ordered", or, as in the Python insertion-ordered dict, qualify with "insertion-ordered", though hash tables in general are sort of a mélange of hash-ordering. The same container may also support efficient iteration in multiple orders (e.g., trees often support key order as well as another order, like VM/node pool slot number order).
So, in this context (where things are obviously elements of a computer program), it isn't obvious that hair-splitting over ordered vs. sorted in their purest senses is very helpful when what is really missing is qualification.
Of course, like in many things, people tend to name computer program elements after their abstractions. This is where confusion often comes from (and not just in computing!) .. borrowing the names without all the properties (or maybe with more properties, as in this case, though that is all probably a bit iffy with the frailty of how you enumerate/decompose properties).
EDIT: In a similar way, in a realized computer, almost any "set" representation can also be a "map". You just add satellite data. Even a bit-vector can have a "parallel vector" with satellite data you access after the bits (which could even be pointful in terms of cache access). This can cause similar confusions to the "ordered" vs. "sorted" stuff.
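As a concrete sketch of that last point (Nim; the type name and its fixed small-integer universe are made up here), a bit-vector-style "set" becomes a "map" the moment you bolt on a parallel vector of satellite data:

type
  BitMap[V] = object
    present: seq[bool]    # the "set" part (a seq[bool] stands in for real bits)
    value:   seq[V]       # parallel satellite data, touched only after the bit

proc initBitMap[V](universe: int): BitMap[V] =
  BitMap[V](present: newSeq[bool](universe), value: newSeq[V](universe))

proc incl[V](m: var BitMap[V]; key: int; v: V) =
  m.present[key] = true   # set the membership bit
  m.value[key] = v        # then the satellite datum

proc get[V](m: BitMap[V]; key: int; dflt: V): V =
  if m.present[key]: m.value[key] else: dflt

var m = initBitMap[string](16)
m.incl(3, "three")
echo m.get(3, "?"), " ", m.get(5, "?")   # three ?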
I used to do this, but unary kind of sucks after 3; so maybe others might like this better before their fingers get trained:
..() { # Usage: .. [N=1] -> cd up N levels
    local d="" i
    for ((i = 0; i < ${1:-1}; i++)); do
        d="$d/.."           # Build up a string & do 1 cd to preserve dirstack
    done
    [[ -z $d ]] || cd ./$d
}
Of course, what I actually have been doing since the early 90s is to realize that a single "." with no args is normally illegal, and people "cd" soooo much more often than they source script definitions. So, I hijack that to save one "." in the first 3 cases and then take a number for the general case.
# dash allows non-AlphaNumeric alias but not function names; POSIX is silent.
cd1 () { if [ $# -eq 0 ]; then cd ..; else command . "$@"; fi; } # nice "cd .."
alias .=cd1
cdu() { # Usage: cdu [N=2] -> cd up N levels
local i=0 d="" # "." already does 1 level
while [ $i -lt ${1:-"2"} ]; do d=$d/..; i=$((i+1)); done
[ -z "$d" ] || cd ./$d; }
alias ..=cdu
alias ...='cd ../../..' # so, "."=1up, ".."=2up, "..."=3up, ".. N"=Nup
And, as per the comment, this even works in lowly dash, but needs a slight workaround. bash can just do a .() and ..() shell function, as with zsh.
That minimalist post mortem for the public is of what sounds like a Rube Goldberg machine and the reality is probably even more hairy. I completely agree that if one wants to understand "root causes", it's more important to understand why such machines are built/trusted/evolved in the first place.
That piece by Cook is ok, but largely just a list of assertions (true or not, most do feel intuitive, though). I suppose one should delve into all those references at the end for details? Anyway, this is an ancient topic, and I doubt we have all the answers on those root whys. The MIT course on systems, 6.033, used to assign a paper that has been raised on HN only a few times in its history: https://news.ycombinator.com/item?id=10082625 and https://news.ycombinator.com/item?id=16392223 It's from 1962, over 60 years ago, but it is also probably more illuminating/thought-provoking than the post mortem. Personally, I suspect it's probably an instance of a https://en.wikipedia.org/wiki/Wicked_problem , but only past a certain scale.
I have a housing activism meetup I have to get to, but real quick let me just say that these kinds of problems are not an abstraction to me in my day job, that I read this piece before I worked where I do and it bounced off me, but then I read it last year and was like "are you me but just smarter?", like my pupils probably dilated theatrically when I read it like I was a character in Requiem for a Dream, and I think most of the points he's making are much subtler and deeper than they seem at a casual read.
You might have to bring personal trauma to this piece to get the full effect.
Oh, it's fine. At your leisure. I didn't mean to go against the assertions themselves, but more just kind of speak to their "unargued" quality and often sketchy presentation. Even that Simon piece has a lot of this in there, where it's sort of "by definition of 'complexity'/by unelaborated observation".
In engineered systems, there is just a disconnect between KISS on our own/at small scale and what happens in large organizations, and then what happens over time. This is the real root cause/why, but I'm not sure it's fixable. Maybe partly addressable, tho'.
One thing that might give you a moment of worry: both in that Simon piece and, far, far more broadly, all over academia both long before and ever since, biological systems like our bodies are an archetypal example of "complex". Besides medical failures, life mostly has this one main trick - make many copies, and if they don't all fail before they, too, can copy, then a stable-ish pattern emerges.
Stable populations + "litter size/replication factor" largely imply average failure rates. For most species it is horrific. On the David Attenborough specials they'll play the sad music and tell you X% of these offspring never make it to mating age. The alternative is not the https://en.wikipedia.org/wiki/Gray_goo apocalypse, but the "whatever-that-species-is-biopocalypse". Sorry - it's late and my joke circuits are maybe fritzing. So, both big 'L' and little 'l' life, too, "is on the edge", just structurally.
https://en.wikipedia.org/wiki/Self-organized_criticality (with sand piles and whatnot) used to be a kind of statistical physics hope for a theory of everything of these kinds of phenomena, but it just doesn't get deployed. Things will seem "shallowly critical" but not so upon deeper inspection. So, maybe it's just not a useful enough approximation.
I am not sure you'll be satisfied but this article by mhoye has a section on Watson and typewriters that seems relevant and is fun to read regardless: https://exple.tive.org/blarg/2019/10/23/80x25/
Parenthetically, I think much discussion around this neglects the constraints / true ultimate causes of human eye resolving power (in minutes of arc, not DPI|millimeters) and cognitive "line tracking" (think narrow newspaper columns). I.e., they are from the perspective of device manufacturers / producers, not receiving brains / consumers. At least in theory, the former is trying to please the latter after all, but eyes/brains haven't really evolved much in this respect since antiquity / the dawn of writing. This is just a pet peeve of mine that maybe you share, and clearly in the realm of 2x..4x, not a few %, and so not on track with your question like the article I linked :-) TLDR - while a "standard viewing distance" is good enough for eye charts, I guess it's too complicated for "marketing hardware" and it's all too easy to get caught up in manufacturer framing.
Even better, in Nim these little CLI tools could use https://github.com/c-blake/cligen and have had terminal-colorized, auto-generated help for many years now with much less dev-effort than raw argparse. Start-up time of statically linked Nim programs is like O(100..500) microseconds, just like C programs.
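A minimal sketch of the 1-identifier usage (the example proc and its parameters here are, of course, made up):

import cligen

proc greet(name = "world", times = 1, verbose = false) =
  ## Say hello to `name`, `times` times.
  for i in 1 .. times:
    if verbose: stderr.writeLine "greeting #", i
    echo "hello, ", name

when isMainModule:
  dispatch greet   # generates --help and --name=, --times=, --verbose options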
> do the right thing by checking if the standard output is an actual tty (isatty)
This is a very questionable heuristic. I'm not sure of the exact date that support began, but I have been piping color output to `less -r/-R` for decades. This can be nice even for less-than-multi-terminal-page output, just for "search".
isatty(stderr) would actually be more accurate for my specific use cases (for when I don't `|&`), or maybe even isatty(stdin), but those are also imperfect heuristics.
The point is, since "auto" is a questionable heuristic, it is not so crazy/wrong to just default to color-on and have an off switch, and that off switch is what NO_COLOR is about (as explained by the very first sentence in the linked article). Desirable defaults ultimately depend upon the distribution of your users' tastes (as always, more|less).
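In Nim terms, the "default on, explicit off switch, optional auto heuristic" logic might look something like this (the `useColor` helper and its `auto` flag are just illustrative; `isatty` lives in std/terminal and `existsEnv` in std/os):

import std/[terminal, os]

proc useColor(auto = false): bool =
  if existsEnv("NO_COLOR"):     # the explicit off switch always wins
    result = false
  elif auto:                    # the questionable heuristic, if you want it
    result = isatty(stdout)
  else:                         # default: color on
    result = true

if useColor():
  stdout.styledWriteLine(fgGreen, "ok")
else:
  stdout.writeLine "ok"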
FWIW, physical dimensions like meters were the original apples-to-oranges type system, pre-dating all modern notions of things beyond arithmetic. I'm a little surprised it wasn't added to early FORTRAN. In a different timeline, maybe. :)
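To give a flavor in Nim (just a toy; `distinct` types stand in for a real units library):

type
  Meters  = distinct float
  Seconds = distinct float

proc `+`(a, b: Meters): Meters = Meters(float(a) + float(b))

let d = Meters(3.0) + Meters(4.0)        # fine: apples + apples
# let bad = Meters(3.0) + Seconds(4.0)   # compile error: apples + oranges
echo float(d)                            # 7.0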
I think what is in "the" "stdlib" or not is a tricky question. For most general/general-purpose languages, it can be pretty hard to know even the probability distribution of use cases. So, it's important to keep multiple/broad perspectives in mind, as per your "I may be biased" disclaimer. I don't like the modern trend toward dozens to hundreds of micro-dependencies, either, though (well, it kind of started with CTAN, where the micros seemed meant more for copy-paste, and then CPAN, where they were not). I think Python, Node/JS, and Rust are all known for this.
[1] https://github.com/c-blake/cligen/blob/master/cligen/print.n...