restalis's comments

I was thinking about this too. When you read a program, there is an information payload, the metaphorical ball you have to keep your eyes on, while more or less forgetting the rest as soon as it stops being relevant. In the functional paradigm it's like watching a juggling act with a whole bunch of such balls at once (plus the expectation to admire it), and that's just wasteful of the reader's attention.
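For illustration only (my own sketch in Rust, not code from the discussion): in the imperative version there is one ball to watch, the accumulator, while the chained version keeps the filter, the mapping, and the reduction all in the air at once.

    fn main() {
        let xs = [3, 1, 4, 1, 5, 9, 2, 6];

        // Imperative: one ball, `total`; everything else can be forgotten
        // as soon as the loop body has been read.
        let mut total = 0;
        for x in xs {
            if x % 2 == 0 {
                total += x * x;
            }
        }
        println!("{total}");

        // Functional chain: the filter predicate, the mapping, and the
        // implicit accumulation are all in flight at the same time.
        let total2: i32 = xs.iter().filter(|&&x| x % 2 == 0).map(|&x| x * x).sum();
        println!("{total2}");
    }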


Most likely, it hasn't happened (yet) because kernel-related work is still actively underway¹ and, more importantly, because of a shortage of developers willing and able to tackle that kind of challenge.

¹ At the time of this writing, there are open PRs like this one: https://github.com/reactos/reactos/pull/8422


"putting yourself and your hard work in legal risk"

Like what? I'm genuinely curious what personal risks anyone faces from contributing to ReactOS. I'm also curious what kind of legal risk could threaten the work itself. I mean, even in the unlikely scenario that some part gets proven illegal and is ordered removed from the project, what would prevent that expunged part from being re-implemented by some paid contractor (this time under legally indisputable circumstances), rendering the initial legal action moot?


"A good example I think has adjusted the behavior of the ecosystem is Rust: it makes certain things much easier than before and slowly the complex bug-mired world of software is improving just a little bit because of it."

From a software design perspective, the functionality that belongs in a compiler is code compilation only. Taken to the extreme (as in the Unix philosophy), if the code compiles, the compiler should just build you the binary, and otherwise fail silently. Checking the code and reporting on various aspects of its quality is supposed to be a static analyzer's job. (In reality, pretty much all the compilers we have couple compilation with some amount of lighter code checking beforehand, leaving static analyzers with only the heavier and more exhaustive checks.) What Rust does is demand that its compiler perform even more of what a static analyzer is supposed to do. It's a mishmash of two things (things which manage to stay separate in other programming languages, because that makes sense), masquerading as a revolution.

So, even when it's about code in much-blamed languages like C and C++, "the complex bug-mired world of software is improving just a little bit" by not skipping the expensive, static-analyzer kind of checks, the kind that Rust happens to make impossible to skip.
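A minimal example of the kind of check in question: the Rust compiler rejects the snippet below at compile time, whereas the equivalent C compiles fine and it takes a separate static analyzer (or luck at runtime) to flag the dangling pointer.

    fn main() {
        let r;
        {
            let s = String::from("temporary");
            r = &s; // error[E0597]: `s` does not live long enough
        }
        // `s` is dropped at the end of the block above, so the borrow
        // checker refuses to let the program read through `r` below.
        println!("{}", r);
    }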


The transition from kilobytes to megabytes is not comparable to the transition from megabytes to gigabytes at all. Back in the kilobyte days, when engineers (still) had to manage individual bits and resort to all kinds of tricks to arrive at something working, many aspects of software (and software engineering) left a lot to be desired. Far too much effort was poured not into putting things together for the business logic but into overcoming the shortcomings of limited memory (and other computing resources). Legitimate requirements had to be butchered like Procrustes' victims just so the software could exist at all. The megabyte era accommodated all but high-end media software without forcing compromises in their internal build-up. It was the time when things could be done properly, no excuses.

Nowadays, the disregard for computing resource consumption is simply the result of those resources getting too cheap to be properly valued, plus a habit of taking their continued increase for granted. There is little to nothing in today's added software functionality that genuinely requires gigabyte levels of memory consumption.



This tendency to overload the requirements of what could otherwise be a simple solution to a simple problem is the bane of engineering. In this case, if security is important, it can be addressed separately, e.g. by treating the underlying text as an abstract block of information that gets packaged with corresponding error codes and checked for integrity before consumption. The UTF-8 encoding/decoding process itself doesn't necessarily have to answer the security concerns. Please let solutions be simple whenever they can be.
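A sketch of that separation in Rust (the check_integrity function is made up for the example): decoding only answers "is this valid UTF-8?" through an error code, while any security policy remains a distinct step owned by the consumer.

    use std::str;

    // Hypothetical integrity check, deliberately separate from decoding;
    // a stand-in size limit here, but it could be a checksum or signature.
    fn check_integrity(payload: &[u8]) -> Result<(), &'static str> {
        if payload.len() > 4096 {
            return Err("payload exceeds the allowed size");
        }
        Ok(())
    }

    fn main() {
        let payload: &[u8] = b"plain UTF-8 text";

        // Step 1: the security/integrity policy, owned by the consumer.
        check_integrity(payload).expect("integrity check failed");

        // Step 2: decoding proper, which only reports validity.
        match str::from_utf8(payload) {
            Ok(text) => println!("decoded: {text}"),
            Err(e) => eprintln!("invalid UTF-8 after byte {}", e.valid_up_to()),
        }
    }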


While it may be usable in outdoor-like natural lighting, it is not (always) true that it is "usable in the sun". On one such product¹ there's an "Important Notice" advising users to "avoid exposing the E-ink screen to direct sunlight or intense ultraviolet rays, as this may cause irreversible damage to the screen."

¹ https://shop.dasung.com/products/dasung-25-3-e-ink-monitor-p...


The world is dependent on plastic by now, and needs a lot of it. The stigma it has acquired lately comes from playing fast and loose with it in the past, but it is a necessary class of materials, and having sustainable means of mass-producing it is a good thing.


The main benefit for me with this approach is that the boundaries are not transparent anymore. Content printing is such a boundary: your data is about to exit through there, and you're summoned to handle that explicitly. The inconvenience that comes with it is like any other that appears once security enters the play. The same goes for the data management responsibilities: who handles what, for how long, and shared with whom. Without data type distinctions, everything is (more or less) common, with vague or broadly defined ownership.
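The comment doesn't spell out the mechanics, but a minimal Rust sketch of such a non-transparent boundary (all names invented for the example) might look like this: a sensitive value cannot be printed until it is explicitly walked across the boundary.

    use std::fmt;

    // Hypothetical wrapper: deliberately does not implement Display,
    // so the value cannot reach an output boundary by accident.
    struct Sensitive(String);

    impl Sensitive {
        // Crossing the printing boundary is an explicit, auditable step;
        // redaction, logging, or consent checks would live here.
        fn release_for_output(self) -> Printable {
            Printable(self.0)
        }
    }

    struct Printable(String);

    impl fmt::Display for Printable {
        fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
            write!(f, "{}", self.0)
        }
    }

    fn main() {
        let secret = Sensitive(String::from("card number"));
        // println!("{}", secret); // does not compile: no Display impl
        println!("{}", secret.release_for_output()); // boundary made visible
    }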

