I of course agree that a focus on high level abstraction with little knowledge of the underlying layers is a very problematic approach to programming. But I see the problem as something very different.
To me the issue is two destructive thought patterns, in both developers and users. Developers believe that just because they can make something, it deserves to exist. Users believe that just because it exists, they are supposed to use it.
I believe there is no better example than the "WordArt" feature in Word. It is something which obviously took a very significant amount of effort on the part of the developers, tens of thousands of man-hours I would guess. It is also terrible and nobody should ever use that feature; not a single document ever gains anything from it. Instead of making a title large and bold, the existence of that feature tempts users into making it a wave, with an outline and a shadow, instantly making the document worse.
So much of software just should not exist, so many features should be removed and users need to stop thinking that the existence of something is justification for its use.
"tens of thousands" is a crazy over estimation. The whole point of Microsoft Word is to align stylized text on a canvas. The Word Art feature does not appear to use much, if any, functionality that does have some more mundane purpose elsewhere in the application.
On top of that, I think you may be forgetting how common shitty banners were in the early 90's when this feature first came out. There was a clear market for this feature at the time, and they've only made incremental changes to it since.
I think it is very likely much more time. Fonts are hard, and the mere existence of text which doesn't move in a straight line adds enormous complexity. Also don't forget that its existence requires competitors to implement similar things.
>On top of that, I think you may be forgetting how common shitty banners were in the early 90's when this feature first came out. There was a clear market for this feature at the time
No, I am not forgetting that. I am saying that this exact thought process is why we have so much awful software.
WordArt makes Word stand out. I'm certainly not the only one who spent hours playing with WordArt as a child. That means I grew up being familiar with Word. Maybe it increases the number of users in the long run.
I do not think that matters. It is a feature which should never be used.
>I'm certainly not the only one who spent hours playing with WordArt as a child.
Exactly that is what I am talking about. It is a totally irrelevant feature, which has exactly zero use to the application itself and only adds a mountain of complexity.
As much as I’m agreeable to your general observation, I feel WordArt is a really poor example to illustrate your point: few software features have had such an aesthetic, meaningful, and human impact in the course of the Internet Age.
Oh how I yearn for a world of Geocities and FrontPage over Admiral anti-ad-blocker and GDPR cookie popups.
Yeah it's a baffling example to me. WordArt was really popular! People liked using it! The fact that it has aesthetically gone out of favor, years later, has nothing to do with software quality.
A friend of mine has an expensive 15-inch Dell laptop. He does not know much about tech specs, but since much of his work life revolves around his laptop, he simply bought the maxed out version. In the hope it makes his life easier.
In the hope that every tool he installs makes his life easier too, he installed so much stuff on it that the battery in this newish laptop lasts less than an hour.
So he lugs around this beast of a laptop, with a beast of a charger. And even when he just uses a browser, after 30 minutes, he gets nervous: "I need a power outlet!".
Every time I watch this drama in confusion, I ask him why he doesn't look up what software drains his battery, and what he can do about it.
That makes him very uncomfortable, and he proclaims: "No, it's Dell's fault! And I can use the laptop with a power outlet! No problem!".
Somehow, analyzing what software does seems to make some people so uncomfortable that they never look into it and therefore never blame bad software for its behavior.
Many computer science programs begin with high-level languages, and quickly move on to using frameworks. Students don't know what those frameworks are doing - it's all just magic incantations. The frameworks are massively inefficient, and bring in zillions of dependencies, because they are such generic "do anything" tools.
Looking at a browser app: putting a character on the screen requires an insane amount of computing power: server side running Spring (or whatever), client side running React (or whatever), plus the browser interpreting insanely complicated CSS.
The benefit of frameworks is that they enable low-ability developers to create stuff without understanding the nuts-and-bolts of what they're doing. Eliminate the frameworks, and you eliminate probably 90% of the developers out there.
> Many computer science programs begin with high-level languages, and quickly move on to using frameworks.
Is that really true of any universities? At the intro to CS course at my university, right after teaching loops and assignment statements in Python, they demonstrated basic algorithms like bubble sort and binary search. I've heard that the focus on frameworks comes more from those coding bootcamps that focus on getting trainees hired ASAP.
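For concreteness, the kind of exercise such an intro course shows right after loops and assignment is something like the following binary search - a minimal sketch in Python, not taken from any particular curriculum:

    def binary_search(items, target):
        # Return the index of target in a sorted list, or -1 if it's absent.
        lo, hi = 0, len(items) - 1
        while lo <= hi:
            mid = (lo + hi) // 2        # midpoint of the remaining range
            if items[mid] == target:
                return mid
            elif items[mid] < target:
                lo = mid + 1            # discard the lower half
            else:
                hi = mid - 1            # discard the upper half
        return -1

    print(binary_search([1, 3, 5, 7, 9, 11], 7))  # prints 3

Nothing beyond loops, assignment, and comparisons, which is exactly why it makes a good first algorithm.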
I strongly agree with this point and believe there's a generational aspect at play. For newcomers, the knowledge required to avoid being labeled a "low-ability developer" and to write your own frameworks is significantly higher than it used to be. I'm fortunate to have learned in a simpler era, without many of today's bloated frameworks. However, my equivalent challenge is that I find much of what compilers and assembly do to be "magic incantations."
Two notes: one is how much a surgeon needs to know about molecular biology to work properly, a construction worker about concrete chemistry, a bus driver about mechanics, and so on. The other is that in the past the "framework" was just the OS, with accessible sources, easy to explore, and built with end-user programming in mind. There were some, like Smalltalk on Xerox workstations (take a look at a modern Pharo demo, or this old commercial demo: https://youtu.be/M0zgj2p7Ww4) or LispM (for instance https://youtu.be/RQKlgza_HgE for a bit of eye candy), etc.
Now, if students start at a high level in such systems, they can learn by digging as much as they want, a bit at a time, with nothing hidden. Instead, for commercial reasons, we have pushed a gazillion "isolated layers" of crap just to be able to sell bits for a bit more. Which do you imagine is more realistic today: students learning ASM to run on CPUs now complex enough to have a full OS inside, or coming back to a single app-OS-framework like in the past? As a hint: look at the evolutionary path from widget GUIs to WebUIs, from visual menus to search-and-narrow, notebook UIs, "prompts", etc.
There will be an immense amount of work to do, of course, something that can happen in the FLOSS world backed by academia and large research labs, definitely not in even a giant enterprise. But we are going there anyway; the quicker we realize it, the less damage, pain, and wasted work we will produce.
A framework is just a tool like any other and has its place. I could write my own UI using the browser canvas, but alas, I don't get paid to write code. I get paid for delivering features.
> The benefit of frameworks is that they enable low-ability developers to create stuff without understanding the nuts-and-bolts of what they're doing. Eliminate the frameworks, and you eliminate probably 90% of the developers out there.
This is sort of meaningless though, isn't it? The frameworks exist, and will continue to do so. The developers who rely on them occupy a niche – a big one, probably.
These developers are out there growing the pie, the appetite for software. Either it's good enough, or, sans anticompetitive forces, someone with a niftier solution comes along and outcompetes them. As we should all be aware though, there's more to building software than just knowing how to code.
Getting hooked on computers is easy—almost anybody can make a program work, just as almost anybody can nail two pieces of wood together in a few tries. The trouble is that the market for two pieces of wood nailed together—inexpertly—is fairly small outside of the "proud grandfather" segment, and getting from there to a decent set of chairs or fitted cupboards takes talent, practice, and education.
Sadly it seems like a lot of software out there sits barely above the "proud grandfather" threshold and well below the "engineering"+ threshold.
+ as in, "build a bridge that does not crumble at the first gust of wind" or "hurl several metric tons of metal over a mix of gravel and bitumen at twice a cheeta's top speed for several hours and reliably so over 10ish years" or "put a metal can in the air with a couple hundreds occupants and get across half of the third rocky ball from the sun"
This resonates with my reaction when I read that the 1st step of a project consisting of like 50 lines of code for a 3cmx2cm microcontroller board is "install VSC".
Because of users. They ask all software to do all kinds of wild and unrelated tasks.
It's like having a workshop to do your job. Then you want a bathroom and shower, so you can wash up at work. Then you want a kitchen, not only a fridge, so you can make fresh meals. Then you ask for some space for exercising, like a mini gym. Then you want a small garden to plant your own vegetables.
They believe everything is related to their primary job, so the workshop should accommodate it all, and that is when chaos ensues.
I humbly submit that you've got the right idea but the wrong culprit - it's an endless drive to increase sales that leads to feature creep and bloat being pushed down from on high. Or maybe it's both and we're caught in the middle.
I'm sympathetic to the goal of making software less terrible, but there's no actionable content here.
Personally a lot of the code I've written has turned out to be pretty terrible because I didn't know any better. Sometimes an environment is just conducive to building software products in a certain way that doesn't take performance as seriously.
Copyright has probably hindered the evolution of software as well, resulting in a lot of duplicate efforts to recreate solutions to problems that have already been solved reasonably well by someone.
I think what's really amazing is that tech companies worth billions or sometimes even trillions of dollars are still producing trash-tier software. I've heard tons of people complaining about Microsoft Teams being trash. Why can't Microsoft make a good chat app? Big tech companies should really be shamed more in public for shipping terrible products, they have no excuse with their near-infinitely deep pockets.
> what's really amazing is that tech companies worth billions or sometimes even trillions of dollars are still producing trash-tier software.
The site addresses that, obliquely, and traces it back to academia and the way programming is taught - among other causes.-
Not only do the site's points totally resonate with me, the site is pretty substantial, with different areas for various purposes, like showcasing projects, joining discussions, and learning from tutorials, all of which adhere to the stated goal and philosophy of the organization behind it.-
The homepage introduces the site’s purpose and offers links to key sections like projects, articles, and forums. The projects section showcases various software projects created by community members. There are 41 different software projects currently listed. They show - by example - what they mean, in practice.-
(These projects range from game engines to utilities and are all focused on being hand-crafted and deeply understood by their creators.-)
They have a "symposium" kind of section, where find detailed posts about different computing topics and techniques. This is where members share their knowledge and experiences about developing software, often providing in-depth tutorials and insights. They all come from member-motivated subjects, "surfacing" and "codifying" otherwise ephemeral chats from Discord.-
They also have forums, which are a place for discussions. Members can ask questions, share updates about their projects, or discuss a variety of topics related to the ethos. It's a community-driven area where people help each other out and share their progress.-
There’s also a showcase area, where members post about their completed projects. This section is meant to highlight the work done by the community and inspire others to start their own projects. There are 41 different software projects currently linked on the site.-
I love how the whole project is laid out, and how the stated goals are brief, clear, cogent and to the point, with the showcased software and "goalpost" posts making and substantiating everything else.-
The site also organizes events to further the goals. All in all, pretty solid.-
I'd agree that the current educational models for programming aren't the most effective, but I don't think any superior alternatives have been proposed or been made available. If someone developed a superior alternative then they would probably be able to quickly out-compete a bunch of companies, and eventually that new model would become widespread.
Personally I think that current platforms have too many layers of cruft that have never been cleaned up, so people have been forced to continue building bad abstractions on top of everything. We need base layers that integrate the lessons from the last couple of decades.
For all of the faults with the Web and tools like Electron, they've helped enable a massive amount of people to ship real useful software. Ideally there would be superior alternatives to something like Electron while still remaining accessible. One interesting example is ImGUI; I tried it out a while back and was amazed by the level of performance, but still had to deal with a lot of useless details related to build tools and cross-platform idiosyncrasies.
I think there's some serious rose coloured glasses going on here.
I started my first job as a developer in 1994. My first task was to add some XDR to our RPC code to ensure big-endian and little-endian machines could talk to each other. This was in the C language. As in, tiny standard library, manual memory management, etc. No type safety. I wrote a UI using X/Motif. It was a pretty simple UI with a File menu for loading and saving, and some other edit menus etc. It took ages. No type safety meant that accidentally passing a pointer to a button to a Motif library function that expected a pointer to a label caused a segmentation fault. I still love C but not if I'm building an actual application. I absolutely do use C if I'm curious how well llvm can optimise a loop for counting prime numbers or calculating pi.
Then along came C++. Now I had some actual data structures and algorithms. Memory management was easier but still error prone. But the compiler we were using was very slow when using a lot of STL code (Standard Template Library) so once our codebase was big enough, building and linking would take over an hour. There were some workarounds but they were clunky and caused their own issues.
I didn't like Java when it came along because I thought the whole idea of compiling to JVM bytecode and running a JVM in production rather than a compiled executable was obviously slow and wrong.
Java has been around for years and lots of mistakes have been made along the way but it's a really solid environment with access to a HUGE number of libraries and frameworks.
Web apps used to be simple - there was only so much you could do. But then javascript and the infamous XMLHttpRequest came along and people could suddenly make really nice interactive sites. That kicked off another years-long period of trying to figure out how best to write large, complex applications with web technologies. But don't kid yourself. The benefits of web apps far outweigh the complexities and "slowness".
Other developers would have experienced other journeys. A modern word processor is vastly more complex than early word processors. I used to really like WordPerfect 5.1 because it was responsive and very capable and never crashed, but when Microsoft Word came along, people obviously preferred it, even if it was slow and crashed all the time.
So I'd say modern software is complex and has problems that come with complexity but the slowness isn't really it. I used to edit code in vi. It was pretty responsive. You can bet your bottom dollar that it's really fast still. But I prefer JetBrains IDEs. Sometimes they're slow or a little buggy but the trade off is worth it.
Tcl/Tk would've saved you. Would've taken you a few minutes. We built all kinds of touchscreen hardware for factories, entertainment, PoS etc with Tcl/Tk, Perl and C in those days. It was lovely, without the rose coloured glasses; it is still rock solid and fast, just no-one accepts that type of UI anymore. Many of these still run without updates for decades.
We developed with Turbo Pascal in the second half of the '80s for 50+ b2b applications, which were later ported to Delphi and grew to 100+ b2b applications. It was excellent. Many of them still run, updated and recompiled with Lazarus.
When Java and the web came along (for me they started at the same time; I only went 'simple 90s server-side CGI' web years later, unfortunately - I think because our clients didn't want anything web at first, at all), things went a bit crappy imho. I made good money out of Java by starting a company immediately when the beta was released, but resource hogging on client and server and sluggishness became normal then, at around the same time that people started to want non-standard UIs - first because of Applets, I think.
> Sometimes they're slow or a little buggy but the trade off is worth it.
Exactly that. You want the features, so you take the slow and buggy with it. I refuse (mostly because the 1995-2005 java ptsd period I had), which means I have to write many things myself but at least it's fast and does what I want; when there are bugs, I fix them.
> when Microsoft Word came along, people obviously preferred it, even if it was slow and crashed all the time
I am sure it's "overdetermined" but why -is- that?
I'd call it the "tragedy of the losing better option": when two alternatives exist, often the lesser/worse/worst one wins. You find that all over the place.-
Microsoft Word was faster than WordPerfect’s slow, buggy, late release on Windows. Ami Pro was around and also perfectly usable. WP honestly felt like bloatware to my memory, no doubt it had a lot of features, and all I was doing was homework and D&D campaigns, but I don’t remember being unhappy with the options available.
> When two alternatives exist, often the lesser/worse/worst one wins.
More likely, the one that appeals to more people wins, which in software means a good mix of being the easiest and quickest (hence cheapest) to develop and market, and the easiest for users to learn. And to me those values aren't absolute but relative: I don't think there's much difference between the people who can't live without a touch screen today and those who 40 years back could work 8 hrs/day in WordStar only through key sequences.
> I don't think there's much difference between the people who can't live without a touch screen today and those who 40 years back could work 8 hrs/day in WordStar only through key sequences.
But maybe there is a difference? In disposition or even capacity/willingness to exert attention towards a task? Of course, expectations do play a role, and "touch" is (almost) everything these days ...
Flash filled a need to install software in a single click, which the web browser made possible. Flash unironically became so widespread because Microsoft dropped the ball in OS development.
If you actually use software intended for users, it is slow. There are simply too many layers. Nowadays even something like writing regular javascript is considered close to the metal, and high level means you install a library that does what you want out of the box, and you need to initialize it. Those are maybe useful things for speed of development, but all those things piled on top of each other might not work well in the end.
As an example, USA Today created a dedicated EU website to comply with GDPR. It was and probably still is incredibly fast. It's a simple document with text and images that are news articles. You click and get a page instantly. There are no ads and no trackers, you do not need to load JS templates and JSON to speed things up etc. You do not need AMP. I am not a big html document purist, and I think that's a dead end, but in terms of user experience for what they sell -- news articles -- it was head and shoulders above their regular offering.
> something like writing regular javascript is considered close to the metal,
Good grief.-
> and high level means you install a library that does what you want out of the box, and you need to initialize it.
And, again. Grief.-
Someday, knowledge of the actual functioning of these systems might become so obscure as to, basically, be lost - kind of like (in a way) the Apollo project know-how is fading away ...
> Someday, knowledge of the actual functioning of these systems might become so obscure as to, basically, be lost - kind of like (in a way) the Apollo project know-how is fading away ...
We're already way past this point since the 70s. Do you read the official specifications for every hardware platform your software runs on? Do you even have access to and a deep understanding of the inner workings of your target OS? Do you know the intricacies and quirks of all the network-, display-, and input protocols used in your devices? Are you deeply familiar with every audio and video codec, encryption method, online data stream compression scheme?
Point is, none of that matters depending on your actual software. A sculptor doesn't need to be an expert in chemistry, mechanical engineering, and mining to create a beautiful stone statue after all. Intimate knowledge of the hardware is required if you build drivers, firmware, low-level embedded software, and critical systems (e.g. ABS, flight control systems, etc.) but not for building word processors, interactive websites, business apps, data entry systems, content creation tools and all the things users interact with on a daily basis on their PCs.
It'd be way too much for a single human being to know and understand in depth, and unnecessary (as in, not helping with getting the job done). Why is it that everyone understands and accepts and is comfortable with the fact that scientists and medical doctors, for example, have their specialist fields and aren't familiar with the full spectrum of their respective jobs, while people expect software developers to be deeply familiar with 100+ years of cumulative knowledge and developments in hardware and software?
Your example is flawed. Static (as in non-interactive) content is not a program.
As soon as data collection, tracking, and user metrics enter the picture, the simplicity of static content goes out the window (one more reason to dislike these things).
Opinion pieces like this awful article just go to show once again that way too many developers these days lack the fundamentals, e.g. mathematics.
A naive programmer will look at a problem and categorise it as "easy" and "simple" if it's already been solved and they know how to do it. Actual simplicity, however, isn't always obvious: no three positive integers satisfy a^n + b^n = c^n for n>2. Simple to show, right? So simple, in fact, that many folks tried to use high-school maths to tackle it. Yet it took 358 years and over 110 pages ("bloat"!) of dense, advanced mathematical concepts ("layers of abstractions"!) and reasoning to prove it.
There are many other seemingly innocent questions of the same nature, like the Goldbach Conjecture or the Collatz Problem that may take centuries and breakthroughs in various areas of maths to finally be solved.
It's similar with software, except that, unlike in mathematics, problems are often poorly defined, constantly changing, and developers aren't given decades to ponder them.
To go back to your very first observation:
> If you actually use software intended for users, it is slow. There are simply too many layers.
you'll find it doesn't hold much water upon closer inspection. You basically missed the mark entirely. Why and where do these layers exist? Your CPU executes undisclosed µ-ops instead of opcodes specified by the architecture and emitted by compilers. Every memory access goes through an MMU instead of directly to RAM on all processors but the most basic microcontrollers. Hardware access is mediated by BIOS, drivers, and OS kernels.
In short: "close to metal" is a vague and relative term in the 21st century and if your target platform is the browser, then yes, JavaScript is "close to the metal", because the "metal" in this case would be a W3C-compliant browser engine.
Finally, there's economics. Better is the enemy of good and especially of good enough. Back in "the good ol' days" programmers were cheap and the iron was expensive. Today it's the other way around and the way modern software is built reflects that. Optimising for a myriad of diverse hardware and software platforms is tedious and expensive. The ones that win the competition aren't always the "best" solutions, but the ones that are first to market, deliver features quickly, or offer added value (support, documentation, user-friendly interfaces, etc.).
You could've saved yourself a lot of words and just skipped to the end. Better is the enemy of good or good enough. Sure. But it's not good enough, which can be confirmed by simply using it. Speed is not my only concern either. For instance, some time ago I was arriving in a foreign country and wanted to check which stores were open on a Sunday. There was a big supermarket chain that had a store locator, so I opened that page, added my zip code and wanted to see the results. You could not scroll the page! If it was a one-off bug I would say shit happens, but I see this all the time. Things jumping up and down, things you try to use but that don't react or react very slowly, etc. I am quite sure that project had a ton of people working on it, it had CI, it had very high engineering standards (at least according to what goes by engineering standards today), but it had a page that could not scroll.
You don't need to explain to me the shtick of expensive labor and cheap machines, we've heard that one ad nauseam; it's hardly an original thought.
Whether or not it's a program is irrelevant, if it's not a program and it does the job, why are you writing a program? And if you want to write a program to fulfill other business goals, why are you not doing a passable job?
For me it's because of the nature of software itself: it can be endlessly changed (for good or bad). So the very same property that was sought after at the dawn of CS backfires on us each and every single day.
Software is terrible because people keep proliferating a bunch of packages from many vendors, any of which can break your stuff or even introduce a backdoor after you run your package manager. The same goes double for websites that pull in a ton of different scripts from who-knows-where and execute them in their “trusted environment”.
Instead, one cohesive operating system designed by one group (eg BSD) can be a pleasure to use.
If you want to minimize code over the wire, it also helps a lot to have some modularity and tree-shaking.
Software isn't terrible. People just focus on the rare programs that are and then over-generalize for clicks. The Windows calculator app needs a splash screen because Windows has been in a spiral of tech debt and decline for decades, and their new frameworks are all badly optimized. The calculator apps on every other platform start fast and work fine. I just started Calculator.app on an old Intel MacBook and it took less than a second.
We all use mountains of software every day that isn't terrible. Think about the gazillions of lines of code that go into anything made by Apple, or Google. It all works well and we don't think about it. The article complains about "language servers", which suck because they were invented for VS Code which is still relatively new and it's all open source so there isn't much of a business model. But that's not "software", that's a poor choice of IDE. Buy a JetBrains IDE and marvel at the fact that it isn't terrible.
Reality is that software is way better than it once was. Standards are so much higher now. When I started with computers it was expected that to get any program to run at all you'd need to find a local wizard first. I spent way too much of my childhood frantically repacking MS-DOS 640kb low mem to try and get the latest game demo running, or reallocating IRQs to try and get the new Sound Blaster to work at the same time as my mouse. Five years later Plug'n'Play was mostly working, Windows mostly worked, but it was expected that every app you bought would crash every five minutes. Consumer software that didn't crash like crazy was a sort of mythical science fiction scenario. Additionally, EVERY app had a splash screen, because hard disks were so freaking slow and RAM was so scarce that starting even an app that'd be considered very simple by today's standards would spend a minute just flushing everything else you had loaded out to swap, including things that you probably needed, like the Start button.
In 2024 things like starting fast and not crashing all the time are table stakes, which is why programs that fail to meet that standard stick out like a sore thumb. Software that's considered actually good will probably have a gazillion features, be updated every week without ever failing, will back up all your data indefinitely and automatically for free, maybe have collaboration features, be translated into a bazillion languages, have responsive UI, work on every device you own, and be insanely cheap or even free.
Friend, JetBrains IDEs are brutally slow. They do a lot, but VSCode does 90% of what they do, supports more things, and starts faster despite being a freaking Electron app, and it is also terrible.
You're not making a good point. Delphi started about as fast as a JetBrains IDE on a slow hard drive with a single core CPU whose speed was counted in MHz... And it has a much better form editing experience than anything these days.
I used Delphi back in the day and I use IntelliJ today. The latter starts faster than the former despite doing many orders of magnitude more things. Delphi back then didn't support refactoring at all, which is 90% of the CPU a modern IDE is using. It certainly isn't brutally slow.
Also, if you want a better than Delphi form editor just use IntelliJ to create a Swing app. That's a built in form editor with more features. I like Scene Builder a bit better, but form editors that are good are out there for the taking if you want them.
IntelliJ has orders of magnitude more hardware to work with, there isn't even a comparison.
Why is a modern IDE using 90% of the CPU for refactoring if I'm not refactoring?
Do you think an IDE being this slow is in any way acceptable in an age where we have 8+ core CPUs with 3+ GHz speeds, high-speed SSDs, DDR5 RAM and GPUs taking on part of the load?
It uses most of the CPU power because to be able to offer refactoring services the IDE needs to maintain an indexed representation (database) of the source code. Because of that indexing work, even in large codebases you can call up all the call sites of a method, or many other things, and the results appear in a fraction of a second. If the IDE had to read every source file to do that every time, refactoring support would be impractical.
The flip side of that is that most of the work the IDE does is understanding the source code as it changes and maintaining those indexes, including indexes of all the dependencies. Delphi just did not even try to do that work, and Delphi apps typically had few dependencies if any anyway.
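To make the idea concrete, here is a toy sketch in Python of what such an index looks like - purely illustrative, not how IntelliJ actually does it: every file is scanned once into a name-to-call-sites map, and "find usages" queries are answered from that map instead of rescanning the sources.

    import re
    from collections import defaultdict

    # Crude stand-in for "a call site": an identifier followed by '('.
    # Real IDEs build a full syntactic/semantic model, not a regex index.
    CALL_RE = re.compile(r"\b([A-Za-z_]\w*)\s*\(")

    def build_index(files):
        # Scan every file once; map each called name to (file, line) pairs.
        index = defaultdict(list)
        for path, text in files.items():
            for lineno, line in enumerate(text.splitlines(), start=1):
                for name in CALL_RE.findall(line):
                    index[name].append((path, lineno))
        return index

    def find_usages(index, name):
        # Answered from the in-memory index; no file I/O on the query path.
        return index.get(name, [])

    files = {
        "app.py":  "def main():\n    save_report()\n",
        "jobs.py": "def nightly():\n    save_report()\n",
    }
    index = build_index(files)
    print(find_usages(index, "save_report"))  # [('app.py', 2), ('jobs.py', 2)]

Even this toy shows the trade-off: the cost moves from query time to keeping the index up to date as the code changes, which is exactly the background work described above.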
IntelliJ being slow isn't my experience. But I know it can get very slow in some cases, like if it's running out of RAM. Make sure to respond to notifications if it says it's running low on memory.
But this thread starts with the article complaining that those external processes have to be restarted every twenty minutes. Exaggerating, I'm sure. But a few days ago I helped a colleague who was using both IntelliJ and VS Code on his laptop. The whole machine had dragged to a halt, a kind of slowdown I never see on Macs normally. A quick trip to Activity Monitor showed that VS Code processes had been in a spin loop for days. Somehow they were trashing the performance of the entire machine.
The ironic thing is, I came over to take a look because my colleague had been complaining IntelliJ was slow. Kill off the errant Electron processes and suddenly IntelliJ was fast.
At any rate, startup time isn't the be all and end all for an IDE. I only (re)start my IDE when it needs an upgrade.
I don't believe people like the author of the OP are just over-generalizing for clicks. Based on the other responses on this thread, they're expressing something that many of us feel, and that they presumably feel themselves. I'm not sure what to do about that feeling, though. The tools coming out of the Handmade network, at least the ones I've seen, tend to go too far, throwing out necessary things like accessibility. Some of the complexity of modern software is truly necessary.
It's easy for those of us who have used computers for a while, or have read some computer history, to look back at earlier feats of software compactness and efficiency, such as an IDE starting and running fast on a processor with a tiny fraction of the power of current CPUs (90s Delphi as raised by a sibling comment), a MIDI sequencer with karaoke-style lyrics display in less than 48 KB of machine code (Diversi-Tune on the Apple IIGS in 1988), or a BASIC interpreter in less than 4 KB (the original Altair BASIC from Microsoft). It's easy to look at these things, compare them to the size of a hello-world program produced by a modern toolchain, and conclude that modern software is indeed terrible. But of course, we look back on those past artifacts with rose-colored glasses. I don't doubt that, for some subset of modern software, bloat is indeed out of control. I'm not sure what to do about it though, or if we can ever get back to those feats of efficiency that tend to impress us in retrospect.
Right. But I think it's easy to misremember the past. Delphi 5 might start quickly if you run it on today's hardware, but back then it took a good 10-15 seconds to start. I remember staring at those splash screens and listening to the endless frustrating clicking of the HDD head far too well.
I think a good reality check is to try and bring splash screens to mind. How many splash screens can you think of for modern apps? I can only think of Slack. Back in the Delphi era every app had a splash screen. Office apps, IDEs, html editors, even web browsers! One reason IE4 won over Netscape 4 is the latter showed you a big splash of a lighthouse where the former didn't need one. SSDs are truly miraculous devices but also software engineering just got better. Like, modern client devs know to keep slow operations off the GUI thread whereas back then very few apps were threaded so hung windows that wouldn't repaint were standard.
Software is definitely bigger now, but that's mostly because hardware is vastly bigger so nobody cares. Like, on an average Swiss internet connection the bottleneck for installing an app is now the decompression and GateKeeper hashing/anti-malware checks, not the download itself! Why optimize for size when users don't complain about it?
> Why optimize for size when users don't complain about it?
Pride in one's work? Especially when comparing it to the "good old days". Maybe we'd have better morale if we were allowed (or allowed ourselves, for those of us running our own companies) to indulge in such "useless" optimizations. But no, the pressure is on to keep cranking out more, more, more, with no time for quality that we can be proud of.
Edit to add: You're right though that splash screens are less common now.
I'd argue that the primary source of bloat in software today is a mismatch between what developers need and what the underlying platform provides. The most extreme case of this can be found in desktop operating systems, where relatively few apps now use the underlying platform at all, largely because nobody wants to write GUI code multiple times. What developers want and what the underlying operating systems provide are almost entirely different, so everyone ships a giant runtime to replace what the OS comes with. Modern mobile platforms have a similar problem, where things like SwiftUI or Jetpack tend to be bundled with the app, or at least were historically up until recently, and that yields bloated packages too.
In contrast people get the vapors if a web app is more than 5 MB of download and this happens because the web platform is significantly more responsive to what developers really want even though it has poor technical foundations. Where bloat does occur in web apps, it's because the browser makers have not responded quickly enough or at all to growing mismatches, for example the vdom diffing algorithm react uses could be implemented in the browser itself, but isn't, so lots of apps ship it bundled.
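For a sense of what that bundled work actually is, here is a toy tree diff in Python - a sketch of the general idea only, not React's actual reconciliation algorithm: walk the old and new virtual trees together and emit patches for whatever differs.

    # Virtual nodes as (tag, props, children) tuples; a deliberately naive sketch.
    def diff(old, new, path="root"):
        patches = []
        if old is None:
            patches.append(("CREATE", path, new))
        elif new is None:
            patches.append(("REMOVE", path))
        elif old[0] != new[0]:                      # different tag: replace subtree
            patches.append(("REPLACE", path, new))
        else:
            if old[1] != new[1]:                    # same tag, props changed
                patches.append(("SET_PROPS", path, new[1]))
            old_kids, new_kids = old[2], new[2]
            for i in range(max(len(old_kids), len(new_kids))):
                o = old_kids[i] if i < len(old_kids) else None
                n = new_kids[i] if i < len(new_kids) else None
                patches.extend(diff(o, n, f"{path}/{i}"))
        return patches

    old = ("div", {"class": "app"}, [("p", {}, []), ("span", {}, [])])
    new = ("div", {"class": "app dark"}, [("p", {}, [])])
    print(diff(old, new))
    # [('SET_PROPS', 'root', {'class': 'app dark'}), ('REMOVE', 'root/1')]

Every React-style app ships some far more sophisticated version of this loop; the point above is that the browser could have provided it once instead.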
Apps for Windows 95 didn't seem bloated, but that's because they relied so heavily on what already came with the computer when you bought it. People just had to suck up the fact that the underlying API was pretty bad, because they couldn't afford to ship anything better. Then the CD era came along, and you did start to see apps leave the underlying platform behind and get more bloated.
A truly next-gen operating system would make it very difficult to talk precisely about bloat, I think, because if you really went beyond the Unix or NT designs your operating system would download and manage code modules automatically, a little bit like browsers did before the introduction of cache partitioning. How bloated an app appears to be then would depend entirely on what you had previously used and downloaded, and thus to some extent would be in the eye of the beholder. After all, if you measured all of the code paged in by running a native Mac app, or even something like delphi, versus the amount of code paged in by an electron app, it probably wouldn't be that different. It's just that the Mac app is accessing code that is shared with other operating system utilities.
Thanks for steering the discussion in a more constructive direction than where I was going.
There are C++ UI toolkits wrapping Win32, Cocoa, etc., and sometimes such toolkits are statically linked into the executable, meaning that dead code elimination should be able to minimize the executable size. I'm thinking in particular of wxWidgets. But even with C++ and static linking, it's easy for a library to be designed in such a way that you end up paying for stuff you don't use. The last time I did a project in C++ with wx, 10+ years ago, my program itself was quite trivial, and it only used very basic widgets, but the statically linked Win32 executable still ended up being ~2.5 MB. I didn't get very deep into why, but my understanding is that code for a bunch of features (printing, drag and drop, etc.) got linked in, even though my application wasn't using it, because the window procedure had to handle messages related to those features regardless. I guess, to optimize for minimal executable size, the API would have to be designed to require the application developer to explicitly initialize only the features they need, and in this case, the message handler would have to be more dynamic.
Sometime I'd like to design a new toolkit inspired by wx and SWT, in either Rust or Java. In the latter case, I'd use the new Foreign Function and Memory API to call the native APIs. I'd be interested to see how much GraalVM Native Image could optimize an application written with such a toolkit; I recall you've said good things about how much Native Image can optimize program initialization in particular.
There seems to be a broad consensus, though, that Win32 in particular isn't suitable for "modern" desktop apps, and so, by extension, neither are wx and SWT. Certainly in the Rust GUI space, all the popular toolkits are drawing their own widgets, and even reimplementing lower layers of the stack like text rendering, thus contributing to the trend you talked about where applications bring along their own runtime. I assume there are actual deficiencies in OS-supplied APIs that are leading to this, though a sufficiently jaded person might argue that developers are in fact wasting their time on unnecessary things for one reason or another. Anyway, if the classic Win32 widgets are still good enough for some parts of the OS itself (although, admittedly, fewer and fewer with each release), then presumably they're good enough for some subset of applications as well, and if the developers of those applications had a convenient and lightweight abstraction layer over those classic widgets, then we could start to reduce bloat.
Yeah, static DCE is a limited optimization. I've personally given up on it. It's not useless, but it's a big lift for developers and the app size will keep growing anyway. These days I'm more interested in dynamic code paging systems as a way forward, but I'm not able to work on that right now. Think about how "bloated" an app like Google Maps or Search is, yet it still feels fast because it downloads what it needs on demand. Everything should work that way.
The Conveyor installer is a native Win32 app written in C++. It's pretty trivial, it mostly just invokes some Windows APIs, works around bugs in those and ties the results to a progress bar. It's a classic app that uses HWNDs, window messages and the like. The bulk of the logic is in one file. Despite that it's about a 500kb EXE. Where do the bytes come from? Partly, it's C++. Native code is verbose and template instantiation creates a lot of it. You don't need to instantiate many std::vectors before suddenly binary size has crept up on you. The widespread adoption of C++ increased binary sizes significantly over C, because suddenly you were using value typed containers everywhere instead of a single shared linked list utility library.
GraalVM native executables start extremely fast, but they aren't small. Partly this is the mismatch problem again: operating systems don't provide stuff like built in garbage collectors, even though GC is a fundamental service nearly every app needs, so, programs ship their own. Partly it's because GraalVM makes apps start fast by pre-initializing the app, so the binaries come with a heap image that it just mmaps and starts using. But then you have to download and store the initial heap instead of recomputing it at startup. It's a disk space/startup time tradeoff. And partly it's because a static analysis must be inherently conservative. Anything that could potentially run has to be included, even if in reality it never does (think exception handlers, assert messages...)
So IMO the right fix is to just assume an always-on internet connection and then stream code in as it's needed, with pre-emptive paging in the background to get apps into a state where you can disconnect. The system would aggressively deduplicate data to avoid redownloading stuff you already have. A smarter version of what browsers do. This lets you have apps that are hundreds of gigabytes in size but they still start fast and sip resources on the client side where you're most constrained.
I think modern languages ignore Win32 partly because it sucks and partly because virtually no working developers have ever used it. Many modern devs won't even realize it's there. How many devs that slung window messages in the 90s are still coding? I think a lot have retired or moved into management roles, or were hired to work on Chrome. There are Microsoft security engineers, working on Edge, who announced in a blog post that they had no idea how to use COM and had to rely on AI to write the needed code for them! [1] And a lot of devs aren't developing on Windows, they're probably writing these Rust toolkits on macOS or Linux.
Anyway, Win32 UI is something I'd not recommend using. It's been unmaintained for decades. Conveyor uses it because, well, it's an installer that downloads the rest of the app, so size matters more than anything else, and the UI needs are simple. The moment you want table stakes stuff like High DPI support, responsive layout, dark mode, vector icon support, data binding etc, you end up needing big frameworks that sit on top and/or you end up in a nightmare of twisty barely documented APIs.
> Like, modern client devs know to keep slow operations off the GUI thread whereas back then very few apps were threaded so hung windows that wouldn't repaint were standard.
Maybe things are going great among people who write native software that uses native UI libraries. In games, users of the most popular game framework have been asking the framework developers for the ability to poll inputs on a thread that isn't also the render thread for 14 years and counting.
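The structure being asked for is simple enough to sketch. Here is a rough illustration in Python threads, just to show the shape, not any particular engine's API: one thread samples input at a steady high rate and pushes events onto a queue, while the render loop drains that queue at whatever frame rate it manages.

    import threading, time, queue

    events = queue.Queue()            # input events, produced off the render thread
    running = True

    def poll_inputs():
        # High-frequency input sampling, independent of the frame rate.
        while running:
            # A real engine would read the OS/device input state here.
            events.put(("sample", time.monotonic()))
            time.sleep(0.001)         # ~1000 Hz polling

    def render_loop():
        global running
        for frame in range(3):
            time.sleep(0.05)          # pretend rendering takes 50 ms (~20 fps)
            drained = 0
            while not events.empty():
                events.get_nowait()   # consume everything captured since last frame
                drained += 1
            print(f"frame {frame}: {drained} input samples this frame")
        running = False

    threading.Thread(target=poll_inputs, daemon=True).start()
    render_loop()

The benefit is exactly what that long-standing feature request is after: short inputs can't fall between slow frames, because sampling doesn't stop while the renderer is busy.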
It's not, it's drastically faster. The TI-83 is a device I have to pick up, probably fetching it from a drawer first. Calculator app starts in a few hundred milliseconds from the moment I finish typing "Cal" in Mission Control, which itself is only a few seconds after I decide I need a calculator. Generic tools can pay a performance price for that generality and still win. A dedicated device just cannot beat that speed no matter how many ASICs it uses.
Would like to see a blog post like this aimed at end users, not programmers.
It might be impossible to get programmers to stop writing garbage software and trying to get other people to use it. It might be possible to get end users to reject it.
That doesn't make sense to me. Unless users are presented with alternatives, they will have no choice but to use the garbage they are given - nor will they be able to recognize how much better software can be.
Better software has to start with the programmers. We're the only ones who ACTUALLY know what the hardware is capable of, and the only ones capable of setting those new expectations. This is why we aimed the manifesto at programmers. It is not the user's fault.
Hardware physically exists. So when you're creating new hardware, you study nature. Software is fantasy. It's like you've been hired to write page 500 of a 500 page fantasy novel. Most of your time is wondering what the heck the people who wrote the other 499 pages were thinking.