I think these programmers were forced to understand their subject matter much better than today's programmers do.
Say, for a (silly) example, you want the first 10 000 digits of Pi. Today it's easy to just store them. But back then you didn't just have to know what Pi is; you had to come up with the smallest program you could think of to calculate it.
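For the curious, the standard trick is a spigot algorithm, which streams digits out of a few kilobytes of integer state instead of storing them. Here's a rough Java sketch of the classic Rabinowitz-Wagon spigot (the 2800-column sizing for 800 digits is the traditional one; the class name and comments are mine, so treat it as a sketch rather than gospel):

    // Rabinowitz-Wagon spigot: computes digits of Pi with a small
    // integer array instead of storing them. 2800 mixed-radix
    // "columns" yield 800 digits, four per outer iteration.
    public class PiSpigot {
        public static void main(String[] args) {
            final int SCALE = 10000;      // work in chunks of 4 decimal digits
            int c = 2800;                 // columns; 2800 -> 800 digits
            int[] f = new int[c + 1];     // f[c] stays 0
            java.util.Arrays.fill(f, 0, c, 2000);

            int e = 0;                    // digits held back as carry
            StringBuilder pi = new StringBuilder();
            while (c > 0) {
                int d = 0;
                int g = c * 2;
                for (int b = c; ; ) {
                    d += f[b] * SCALE;
                    f[b] = d % --g;       // remainder stays in this column
                    d /= g--;             // quotient carries leftwards
                    if (--b == 0) break;
                    d *= b;
                }
                pi.append(String.format("%04d", e + d / SCALE));
                e = d % SCALE;
                c -= 14;                  // 14 columns consumed per 4 digits
            }
            System.out.println(pi);       // prints 3141592653589793...
        }
    }

The entire working state is one small int array; that's exactly the kind of thinking the memory budgets of the era forced on you.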
It would be interesting to hear, too, how Tekken 3's coders managed to work with 2 MB of RAM.
Which is why I find it so ironic that many still think only languages like Assembly, C, or C++ have a place in IoT on devices like the ESP32.
Sure, we used Assembly when performance was the ultimate goal, but we also used plenty of high-level languages, including stuff like Clipper for database front ends.
512 KB and a couple of MHz are already capable of a lot; one just needs to actually think about how to implement things properly.
> Which is why I find it so ironic that many still think only languages like Assembly, C, or C++ have a place in IoT on devices like the ESP32.
Short answer: Parkinson's law. Software expands to consume whatever resources are available.
When I browse the Web, open Windows File Explorer, open Photoshop, open Visual Studio, open any graphical application that needs GPU acceleration, or do anything else that should not be an issue, I find that Parkinson's law very much applies.
Wtf. Did you see how fast VS6 started on a machine from almost 20 years ago? Today I get depressed whenever I need to open Visual Studio, and I'm not even sure I use any functionality that wasn't available 20 years ago.
Many, many programmers (in my perception at least) are extremely dissatisfied with today's state of computing, and the reason we got here is the popular opinion that we don't need to worry about performance.
Nobody should optimize representations of Pi or count CPU cycles by default; that's not the point. But if you claim, for example, that Java is always fast enough, then we have a strong disagreement. It's partly confirmation bias and being unaware of vast parts of the landscape, but it's also a fact that I couldn't name you a complex GUI written in Java with satisfying ergonomics.
Java (the language) is not especially slow; back in the mid 2000s I was optimising a Java GUI to display hundreds of thousands of nodes for chip design clock trees.
Java (the culture) makes it hard to be performant. There's a great tendency to use all sorts of frameworks and elaborate object systems where much simpler code would give you at least 90% of the functionality for 10x the performance.
But if you get some programmers with experience beyond Java who care about performance and are rewarded for performance, it's certainly possible.
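To put a toy number on that 90%-for-10x intuition, here's a deliberately unscientific sketch (no JMH, no proper warmup, and the ratios will vary by JVM) that sums ten million values once through a boxed List<Integer> and once through a plain int[]; the boxed path allocates roughly five times the memory and typically runs several times slower:

    import java.util.ArrayList;
    import java.util.List;

    // Toy comparison, not a rigorous benchmark: boxed collection
    // traversal vs. a primitive array loop.
    public class BoxedVsPrimitive {
        static final int N = 10_000_000;

        public static void main(String[] args) {
            List<Integer> boxed = new ArrayList<>(N);
            int[] primitive = new int[N];
            for (int i = 0; i < N; i++) { boxed.add(i); primitive[i] = i; }

            long t0 = System.nanoTime();
            long sum1 = 0;
            for (Integer v : boxed) sum1 += v;   // unboxes every element
            long t1 = System.nanoTime();

            long sum2 = 0;
            for (int v : primitive) sum2 += v;   // stays in cache-friendly memory
            long t2 = System.nanoTime();

            System.out.printf("boxed %d ms, primitive %d ms (sums %d / %d)%n",
                    (t1 - t0) / 1_000_000, (t2 - t1) / 1_000_000, sum1, sum2);
        }
    }

Nothing framework-specific here; it's just the cost of indirection and allocation, which the elaborate object systems then multiply.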
Yep, I agree. Java isn't slow if you build a trivial application. Or an HTTP server. Or something for batch processing. Or if you're a masochist willing to work around performance issues until you reach roughly C-level performance and performance "robustness".
The problem is what Java encourages you to do. I don't mean to single out a language; Java is just one I revisit from time to time, and I'm always astonished how awkwardly hard it is to do the simplest things in a straightforward (and CPU-efficient) way.
And yes, there are a lot of slow C++ programs, and even slow C programs. There's just a clear tendency for C programs to be faster and more responsive.
I was involved in a massive rewrite of a website from C to Java a while back. One coworker observed that, when they were coding in C, it took a lot longer to get anything working, but once you did, it was pretty solid: C had a tendency to just crash if anything was wrong, so you'd work for quite a while before you got something that didn't crash consistently. Java, on the other hand, allowed you to get something working (that is, running without crashing) much quicker: but the things that were out of place were still there, they just caused harder-to-find problems that were much more likely to become customer-facing before they were caught.
I totally agree that the problem is lack of mechanical sympathy.
A big problem is simply that software development is over-hyped and there is too much money in the industry, so there are too many inexperienced developers producing bad software. Also, projects are too ambitious, leading to compartmentalized implementations and thus slow, unreliable code. The problem often starts with the very things people set out to build.
And my guess is that the use of Java and other object-oriented languages is highly correlated with these developments.
Legacy products like Visual Studio and Photoshop also suffer from age. Those codebases are both over 20 years old and have a gazillion features that customers have come to rely on. Add in that a codebase that old is slower to make changes to than something fresh, and you end up where they are.
Visual Studio had a UI rewrite to .NET (WPF) about a decade ago.
VS Code is Electron.
If they had kept maintaining the old thing, which you say suffers from age, maybe they would have fewer performance issues; instead, they threw the old thing out and rebuilt on costlier, more recent frameworks.
The bizarre part is that if you mention to the people making Visual Studio that every version is slower and has more latency and lag in its interactivity, they seem to have no idea what you are talking about and ask what specific situation you are running into. They either don't know or don't acknowledge the evolution of their software's performance.
I read once that Adobe Acrobat has thousands of static variables that are initialized on startup. Minimal PDF readers like SumatraPDF are successful largely because they avoid that bloat and start instantly.
It's crazy what happens when programming teams place no priority on not wasting users' time and computer resources.
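(The Acrobat detail is something I read secondhand, but the underlying mechanism is real: eager static initialization is paid at startup whether or not the feature is ever used. A minimal Java sketch, with entirely made-up names, contrasting it with the lazy holder idiom:)

    // Hypothetical example of startup cost: eager static init vs.
    // the lazy holder idiom. Not Acrobat's actual code.
    public class StartupCost {
        // Eager: built at class load, whether it's ever used or not.
        static final int[] EAGER_TABLE = buildExpensiveTable();

        // Lazy holder idiom: Holder (and its table) is initialized
        // only on the first call to table(). Thread-safe via the
        // JVM's class-initialization guarantees.
        private static class Holder {
            static final int[] TABLE = buildExpensiveTable();
        }
        static int[] table() { return Holder.TABLE; }

        static int[] buildExpensiveTable() {
            int[] t = new int[1 << 20];   // stand-in for real work
            for (int i = 0; i < t.length; i++) t[i] = i * 31;
            return t;
        }

        public static void main(String[] args) {
            // EAGER_TABLE was already built before main started;
            // Holder.TABLE is built only on this line.
            System.out.println(table().length);
        }
    }

Multiply one of those eager tables by a few thousand and you get the startup times people complain about.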
Not necessarily. When I did my degree, our Algorithms and Data Structures class was taught in C; a couple of years later the same course was upgraded to Java.
However, I know from former university friends who eventually became TAs that what was required from students was kept at the same level.
The professor responsible for the class had a battery of tests (almost a decade before "unit tests" became a known term), and having the tests pass was the first requirement for a class project to even be accepted.
Those tests did not only check for correctness: performance, memory consumption, and execution time were all part of the acceptance criteria.
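For a rough picture of what such an acceptance gate could look like, here's a plain-Java sketch (studentSort and the 100 ms budget are made-up placeholders, and the memory checks are omitted for brevity):

    import java.util.Arrays;
    import java.util.Random;

    // Sketch of a grading test that checks correctness AND an
    // execution-time budget before a submission is accepted.
    public class AcceptanceTest {
        // Stand-in for the student's submitted implementation.
        static int[] studentSort(int[] data) {
            int[] copy = data.clone();
            Arrays.sort(copy);
            return copy;
        }

        public static void main(String[] args) {
            int[] data = new Random(42).ints(1_000_000).toArray();
            int[] expected = data.clone();
            Arrays.sort(expected);                    // reference answer

            long t0 = System.nanoTime();
            int[] actual = studentSort(data);
            long elapsedMs = (System.nanoTime() - t0) / 1_000_000;

            if (!Arrays.equals(actual, expected))
                throw new AssertionError("wrong result");
            if (elapsedMs > 100)
                throw new AssertionError("over budget: " + elapsedMs + " ms");
            System.out.println("accepted in " + elapsedMs + " ms");
        }
    }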
This is what is missing from many teaching institutions.
Folks crap on Electron apps and rightfully so, but I absolutely LOVE one thing about them relative to native GUI apps.
I can hit cmd+ and cmd- to scale their content and UI up and down.
That is dang near a killer feature for me.
Very handy for presenting, screen sharing, moving my apps to an external monitor with a slightly different DPI, or for moments when my eyes are simply tired and I want something easy to read.
You can say that I shouldn't need hundreds of MB of RAM to run Slack's desktop client, and you ain't wrong, but I've got plenty of RAM. My eyes and my time are much more finite resources.
Writing Electron apps, as a matter of fact, brings back for me the time when one had to be hyper-mindful of CPU cycles, just like back in the day when I was writing Windows apps in C++.
Everything goes through the prism of "but how much will this cost in terms of performance?" Granted, this should be the case for server software as well, but with clients you don't know what ghetto shit your code will run on: you write for the worst possible case. Server specs, at least, are known.
Honestly, dude, I know you are trying to joke and all, but this just sounds incredibly rude. This minimization of someone else's skill is petty and absurd. Electron apps serve their purpose and have their place; hating on them will not get you anywhere.
Electron solves a very important problem: it efficiently prevents those pesky Linux users from using your software. Most Electron apps simply do not work with Wine.
It seems to me that the point of Electron is, in many cases, to leverage existing knowledge of client-side web programming paradigms and tools. Multi-platform support is just a bonus that comes with it.
I'm not sure if you saw the interview on Polygon [1], but I really liked hearing about the first Tekken port from the arcade to the PS1 and how they dealt with the new memory constraints.