Absolutely. Anyone building on top of performance-critical systems should have a healthy bit of gratitude for the engineers who chose to specialize there.
That said, I think it's good to remain as objective as possible about the actual impact of optimizing for performance in different domains.
So for instance, the impact of attention paid to performance in the codecs used by a music/video player, or the v8 runtime, or rendering or networking subsystems in e.g. macOS or Chromium is huge.
However, should we expect the impact of optimizing to be similar in application-level code for consumer apps? I would argue no (granting that exceptions exist). At this layer the computations are business and display logic, plus calls into highly performant subsystems. Additionally, these apps are typically 'leaves,' not dependencies of other systems (which would cause their performance choices to ramify).
This is not to say consumer apps are able to ignore performance concerns: you can still make garbage that way. But you'd be deep into the region of diminishing returns if you poured as many resources into performance for application-level code on something like Spotify as you did on e.g. codecs it uses or low-level rendering code it depends on.
And that's the reason tech like Electron is so often selected by folks whose bottom line is massively affected by their ability to be objective about these issues.
There's a huge difference in the performance levels you're talking about, which creates a risk of unintentional equivocation.
I've written long rants about this in the past, so let me draw a picture instead:
  codecs,
  chrome   old spotify    slack, teams,
  renderer      |        current spotify
     |          |               |
  |--v----------v----x----------v--------|
  FAST               |                SLOW
                slow enough
              to notice during
                casual use
  \________/\________/\________/\________/
      |         |         |         |
  overkill      |      bad UX       |
                |           you're just being
             good UX          mean to users
    (your app should be here)
First, this is fundamentally a question of tradeoffs, which means a single-axis diagram like this is inherently misleading.
For instance, we have the conclusion about 'being mean to users' toward the slow end of the scale. But if the tradeoff means the app costs more, or has fewer accessibility or language features, or doesn't run on the user's chosen OS—which is more mean?
Second, this topic is contentious not because Spotify is slow but because many readers believe that building on Electron implies your app will be slow, that it's a basically negligent technology decision, a blight on the field of software engineering, and so on and so on... So, while I agree with your placement of Teams (though not Spotify, incidentally), pointing out that a couple of Electron apps are not optimally snappy reinforces, in this context, the (imo) mistaken attribution of non-snappiness to Electron: afaict, of the major Electron apps out there, at least as many are snappy, and the ones somewhat lacking in this department have no native apps with feature parity to compare against (i.e. for Slack or Teams; Spotify otoh, maybe it really is bad for some people; not my experience, though a sample of 1 wouldn't prove much).
In any case, I like the diagram and largely agree with its assertions in isolation.
Edit: I'd be happy to take a look at one of your longer rants if you want to point me to one. I am genuinely interested in better understanding the situation if I've missed something.
Ripcord is lovely, I'm a paying customer[0] and highly recommend it. I used it for years with Slack, and have only good things to say about the experience. It's also living proof that all the bloat in Slack and similar apps is absolutely not necessary.
That said, the issue with Ripcord and similar is that they're all living on borrowed time - and using them means risking your own account. It's only a matter of time before Slack and Discord start to ban people using alternate clients - and then poof, Ripcord is dead.
--
[0] - Fancy way of saying "bought a license when the dev finally enabled throwing some money towards them".
> It's only a matter of time before Slack and Discord start to ban people using alternate clients
Pretty sure in the case of Slack that their business customers will have words when a bunch of their developers suddenly can't communicate with the business...
There isn't, as long as you're willing to tolerate the risk of getting your account banned for a blatant ToS violation. Companies like Slack tend to ban first, ask questions never, and good luck reaching customer support if you aren't a paying business customer.
(FWIW, I judged the risk was worth it in my case, and was a happy Ripcord user for the years I had to use Slack.)
That said, I think it's important to consider how much development time is spent on iterating as a new product's design is figured out. The company developing Slack has probably done the net work of building it 20 times over as its feature set developed and morphed over the years, and I'm sure Slack's actual complete feature set eclipses Ripcord's (most of the difference coming in via features not essential to most individual users, but key to the business).
The workspace and tab counts are also hard to read much meaning from since most of the memory usage is dependent on the media that's been loaded into the app. How many web page previews, videos, images, etc. are in those tabs?
In any case, at the end of the day our sample sizes here are just too small to draw the conclusions people on here so often do about Electron. We know you can move fast with it (development speed), and that apps built with it can be fast (e.g. VSCode, Github desktop client, Discord)—but people can also build slow apps with it (no surprise), and there is a somewhat large constant factor for install size (~50mb base).
In my mind that does not add up to merit the kind of complaints about Electron that can be found here every day.
> apps built with it can be fast (e.g. VSCode, Github desktop client, Discord)
I can't speak to the other two, but there is no way I would describe Discord's client as "fast". Starting the application takes 8-10 seconds during which it pops up several different windows on top. Switching between channels has a delay of about 1 second or so. If that channel isn't in cache, then it adds another 3-6 seconds of delay.
Switching between views isn't a new task, and isn't an infrequent one. If I think back to Pidgin 20 years ago, the startup time was much faster, and switching between tabs had no perceptible delay. Discord has the advantage of 6 years of Dennard scaling followed by another 14 years of Moore's Law, so it has zero excuse for not being able to perform those same tasks to the same standard.
You raise very good points, that together make the picture even more complex :).
> The company developing Slack has probably done the net work of building it 20 times over as its feature set developed and morphed over the years
That's definitely true, especially in areas where they were innovating (or at least experimenting with features that were not common in the space). There's a cost to R&D, and I'll agree that preferring velocity is important to minimize that cost, which justifies the use of "nice for devs, bad for UX" tools[0].
> I'm sure Slack's actual complete feature set eclipses Ripcord's
That's true. Ripcord doesn't replicate Slack 1:1; there were a few "sparkly" features that were cut (at least when I used it ~2 years ago), and of course a lot of the chrome that got removed could be considered features by some. But at least by the metric of productivity, Ripcord's UX eclipses that of Slack.
> (most of the difference coming in via features not essential to most individual users, but key to the business)
In this particular case, I'd say it was 90% just removal of resource-intensive bloat. But that's a good observation in general: one of the reasons some users are dissatisfied with official apps is what they[2] deem user-hostile features, which exist to exploit the user instead of aiding them. Obviously, they're put there because they're the key to the business. Plenty of obviously bad UX can be easily explained when one looks at how the vendor actually makes money.
> How many web page previews, videos, images, etc. are in those tabs?
Ripcord either doesn't load those or is lazy about it. But when it does, you at least get the full picture, instead of having to click through some gallery-like popup interface :).
> apps built with [Electron] can be fast (e.g. VSCode, Github desktop client, Discord)
People will push back on you here because they're using a different reference point for "fast". VSCode is impressively fast... for an Electron app. Not so much in comparison to desktop-native apps implementing equivalent features. And that's the one well-known exception; a typical Electron application is noticeably slower and more resource-intensive than an equivalent desktop app.
> large constant factor for install size (~50mb base)
This is not something that people care about unless you're doing something silly, like an Electron TODO app that weighs 50MB and uses many times more RAM, where the reference comparison is a WinAPI app that would weigh 50 kilobytes and not use much more memory.
This is also why people don't generally complain about VS Code being Electron - it actually makes good use of all the features its platform offers. But most Electron apps? Picking Electron saves developers a little bit of time, at the cost of a heavy resource tax for all users. That's annoying. Especially if you have experience with native software that gives you reference points to compare against.
--
[0] - I'm definitely guilty of this myself. At my previous job, I developed a prototype for the 2.0 version of the company's flagship product in two weeks, in... ObservableHQ[1]. The actual work to reimplement those features in our product took almost a year. I did joke we should probably ship the prototype in the meantime (especially given that our main competitor was an Excel plugin), but we never seriously considered that.
> First is that this is fundamentally a question of tradeoffs, which means a single axis diagram like this is fundamentally misleading.
Agreed on the tradeoffs, and the diagram is a projection of a complex parameter space onto a single axis.
> But if the tradeoff means the app costs more, or has fewer accessibility or language features, or doesn't run on the user's chosen OS—which is more mean?
That's a very tricky question, because the relationship between performance and those other factors is not straightforward. For example, the cost of making an app in a typical startup has zero relation to what the users pay - development is funded from investor money, and the user-facing price is set by whatever shenanigans the business is doing at the moment - e.g. $0 to corner the market or maximize growth, or $10 as a calculated point that maximizes money extraction from a growing user base, etc.
(One would think there's no free lunch, and eventually the price has to come close to costs - but that's not how the startup economy works. If you get to the point of having to turn an actual profit, you've already missed your exit.)
Related to this is a second point: in a winner-takes-all market, the most successful app will suck the oxygen out of the room, preventing others from doing better work. Success typically isn't determined by the app itself - the app is usually backed by a service, which makes it not commoditizable. If you need Teams because of network effects, you won't switch to Slack even though the app is better. You won't dump Spotify for a competitor that doesn't have an equivalent musical catalog. Etc.
The point I'm trying to make is: the trade-offs are often arbitrary choices. Would it be possible for an app to successfully compete with Spotify while having fast, native clients for every platform and full accessibility? Definitely. But it's not happening, because it's not possible for that app to break into that market in the first place. Our would-be app can't compete on being a better music streaming player - it has to first reproduce the entire value offering, including the streaming service, the library, and countless deals negotiated with labels and musicians. This isn't happening, and so Spotify isn't getting any feedback from the market about their godawful garbage apps.
> because many readers believe that building on Electron implies your app will be slow and is a basically negligent technology decision, a blight on the field of software engineering, and so on and so on...
I'm partial to this view. The way I see it, picking Electron by default lands you smack in the middle of what I labeled the "bad UX" zone. It takes hard work - the kind of work you've described as trading against accessibility or existence - to move it into the "good UX" zone. That work usually isn't done - you don't pick Electron if you want to make a snappy app, you pick it because you care about velocity and cornering your little part of the market. So Electron apps tend to drift towards the "you're just being mean to users" zone - which is where the common view of "Electron = bloat" comes from.
> ... the cost of making an app in a typical startup has zero relation to what the users pay ...
These aspects of where the money comes from are orthogonal to the tradeoffs problem though: at the end of the day you have some quantity of money and spending it on A means you don't spend it on B. If performance was sacrificed, you can't look at that in isolation: whether it was a good decision or not depends on what it was traded for.
> But it's not happening because it's not possible for that app to break into that market in the first place.
That whole situation is unfortunate, but the same tech (Electron included) that lets companies exploit it also enables independent developers to build things they wouldn't otherwise have time for. It's purely an increase in power, and individuals can use it for good or evil.
> It takes hard work - ... - to move it into the "good UX" zone.
This isn't true. If you're at the point of having noticeable performance problems using the DOM/Chromium to render the kind of desktop application UI you might build with Qt, you have seriously fucked up (imo). Out of the box this should be blazingly fast; there is nothing extra you need to do to make it fast. Just don't run expensive computations on the render thread and don't be sloppy lol.
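To make that concrete, here's a minimal sketch of what "keep it off the render thread" can look like in Electron. It assumes the usual setup where the renderer has access to ipcRenderer (via preload/nodeIntegration); parseLargeCatalog and renderCatalogList are hypothetical stand-ins, not code from any real app:

  // renderer.ts -- the UI thread only awaits; it never runs the heavy work itself.
  import { ipcRenderer } from 'electron';

  declare function renderCatalogList(catalog: unknown): void; // hypothetical DOM helper

  async function loadCatalog(path: string): Promise<void> {
    // Resolves when the main process finishes; clicks and scrolling stay responsive.
    const catalog = await ipcRenderer.invoke('parse-catalog', path);
    renderCatalogList(catalog);
  }

  // main.ts -- the expensive computation lives outside the render process.
  import { ipcMain } from 'electron';

  declare function parseLargeCatalog(path: string): Promise<unknown>; // hypothetical CPU-heavy work

  ipcMain.handle('parse-catalog', (_event, path: string) => parseLargeCatalog(path));

If the work is heavy enough to bother the main process too, a worker thread (or Electron's utilityProcess) is the next step, but the principle is the same: the thread that paints the UI shouldn't be the one doing the math.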
Where the situation complicates is once 'web development culture' is brought into the picture—and I think that's an interesting topic in itself—but it's separate from attributes inherent in Electron.
> Where the situation complicates is once 'web development culture' is brought into the picture—and I think that's an interesting topic in itself—but it's separate from attributes inherent in Electron.
Fair enough. Myself, I mentally conflate the two - and I suppose so do most of the people criticizing Electron. This position is not without its merit, though: one of the main selling points of Electron is that you can leverage all the libraries developed for the Web.
It's the same thing as with regular web apps - modern browsers are amazingly fast, and you can create snappy experiences with vanilla JS and enough elbow grease. But people naturally reach for modern frameworks - and that very decision is what usually kills snappiness on the spot.
Great thread and discussion. IMO, there's some overhead to using a framework but it's marginal compared to the data model/flow/architecture you've chosen to implement. I don't think reaching for a framework kills snappiness nearly as quickly as reaching for a convenient but inefficient data model.
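To illustrate that point with a toy example (all names made up): the framework isn't what hurts when every message render does a linear scan over the user list; the data model is.

  // Resolving message authors by id in a hot render path.
  interface User { id: string; name: string }
  interface Message { authorId: string; text: string }

  declare const users: User[];        // e.g. every member of a large workspace
  declare const messages: Message[];  // whatever is currently on screen

  // Convenient but O(users) per message: fine at 50 users, painful at 50,000.
  const slow = messages.map(m => users.find(u => u.id === m.authorId)?.name);

  // Same framework, same components; the list is just indexed once up front.
  const usersById = new Map(users.map(u => [u.id, u] as const));
  const fast = messages.map(m => usersById.get(m.authorId)?.name);

Either version renders the same UI; only one of them stays snappy as the workspace grows.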
> you can create snappy experiences with vanilla JS and enough elbow grease ... But people naturally reach for modern frameworks - and that very decision is what usually kills snappiness at the spot
I think there is still a misconception here though: A) no elbow grease is required, and B) just using a popular framework (React, Vue) is not going to slow things down.
RE A, I would recommend just giving this a shot if you haven't: try building a desktop application-like UI with html/css/js executed by Chromium. My expectation of what you would find: it is snappy by default, and it's not even clear how you would write UI code bad enough to create a sluggish experience like e.g. Teams. IMO the explanation probably lies in organizational factors at Microsoft, not the tech they're building on. (One exception to the rule of things being fast by default: with animations it's fairly easy to shoot yourself in the foot, but performant animations don't typically take more effort, just basic knowledge of what is quick and what is resource intensive.)
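To make that animation caveat concrete: the "basic knowledge" mostly amounts to animating compositor-friendly properties. A small sketch, not tied to any particular app:

  const panel = document.querySelector<HTMLElement>('.panel')!; // any element

  // Cheap: transform/opacity can be composited, so Chromium keeps the animation
  // smooth even if the main thread is momentarily busy.
  panel.animate(
    [{ transform: 'translateX(0)' }, { transform: 'translateX(240px)' }],
    { duration: 200, easing: 'ease-out' }
  );

  // Expensive: 'left' is a layout property, so every frame re-runs layout (and
  // paint) on the main thread: an easy way to get jank.
  // panel.animate([{ left: '0px' }, { left: '240px' }], { duration: 200 });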
RE B, similar situation here: I think if you try building something with React/Vue you will see that they are also fast by default, and that no extra work is required to make them fast. That said, they (have the potential to) do a lot under the hood, so the potential for triggering something that would cause slowness is higher than without a framework.
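A concrete example of the kind of "triggering something under the hood" I mean, using React (component names are made up; the pattern is the generic footgun, not anything from a specific app):

  import { memo, useMemo } from 'react';

  // Memoized child: should only re-render when its props actually change.
  const ChannelList = memo(function ChannelList({ channels }: { channels: string[] }) {
    return <ul>{channels.map(c => <li key={c}>{c}</li>)}</ul>;
  });

  function Sidebar({ allChannels }: { allChannels: string[] }) {
    // Footgun: a fresh array identity on every render would defeat the memo above.
    // const visible = allChannels.filter(c => !c.startsWith('archived-'));

    // Same work, but the identity stays stable until allChannels changes.
    const visible = useMemo(
      () => allChannels.filter(c => !c.startsWith('archived-')),
      [allChannels]
    );
    return <ChannelList channels={visible} />;
  }

Neither version is hard to write; the point is just that the framework gives you more of these levers to pull the wrong way.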
After writing this it seems like our disagreement may come down to this: my point is that the tech isn't inherently slow and that a disciplined/experienced programmer can use it to rapidly build high quality software; but maybe from your perspective the more significant thing is that in practice many developers using the technology do end up making sluggish/bloated software. So: that the tech allows people to make slow things fairly easily matters more than its equal potential to make fast things. I.e. the problem is that there are no guardrails + it is the tech of choice of a huge community of developers, many of whom would benefit from the guardrails? (I don't mean to blame devs here so much; it's probably more the fault of businesses' priorities than anything.)