So your projects have no dependencies? You just write everything from scratch? Your code is perfect the first time you write it? I'm utterly confused by this comment.
There are plenty of projects which do not need to change once written, as long as you stay away from web-based things. Some examples:
I recently found my old GPS logger and wanted to pull data from it. I installed the "mtkbabel" package and it worked wonderfully. This code was last updated in 2013 [0].
I was recently copying some files from one of my embedded boxes. It had rsync 3.1.2, released in 2015. It still works and there is no need to upgrade (the page lists some security vulnerabilities for this version, but those only apply to "untrusted server" scenarios, which I never have).
Embedded systems not connected to the internet will generally work forever without any updates. I have a 20-year-old MP3 player and it still works fine.
I would not say that non-web software is "obscure". You probably interact with it as much as you interact with desktop/phone software -- think vehicles and home appliances. And factories that produce all the stuff you use are famous for having very old software -- I would not be surprised if a factory which makes springs for your chair still has some MS-DOS machines around.
Remember, the post I am replying to says "no commit since the last three years" -> "abandoned". A software lifetime of 3 years is really not that long in many contexts.
I agree that anything browser-related needs to be constantly updated, especially if you need fancy functionality. But if you do not need that functionality, then HTML 4-based stuff still works and does not need to be updated.
Compilers (and programming languages by extension) do not need to be updated very often. 5 years ago we had gcc 5 and python 3.5. There is no reason to upgrade them at all if your build system does not require it (for example, if you use buildroot, a customized emerge, or docker). And if you do want to use the latest versions, there is a very high chance your software will work with them without any changes.
SSL/TLS libraries are important to keep patched. Luckily, critical faults do not happen that often. For example, the last critical vulnerability in OpenSSL was in 2016 -- so you really did not need to update your SSL libraries in the last 3 years.
Operating system upgrades are probably the biggest drivers of change. But again, Ubuntu LTS releases have full security support for 5 years -- so if you can require a specific OS (embedded device or container), then you only need to update the software twice a decade.
The software world is very big. The web / GUI world is the most visible part of it, but that does not mean everything else is "obscure".
Hm, right, obscure is not the right term to use; a mix of "done" plus "very much not in active development" is closer.
I meant obscure as in: except for a very specific, persistent and advanced adversary (an APT), no one will even try to hack something like that directly. Sure, it's possible, so it's put behind a lot of firewalls, middlewares, wrappers and message queues. At least that's how one insurance company I work with uses COBOL. And if they can avoid touching it, they won't, because it's so hard/risky/expensive, and the risk of external security intrusion has been deemed low. It's abandoned as a product and as a goal; it's basically an aging power tool at this point, one that will eventually give up. (Like embedded stuff.)
I know 3 years is not that long. I'm just saying that the forces at play (business and security) usually dictate fast turnaround, or at least a certain minimal level of upkeep.
Java, C++, Rust, PHP, JS, etc. all have quite high velocity nowadays.
Python 3.5 just got EOLed.
Sure, you don't have to upgrade every last piece of Python script. After all, RHEL and other distros still provide some py2 support too. And if your business is not growing fast and your build system is "done", then you don't have to touch it much. But eventually it'll need some maintenance, maybe just a few touches to keep it future-proof; that again implies it's niche, custom software that does what you need it to do and is likely not a high-profile target directly.
If you want stable, long lifespan software, this is exactly how you do it. You are also careful to only build on layers of abstraction that are also designed with this mindset. Yes, it limits what you can do and which features you can rely on, but it is an achievable goal for many critical lower level libraries.
Yep. For example, you have to basically air-gap your software: no input/output, especially nothing that touches crypto/TLS/networks/protocols, and usually no support for fancy file formats either.
Or you can just go ahead and implement all those by hand. Perfectly.
I mean it's possible, but bumping BoringSSL/Libre/OpenSSL version every few months seems easier.
A project can't be more stable than its dependencies and tooling, but that doesn't mean that the only way to be stable is to have no dependencies.
Some programming languages make guarantees that old code will build in new versions (e.g., ISO C even refuses to introduce new warnings for code that would previously build without warnings), while others will introduce backwards-incompatible changes in minor releases. How much maintenance a project needs after it is "done" really depends on the tooling environment.
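To make that concrete, here is a toy Java illustration of my own (not anyone's real project): collections code written in the pre-generics style of the late 1990s still compiles and runs unchanged on a current JDK, because the platform adds warnings rather than breaking old source.

    import java.util.ArrayList;
    import java.util.Iterator;
    import java.util.List;

    // Pre-Java-5 style: raw types and explicit casts. A current javac
    // emits "unchecked" warnings for this, but it still compiles and
    // behaves exactly as it did two decades ago.
    public class OldStyleCollections {
        public static void main(String[] args) {
            List names = new ArrayList();   // raw type, no generics
            names.add("rsync");
            names.add("mtkbabel");
            for (Iterator it = names.iterator(); it.hasNext(); ) {
                System.out.println((String) it.next());
            }
        }
    }

A project written against a toolchain with that kind of guarantee needs essentially zero maintenance to keep building; the same cannot be said for ecosystems that break source compatibility in minor releases.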
Library and framework maintainers will eventually stop updating previous major versions. So you manage to tread water just fine until your major version reaches EOL; then you find yourself having to change over to a completely new API on a schedule set by the maintainer of the dependency.
> Library and framework maintainers will eventually stop updating previous major versions.
And, if it's open source and it's easier than switching to a new dependency, you can just adopt maintenance of the dependency yourself: not necessarily in general, just enough to address any bugs that surface in, or the evolving needs of, the project(s) you have that depend on it.
> A project can't be more stable than its dependencies
Sure it can; there's no reason a project has to take every upstream update if, for instance, it vendors its dependencies or otherwise doesn't directly depend on the remotely maintained source.
Someone has to mull over whether to apply the update or not. For most projects the default answer is "not". (And that's not necessarily a bad stance.)
Stability is great as long as you have the luxury of no pressure for new features or better security. (E.g. operating systems, browsers, compilers, and critical libraries don't.)
Imagine a CSV parser written in Java. I can imagine quite well that it would rarely need updates, as CSV does not evolve and I find it reasonable to write such code with no external dependencies except the Java standard library, which almost never breaks backward compatibility.
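To sketch what I have in mind (this is my own minimal example, handling one quoted/unquoted record roughly per RFC 4180, not a real library):

    import java.util.ArrayList;
    import java.util.List;

    // Minimal RFC 4180-style splitter for a single CSV record.
    // Only the standard library is used, so there is no third-party
    // release cycle to chase. Embedded newlines in quoted fields are
    // deliberately out of scope for this sketch.
    public final class SimpleCsv {
        public static List<String> parseRecord(String line) {
            List<String> fields = new ArrayList<>();
            StringBuilder field = new StringBuilder();
            boolean inQuotes = false;
            for (int i = 0; i < line.length(); i++) {
                char c = line.charAt(i);
                if (inQuotes) {
                    if (c == '"' && i + 1 < line.length() && line.charAt(i + 1) == '"') {
                        field.append('"');  // "" is an escaped quote
                        i++;
                    } else if (c == '"') {
                        inQuotes = false;   // closing quote
                    } else {
                        field.append(c);
                    }
                } else if (c == '"') {
                    inQuotes = true;        // opening quote
                } else if (c == ',') {
                    fields.add(field.toString());
                    field.setLength(0);
                } else {
                    field.append(c);
                }
            }
            fields.add(field.toString());
            return fields;
        }
    }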
CSV is a very bad example. Yes, it is easy to throw together a simple regex to parse simple RFC4180 CSV strings, but Excel is its own black box with a huge number of hacks.
For example, en-US Excel will automagically parse TRUE and "TRUE" as the logical value TRUE. The way to get Excel to see a literal string TRUE is to write the formula ="TRUE". Many CSV writers implement this hack specifically assuming files will be read back in Excel. So now your parser, if you're trying to process data like Excel does, has to do the same.
So then you discover that this is actually localized! If you set your UI language to French (France), Excel will treat VRAI and FAUX as booleans while TRUE and FALSE are treated as literal strings.
What you thought was a simple CSV parser now has to handle localization as well. So that CSV parser library can roll its own dodgy localization support, use a tried-and-true solution, or just choose not to support the feature. Each choice has its own drawbacks.
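To make the problem concrete, here is a sketch of the kind of special-casing an Excel-compatible reader ends up accumulating. The class, method names, and locale table are all hypothetical, just to show the shape of the problem:

    import java.util.Map;

    // Hypothetical Excel-compatible cell interpretation.
    final class ExcelishCell {
        // Which tokens Excel treats as booleans depends on the UI language.
        private static final Map<String, Map<String, Boolean>> BOOLEANS = Map.of(
            "en-US", Map.of("TRUE", true, "FALSE", false),
            "fr-FR", Map.of("VRAI", true, "FAUX", false));

        static Object interpret(String raw, String locale) {
            // ="TRUE" is the writer-side hack that forces a literal string.
            if (raw.length() > 3 && raw.startsWith("=\"") && raw.endsWith("\"")) {
                return raw.substring(2, raw.length() - 1);
            }
            Boolean b = BOOLEANS.getOrDefault(locale, Map.of()).get(raw);
            return b != null ? b : raw;
        }
    }

Every branch in there is a compatibility decision, which is exactly why "done" is hard to reach for a parser like this.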
> Imagine a CSV parser written in Java. I can imagine quite well that it would rarely need updates, as CSV does not evolve
CSV, which isn't even a standard (the closest thing to one is a description of the breadth of different behaviors seen under that name at a particular point in time, with some notes about their relative frequencies and respective practical issues), does, in fact, evolve.
Alright, fair, but then again, what if you're building something more complex, where CSV parsing is just 1% of what it does? Do you implement every single piece of non-standard-library functionality you need yourself?
Usually when bringing in a library dependency, you don't use all features from the library. If the updates to some library relate to things you don't use, and the rest remains compatible, then why update?
Because not every language/environment allows concurrent versions of a single library. PHP definitely does not, and for Java stuff, IIRC you can only have one version of a library in one context of Tomcat/whatever application server you use.
Therefore you want to keep your code at least somewhat up to date so that people don't run into weird bugs.
Security teams generally have policies that require dependencies to be up-to-date within a certain time frame. This is especially true if the dependency has a CVE somewhere within it -- even if that CVE affects functionality that isn't utilised by the project.
As someone who used to work on fairly old embedded systems in a safety critical environment, I am extremely puzzled by your comment.
Why would you need to modify your code just because you have dependencies? Libraries' APIs are supposed to be stable. You can update them when they need a security patch without having to change anything in your own project.
That's because the web as we know it has been evolving pretty fast, while the architectures that generally use C or C++ have remained stable for a long time.