
> I wish there was a way to fix this attitude. So that project maintainers didn’t need to write two-page apologies for finishing their thing.

One of the problems is that GitHub is full of OSS libraries where the last commit was three years or more ago and you have no idea (without forensically analysing commits, issues, etc.) whether it's because the project is done or because the maintainer(s) lost interest, had other priorities, etc.

And you have no idea because they haven't said. They haven't told anyone: it's just drifted into an unclear state of unmaintained-ness.

I would suggest rather that what the Moment.js team have done here should be the norm: i.e., clear communication of the situation. That situation, as here, might be "done", or it might be "the thing's only half finished but we don't care enough to carry on any more", or something completely different. These are all good reasons to stop working on something even if they're potentially frustrating for users. E.g., in the "don't care" scenario, people have the option to step in and pick up maintenance, fork the project, or simply not use it.

Which it is doesn't matter: understanding the state of a potential dependency, in terms of maintenance and development, is the key factor that will enable people to make an informed decision about whether to use it or not.



> One of the problems is that GitHub is full of OSS libraries where the last commit was three years or more ago and you have no idea (without forensically analysing commits, issues, etc.) whether it's because the project is done or because the maintainer(s) lost interest, had other priorities, etc.

I don't think there's any project of at least medium size that is considered done, but not abandoned/legacy, and receives no commits.

Any non-trivial project that has had no commits in the last three years is abandoned. Even if a project is considered done/finished, in order not to abandon it you have to maintain it: fix security issues, update outdated dependencies, update tooling so that you can still run/compile with current tools, and so on.


You might benefit from flipping this around and asking yourself why you're not able to ship software that remains shipped without intervention. And why you can't even imagine a world where that would be possible.

If you spend some time and effort removing whatever obstacles you have in place that are keeping you from being able to do that, you'll have a lot more free time to spend building new things.

For what it's worth, this world does in fact exist. And there are lots of us living in it. Here's hoping you find a way to join us!


> If you spend some time and effort removing whatever obstacles you have in place that are keeping you from being able to do that

This is literally impossible for many JS libraries. Chromium / NodeJS / other JS environments are themselves constantly changing. Irrespective of the evolving timezone info, the core MomentJS can only be "done" for a particular set of browser versions. Each bug pertaining to dates, like https://bugs.chromium.org/p/v8/issues/detail?id=7863 , is a potential browser/engine version for which Moment needs a fix.
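To make that concrete, here's the kind of pattern such a library ends up accumulating (a hypothetical TypeScript sketch of the general technique, not Moment's actual code): feature-detect the engine quirk once, then route around it.

    // Hypothetical sketch: detect an engine quirk once at load time,
    // then route around it. Not Moment's actual code.
    const engineParsesIsoWeekDates = !Number.isNaN(Date.parse("2018-W27-3"));

    function parseIsoWeekDate(input: string): Date {
      // "YYYY-Www-D" ISO 8601 week date; January 4th is always in week 1.
      const [, y, w, d] = /^(\d{4})-W(\d{2})-(\d)$/.exec(input)!;
      const jan4 = new Date(Date.UTC(+y, 0, 4));
      const week1Monday = 4 - ((jan4.getUTCDay() + 6) % 7);
      return new Date(Date.UTC(+y, 0, week1Monday + (+w - 1) * 7 + (+d - 1)));
    }

    function parseDate(input: string): Date {
      if (!engineParsesIsoWeekDates && /^\d{4}-W\d{2}-\d$/.test(input)) {
        return parseIsoWeekDate(input); // manual fallback for engines that reject the form
      }
      return new Date(input);
    }

Multiply that by every quirk and every supported engine version, and "done" stops being a stable state.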

> this world does in fact exist

It only exists for certain proprietary software and SaaS developers because of the hard work of open source developers to keep up with the changing landscape. If everyone adopted your attitude, you would be forced to contend with the true nature of the ecosystem directly.


This is true, but also it would be good if the JS environments stopped doing that.


If the Web, browsers, the JS lang, sandboxing, Internet protocols, TLS, etc. were all perfect and done, then JS envs would be too.


So your projects have no dependencies? You just write everything from scratch? Your code is perfect the first time you write it? I'm utterly confused by this comment.


There are plenty of projects which do not need to change once written, as long as you stay away from web-based things. Some examples:

I recently found my old GPS logger and wanted to pull data from it. I installed the "mtkbabel" package and it worked wonderfully. This code was last updated in 2013 [0].

I was recently copying some files from one of my embedded boxes. It had rsync 3.1.2, released in 2015. It still works and there is no need to upgrade (the page lists some security vulnerabilities for this version, but only for "untrusted server" scenarios, which I never have).

Embedded systems not connected to the internet will generally work forever without any updates. I have a 20-year-old MP3 player and it still works fine.

[0] https://metadata.ftp-master.debian.org/changelogs//main/m/mt...


This is basically security (and thus stability) by obscurity: being an obscure userbase, a niche.

Browsers, compilers, SSL/TLS libraries, operating systems and so on don't have this luxury, and thus this has a knock-on effect.


I would not say that non-web software is "obscure". You probably interact with it as much as you interact with desktop/phone software -- think vehicles and home appliances. And factories that produce all the stuff you use are famous for having very old software -- I would not be surprised if a factory which makes springs for your chair still has some MS-DOS machines around.

Remember, the post I am replying to says "no commits in the last three years" -> "abandoned". A software life of three years is really not that long in many contexts.

I agree that anything browser-related needs to be constantly updated, especially if you need fancy functionality. But if you do not need that functionality, then HTML 4 based stuff still works and does not need to be updated.

Compilers (and programming languages by extension) do not need to be updated very often. Five years ago we had gcc 5 and Python 3.5. There is no reason to upgrade them at all if your build system does not require it (for example, if you use buildroot, a customized emerge, or Docker). And if you do want to use the latest versions, there is a very high chance your software will work with them without any changes.

SSL/TLS libraries are important to keep patched. Luckily, critical faults do not happen that often. For example, the last critical vulnerability in OpenSSL was in 2016 -- so you really did not need to update your SSL libraries in the last three years.

Operating system upgrades are probably the biggest drivers of change. But again, Ubuntu LTS has full security support for 5 years -- so if you can require a specific OS (embedded device or container), then you can update the software only twice a decade.

The software world is very big. The web/GUI world is the most visible part of it, but that does not mean everything else is "obscure".


Hm, right, obscure is not the right term to use; a mix of "done" plus "very much not in active development" is closer.

I meant obscure as in: apart from a very specific, persistent, and advanced adversary (an APT), no one will even try to hack something like that directly. Sure, it's possible, so it's put behind a lot of firewalls, middlewares, wrappers, and message queues. At least that's how one insurance company I work with uses COBOL. And if they can avoid touching it, they will, because touching it is so hard/risky/expensive, and the risk of external security intrusion has been deemed low. It's abandoned as a product and as a goal; at this point it's basically an aging power tool that will eventually give up. (Like embedded stuff.)

I know three years is not that long. I'm just saying that the various forces at play (business and security) usually dictate fast turnaround, or at least a certain minimal level of upkeep.

Java, C++, Rust, PHP, JS, etc. all have quite a high velocity nowadays.

Python 3.5 just got EOLed.

Sure, you don't have to upgrade every last Python script. After all, RHEL and other distros still provide some py2 support too. And if your business is not growing fast and your build system is "done", then you don't have to touch it much. But eventually it'll need some maintenance, maybe just a few touches to keep it future-proof. That again implies it's a niche: custom software that does what you need it to do, and that's likely not a high-profile target directly.


If you want stable, long lifespan software, this is exactly how you do it. You are also careful to only build on layers of abstraction that are also designed with this mindset. Yes, it limits what you can do and which features you can rely on, but it is an achievable goal for many critical lower level libraries.


> it limits what you can do

Yep. For example, you basically have to air-gap your software: no input/output, especially nothing that touches crypto/TLS/networks/protocols, and usually no support for fancy file formats either.

Or you can just go ahead and implement all those by hand. Perfectly.

I mean it's possible, but bumping BoringSSL/Libre/OpenSSL version every few months seems easier.

¯\_(ツ)_/¯


A project can't be more stable than its dependencies and tooling, but that doesn't mean that the only way to be stable is to have no dependencies.

Some programming languages make guarantees that old code will build in new versions (e.g., ISO C even refuses to introduce new warnings for code that would previously build without warnings), while others will introduce backwards incompatible changes in minor releases. How much maintenance a project needs after it is "done" really depends on the tooling environment.


Library and framework maintainers will eventually stop updating previous major versions. So you manage to tread water just fine until your major version reaches EOL; then you find yourself in the position of having to change over to a completely new API on a schedule set by the maintainer of the dependency.


> Library and framework maintainers will eventually stop updating previous major versions.

And, if it's open source and it's easier than switching to a new dependency, you can just adopt maintenance of the dependency yourself: not necessarily in general, just enough to address any bugs in, or evolving needs of, the project(s) you have that depend on it.


It depends on the language. Fortran last broke backwards compatibility with F90. F77 code is still 100% supported. C is similar.


As far as I know Java too. That doesn't mean it's still best practice to write code that uses unsafe/deprecated constructs.


> A project can't be more stable than its dependencies

Sure it can; there's no reason a project has to take every upstream update, if, for instance, it vendors dependencies, or otherwise doesn't directly depend on the remotely maintained source.


Someone has to mull over whether to apply the update or not. For most projects the default answer is "not". (And that's not necessarily a bad stance.)

Stability is great as long as you have the luxury of no pressure for new features or better security. (Eg. operating systems, browsers, compilers, critical libraries don't.)


Imagine a CSV parser written in Java. I can imagine quite well that it would rarely need updates, as CSV does not evolve and I find it reasonable to write such code with no external dependencies except the Java standard library, which almost never breaks backward compatibility.


CSV is a very bad example. Yes, it is easy to throw together a simple regex to parse simple RFC4180 CSV strings, but Excel is its own black box with a huge number of hacks.

For example, en-US Excel will automagically parse TRUE and "TRUE" as the logical value TRUE. The way to get Excel to see a literal string TRUE is to write the formula ="TRUE". Many CSV writers implement this hack specifically assuming the files will be read back in Excel. So now your parser, if it's trying to process data the way Excel does, has to do the same.

So then you discover that this is actually localized! If you set your UI language to French (France), Excel will treat VRAI and FAUX as booleans while TRUE and FALSE are treated as literal strings.

What you thought was a simple CSV parser now has to handle localization as well. So that CSV parser library can roll its own dodgy localization support, use a tried-and-true solution, or just choose not to support the feature. Each choice has its own drawbacks.
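To make the trap concrete, here's a minimal sketch of just the boolean wrinkle (in TypeScript rather than Java, with a toy two-locale table; real Excel locale data is far larger):

    // Illustrative only: which bare tokens Excel coerces to booleans
    // depends on the UI language.
    const EXCEL_BOOLEANS: Record<string, Record<string, boolean>> = {
      "en-US": { TRUE: true, FALSE: false },
      "fr-FR": { VRAI: true, FAUX: false },
    };

    function decodeField(raw: string, locale: string): string | boolean {
      // ="TRUE" is the escape hatch for a literal string that would
      // otherwise be coerced to a boolean.
      const formula = /^="(.*)"$/.exec(raw);
      if (formula) return formula[1];
      const bool = (EXCEL_BOOLEANS[locale] ?? {})[raw];
      return bool !== undefined ? bool : raw;
    }

    decodeField("TRUE", "en-US");    // -> true (boolean)
    decodeField('="TRUE"', "en-US"); // -> "TRUE" (string)
    decodeField("TRUE", "fr-FR");    // -> "TRUE" (string; only VRAI coerces)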


> Imagine a CSV parser written in Java. I can imagine quite well that it would rarely need updates, as CSV does not evolve

CSV does, in fact, evolve. It isn't even a standard; the closest thing to one is a description of the breadth of different behaviors seen under the name at a particular point in time, with some notes about their relative frequencies and respective practical issues.


Depending on your needs RFC 4180 might be your CSV standard.


What if you're working in a domain which does evolve? For example, every domain.


Alright, fair, but then again, what if you're building something more complex, where CSV parsing is just 1% of what it does? Do you implement every single piece of non-standard-library functionality you need?


Don’t you have to update the project source code at least a few times a year to add new versions of Java to the CI?


Usually when bringing in a library dependency, you don't use all features from the library. If the updates to some library relate to things you don't use, and the rest remains compatible, then why update?


Because not every language / environment allows concurrent versions of a single library. PHP definitely does not, and for Java stuff IIRC you can only have one version of a library in one context of Tomcat/whatever application server you use.

Therefore you want to keep your code at least somewhat up to date so that people don't run into weird bugs.


Security teams generally have policies that require dependencies be up-to-date within a certain time frame. This is especially true if the dependency has a CVE somewhere within it - even if that CVE affects functionality that isn't utilised by the project.


As someone who used to work on fairly old embedded systems in a safety critical environment, I am extremely puzzled by your comment.

Why would you need to modify your code just because you have dependencies? Libraries' APIs are supposed to be stable. You can update them when they need a security patch without having to change anything in your own project.


Why do your dependencies break your project so often that it needs updates itself?


Changing the pinning of dependencies is still a change to a project, if only a minor change.


Fair. I guess that shows I've spent a lot of time recently in C and C++ land, where pinned dependencies are not yet as common to even have.


That's because the web as we know it has been evolving pretty fast, while the architectures that generally use C or C++ have remained stable for a long time.


Wow! What software is this that is complete and doesn’t need maintenance? Based on my experience, it must be either very simple or very uninteresting.


Embedded systems, particularly small ones, often don't even have a means of being updated.


You mean like TeX?


How do people use TeX nowadays? Is that part of the whole LaTeX ecosystem? How big is the userbase?


>I don't think there's any project of at least medium size that is considered done, but not abandoned/legacy, and receives no commits.

As another commenter mentioned, this can be pretty common in math libs.

Here's an example: https://github.com/fommil/netlib-java

This project provides BLAS/LAPACK/ARPACK bindings for JVM languages. The repository is marked as archived and the owner explicitly states that the project is done. The last commit was 21 Jun 2017.

This project is still heavily used, there are still multiple libraries that use those bindings.


The README.md contains install instructions for Debian and Ubuntu, and these tend to go stale (stop working) quite fast with newer versions of the distributions. Also, if any of the low-level libraries BLAS, LAPACK, or ARPACK updates, the wrapper might not continue to work with newer versions, or it is stuck on a legacy version, including its bugs and security issues, forever.

> This project is still heavily used, there are still multiple libraries that use those bindings.

Heavily used does not imply that a dependency is not abandoned.


That really depends on what the project is I think (and I guess what you consider non-trivial). I can easily see e.g. a numerical library not really needing maintenance of that sort.


>in order not to abandon it you have to maintain it: fix security issues, update outdated dependencies, update tooling so that you can still run/compile with current tools, and so on.

If you step back a little, you can see a "positive" feedback loop. The entire issue is that maintainers have to chase crazy rabbits that never rest, that keep changing things and abandoning "old" versions that worked but were not perfect. Perfect is the enemy of good.


I was hoping to use TeX as the counterexample here, because it's a large system that's remarkably stable. However, it's not actually a counterexample, as there are small commits periodically.

So I’ll post here admitting to be wrong in my instinct.

https://www.tug.org/svn/texlive/


That's TeX Live, a distribution of TeX.

TeX: The Program had its most recent release in January 2014.


Thanks. I was noodling around on my phone trying to find it and landed there incorrectly.


There are a couple of projects I track where more than half of the commits are just updating dependencies or compatibility information.

It's both a responsible thing to do and a form of virtue signalling. Yes, we are still here.


I ended up resurrecting someone's github project by introducing a PR to fix a configuration bug. It sat for a while before the maintainer saw it, but he merged it. Then half a dozen other PRs showed up in the next six months.

You can't tell if a project is dead by the commit dates, but you might be able to tell via open PRs that go unacknowledged.
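For what it's worth, that signal is easy to query programmatically (a sketch against the public GitHub REST API, unauthenticated and with error handling omitted):

    // Sketch: fetch the oldest still-open PR for a repo. A years-old date
    // here with no maintainer response is a strong "dead project" signal.
    async function oldestOpenPR(owner: string, repo: string): Promise<string | undefined> {
      const res = await fetch(
        `https://api.github.com/repos/${owner}/${repo}/pulls` +
          `?state=open&sort=created&direction=asc&per_page=1`
      );
      const [pr] = await res.json();
      return pr?.created_at; // undefined if there are no open PRs
    }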


They explicitly said it will be maintained, but there will not be any new features. That's not the same as a project with no commits for three years... I think this post is a bit over the top and shouldn't be required but, I agree, clear communication in OSS libraries is important.


GitHub having an objective "this isn't being worked on anymore" flag, with a few options as to why and pointers to where to reach a community for support/discussion, would be most useful, especially overlaid on the crazy huge dependency graphs frontend projects tend to acquire.


I think that is the point of the "archive" functionality: https://docs.github.com/en/github/creating-cloning-and-archi...
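The flag is also exposed through the public REST API, so you can at least audit a dependency list for it (a minimal sketch; error handling and rate limits ignored):

    // Sketch: check whether a repository is archived via the GitHub REST API.
    async function isArchived(owner: string, repo: string): Promise<boolean> {
      const res = await fetch(`https://api.github.com/repos/${owner}/${repo}`);
      const data = await res.json();
      return data.archived === true;
    }

    // e.g. isArchived("fommil", "netlib-java").then(console.log); // true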


That's part of it, but doesn't give any useful data beyond 'archived' to make use of when considering your dependencies.


What’s the value in that vs a small note in README.md or similar?


Well, it would be more obvious. Right now, one of the first things the eyes are drawn to on a repository is the date of the last commit. I know I have an internal bias against projects as soon as I see "Feb 23, 2016" as the last commit; it can make me stop scrolling without thinking about it. It would be nice to have a badge that is more obvious than the last commit date to supersede that.


There have been some attempts to improve communication on OSS projects, such as http://unmaintained.tech/


If an OSS library does what you need, is it not done in any meaningful sense anyway? It makes little difference if the author had hoped to add the kitchen sink, but gave up on it, if you don't need the kitchen sink.


If it does what I need but has a dozen security flaws because the maintainer just doesn't care anymore, it makes a difference. You have to dig around to figure this out. On GitHub it's relatively easy to check the issue tracker, but still.

One case in point is (was) atftp. Since it's packaged with most distros, you might be tempted to assume it's safe to use. But then I encountered a crash on Debian. Tracked down the official project page on SourceForge, found the bug had been reported years ago, including a fix; nothing happened. Found it had several other issues, like not checking return values of calls like setuid(). Debian at the time had their own patches for this in sid, since coincidentally someone must have hit the same issue around then. Checked SUSE out of curiosity and they also had their own patches, which had been around for quite some time. Same with Gentoo (I think). Obviously all three had different patches for different bugs, because of an unresponsive upstream. I wish there were a joint effort among distros for such cases instead of duplicated work. Or just dropping dead projects with known security issues instead of this half-arsed approach.

Sorry, the second part is only semi-related to the original issue, but it's just one more way in which picking the right open source solution for a problem can be difficult because of lacking communication.


> If it does what I need but has a dozen security flaws because the maintainer just doesn't care anymore it makes a difference

Does it? The alternative might be to write your own code, which will carry your own flavour of security issues. Any code you adopt, be it your own or someone else's, will require some level of commitment to maintaining it.


Isn't CI for this?


Does clearly-stated and overt semantic versioning not solve this?


How so? Semantic versioning shows no indication of project status. It's just a set of numbers that defines the impact of changes since the last set of numbers.


I'm pretty sure SemVer allows for alpha and beta releases. Why not extend that versioning convention to include an omega release as well? (Is "omega release" even a thing? I'm imagining it as the last stable release - it will be maintained but no new features will be added. Which I think I would consider different from EOL.)
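(One wrinkle, sketched below assuming the npm semver package: SemVer treats anything after a hyphen as a pre-release identifier, and pre-releases sort before the release they're attached to, so a literal "-omega" suffix would rank as earlier than the final version.)

    import semver from "semver"; // assumes the npm "semver" package

    // Pre-release identifiers sort *below* the associated release...
    console.log(semver.gt("1.2.8", "1.2.8-omega")); // true: the plain release wins
    console.log(semver.prerelease("1.2.8-omega")); // ["omega"]

    // ...while build metadata is ignored for precedence, so something like
    // "+final" could carry the signal without changing version ordering:
    console.log(semver.eq("1.2.8+final", "1.2.8")); // true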


But even a release titled "Final-1.2.8", released a year ago, doesn't tell you whether it was the last release because the project was abandoned and shouldn't be used anymore because it's out of date, or because it still works and just hasn't needed an update since there was nothing left to fix.



