If you want to be super minimal, I prefer acme.sh[1] instead. It comes preconfigured for various DNS providers[2], and you can even create your own hook if there isn't already one[3].
If you have over a thousand lines of bash (or any other kind of shell code really), that is a pretty big red flag that you probably shouldn't be using bash IMO.
Well, if you have to use shell that's one thing, but I would be hesitant to use such a large shell script for something as important as certificate issuance in an environment where I didn't have to.
Just as a counter point, I've been using acme.sh for ~3 years now and it's been rock solid.
I get your point, and was pretty shocked to find acme.sh, but after the certbot PPA made a giant mess of my system I gave it a try. On the other hand, why should I balk at running thousands of lines of bash, but be fine with thousands (or many more) lines of C, Python, Perl...? You can write crappy or beautiful code in any language...
> If you only have shell on your servers then it is time to start looking for a new job.
Perhaps you wish to have "real" certs on appliances like F5s and Isilons (FreeBSD-based) where you can't install extra stuff, but where curl and openssl (and bash/zsh) are present.
Or perhaps you want to run simple software that you can actually audit. While "over a thousand lines of bash" may take a little while to examine, good luck auditing Zope, which is what certbot pulls in as a dependency:
Here we are talking about a lightweight C executable, though, that doesn't have those dependencies. You are also not limited to provisioning certificates on appliances, and in those cases I don't think having a thousand-line bash script offers any more security (probably less) than a full-featured C program.
At my last job I ran an Isilon: I could upload a cert for the HTTP server via the web UI, but there was no ACME client. I could SSH in, drop dehydrated and have it work because all I needed was a shell, curl, and openssl.
Similarly with F5: there is (was?) no native ACME client (at least a few years ago when I first looked at it). So I downloaded dehydrated and used various CLI interfaces to schedule automated runs and importation of the certificates.
There was no pre-compiled binary, and no compilers, on either system, and so talking about a "lightweight C executable" is nonsensical. Further, even if we managed to compile things off-host, when we did an OS upgrade on either system, a whole bunch of libraries would change and we'd have to (remember to) re-compile. There is no such worry with a shell script.
If you want to have ACME-fetched certs on a general computer system, then compiling a C program (large or small) is an option. But there are scenarios where compiled/compiling C programs is not an option, and you telling me otherwise when I have personal experience of these situations takes some chutzpah.
I wouldn't be offended. Many people, including me, have personal and work experience in this area as well. No one is saying you're wrong, but even you acknowledge there are other ways to upload certificates, usually involving an API as well. If you want to run unchecked third-party 1000+ line bash scripts on production appliances, by all means go right ahead.
> If you want to run unchecked third-party 1000+ line bash scripts on production appliances, by all means go right ahead.
Again, I have a better chance at reading all the code of dehydrated (which I have, in fact, done), than reading all of the Python code that certbot pulls in via dependencies on Ubuntu/Debian.
If you’re provisioning an immutable VM or a container you don’t want to add unnecessary cruft. The official LetsEncrypt client and its > 100 dependencies is a non-starter.
It has limitations, and quoting takes a hot minute to grok, but those don't come into play for a surprising number of medium-to-large projects when used properly.
I use POSIX sh only and get by with maintainability and handling failure modes just fine.
Bash is great for cases where you are gluing together a bunch of other commands. But it also has a lot of pitfalls that something as large and complex as this will absolutely run into (to be fair, so does C).
As a specific example, the ACME protocol requires working with json. Doing this in bash is very difficult and error prone, especially if you want to avoid a dependency on something like jq, as this does.
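To illustrate the fragility, here is a minimal sketch (my own illustration, not dehydrated's actual implementation) of pulling one field out of an ACME-style JSON response with nothing but POSIX shell and sed:

```shell
# Hypothetical example: extract the "status" field from a JSON response
# using only shell and sed, no jq. This works for this exact flat,
# unescaped shape, and silently breaks on escaped quotes, nested
# objects, extra whitespace, or reordered keys.
json='{"status":"valid","expires":"2025-01-01T00:00:00Z"}'
status=$(printf '%s' "$json" | sed -n 's/.*"status":"\([^"]*\)".*/\1/p')
echo "$status"
```

A real JSON parser handles all of those edge cases for you; doing it with regular expressions means every new response shape is a potential silent failure.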
On the other hand, if you're a user and not a developer, you know something written in bash will run on any machine out there. Doesn't matter if it's old or new. Whereas if it's written in C++xx or Rust or the like it'll only compile/run on rolling release distros (or for the 3 months after a normal distro is released that it's up to date).
It might only compile on a machine with a recent compiler if it’s aggressive with using new language features, but why would you need to compile a Rust/C++ version from source? A compiled binary will run just fine on old distro versions.
Typically projects will compile the binaries on systems with a much older version of libc (but the latest compiler) specifically to avoid this problem.
If they’re not doing this then I believe they won’t work on older systems even if compiled with old compiler versions.
Also it can be highly locked down - run as it's own unprivileged user, with access only to directories served by another webserver for the ACME handshake, storing certs, and a tightly restricted sudoer to restart the webserver on cert cycle.
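As a sketch of that last piece (names and paths hypothetical, not taken from any particular client's docs), the sudoers restriction can be as narrow as a single command:

```
# Hypothetical /etc/sudoers.d/acme entry: the unprivileged "acme" user
# may run exactly one command as root, with no password, and nothing else.
acme ALL=(root) NOPASSWD: /usr/sbin/service nginx reload
```

Combined with the dedicated user owning only the ACME webroot and the cert directory, a compromise of the client gains an attacker very little.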
>Sadly, the one non-standard thing that they continue insisting on is Snap.
Snap was and largely still is mainly for the server crowd, which they've shoved into the desktop. So in that regard I suppose it's at least consistent for them.
That said, snap is still terrible, but Canonical always just does their own thing.
>The system runs a downsized version of Doom that requires less RAM.
You could even say it's a...light-weight DOOM.
I'll see myself out.
(In all seriousness though, super impressive work! It always amazes me what computing power actually resides in what we deem to be "simple" devices these days - in some ways it feels like computing power gone to waste.)
I actually felt a little bad when I electrified my son's play kitchen to add lights and went for an Arduino Nano. That's a bunch of lights along with some programming (microwave timer and state machine, dimming when idle, undimming upon any input, PWM brightness control for the burners) running on a 16 MHz µC, which is most likely overkill for that. On the other hand it was cheap and programming it was accessible.
The interesting thing is that the microcontroller in the Nano - Atmega328P - is really a massive overkill but for a wholly different reason: price.
Even back in 2015 the AVR-based Atmega328P was more expensive than quite a few Cortex-M0 based STM32 offerings (comparing prices at the same quantities, of course). And those MCUs had literally an order of magnitude more SRAM, and the ARM core and STM32 peripherals were obscenely more capable, and I cannot stress this enough.
Now the difference is even bigger, and AVR-based MCUs are expensive, almost-unobtainium parts kept around for legacy designs.
Calling AVR MCUs cheap has been my trigger since 2015, when my colleagues and I were browsing the Farnell e-shop looking for cheap MCUs for various gizmos we were working on.
That's easy, one. But you also have to pay for the scrum master, product owner, bti engineer, test engineer, architect, and 7 other roles that keep the software engineer from rolling out the update ;).
>What is not good, though, is if you, the maintainer, are burned out, or overworked, but stubbornly insist on being a bottleneck in the project under your inept stewardship nevertheless, and just whine and moan about your burnout to get internet points and worldwide pity.
The problem is when there are no other maintainers to the project, and you quite firmly believe the project itself has a future, just that nobody else is able to do so. Yes, you can quit, but if people truly do rely on your project (but nobody is actually willing to take the reins) then it can become an issue of pride for some. To be clear, I'm not suggesting that people burn themselves out (I myself have gone down that road before), just that "Walk away" isn't always such an easy option for some people.
> you quite firmly believe the project itself has a future, just that nobody else is able to do so.
Looks like some unwarranted feeling of self-importance combined with the sunk cost fallacy.
You still have options: make it paid only, stop taking contributions at all, hire someone to sort it out or find a volunteer to do so. Or walk away and see how quickly Amazon will copycat you.
Or accept that that’s the cost of having your pride.
Otherwise it’s almost a textbook example of emotional blackmailing some parents are known for.
If nothing else, it's impressive that they've actually managed to migrate those Google Talk users over the years (myself included in that, albeit less than I used to). They probably did so extremely begrudgingly, but the path went something like:
Google Talk -> Hangouts -> Google Chat
People I added years ago are still reachable in Chat, which I guess is nice (in spite of everything else).
And Google Chat is pretty decent IMO if you're committed to Google Workspace. How you use it is probably a bit cultural. Where I work--or at least on the teams I interact with--it seems to have developed into: use Chat if you want a somewhat near-real-time response rather than email. At least the people I know rarely text these days and essentially never call on voice. Group chats vary by context.
The UX is absolutely horrendous. Don’t even get me started about the 4096-character limit or the fact that URLs in code blocks break the code block.
How about not being able to sort the user list by whatever order you want or folding groups? What about it creating “Spaces” for chats with more than one contact and forcing you to “leave” if you want to hide it to keep your list clean?
Maybe we should talk about how they recently started capturing formatted text (poorly) when you paste, forcing a paste-with-match-style to strip it, with no way to change that default.
Maybe we should talk about the horrendous fact that it’s a web app.
Maybe we could add in there’s no user configuration ability at all.
Fuck Google Chat. It’s a horrendous piece of shit and it needs to die, not third party support.
>Maybe we should talk about the horrendous fact that it’s a web app.
I don't need more apps. Unless there's a compelling reason to have one, I'd much rather be able to sit down at any PC and have the same experience. (I do have the app installed on my phone for notifications but I have those turned off on my PCs anyway.)
I don't really need chat apps in general. You can send me an email but Gchat is fine because it's integrated with Gmail.
Indeed, while not ideal, generally they've had a decent migration path. At least for the "message someone inside Gmail" use case: as far as most people are concerned, nothing has changed other than some slight UI differences. Outside of Gmail (especially on Android) it is not as clean though.
Hangouts worked excellently for so many years, even after most transitioned out of Talk (aka 3rd party clients) -- available in Gmail, available on web, working app etc. Actually, with Chat appearing on the scene the transition has been pretty seamless also, with Hangouts still working on web if you wanted it to, and interchangeable with the Chat versions.
Who says that they managed it?
I went from Google Talk to Hangouts to no Google Services aside from Play store, kept only because of Android devices...
Ideally I would not rely on any Google services, but it is not feasible for me...
It depends on what you're using it for. Maybe it's the best for finding things like restaurants, looking up business hours, reviews, things like that. Basically a spatial view of points of interest. But as a map I find it very lacking, mainly because of the color scheme, but they're also missing data like small roads, paths, small lakes. White roads on light gray are difficult to see, and overall it's very low contrast. Which is strange, because they have bragged multiple times about using ML to automatically classify areas and color them correctly, but it's only used when zoomed out.
Here Maps is best out of those screenshots, but none of them are great.
What I can't stand about Google Maps is how it hides so much stuff if you're not zoomed in 1000x. The worst time was when I was trying to trace a 100km long logging road in northern British Columbia to get to a provincial camp site on a lake, but as soon as I zoomed out to get a broader picture it would disappear. Even though that was the only road within a 15 km radius. Nope, you can only see a 1km section at a time, and it's so thin and low-contrast that it's barely visible.
Next time you need to map out roads like that, try Caltopo. For any _map_ use cases, things like Caltopo and Gaia are miles better. Google Maps is great for following directions they give you, or finding businesses and well known places, but it's a horrible map in any other context.
Maps on Android for me became painfully slow. Every time I went to use it, I'd stare at a half-drawn screen for ages. I had a few year old flagship Android phone and that sort of experience was just silly. Google has heaped more and more into Maps, trying to make it do everything, and it has become a resource hogging monstrosity. When I open Maps, I want to quickly search for something and then likely navigate to it. Not wait for even the screen to get drawn, then wait for autocomplete, then wait for the map to draw, etc.
There are also some really bizarre choices in how the UI works. I don't remember exactly, but I believe when you search, you get a wildly zoomed out regional view, and then when you click on a specific result...you're still stuck at the very zoomed out view. So in order to see if you clicked on the right result, you have to pull the map waaaaaay in.
I just found myself baffled at how bad the usability was. Did the people running Maps actually watch people using their product?
Yes, Maps is the best I've used. I had Apple Maps on the other day. I knew the area and knew I was close to the destination: a left turn and about 100 yards down the road. Apple wanted to take me nearly a mile north and then loop back. It does that kind of stuff all the time.
I haven't compared Google Maps to a dedicated device like a Garmin. I wonder how that would do.
My biggest fear, that’s already half realized, is that enough mapping companies go out of business that a ton of map building knowledge just vanishes and it takes decades to have decent solution rise again when Maps finally goes under.
I fear that in particular for local mapping, as right now outside of OSM the only companies putting serious effort in maps are US and China based (to my knowledge at least)
I would switch to Apple Maps if there was an easy way to use it on Windows. Apple has no supported Maps website, though you can view Apple Maps through DuckDuckGo. But that's clunky and doesn't solve my use case of mapping routes on my desktop computer and then pulling them up in Apple Maps on my iPhone.
I own Google stock, I have Google Nest security, Chromecast, Chromebook, Pixel phone, Google Voice, YouTube Music, YouTube TV,... Google Domains... Google Router, ... I had Google Waves, G+, Google Video (before they acquired YouTube), Orkut, ...
I just try to be pragmatic. Google didn't have an easy option to just video chat with my wife with one click so I just bought an iPad to FaceTime.
I forgot about Google Inbox, I loved that.
Google Reader RIP.
Google Desktop Search was great, especially how you could extend it.
Google Answers (that's how my Google Account was verified)
It's actually the WHATWG you need to watch[0], not the W3C. The latter has had basically no power for some time now and only endorses what the WHATWG propose[1].
The great thing about Firefox before the Photon addons apocalypse was that anyone could theme the browser in whatever way they wanted. Sure, it's still possible, but userChrome is a messy hack that Mozilla could get rid of at any moment.
Soon, we'll be stuck dealing with whatever UI Mozilla feels like implementing for the current week. Thanks for your efforts for now at least.
I'm fond of that capability as well. Still it was good to restrict things for security and support reasons. Preserving the integrity of the line of death is hard enough without arbitrary UI changes and interactions among X different add-ons.
[1] https://github.com/acmesh-official/acme.sh
[2] https://github.com/acmesh-official/acme.sh/wiki/dnsapi
[3] https://github.com/acmesh-official/acme.sh/wiki/DNS-API-Dev-...