Hacker News | null_content's comments

> Even back in the 90's Mysql was able to hide the CLI supplied password from the process list. It would be cool if oathtool was able to do the same.

This has always been a brittle and not easily portable approach.

And doesn't protect you from an attacker doing something much simpler: reading your .bash_history file.

Passing passwords as arguments has always been a bad idea.
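To make the point concrete, here is a minimal Python sketch (the helper name is mine) of the safer pattern: hand the secret to the child process on stdin rather than in argv, so it never shows up in the process list or in shell history.

```python
import subprocess
import sys

# Any user on the machine can list another process's argv (e.g. via
# `ps aux` or /proc/<pid>/cmdline), so a password passed as an argument
# is exposed for the lifetime of the process -- and often lands in
# .bash_history as well. Feeding it on stdin avoids both problems.

def run_with_secret_on_stdin(cmd, secret):
    """Hypothetical helper: pass a secret to a child on stdin, not argv."""
    result = subprocess.run(
        cmd,
        input=secret + "\n",
        capture_output=True,
        text=True,
        check=True,
    )
    return result.stdout

# Stand-in child process that simply echoes its stdin back, playing the
# role of a tool that reads its secret from stdin:
echo_child = [sys.executable, "-c",
              "import sys; sys.stdout.write(sys.stdin.read())"]
out = run_with_secret_on_stdin(echo_child, "hunter2")
```

The secret still lives briefly in the pipe buffer, but unlike argv it is not world-readable.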


Web development is such a pain because the standard is sub-par (decided by committee, so no surprises there) and the implementation is even worse (full of inter-vendor inconsistencies and missing features).

Frankly, last time I did web development, I felt tempted to just make the entire page one screen-sized canvas and implement my own UI toolkit on it. That's bound to be more reliable (and probably faster) than the current state of affairs.


I, too, have fond memories of ca 1998 web development.


Funny that you bring that up because in 1998, Tcl/Tk already had grid layouts (if I am not mistaken).

So yes, you are finally doing UI development like it is 1998.

Congratulations.

Sarcasm aside, want to know what a proper GUI toolkit looks like? Take some time to play around with Qt.


They are deep in Uranus.


As a general statement, that is impossible in both practice and theory due to the halting problem.

You CAN write provably bug-free code, but only if you are willing to accept a long series of limitations.


The halting problem only tells you that there can be no algorithm to prove this automatically for any given program.

It could still be the case that for all practically relevant programs there is a proof in each case that they are secure, even if there is no automatic way to find it.
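The distinction above is exactly what the classic diagonalization argument shows. A sketch in Python (the function names are illustrative; the point is the contradiction, not runnable logic):

```python
# Sketch of the halting-problem diagonalization. Assume, for
# contradiction, that `halts(func, arg)` were a total function
# returning True iff func(arg) eventually halts. No such function
# can exist, so the body below is only a stand-in.

def halts(func, arg):
    raise NotImplementedError("no total halting decider can exist")

def paradox(func):
    # If func(func) would halt, loop forever; otherwise halt at once.
    if halts(func, func):
        while True:
            pass
    return "halted"

# paradox(paradox) would halt if and only if it doesn't halt -- a
# contradiction. So `halts` cannot be implemented for arbitrary
# programs, even though any *particular* program may still admit a
# hand-crafted proof of its behaviour.
```

That last comment is the parent's point: undecidability rules out a universal automatic prover, not per-program proofs.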


I'm willing to accept "Might enter an infinite loop", but AFAIK the halting problem says nothing about security.


> the halting problem says nothing about security

It makes it impossible to say whether certain things can be proven to be secure.


That's only if an algorithm would halt if it has a condition to do so.

If you use a finite state machine, you can control the states and make mathematical assurances that your program terminates.

The solution to the halting problem is that you work backwards instead of forwards. You don't let a program run forever and hope for the answer. We also don't give programs infinite memory with which to eventually halt or not - my machines have 8 GB and 32 GB of RAM. And oomkiller does its job rather well.
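The termination guarantee for a finite state machine can be shown by construction. In the sketch below (a toy DFA accepting binary strings with an even number of 1s), the machine consumes exactly one input symbol per step, so it always halts after `len(input)` steps - no halting-problem reasoning required:

```python
# A deterministic finite automaton over a finite input: the loop bound
# is the input length, so termination is guaranteed by construction.
# Toy language: binary strings containing an even number of 1s.

TRANSITIONS = {
    ("even", "0"): "even",
    ("even", "1"): "odd",
    ("odd", "0"): "odd",
    ("odd", "1"): "even",
}

def accepts(bits):
    state = "even"                      # start state
    for symbol in bits:                 # exactly len(bits) steps: always halts
        state = TRANSITIONS[(state, symbol)]
    return state == "even"              # accepting state

assert accepts("1100")      # two 1s -> accepted
assert not accepts("111")   # three 1s -> rejected
```

The price, as the grandparent says, is expressiveness: the machine can only do what its finite transition table encodes.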


That does not solve the halting problem. The halting problem is entirely framed around making a program that can specifically compute whether another arbitrary program will return.

Tests work because they are arranged by humans to verify the system under test for validity, but only for one specific system at a time.

The halting problem is basically an admission that one is unable to write a test to verify arbitrary programs on arbitrary, Turing-complete hardware. In short, you must be 10% smarter than the piece of equipment.

Killing something for taking too long or eating too much memory makes no substantive judgement on whether the computation at hand is ready to return or not; you (or your OS) just unilaterally decided that the computation wasn't worth continuing.


So, solution to the halting problem: just kill anything that doesn't halt and return true?


No. AWS Lambda allows up to 10s of processing before killing and returning a (killed, exceeded time) result for the API call.

The halting problem depends on a Turing machine. As stated by Wikipedia, a Turing machine is:

"a mathematical model of computation that defines an abstract machine, which manipulates symbols on a strip of tape according to a table of rules. Despite the model's simplicity, given any computer algorithm, a Turing machine capable of simulating that algorithm's logic can be constructed."

The first description of how to build one is: "The machine operates on an infinite memory tape divided into discrete cells."

Infinite memory? Well, that's already trivial to dismiss then.

And I also said that things that need to be proved should be using finite state machines. Using an FSM means a dimensionality reduction to a model that doesn't suffer from the halting problem. One can make complicated graphs, yet still not introduce the same pitfalls as a Turing machine has.

And here's a paper arguing that FSMs don't suffer from the halting problem. https://pdfs.semanticscholar.org/025b/97d66265dfbcb02d9dd1a2...


This is why some scripting and configuration languages are now toying with the notion that "not Turing-complete" is a feature, not a limitation, of the system.

I'm kinda holding my breath that they turn out to be right.


    Programmers only care about three numbers:  zero, one, and "out of memory"


We don't tolerate houses collapsing out of nowhere, brakes failing over the course of normal usage and planes falling out of the sky during routine flights.

But for some reason, we HAVE TO tolerate software crapping itself once a year?

I don't accept this logic. This is just a sign of how sloppy the industry has become.

This is the reason your phone becomes obsolete after 2 years, whereas your car can continue to run after multiple decades of abuse.


First: We’re not talking about “out of nowhere” or during “routine” operation. Doing better than 99.99% uptime implies robustness to even extreme, unusual situations.

Second: Air travel could be much, much cheaper if it didn’t have to be nearly 100% reliable. This would be the right trade-off to make in almost any application that doesn’t almost guarantee deaths when it fails.


We actually do tolerate it. Plenty of critical parts in your car are designed to not be 100% available even in all expected cases.

For example, plenty of higher-end cars in California come with summer tires that can't be used in cold weather/ice.

Even the brakes you are talking about must be replaced every X miles (depending on how new the car is, this may be between 10k and 50k miles).

Houses are definitely not designed to be 100% available. This is in fact why they fail due to fire or earthquake or other events. The design point is not instant failure, but it's also not "100% available".

Like the SRE book says, they make a tradeoff.


I think this is a false equivalency. If we're talking about "service unavailability", planes break all the time. Houses have to be vacated because of flooding, fire, insect infestation. Brakes do fail. Just like with software, we accept a certain level of risk in exchange for cost/convenience efficiencies (e.g. we don't want our planes to fall out of the sky, but we're okay with getting stranded in Phoenix for 24 hours because of a busted landing gear).


Also, brakes contribute to service unavailability. Brake pads need to be replaced on average every 50k miles, which takes the average driver 4 years. And let's say the average length of time your car is at the mechanic's to fix brakes is 3 days. That's 3 days of unavailability every 4 years just for brake pad replacements, or 99.8% availability (two nines!), just because of brake pad repairs. Add in all the other required car maintenance, and depending on the reliability of the vehicle, and you might be down into one nine territory.
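The arithmetic above checks out, using the comment's assumed numbers (3 days in the shop per brake-pad replacement, one replacement every 4 years):

```python
# Back-of-the-envelope availability math from the comment above.
downtime_days = 3          # assumed time at the mechanic per replacement
period_days = 4 * 365      # assumed replacement interval: 4 years

availability = 1 - downtime_days / period_days
print(f"{availability:.3%}")   # -> 99.795%, i.e. roughly "two nines"
```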

Gmail going down is like your car being in the shop. It's not equivalent to a plane crashing; the equivalent there would be the entire contents and history of your Gmail account being unrecoverably deleted, and you yourself had no backups. Of course, I'd still much rather have that happen a hundred times than be in one fatal plane crash...


Gmail seems to have 3 nines, although I couldn't find a better reference than this [1], where other services are included:

> [Google's infrastructure] delivers Gmail and other services to hundreds of millions of users with 99.978% availability and no scheduled downtime.

[1] https://support.google.com/googlecloud/answer/6056635?hl=en

PS. 99.978% availability translates as a downtime of ~ 2 hours/year total. Not bad! But it's when things break that we realize how performant and reliable they actually are.

Edits: various typos.


I'm consistently amazed at how good Google and Facebook are at staying up. They're two services where I don't think I've ever really experienced a broad outage. Of course, with Facebook's data designs there's sometimes quirkiness as a result, but it's rarely completely off for me.

Google, I think I've only really noticed it offline once in the past 10 years or so. Not complaining at all.


Ok, but like... it takes <3 hours to change brake pads. So, point taken, but the numbers are more moderate than presented.


Consider that airplanes are relatively self-contained systems, whereas most of the systems we deal with in networking cross many different independent boundaries, each of which can independently fail for any number of reasons. There's more parties involved in regular operations of distributed software than in maintaining airplanes.


Which houses do you know of that are constantly being worked on, and grown in size in perpetuity?

It's not a good metaphor, even if it looks intriguing at first sight.


We wouldn't tolerate those things if we all used the one plane, car and house. That's where this comparison falls down.


The e-mail equivalent of your house falling down is data loss. This is unavailability, which is more analogous to losing your keys and not being able to get in for 30 minutes.


That, and Debian would fuck up the implementation.

Lest anyone forget:

https://www.schneier.com/blog/archives/2008/05/random_number...


Pretty much any device these days can run some form of standalone SSH software or, worst case scenario, a shell + openssh (heck, even Windows can do that these days).

Setting these up is fairly painless, and they are FAR more flexible than this solution, FAR more secure and the vast majority are open source - which is KINDA important when dealing with security.

Sorry for being blunt, but all I see here is a huge gaping security hole. The $5/month is just adding insult to injury.


> Sure, but e.g. on Debian /etc/systemd/system/sshd.service is 22 lines. The still-carried shellscript version is 162 lines, and that's before counting any of the per-distro shellscript libraries systemd removed the need for. That's a big reduction in complexity even if systemd didn't have any advantages over shellscript init, which it does.

And for that "simplicity" all you need is a daemon that depends on dbus, glibc and cgroups (IIRC), just to name a few - which makes it non-portable for anything that isn't Linux and non-usable for anything that doesn't want to depend on, say, glibc.

And if they got their way, dbus would have been shoved into the kernel (kdbus, bus1, or whatever they called it) - adding a bloated mess of an IPC mechanism into the kernel.

Simplicity. Right.


I think it's legitimate to complain about systemd not being portable, but it's odd to do so in terms of the non-portability not being "simple".

Imagine how much more complex even common things like "start this daemon and make sure neither it nor any of its sub-processes collectively use more than 1GB of memory" would be if systemd had to run on z/OS, AIX, Solaris, OpenBSD, HP/UX, Windows, Mac OS X, etc.
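For reference, that resource cap is only a couple of lines in a unit file, precisely because systemd can lean on Linux cgroups (hypothetical service name; `MemoryMax=` is the cgroup-v2 directive):

```ini
# /etc/systemd/system/exampled.service (hypothetical)
[Service]
ExecStart=/usr/local/bin/exampled
# Caps the service and all of its sub-processes at 1 GiB collectively.
# (On cgroup v1 the older directive is MemoryLimit=.)
MemoryMax=1G
```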

And let's be clear, systemd is perfectly usable for things that don't want to depend on glibc, for example it works just fine for starting Go programs, or your random C program you've linked to uClibc. What you can't do is not have a glibc on your system at all. Given how tiny glibc is in terms of modern hardware resources, if you can't have it on your system you probably weren't the target audience for systemd in the first place.


Worse still, some of the code in that daemon exists pretty much solely to replace functionality in that one shell script: the negated ConditionPathExists seems to be used only by sshd.service, with the non-negated version being used only to decide whether to launch getty. We've essentially replaced complexity in a shell script with complexity in an always-running process written in C.
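For context, the directive in question looks like this in Debian's unit file (quoted from memory; check the copy shipped on your system):

```ini
# Excerpt from Debian's ssh.service
[Unit]
# The leading "!" negates the condition: skip starting sshd entirely
# if the admin has created this flag file to disable it.
ConditionPathExists=!/etc/ssh/sshd_not_to_be_run
```

The shell script version expressed the same check as a plain `if [ -e ... ]` test.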


