
We didn't have to set up servers all the time; they just ran, for years, without interruption. Some machines had uptimes in decades. We had worldwide software distribution before the internet: BBSs, shareware, and so on. UseNet had every support channel in the universe. Email involved routing through ihnp4.

Many things are clearly better, but IDEs really didn't keep up.



Today, you don't have to set up servers all the time either. But you can, and that is a huge advantage.

You don't need to hire a special person to constantly optimize your database server and indexes. You don't need sharding, except in the most exotic use cases. You don't need to manage table ranges. You no longer need to manually set up and manage an HA cluster.

> Some machines had uptimes in decades.

What machine had uptime in decades (i.e. >= 20 years) in 1990? Did you have access to such a machine?


I agree that it is nice to be able to fire up a machine from a script. Back in the days of MS-DOS, it was entirely possible that your work system consisted of a few disks, which contained the whole image of everything, and you didn't hit the hard drive. That's pretty close to a configuration-less system.
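To make the "fire up a machine from a script" point concrete, here is a minimal Python sketch. It assumes Docker is installed and running; the image name and port mapping are arbitrary choices for illustration, not anything from this thread.

    # Minimal sketch of firing up a throwaway server from a script.
    # Assumes Docker is installed and running; the image and port are
    # arbitrary illustrative choices.
    import subprocess

    def start_throwaway_server() -> str:
        """Start a disposable nginx container and return its ID."""
        result = subprocess.run(
            ["docker", "run", "-d", "--rm", "-p", "8080:80", "nginx:alpine"],
            capture_output=True, text=True, check=True,
        )
        return result.stdout.strip()

    if __name__ == "__main__":
        print("server up as container", start_throwaway_server())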

As for databases, they were small enough that they just worked. Database Administrators were a mainframe thing, not a PC thing.

I didn't have a huge network, only a handful of machines, but one of my Windows NT servers had a four-year uptime before Y2K testing messed things up.

A friend had a Netware machine with 15 years of uptime... started in the 1990s.

Moore's law and the push to follow it have given us amazing increases in performance. The software that runs on this hardware isn't fit for purpose, as far as I'm concerned.

None of the current crop of operating systems is actually secure enough to handle direct internet connectivity. This is a new kind of threat. Blaming the application and the programmer for vulnerabilities that should fall squarely on the operating system, for example, is a huge mistake.

It should be possible to have a machine connected to the internet, that does real work, with an uptime measured in the economically useful life of the machine. The default permissive model of computing inherited from Unix isn't up to that task.

Virtualization and containers are a poor man's ersatz Capability-Based Security. Such systems (also known as Multi-Level Security) are capable of keeping the kernel of the operating system from ever being compromised by applications or users. They have existed in niche applications since the 1970s.
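To make the distinction concrete, here is a toy Python sketch of ambient authority versus an explicit capability. It only illustrates the idea; real capability systems enforce this at the kernel or hardware level, not in application code.

    # Toy illustration of ambient authority vs. an explicit capability.
    # Real capability systems enforce this in the OS/hardware; this is only
    # a sketch of the idea in application code.

    def log_ambient(path: str, message: str) -> None:
        # Ambient authority: this code can open any path the process can
        # reach, so a bug here can touch files the caller never intended.
        with open(path, "a") as f:
            f.write(message + "\n")

    def log_with_capability(logfile, message: str) -> None:
        # Capability style: the caller hands over one already-open handle.
        # The callee can write to that object and nothing else; it has no
        # way to name or reach other files.
        logfile.write(message + "\n")

    if __name__ == "__main__":
        with open("app.log", "a") as handle:      # the capability is minted here
            log_with_capability(handle, "hello")  # only that authority is passed on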

For the end user lucky enough to avoid viruses, things have vastly improved since the 1980s. The need to even use removable media, let alone load stacks of it spanning hours, is gone. Being limited to text, without sound or an always-on internet connection, sucked.

But, in the days of floppy disks... you could buy shareware disks at your user group meetings, and take the stuff home and try it. You didn't have to worry about viruses, because you had write-protected copies of your OS, and you didn't experiment with your live copies of the data. Everything was transparent enough that a user could manage their risk, even though there was a non-zero chance of getting an infected floppy disk.

Gosh that's a lot of writing.




