In that scenario I'd never have just a single server, be it Linux, Windows, BSD, or anything else.
If you're completely unable to restart a machine without losing money, you're living on a knife edge regardless. A single hardware failure and you're in deep trouble.
I've been in exactly that situation (the entire company rested on a single Ubuntu server), and it was really unpleasant. Management didn't do a thing about it until we started getting sector errors that the RAID controller's error handling was struggling to cope with. After that incident and the downtime, management shelled out for four physical boxes, a SAN, VMware licenses, and a setup with 15-minute rollbacks and dynamic migration if a physical server died.
Agreed. Up to a certain point, a single server is going to be considerably more cost effective than multiple machines, as well as requiring less DevOps time. (Especially nowadays, when you can easily lease a server with, say, 16 cores and 48GB of RAM.) I can recall having to restart our production server exactly once in the past 12 months, for a kernel update. Having to do that every month would be an irritation for sure.
You can update the software without pulling it down. To actually pick up the update you need to restart the service, but there's a large difference between restarting a service and restarting an OS. It isn't really analogous.
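On a typical Linux box the package manager can install new binaries while the old process keeps serving; only a brief service restart is needed afterwards. A minimal sketch, assuming a Debian/Ubuntu system with systemd and nginx standing in as the example service:

```shell
# Install the updated package; the already-running process keeps
# serving from the old binary until it is restarted.
sudo apt-get update && sudo apt-get install --only-upgrade nginx

# Restart just the service -- typically sub-second downtime...
sudo systemctl restart nginx

# ...as opposed to restarting the whole OS:
# sudo reboot
```

For many daemons (nginx included) `systemctl reload` avoids even that brief gap by re-reading config without killing worker processes mid-request.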
Neither of those supports your original claim. The author is focused on one specific feature, which is nice but excludes the entire shutdown process and the boot sequence up until the Windows logo appears. Anyone who's run a server knows that BIOS initialization alone is often much longer than 10 seconds. And, of course, many services take a noticeable amount of time to start up, so time-to-available is longer than time-to-login-screen.
Worse, the conversation was actually about the time needed to install updates. Now that Windows logs out before installing updates, you have a fairly sizable delay – tens of minutes after a large update, even on an SSD – during which the service is unavailable but the system hasn't yet started rebooting.
How does Fast Boot work? Well, it's actually not booting at all – it's logging out all active users and hibernating.
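You can see this on Windows itself: Fast Startup is governed by the HiberbootEnabled registry value and rides on the hibernation machinery, so disabling hibernation disables it too. A sketch from an elevated Command Prompt (the registry path is the standard one, but worth verifying on your build):

```shell
:: Check whether Fast Startup (hiberboot) is enabled (1 = on, 0 = off)
reg query "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Power" /v HiberbootEnabled

:: Disabling hibernation entirely also disables Fast Startup,
:: so every shutdown/boot becomes a true cold boot
powercfg /h off
```

This is also why the feature only applies to shutdown-then-boot: a Restart bypasses the hibernation path and does a full cold boot.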
That's a really nice optimization in normal usage, but it means you still go through the standard cold boot process whenever a system update is installed. A sysadmin would therefore need to review each update to know whether it would trigger a fast or a slow boot – and that's assuming a patch for, say, SQL Server wouldn't conservatively force a cold boot anyway.
The alternative is what everyone's been saying, namely that you have to assume that a reboot is slow and have n > 1 servers if uptime matters.