As soon as you have any kind of highly distributed system, accurate clocks are usually very important. When events can start on one machine and end on another, you need some consistent notion of ordering (knowing that one event occurred before another). Probably the most famous paper on this is Lamport's "Time, clocks, and the ordering of events in a distributed system" - http://dl.acm.org/citation.cfm?id=359563.
But to give a practical example: on something like Facebook, if a post receives two comments within a millisecond of each other, how do you determine the order they appear in? Each request may hit a different datacentre in the world, yet somehow the machines agree on an ordering, and every request sees that ordering from then on. You can't just have a single server, because one server couldn't handle the load for the entire world. These kinds of things require very strict temporal ordering, which is why time is such an important thing. This is what leads to the CAP theorem, and to distributed systems theory in general.
So that's exactly why you don't depend on hardware clocks -- you use software techniques when your algorithm depends on some notion of time.
The point of Lamport's paper is that time in distributed systems is a partial ordering, not a total ordering. If you are taking values from hardware clocks, you are imposing a total ordering that may not reflect the order in which events actually influenced each other.
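The core mechanism of the paper is small enough to show directly. Here is a minimal sketch of a Lamport logical clock in Python (the two tick rules are from the paper; the message passing around them is omitted):

    class LamportClock:
        def __init__(self):
            self.time = 0

        def local_event(self):
            # Rule 1: increment the clock before every local event.
            self.time += 1
            return self.time

        def send(self):
            # Stamp outgoing messages with the incremented clock.
            self.time += 1
            return self.time

        def receive(self, msg_time):
            # Rule 2: on receipt, jump past the sender's timestamp.
            self.time = max(self.time, msg_time) + 1
            return self.time

The guarantee is one-directional: if event a happened before event b, then C(a) < C(b), but C(a) < C(b) alone doesn't mean a caused b. That gap is exactly the partial ordering.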
Google Spanner is a DB that uses atomic clocks distributed worldwide. They seem to think the pain involved in dealing with partial ordering is not worth the hassle, given that there genuinely is a total ordering on operations.
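For anyone curious how Spanner pulls that off: its TrueTime API reports clock uncertainty as an interval, and writers wait out the uncertainty before committing ("commit wait"). A rough sketch of the idea, where true_time() and apply_write() are made up for illustration (this shows the technique, not Spanner's actual code):

    import time

    def true_time():
        # Stand-in for TrueTime: the real moment lies somewhere in
        # this interval. Spanner's epsilon comes from GPS/atomic
        # clock hardware and is typically a few milliseconds.
        epsilon = 0.007
        now = time.time()
        return (now - epsilon, now + epsilon)

    def commit(write):
        earliest, latest = true_time()
        ts = latest  # pick a timestamp past the whole uncertainty window
        # Commit wait: block until ts is definitely in the past, so no
        # later transaction anywhere can pick a smaller timestamp.
        while true_time()[0] <= ts:
            time.sleep(0.001)
        apply_write(write, ts)  # hypothetical storage call

The cost is that every write stalls for roughly twice the clock uncertainty, which is why the atomic clocks matter: they keep that window small.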
Yeah, but you can't have your clocks drifting a lot. It's fine if they are consistently wrong, but when a second isn't a second, things can start to screw up.
"As soon as you have any kind of highly-distributed system, accurate clocks are usually..."
Let me stop you right there. If you have a highly distributed system, its design CANNOT rely on perfectly synchronized clocks among its components. That's actually the point of the Lamport paper you cited, as well as of the OP.
I actually started writing a detailed rehash of Lamport's paper, but I realized I'd do a poor job of it at best. Instead, I think it's best to just point to the excellent and highly readable paper itself:
http://research.microsoft.com/en-us/um/people/lamport/pubs/t...
Using a highly accurate clock to solve ordering like that is punting on the actual problem. It's possible for requests to different servers to happen at exactly the same time. You need a mechanism to handle that. And any mechanism that can handle that can also handle clocks being a couple hundred milliseconds out of sync.
At a certain level of precision you have to accept that data only travels at the speed of light, and there is no way to have a global ordering that shows data consistently and immediately. So you can use a simple non-immediate method like a vector clock.
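A vector clock is small enough to sketch here. A minimal Python version, assuming each process knows its own node id (vc_compare is the interesting part: "concurrent" is exactly the case a wall clock would silently mis-order):

    def vc_increment(vc, node):
        # Tick our own entry before a local event or send.
        vc = dict(vc)
        vc[node] = vc.get(node, 0) + 1
        return vc

    def vc_merge(local, received, node):
        # On receive: element-wise max, then tick our own entry.
        merged = {n: max(local.get(n, 0), received.get(n, 0))
                  for n in set(local) | set(received)}
        return vc_increment(merged, node)

    def vc_compare(a, b):
        nodes = set(a) | set(b)
        a_le_b = all(a.get(n, 0) <= b.get(n, 0) for n in nodes)
        b_le_a = all(b.get(n, 0) <= a.get(n, 0) for n in nodes)
        if a_le_b and not b_le_a:
            return "a happened before b"
        if b_le_a and not a_le_b:
            return "b happened before a"
        return "concurrent"

When two events come back "concurrent", you know neither caused the other, and you can break the tie any deterministic way you like (e.g. by node id) without lying about causality.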