It still increases the write rate a lot compared to more sequential values, and it reduces the cache hit ratio.
With sequential-ish values indexed by something like a b-tree, the same index pages get modified over and over again, because new table rows land in a narrow part of the index. Since databases / operating systems typically buffer writes for a while, this reduces the number of writes hitting storage substantially.
Conversely, with random values, every leaf page gets modified with roughly the same probability. That's not a problem with a small index, because you'll soon dirty every page anyway. But as soon as the index grows larger than what can stay buffered, you'll see many more writes hitting storage.
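To make this concrete, here's a small back-of-the-envelope simulation (my illustration, not from the thread; page capacity, index size, and flush interval are all made-up numbers). It counts how many distinct leaf pages get dirtied between two buffer flushes, once for sequential inserts and once for uniformly random ones:

```python
import random

PAGE_CAPACITY = 100          # keys per leaf page (assumption)
TOTAL_KEYS = 1_000_000       # keys already in the index
BURST = 10_000               # inserts between two buffer flushes

def dirty_pages(keys):
    # Each key lands on the leaf page covering its value range;
    # a page only costs one storage write per flush interval,
    # no matter how many times it's modified in memory.
    return len({k // PAGE_CAPACITY for k in keys})

# Sequential inserts: new keys are appended at the right edge of the index.
sequential = range(TOTAL_KEYS, TOTAL_KEYS + BURST)

# Random inserts: new keys scatter uniformly over the whole key space.
scattered = [random.randrange(TOTAL_KEYS) for _ in range(BURST)]

print("sequential:", dirty_pages(sequential), "pages written")  # 100
print("random:    ", dirty_pages(scattered), "pages written")   # ~6300
```

Same 10,000 inserts, roughly 60x as many pages pushed to storage in the random case, and the gap widens as the index outgrows the buffer.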
FYI, for anyone running across this thread later: this is a problem specific to (a kind of globally) consistent storage on small (single-node-equivalent) systems. If you have a large-scale distributed system, you WANT writes to be well distributed across all nodes, or you'll end up with problematic hot spots.
All new writes landing on the same node/page/index is a good way to crush your system in a cascading, never-coming-back-up-until-you-drain-traffic kind of way.
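A common mitigation in that setting is to "salt" a monotonically increasing key with a hash prefix so sequential writers fan out across shards instead of hammering one node. A minimal sketch (my illustration, not from the comment; the shard count and key format are assumptions):

```python
import hashlib

NUM_SHARDS = 16  # assumed shard count

def salted_key(seq_key: int) -> str:
    # A stable hash of the key picks the shard; keeping the original
    # key as a suffix preserves ordering within each shard, so
    # per-shard range scans still work.
    digest = hashlib.md5(str(seq_key).encode()).hexdigest()
    shard = int(digest, 16) % NUM_SHARDS
    return f"{shard:02d}:{seq_key:012d}"

print(salted_key(1_000_001))  # e.g. "07:000001000001"
```

The trade-off is exactly the one the thread describes: you give up cheap global ordering (a full range scan now has to touch every shard) in exchange for spreading write load.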