Yeah, I agree. Your initial reply to me didn't mention legacy issues, just org size, and I was interpreting it more in the context of a large org building new systems the same unmaintainable way they always have because of inertia / politics / ossified sysadmins / etc.
The problems are almost always around process. But I’ve also occasionally seen sadly pragmatic reasons for very slow processes, such as critical legacy software for which no replacement can be found even when 8+ figure checks are waved at vendors.
One of the big cloud-software trends of the 2010s was cloudwashing old stacks that were really just validated not to crash and burn on an EC2 instance. That's why the entire cloud native movement exists: to differentiate greenfield cloud architecture services from cloudwashed ones.
Working with different vendors of varying applications and competencies can be pretty frustrating. People will be patching log4j issues for years to come, for example, and that’s probably easier to validate in aggregate than entire kernel upgrades for decrepit, unsupported distros like CentOS 5, which I still hear about being in use.
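To give a sense of why log4j cleanup drags on for years: even the first step, just finding copies of log4j-core scattered across vendor-delivered filesystems, is an audit in itself. Here's a minimal sketch in Python; the filename convention and the "fixed at 2.17.1" cutoff are my assumptions, and a real audit would also need to crack open fat/shaded jars where log4j is embedded without its own filename.

```python
import re
from pathlib import Path

# Assumption: fixed versions start at 2.17.1; anything below is flagged.
# (A real audit should follow the official Apache advisories instead.)
VULN_BELOW = (2, 17, 1)

# Assumption: jars follow the standard log4j-core-<major>.<minor>.<patch>.jar
# naming; shaded/fat jars that embed log4j won't match and need deeper scanning.
JAR_RE = re.compile(r"log4j-core-(\d+)\.(\d+)\.(\d+)\.jar$")

def find_vulnerable_log4j(root):
    """Walk a directory tree and return paths of log4j-core jars
    whose filename version is below VULN_BELOW."""
    hits = []
    for path in Path(root).rglob("log4j-core-*.jar"):
        m = JAR_RE.search(path.name)
        if m and tuple(int(g) for g in m.groups()) < VULN_BELOW:
            hits.append(str(path))
    return sorted(hits)
```

And this only covers one library on hosts you can actually scan; appliances and closed vendor images are exactly where these copies tend to hide.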