Yes and "software written by some competent dev" is a thing that stops scaling after an org reaches 100s or 1000s of devs.
Management then moves to a model of minimizing outlier behavior to reduce the risk of any one dev doing stupid things. However, this process tends to squeeze out the "some competent dev" types, since they are outliers on the positive side of the scale.
True, but maybe we should utilise principles which don’t suck. Things like onion architecture, SOLID, DRY and similar don’t appear to scale well, considering software is still a mess: not only can’t your hardware find your functions and data, your developers can’t either.
It’s a balancing act of course, but I think a major part of the issue with “best practices” is that there are no best practices for everything. Clean Code will work well for some things. But if you’re iterating through a list of a thousand polymorphic objects, it’s around one and a half times slower than a flat structure, and if you’re changing 4 properties on every element it might be 20 times less performant. So obviously that wouldn’t be a good place to split your code out into four different files in 3 different projects. On the flip side, something like the single responsibility principle is completely solid for the most part.
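For what it’s worth, here is a minimal sketch of the kind of comparison being referenced: the same “sum of areas” loop written once against a Clean Code style class hierarchy (virtual dispatch over heap-allocated objects) and once against a flat array of plain structs. The shape example and all the names are illustrative, not from the comment above, and the 1.5x / 20x figures depend entirely on the benchmark; this only shows where the gap tends to come from (per-element indirection and virtual calls versus contiguous data).

```cpp
#include <cstdio>
#include <memory>
#include <vector>

// "Clean Code" style: one class per shape, behaviour hidden behind a virtual call.
struct Shape {
    virtual ~Shape() = default;
    virtual double Area() const = 0;
};
struct Square : Shape {
    double side;
    explicit Square(double s) : side(s) {}
    double Area() const override { return side * side; }
};
struct Rect : Shape {
    double w, h;
    Rect(double w, double h) : w(w), h(h) {}
    double Area() const override { return w * h; }
};

double TotalAreaVirtual(const std::vector<std::unique_ptr<Shape>>& shapes) {
    double total = 0.0;
    // Each element costs a pointer chase plus a vtable lookup.
    for (const auto& s : shapes) total += s->Area();
    return total;
}

// Flat style: one plain struct in a contiguous array, a switch instead of dispatch.
enum class Kind { Square, Rect };
struct FlatShape {
    Kind kind;
    double a, b;  // side for squares; width/height for rectangles
};

double TotalAreaFlat(const std::vector<FlatShape>& shapes) {
    double total = 0.0;
    for (const auto& s : shapes)
        total += (s.kind == Kind::Square) ? s.a * s.a : s.a * s.b;
    return total;
}

int main() {
    constexpr int kCount = 1000;  // "a list of a thousand objects"
    std::vector<std::unique_ptr<Shape>> polymorphic;
    std::vector<FlatShape> flat;
    for (int i = 0; i < kCount; ++i) {
        if (i % 2 == 0) {
            polymorphic.push_back(std::make_unique<Square>(2.0));
            flat.push_back({Kind::Square, 2.0, 0.0});
        } else {
            polymorphic.push_back(std::make_unique<Rect>(2.0, 3.0));
            flat.push_back({Kind::Rect, 2.0, 3.0});
        }
    }
    std::printf("virtual: %f\nflat:    %f\n",
                TotalAreaVirtual(polymorphic), TotalAreaFlat(flat));
    return 0;
}
```

Both loops compute the same result; the difference in a benchmark would come from data layout and dispatch, which is exactly why the answer is workload-dependent rather than a universal rule.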
Maybe if people like Uncle Bob didn’t respond with “they misunderstood the principle” when faced with criticism, we might have some useful ways to work with code in large teams. I’d like to see someone do research which actually proves that the “modern” ways work as intended. As far as I’m aware, nobody has been able to prove that something like Clean Code actually works. You can really say the same thing about by-the-book SCRUM or any other form of strategy. It’s all a load of pseudoscience until we have evidence that it actually makes the difference it claims to.
That being said, I don’t think it’s unreasonable to expect developers to know how a computer works.