Isn't the logical conclusion of this kind of reasoning that everyone should become a programmer and browsers should be supplied in source form only, so everyone can customise any tiny detail they want to? That's not exactly a realistic prospect for most non-geeks.
User interface development for mainstream software is invariably a balance between presenting something accessible to novice/occasional users and presenting something powerful to expert/frequent users. You can go a long way to helping both groups with a thoughtful design, but there's always going to be someone who wants something slightly (or completely) different.
For mainstream software, trying to please everyone all the time is a fool's game. That's what bespoke development is for.
But where do you draw the line? What constitutes an "advanced option" worthy of dedicated UI rather than a customised build? What proportion, or minimum number, of users has to find it valuable for the effort and lost simplicity to be justified? And when there are too many advanced options to manage sensibly, do we move to "advanced", "really advanced" and "actually quite scary you even thought of this" options?
I'd love to know how the number of users affected is measured. One of the arguments is that with a large enough user base (millions), an option can cause grief for a significant number of people if they misuse it.
The same argument can be made for removing a feature that 2% of the user base relies on. 2% of 450 million users is 9 million users. That's not what I'd call a tiny number of users.
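A quick back-of-envelope sketch of that trade-off, for what it's worth. The 450 million total and the 2% reliance share are the figures quoted above; the misuse rate is a purely hypothetical placeholder, not a real statistic:

```python
# Back-of-envelope comparison of the two arguments above.
# 450 million users and 2% reliance come from the comment;
# the 0.5% misuse rate is a made-up placeholder for illustration only.
total_users = 450_000_000

users_relying_on_feature = total_users * 0.02   # harmed if the option is removed
users_misusing_option = total_users * 0.005     # harmed if the option stays (assumed)

print(f"Removing the feature affects ~{users_relying_on_feature:,.0f} users")
print(f"Misuse of the option affects ~{users_misusing_option:,.0f} users")
```

Either way, a small percentage of a very large user base is still millions of people, which is the point being made.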