I've programmed POSIX sh (Kenneth Almquist's shell, ash) for years. Bash is like a poor implementation of the Korn shell. For large programs I reach for /(k|z)?sh/, or simply perl or ruby where it makes sense. Nothing beats writing systems with not much more than sed, head, tail, wc, tr, sort, uniq, grep, and occasionally awk. No reason to create dependencies where none are needed.
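For instance, a minimal sketch of the kind of one-liner I mean (access.log is just a stand-in for any log whose first field is a client address): the ten most frequent client IPs, built from nothing but the tools above.

    awk '{print $1}' access.log | sort | uniq -c | sort -rn | head -10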
That's one of the things I like about Debian: /bin/sh is dash (the Debian Almquist Shell).
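Easy to verify on a stock Debian install, where /bin/sh is a symlink:

    readlink /bin/sh    # prints: dash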
Somehow I became known at work as the guy who knows shell scripting, but people never ask me about simple problems, so I always end up telling them to use Python or C#.
At work I'm the ruby|python|perl guy because I pretty much refuse to use bash/sh.
It's less that I don't like shell; it's that doing complex things in it doesn't scale past a certain point. That, and I'd rather use a scripting language than shell. If not that, then it's time to break out good old C and do it the hard way.
UNIX scales very well, and this is coming from someone who programs in regular expressions. There's nothing wrong with not using bash, or ruby|python|perl, when everything can be done in the smaller tool. POSIX sh has a roughly 70k footprint. There is very little that can't be accomplished in the shell with less code, defeating complexity through recursion and stream processing. The designers hit it spot on with the philosophy and the operating system semantics; this is just another way of using them.
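On the recursion point, a minimal sketch in plain POSIX sh (the starting directory is arbitrary): a recursive function that walks a tree and streams file names out, no find(1) required.

    # recursive directory walk in plain POSIX sh
    walk() {
        for f in "$1"/*; do
            [ -e "$f" ] || continue   # empty dir: glob stays literal
            if [ -d "$f" ]; then
                walk "$f"             # recurse into subdirectories
            else
                printf '%s\n' "$f"    # stream each file name out
            fi
        done
    }
    walk /etc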
If the argument for POSIX sh is that it's 70k, I'm not sure that's a pro or a con. I compared the runtime of a perl script against a shell script doing pretty much the same thing, and perl was much faster because there was significantly less fork/exec going on.
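A toy illustration of that fork/exec cost (big.txt is hypothetical); both snippets count lines, but the loop forks and execs expr once per line while wc handles the whole stream in one process:

    # slow: one fork+exec of expr per line
    i=0
    while read -r line; do
        i=$(expr "$i" + 1)
    done < big.txt
    echo "$i"

    # fast: a single process over the whole stream
    wc -l < big.txt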
Sure, the executable is, say, 70k versus maybe 5 megs for perl. In the end, if you're that constrained for memory, you should be using C, not shell or even perl.
My aversion to shell is more related to finding, and eventually having to support, 20k shell "programs". They are inevitably brittle creations with no tests or even a design philosophy behind them, typically cobbled together to fill a niche. I'm impressed they work at all, but inevitably they are hoist by their own petard. I've rewritten most of them in some scripting language, reduced their complexity by orders of magnitude, and made them easier to use as well as orders of magnitude faster. Stream/pipe processing isn't the only way to skin some cats, despite its simplicity.
I guess I just find that the "I know, I'll use shell" instinct really ends up being its own detriment later on. Shell may be Turing complete, but things like word splitting on spaces make using it easily and safely very much a losing proposition compared to, say, Tcl. The classic trap is sketched below.
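A quick example (the file name is hypothetical) of how an unquoted variable word-splits:

    f="monthly report.txt"   # any path containing a space
    rm $f        # unquoted: word-splits into: rm monthly report.txt
    rm -- "$f"   # quoted: removes the one file you meant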
I'm not against shell; it's a great glue language, but it just seems to get overused in some domains.
I realized how little I knew about sh earlier this year. It struck me that it has by far the highest ratio of "how often I use it" to "how much I know about it" of anything I use.
So (to learn more about it) I implemented a large fraction of the POSIX shell specification. It was rather fun, fairly easy in any high-level language that can call setenv() and fork(), and I learned all sorts of interesting things. For example:
FOO=bar cd .; echo $FOO
will output "bar" in a POSIX-compatible shell, but not in bash. Also, this is completely valid:
for if in case; do echo {; done
since reserved words are only reserved when in the command position.
http://webcache.googleusercontent.com/search?q=cache:MGUqZp5...