> the idea is that by testing even the trivial degenerate cases, you can refactor everything with confidence
The trouble is that on the one hand, you get diminishing returns for each more extreme edge case you include as a test, while on the other hand, unless you have coded up every possible scenario, you can't really refactor with impunity. Something has to give, and in most cases it can only be the confidence in refactoring, because most interesting problem spaces are infinite and it's tough to write a comprehensive test suite for an infinite number of scenarios!
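To make the diminishing-returns point concrete, here is a hypothetical sketch (the `clamp` function and every case in it are invented for illustration): a suite covering all the obvious degenerate cases, which nonetheless says nothing about the one scenario a refactor might actually change.

```python
def clamp(value, low, high):
    """Constrain value to the inclusive range [low, high]."""
    return max(low, min(value, high))

# Degenerate cases: each extra one adds less confidence than the last.
assert clamp(5, 0, 10) == 5      # ordinary case
assert clamp(-1, 0, 10) == 0     # below range
assert clamp(11, 0, 10) == 10    # above range
assert clamp(0, 0, 0) == 0       # collapsed range
assert clamp(10, 0, 10) == 10    # exact boundary

# Yet none of these pin down what happens when low > high, so a
# refactor that alters that (unspecified) case passes every test above.
# The current result is an accident of max/min ordering, not a spec:
assert clamp(5, 10, 0) == 10
```

Five tests feel thorough, but the input space is unbounded, so the suite still licenses refactoring only over the scenarios someone thought to enumerate.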
I've found that for many types of project, some degree of automated testing is well worth the trouble. However, the "test everything" mentality seems to breed a dangerous false sense of security. Perhaps worse, in some contexts the "test everything" approach also forces developers to warp otherwise clean and natural designs into a shape where automated test tools can work with them more easily, at the expense of making it harder for people to work with them. I am far from convinced that that particular trade-off is ever worthwhile, and I've always found it a rather odd contradiction that many Agile methodologies supposedly advocate people over process and the like, yet stick to their guns on this one.
The people who believe in test-everything are probably the same people who believe in 100% code coverage.
I think most people, by now, have learned that 100% code coverage and test-everything are superfluous, so there's no point in discussing them or treating them as a problem inherent to subscribing to TDD.
The idea of TDD is to test the very minimum needed to prove the code works as required. When a bug is found, write the test first, then fix the bug. This way, at some point in the life of the software, you'll eventually have enough tests to cover it. I think most people put too much focus on the development story rather than the maintenance story, which is why most people only explain how to do TDD on new code, not how to do TDD on existing code (i.e., the next phase).
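The bug-first workflow described above might look like this in practice. This is a minimal sketch: the `word_count` function, the described bug, and the ticket number are all invented for illustration.

```python
import unittest

def word_count(text):
    """Count whitespace-separated words in a string."""
    # The (hypothetical) buggy version was: return len(text.split(" ")),
    # which reported 1 for an empty string. The fix below was written
    # only after the regression test had reproduced the bug.
    return len(text.split())

class TestWordCountRegression(unittest.TestCase):
    def test_empty_string_from_bug_report(self):
        # Step 1: write this test first and watch it fail against the
        # buggy implementation, pinning down the reported behaviour.
        self.assertEqual(word_count(""), 0)

    def test_normal_sentence_still_works(self):
        # Step 2: guard the existing behaviour, then apply the fix.
        self.assertEqual(word_count("to be or not to be"), 6)

if __name__ == "__main__":
    unittest.main()
```

Each fixed bug leaves a permanent regression test behind, which is how the suite grows to cover the parts of the code that actually break, rather than the parts that were easiest to test up front.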
I've been on projects where, because the people behind them didn't put much effort into testing, the automation effort started from behind. Eventually you hit a chicken-and-egg situation: we'd like to refactor this buggy part, but the architecture makes it hard to write automated tests for it.
All professional projects will have automated tests at some point in their life. People by now should already know that software grows, and that hiring more QAs, re-testing everything (regression, smoke, full-blown, etc.), or even telling devs to manually test the code they just wrote doesn't scale.
Keep in mind that sometimes quality is defined by the client (or by the requirements). The client might not ask for superb quality (as long as there is no data corruption), in which case one probably does not have to write extensive automated tests.