Hacker News | voiceofunreason's comments

https://tidyfirst.substack.com/p/canon-tdd isn't particularly new; Beck has been consistent about "write tests before the code, one test at a time" for about 25 years now.
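
For anyone who hasn't read it, the loop is roughly: write exactly one failing test, write just enough code to make it pass, refactor, repeat. A minimal sketch in Python (toy problem mine, not Beck's):

    # One canon-TDD cycle: the test below is written FIRST and
    # watched fail; fizzbuzz() is then the simplest code that
    # passes it. The next test is only written once this is green.
    import unittest

    def fizzbuzz(n):
        return "Fizz" if n % 3 == 0 else str(n)

    class TestFizzBuzz(unittest.TestCase):
        def test_three_is_fizz(self):
            self.assertEqual(fizzbuzz(3), "Fizz")

    if __name__ == "__main__":
        unittest.main()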

Same idea, different spelling: do you really think TDD should get credit for your good results, when you aren't actually shackling yourself to the practices that the thought leaders in that community promote?


I want to credit "writing automated tests" with the good results that I get from that practice. The problem is I need terminology that's widely used by other developers.


Is there any prior art that defines covariant/contravariant tests in this way, or is Martin just borrowing language that sounds cool?


Bob Martin was the first to use the terms covariant/contravariant in the context of software development, as far as I'm aware. Using this precise language borrowed from mathematics at once clarifies both the problem and the solution that folks like Kent Beck and Dan North had been talking about for decades. Now we can discuss these issues with a whole lot less hand-waving.


"I still find the skepticism around TDD weird."

A small community of programmers, with a disproportionately large audience, foretold that practicing test-driven development would produce great benefits; over twenty-five years, the audience has found that not to be the case.

Compare with "continuous integration" - here, the immediate returns of trying the proposed discipline were so good that pretty much everybody who tried the experiment got positive returns, and leaned into it, and now CI (and later CD) are _everywhere_.

As for what is gained, try this spelling: test driven development adds load to your interfaces at a time when you know the least about the problem you are trying to solve, which is to say the period when interface flexibility is most valuable.

And thus, the technique gets criticism from both ends -- that design work that should have been done up front is deferred (making the design more difficult to change, thereby introducing costs and delays), and that the investment in tests is being made before you have a clear understanding of which tests will be sensitive to the errors you actually introduce while writing the code (thereby both increasing the amount of "waste" in the test suite and increasing the risk of test rewrites).
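
To make the "load" concrete, a sketch (all names invented): every test written against a first-guess interface is another caller pinned to that interface, so every later change to the interface has to drag the tests along with it.

    # The test below is a second caller of Ledger.transfer(). If
    # the design later wants a Transfer command object instead of
    # three positional arguments, the code AND the test change.
    import unittest

    class Ledger:
        def __init__(self):
            self.balances = {"a": 100, "b": 0}

        def transfer(self, src, dst, amount):  # first-guess signature
            self.balances[src] -= amount
            self.balances[dst] += amount

    class TestLedger(unittest.TestCase):
        def test_transfer_moves_money(self):
            ledger = Ledger()
            ledger.transfer("a", "b", 30)  # pinned to that signature
            self.assertEqual(ledger.balances["b"], 30)

    if __name__ == "__main__":
        unittest.main()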

The situation is not helped by (a) the fact that most TDD demonstrations use small, stable problems that you can solve in about an hour with any technique at all, and (b) the fact that the designs produced in support of the TDD practice aren't clearly an improvement on "just doing it", and in some notable cases have been much, much worse.

So if it is working for you: GREAT, keep it up; no reason for you not to reap the benefits if your local conditions are such that TDD gives you the best positive return on your investment.


>As for what is gained, try this spelling: test driven development adds load to your interfaces at a time when you know the least about the problem you are trying to solve

If I'm writing a single line of production code, I should first know as much as possible about the requirements problem I'm actually trying to solve with it, no?

This actually dovetails with a benefit of writing the test first. If you flesh out a user story scenario in the form of an executable test, it can provoke new questions ("hm, actually I'd need the user ID on this new endpoint to satisfy this requirement...") and you can more quickly return to stakeholders ("can you send me a user ID in this API call?") and "fix" your "requirements bugs" before making more expensive lower-level changes to the code.

This outside-in "flipping between one layer and the layer directly beneath it" is very effective at properly refining requirements, tests and architecture.
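
A sketch of that outside-in provocation (endpoint and field names hypothetical): the scenario test literally can't be written without the user ID, which is exactly the question to take back to the stakeholders.

    # Hypothetical outside-in scenario test. Writing the assertion
    # exposes the gap: the current API contract has no user_id, so
    # we can't say WHO gets notified. That's a requirements bug,
    # found before any production code exists.
    import unittest

    class FakeNotificationApi:
        def post_notification(self, payload):
            self.last_payload = payload

    class TestNotifyOnOrder(unittest.TestCase):
        def test_notification_reaches_the_right_user(self):
            api = FakeNotificationApi()
            api.post_notification({"order_id": 42, "user_id": 7})
            self.assertEqual(api.last_payload["user_id"], 7)

    if __name__ == "__main__":
        unittest.main()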

>And thus, the technique gets criticism from both ends -- that design work that should have been done up front is deferred

I don't think "design work" should be done up front if you can help it. I've always felt that the very best architecture emerges as a result of aggressive refactoring done within the confines of a complete set of tests that make as few architectural assumptions as possible. Why? Because we're all bad at predicting the future, and it's better if we don't try.

This is a mostly separate issue from TDD though.


"I call them 'unit tests' but they don't match the accepted definition of unit tests very well." -- Kent Beck, __Test Driven Development By Example__

The short version is that "unit test" did actually mean something (see Glenford Myers, __The Art of Software Testing__ or Boris Beizer, __Software Testing Techniques__), although it wasn't necessarily clear how those definitions applied to object-oriented programming (see Robert Binder, __Testing Object-Oriented Systems__).

The Test-First/TDD/XP community later made an effort to pivot to the language of "programmer test", but by the time that effort began it was already too late.

So I think you should continue to call your tests "tests" (or "checks", if you prefer the framing of James Bach and Michael Bolton).

As best I can tell, there's no historicity to the idea that "unit test" was a reference to the isolation of a test from its peers; it's just a retcon.


"REST is just pure bullshit. Avoid it like a plague."

No it isn't. Evidence: I'm reading this in a web browser.

"...REST is intended for long-lived network-based applications that span multiple organizations. If you don’t see a need for the constraints, then don’t use them."

Bikeshedding the spelling of resource identifiers? Or what "verb" should be used to express specialized domain semantics? Yeah, _that_ is certainly plague bullshit.


> No it isn't. Evidence: I'm reading this in a web browser.

And you might note that this site is _not_ REST-ful. It's certainly HTTP, but not REST.

> Bikeshedding the spelling of resource identifiers? Or what "verb" should be used to express specialized domain semantics?

Or whether we want to use the If-Modified-Since header or to specify the condition explicitly in the JSON body. And then, six months later, some people asking for the latter because their homegrown REST client doesn't support easy header customization on a per-request basis.
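
For anyone who hasn't lived this argument, the two spellings side by side (URL and field names hypothetical), sketched with Python's requests library:

    import requests

    # Spelling 1: the standard conditional-request header
    requests.get(
        "https://api.example.com/items/42",
        headers={"If-Modified-Since": "Sat, 01 Jun 2024 00:00:00 GMT"},
    )

    # Spelling 2: the same condition pushed into the JSON body, for
    # clients that can't easily set per-request headers
    requests.post(
        "https://api.example.com/items/42/fetch",
        json={"if_modified_since": "2024-06-01T00:00:00Z"},
    )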

Or people trying (and failing) to use multipart uploads because the generated Ruby client is not actually correct.

There is _way_ too much flexibility in REST (and HTTP in general). And REST in particular adds to this nonsense by abusing the verbs and the path.


> It's certainly HTTP, but not REST.

How isn't it RESTful? It's a single entry point that uses content types to tell the client how to interpret responses, with exploratory clues to other content on the site.


The "R" letter means "Representational". It requires a certain style of API. E.g. instead of "/item?id=23984792834" you have "/items/comments/23984792834".

HN doesn't have this.


"Representational" is to do with being able to deal with different representations of data via a media type [0]. There is stuff about resource identification in ReST, but it's just about being able to address resources directly and permanently, rather than about the style of the resource identifier:

> Traditional hypertext systems [61], which typically operate in a closed or local environment, use unique node or document identifiers that change every time the information changes, relying on link servers to maintain references separately from the content [135]. Since centralized link servers are an anathema to the immense scale and multi-organizational domain requirements of the Web, REST relies instead on the author choosing a resource identifier that best fits the nature of the concept being identified.
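
Concretely, a sketch (endpoint hypothetical, deliberately using the query-string style from upthread): one resource identifier, two representations, selected by media type rather than by the shape of the URL.

    import requests

    # One resource, two representations. REST cares that the
    # resource is addressable, not what the URL looks like.
    url = "https://api.example.com/item?id=23984792834"
    as_json = requests.get(url, headers={"Accept": "application/json"})
    as_html = requests.get(url, headers={"Accept": "text/html"})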

[0] https://ics.uci.edu/~fielding/pubs/dissertation/rest_arch_st...


> "REST is just pure bullshit. Avoid it like a plague."

> No it isn't. Evidence: I'm reading this in a web browser.

REST is not HTTP endpoints and verbs.


I have yet to see an LLM + TDD essay where the author demonstrates any mastery of Test Driven Development.

Is the label "TDD" being hijacked for something new? Did that already happen? Are LLMs now responsible for defining TDD?


Berard (1993) offers a good survey of the meanings of Abstraction, Encapsulation, and Information Hiding:

https://web.archive.org/web/20071214085409/http://www.itmweb...


There is definitely room for confusion as to whether the "design" that is "driven" by TDD is design-the-noun or design-the-verb.


Well, TDD (or its immediate precursor, depending on where you draw the lines) escaped from the Smalltalk world circa 1997, but I think you can make a case for 1999 being when it really began to emerge. Most of the examples that I saw in the next 5 or so years were written in Java or Python.

Beck's book was 2003, with examples in Java and Python. David Astels wrote a book later that year, again primarily Java but also with short chapters discussing RubyUnit, NUnit, CppUnit.... Growing Object Oriented Software was 2009.

My guess is that "peak" is somewhere between 2009 and 2014 (the DHH rant); after that point there aren't a lot of new voices with new things to say ("clap louder" does not qualify as a new thing to say).

That said, if you're aware of the gap between decision and feedback, and managing that gap, I don't think it matters very much whether that feedback comes in the form of measurements of runtime behavior vs analysis of the source text itself. It might even make sense to use a mixed strategy (preferring the dynamic measurements when the design is unstable, but switching strategies in areas where changes are less frequent).


"why would one have to write the test first?"

Disclaimer first: TDD won't give you anything that you couldn't instead achieve via "git gud"; except perhaps a reduced anxiety about overlooking a subtle error (but after "git gud", you don't _make_ subtle errors, do you?)

The main justification for test first is something like this: "we didn't have to be brilliantly prescient designers to find a less tightly coupled design. We just had to be intolerant of pointless effort in writing the tests." (Aim, Fire -- Beck 2000)

TDD is, in part, an attempt at reducing the length of a feedback loop. The catch is that (in spite of the labels that have been used) the feedback loop of interest is not the programming-test loop, but instead the analysis-design-programming loop (bringing OOA, OOD, and OOP closer to each other).

The underlying assumption is something like "complicated code should be easy to test". If you believe that easy-to-test drives less-tightly-coupled design, and you think that latter characteristic is valuable, then it makes a certain amount of sense to lock in that easy-to-test constraint early.
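
One way to see the claim (sketch, names mine): code that is hard to test is usually announcing a hidden coupling, and making it easy to test is what removes the coupling.

    from datetime import date

    # Hard to test: silently coupled to the system clock.
    def report_is_due_v1():
        return date.today().weekday() >= 5

    # Easy to test: the clock is a parameter; the coupling is gone,
    # and callers other than tests benefit from the seam too.
    def report_is_due(today):
        return today.weekday() >= 5

    assert report_is_due(date(2024, 6, 1))      # a Saturday
    assert not report_is_due(date(2024, 6, 3))  # a Monday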

"TDD strikes me as a practice that slows you down a fair amount yet still doesn't offer anything close to complete formal validation"

Yes and... that's not TDD's job? The automated checks used in TDD are written by the developer to satisfy the needs of the developer; if the thing you want is complete formal validation, then you should be using tools designed to meet that need. TDD might give you a higher success rate when you subject candidate systems to formal validation, and should give you lower costs when revising a failed candidate, but "the TDDs passed, ship it" is _not_ a risk free proposition.

Beck again: "I never said test-first was a testing technique. In fact, if I remember correctly, I explicitly stated that it _wasn't_."

