
And in part 2 about 12 minutes in we show the code to exemplify how our acceptance test suite works (and is very different from traditional E2E): https://www.youtube.com/watch?v=caxpxszueI0&list=PLfqo9_UMdH...


I cannot watch this entire video, so is it possible for you to explain conceptually how your acceptance tests are different from end-to-end tests?

I would like to understand what the goals of your initial end-to-end tests were, what the goals of the acceptance tests are, and how you define acceptance testing from a test-objective perspective.

I assume end to end tests could be described by this definition: "test the functionality and performance of an application under product-like circumstances and data to replicate live settings. The goal is to simulate what a real user scenario looks like from start to finish" [0]

[0] https://smartbear.com/solutions/end-to-end-testing/


I think what they mean, basically, is "e2e testing absolutely everything was becoming a nightmare, so we've now switched to 'contract based testing' - effectively 'unit testing where the microservice is the unit granularity' - plus some e2e style testing for critical paths where it's still valuable enough to justify all the extra effort".
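To make the "microservice as the unit" idea concrete, here's a minimal hand-rolled sketch of contract-based testing. The service name, endpoint, and fields are all hypothetical, and real setups typically use a dedicated tool rather than something this bare; the point is just that the "contract" records what a consumer expects, and the provider is verified against it in isolation.

```python
# Hand-rolled sketch of contract-based testing. All names and fields
# here are hypothetical; real teams usually use a dedicated tool.
# The contract records what a consuming service expects back.
CONTRACT = {
    "request": {"method": "GET", "path": "/users/42"},
    "response": {"status": 200, "body": {"id": int, "name": str}},
}

def provider_handler(method, path):
    """Stand-in for the real microservice endpoint under test."""
    if method == "GET" and path.startswith("/users/"):
        return 200, {"id": 42, "name": "Ada"}
    return 404, {}

def verify_provider(contract, handler):
    """Provider-side check: does the service honour the contract?"""
    req, expected = contract["request"], contract["response"]
    status, body = handler(req["method"], req["path"])
    if status != expected["status"]:
        return False
    # Every promised field must exist with the promised type.
    return all(
        isinstance(body.get(field), field_type)
        for field, field_type in expected["body"].items()
    )

print(verify_provider(CONTRACT, provider_handler))  # True
```

The test exercises one service as a black box, which is why it stays fast compared to spinning up the whole system for an e2e run.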


OK, in that case it is still e2e. In my head, e2e is never a strategy for testing everything.

I also think they made some good optimizations by offloading some cases from e2e to some other form of testing, which for me would be integration testing.

If they don't do integration testing, then a lot of possible bugs cannot be found. Just testing the input and output of each microservice is not enough.

But I will watch the video, as it may be better explained there and there may be something to learn from this experience.


Yes, I said it was still e2e.

What they meant - I think - was they moved away from "a primarily end-to-end integration test suite" to "service-as-black-box unit-ish testing" plus "focused end-to-end integration testing for critical paths".


Yeah and that's deeply confusing.

At work we have too many components that are tested in isolation but have grown tightly coupled, so we're trying to build an end-to-end testing framework.

So from my perspective I'm living in a world where our end-to-end test suite doesn't exist, which is effectively the same as having "killed it", and it is bad. Each component tests its own contracts, but if there's no global testing that the contracts match in both codebases, then you're still shipping broken software.

I thought this article would be some clever way to match client side and server side contracts to ensure that the contracts are identical on both sides and tested so that you could test in isolation then still come away with assurances that the whole would work together.
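For what it's worth, the approach described above can be sketched as both sides testing against one shared contract artifact, so a mismatch surfaces in isolated tests without running the whole system. This is only an illustrative toy (the endpoint, field names, and helper functions are all made up), not anything from the article:

```python
# Hypothetical sketch: one shared contract artifact drives tests on
# BOTH sides, so client and server can't silently drift apart.
CONTRACT = {"path": "/orders/7", "response": {"total_cents": 1299}}

def consumer_parse(payload):
    """Client-side code under test: reads the field it relies on."""
    return payload["total_cents"] / 100

def server_response(path):
    """Server-side code under test: produces the real payload."""
    return {"total_cents": 1299} if path == "/orders/7" else {}

# Consumer test: run the client against the contract's canned response.
assert consumer_parse(CONTRACT["response"]) == 12.99

# Provider test: the real response must contain what the contract promises.
actual = server_response(CONTRACT["path"])
assert all(actual.get(k) == v for k, v in CONTRACT["response"].items())

print("contract holds on both sides")
```

If the server renamed `total_cents`, the provider test would fail immediately, even though neither side ever talks to the other in these tests.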

Instead, it sounds like advice to build only as many end-to-end tests as you need to be reasonably confident that the more isolated unit/functional tests will hold up, but not too many, because they're horribly slow, and never to adopt a policy that literally everything should be end-to-end tested, because that results in infinitely long-running test suites. If you have no end-to-end tests, you have no confidence that the software you ship works; if you have only end-to-end tests, you have no confidence in your ability to ship software in the future.

So, uh, "clickbait title" I guess is my point?


sorry for the bait. thanks for the click :-D



