Hacker News

cogman10 · last Tuesday at 8:07 PM · 2 replies

In fact, in my experience, these elaborate test environments and procedures cripple products.

I'm firmly of the opinion that if a test can't be run completely locally, it shouldn't be run. These test environments can be super fragile. They often rely on a symphony of teams keeping everything in a good state all the time. But what happens, more often than not, is that one team somewhere deploys a broken version of their software to the test environment (because of course they do) in order to run their fleet of e2e tests. That invariably blows up the rest of the org that depends on the broken software, and heaven help you if the person who deployed it did so at 5pm and has left on vacation.

This rippling failure mode happens because it's easier to write e2e tests that depend on a functional environment than it is to write and maintain mock services and mock data. Yet mock services and mock data are precisely what you need to keep people from screwing up the shared test environment in the first place.
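
To make the trade-off concrete, here is a minimal sketch of a fully local test using only Python's standard library. The names (OrderService, get_price) are hypothetical stand-ins, not any real API; the point is that the downstream dependency is mocked, so nobody else's broken deploy can take the test down.

    # Hypothetical example: a test that never touches a shared environment.
    # OrderService and the pricing client are made-up stand-ins.
    import unittest
    from unittest.mock import Mock


    class OrderService:
        """Computes order totals by asking a pricing service for unit prices."""

        def __init__(self, pricing_client):
            self.pricing = pricing_client

        def total(self, items):
            return sum(self.pricing.get_price(sku) * qty for sku, qty in items)


    class OrderServiceTest(unittest.TestCase):
        def test_total_with_mocked_pricing(self):
            # The pricing service is a local mock, so this test cannot be
            # broken by someone deploying a bad build to a shared test env.
            pricing = Mock()
            pricing.get_price.side_effect = {"apple": 3, "pear": 5}.__getitem__

            service = OrderService(pricing)
            self.assertEqual(service.total([("apple", 2), ("pear", 1)]), 11)


    if __name__ == "__main__":
        unittest.main()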


Replies

jeltz · last Tuesday at 8:23 PM

You are not wrong, but I have had many experiences where mock services resulted in totally broken systems because the mocks themselves were wrong. In complex systems it is very hard to mock interactions accurately.

Personally, I think the real issue is not the testing strategy but the system itself. Many organizations make their systems overly complex. A well-structured monolith with a few supporting services is usually easy to test; microservice/SOA hell is not.
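
One common way to keep mocks from drifting, sketched below under assumed names: run the same assertions against both the mock and the real client, with the real-service check gated behind an environment variable. The pricing_sdk import and PRICING_URL variable are hypothetical.

    # Hypothetical contract-style test: the same checks run against the mock
    # (always) and against the real client (only when explicitly enabled),
    # so the mock cannot silently diverge from the real service's behaviour.
    import os
    import unittest
    from unittest.mock import Mock


    def make_mock_pricing():
        pricing = Mock()
        pricing.get_price.return_value = 3
        return pricing


    class PricingContractTest(unittest.TestCase):
        def check_contract(self, client):
            price = client.get_price("apple")
            self.assertIsInstance(price, int)
            self.assertGreater(price, 0)

        def test_mock_honours_contract(self):
            self.check_contract(make_mock_pricing())

        @unittest.skipUnless(os.getenv("RUN_CONTRACT_TESTS"), "needs real service")
        def test_real_client_honours_contract(self):
            from pricing_sdk import PricingClient  # hypothetical real client
            self.check_contract(PricingClient(os.environ["PRICING_URL"]))


    if __name__ == "__main__":
        unittest.main()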

coryrc · last Tuesday at 8:18 PM

There are many reasons you want to be able to turn up your whole stack quickly; disaster recovery is just one of them. And if you can turn up your environment quickly, why not have multiple staging environments? You start with the most recent version of yours plus everyone else's prod versions, then vary the combinations from there.

Obviously this is for large-scale systems and not small teams.
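
A rough sketch of that combination idea, with made-up service names and versions: the baseline environment is "my latest build plus everyone else's prod", and each further environment swaps in one upstream release candidate.

    # Hypothetical: enumerate staging combinations from a prod baseline.
    PROD = {"auth": "1.4.2", "billing": "2.0.1", "search": "3.7.0"}
    CANDIDATES = {"auth": "1.5.0-rc1", "billing": "2.1.0-rc2", "search": "3.8.0-rc1"}
    MY_SERVICE, MY_BUILD = "orders", "4.2.0-rc3"


    def staging_combinations():
        # Baseline: my most recent build, everyone else on prod versions.
        baseline = dict(PROD, **{MY_SERVICE: MY_BUILD})
        yield baseline
        # Then vary one upstream service at a time to its release candidate.
        for name, candidate in CANDIDATES.items():
            combo = dict(baseline)
            combo[name] = candidate
            yield combo


    for combo in staging_combinations():
        print(combo)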