Hacker News

bluGill · yesterday at 11:32 PM

90% of the time (or more): you don't. The real thing is perfectly fine in a test with the right setup. File I/O is fast; I just need a test file in a temp dir. Databases are fast; I just need an easy way to set up my schema. Sometimes I need the isolation, but normally I do not.
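
For example (a pytest sketch; tmp_path is the built-in fixture, save_report and the users table are made-up names):

    import sqlite3

    def save_report(path, text):
        # hypothetical function under test: plain file I/O, no abstraction layer
        path.write_text(text)

    def test_save_report(tmp_path):
        # pytest's tmp_path fixture hands each test its own temp dir
        out = tmp_path / "report.txt"
        save_report(out, "hello")
        assert out.read_text() == "hello"

    def test_users_schema():
        # the real database: an in-memory sqlite with the schema applied
        db = sqlite3.connect(":memory:")
        db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
        db.execute("INSERT INTO users (name) VALUES (?)", ("alice",))
        assert db.execute("SELECT name FROM users").fetchone() == ("alice",)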


Replies

danparsonson · today at 1:54 AM

Well, no - you don't.

What you're describing is a very limited subset of testing, which presumably is fine for the projects you work on, but that experience does not generalise well.

Integration testing is of course useful, but generally one would want to create unit tests for every part of the code, and by definition it's not a unit test if it hits multiple parts of the code simultaneously.

Apart from that, databases and file access may be fast but they still take resources and time to spin up; beyond a certain project and team size, it's far cheaper to mock those things. With a mock you can also easily simulate failure cases, bad data, etc. - how do you test for file access issues, or the database server being offline?
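
Simulating that outage is a one-liner with a mock (Python sketch; get_user and fetch_user are made-up names):

    from unittest.mock import Mock

    def get_user(db, user_id):
        # hypothetical code under test: must degrade gracefully when the DB is down
        try:
            return db.fetch_user(user_id)
        except ConnectionError:
            return None

    def test_returns_none_when_database_is_offline():
        db = Mock()
        db.fetch_user.side_effect = ConnectionError("server offline")
        assert get_user(db, 42) is None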

Using mocks properly is a sign of a well-factored codebase.

asa400 · today at 1:46 AM

I worked on a project where a dev wanted to mock out the database in tests "for performance, because the database is slow". I almost lost my shit.

Even funnier, this was all hypothetical and yet taken as gospel. We hadn't even written the tests yet, so it was impossible to say whether they were slow or not. Nothing had been measured, no performance budget had been defined, no prototype of the supposedly slow tests had been written to demonstrate the point.

We ended up writing - no joke - less than 100 tests total, almost all of which hit the database, including some full integration tests, and the entire test suite finished in a few seconds.
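
For a sense of scale, a database-backed test can be as simple as this (illustrative sketch with in-memory sqlite; the schema is made up, and per-test setup costs milliseconds, not seconds):

    import sqlite3
    import pytest

    @pytest.fixture
    def db():
        # a fresh, real database for every test
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total_cents INTEGER)")
        yield conn
        conn.close()

    def test_order_totals(db):
        db.execute("INSERT INTO orders (total_cents) VALUES (?)", (999,))
        db.execute("INSERT INTO orders (total_cents) VALUES (?)", (1,))
        (total,) = db.execute("SELECT SUM(total_cents) FROM orders").fetchone()
        assert total == 1000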

I'm all for building in a way that respects performance as an engineering value, but we got lost somewhere along the way.

bccdee · today at 4:27 AM

What if you're calling an API? How do you test a tool that does a bunch of back-and-forth with another service without actually hitting that other service? Do you have to spin up your entire k8s ecosystem in a local cluster just to validate that the logic in one function is sound? Do you have to deliberately misconfigure it in order to test that your function handles errors properly?
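
One answer is a hand-rolled fake that replays canned responses, including failures (Python sketch; FakePaymentsApi and charge_with_retry are hypothetical names):

    def charge_with_retry(api, amount):
        # hypothetical function under test: one retry on timeout, then give up
        for _ in range(2):
            try:
                return api.post("/charge", {"amount": amount})
            except TimeoutError:
                continue
        raise RuntimeError("gave up after retry")

    class FakePaymentsApi:
        # hand-rolled fake: replays canned responses and records every call
        def __init__(self, responses):
            self.responses = iter(responses)
            self.calls = []

        def post(self, endpoint, payload):
            self.calls.append((endpoint, payload))
            resp = next(self.responses)
            if isinstance(resp, Exception):
                raise resp  # inject failures without misconfiguring anything
            return resp

    def test_retries_once_after_timeout():
        api = FakePaymentsApi([TimeoutError(), {"status": "ok"}])
        assert charge_with_retry(api, 100) == {"status": "ok"}
        assert len(api.calls) == 2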

More broadly, suppose foo() has an implementation that depends on Bar, but Bar is complicated to instantiate because it needs to know about 5 external services. Fortunately foo() only depends on a narrow sliver of Bar's functionality. Why not wrap Bar in a narrow interface—only the bits foo() depends on—and fake it?
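
Concretely (Python sketch; BarSlice, lookup, and FakeBar are made-up names):

    from typing import Protocol

    class BarSlice(Protocol):
        # the narrow sliver of Bar that foo() actually uses
        def lookup(self, key: str) -> str: ...

    def foo(bar: BarSlice, key: str) -> str:
        # hypothetical function under test
        return bar.lookup(key).upper()

    class FakeBar:
        # test double: a dict instead of five external services
        def __init__(self, data):
            self.data = data

        def lookup(self, key: str) -> str:
            return self.data[key]

    def test_foo_uppercases_the_lookup():
        assert foo(FakeBar({"a": "hello"}), "a") == "HELLO"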

I'm not a maximalist about test doubles. I prefer to factor out my I/O until it's high-level enough that it doesn't need unit tests. But that's not always an option, and I'd rather be flexible and use a test double than burden all my unit tests with the full weight of their production dependencies.

8note · today at 3:20 AM

How are you deliberately breaking those dependencies? Or are you only testing the happy path?

You could extend this to say that 85% of the time you should just write the code directly to prod and not have any tests; if you break something, an alarm will go off.