
jchw | today at 1:45 AM

The difference, IMO, between a mock and a proper "test" implementation is that traditionally a mock only exists to test interface boundaries, and the "implementation" is meant to be as much of a no-op as possible. That's why the default behavior of almost any "automock" is to implement an interface by doing nothing and returning nothing (or perhaps default-initialized values), and then to provide tools for tacking assertions onto it. If it were a proper implementation that just happened to be in-memory, it wouldn't really be a "mock", in my opinion.
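A minimal sketch of what that looks like in Go, assuming a made-up Cache interface (none of these names come from a real library):

```go
package cachetest

import (
	"context"
	"time"
)

// Hypothetical Cache interface, purely for illustration.
type Cache interface {
	Get(ctx context.Context, key string) (string, error)
	Set(ctx context.Context, key, value string, ttl time.Duration) error
}

// NoopCache is roughly what an automock gives you before any expectations
// are attached: every method does nothing and returns zero values.
type NoopCache struct{}

func (NoopCache) Get(ctx context.Context, key string) (string, error) { return "", nil }

func (NoopCache) Set(ctx context.Context, key, value string, ttl time.Duration) error {
	return nil
}
```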

For example, let's say you want to test that some handler is properly adding data to a cache. IMO the traditional mock approach supported by mocking libraries is to take your RedisCache implementation and create a dummy that does nothing, then add assertions that, say, the `set` method gets called with some set of arguments. You can add return values to the mock too, but I think this is mainly in service of making the code run, not of actually implementing anything.
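With gomock, for instance, that style of test looks roughly like the sketch below. The MockCache is assumed to be generated by mockgen from the hypothetical Cache interface above, and handleRequest is a made-up handler under test:

```go
package cachetest

import (
	"context"
	"testing"
	"time"

	"go.uber.org/mock/gomock"
)

func TestHandlerPopulatesCache(t *testing.T) {
	ctrl := gomock.NewController(t)
	mock := NewMockCache(ctrl) // assumed mockgen-generated mock of Cache

	// The test only checks the interface boundary: Set must be called with
	// these arguments. There is no real cache behavior behind the mock.
	mock.EXPECT().
		Set(gomock.Any(), "user:42", "alice", time.Minute).
		Return(nil)

	handleRequest(context.Background(), mock, "user:42") // hypothetical handler
}
```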

Meanwhile, you could always write a minimal "test" implementation of your Cache interface (I believe these are traditionally called "fakes", though that nomenclature is even more confusing) that actually does behave like an in-memory cache; then your test can assert on its contents. Doing this doesn't require a "mocking" library, and in this case what you're making is not really a "mock" - it is, in fact, a full implementation of the interface that you could use outside of tests (e.g. in a development server). I think this can be a pretty good middle ground in some scenarios, especially since it plays well with in-process tools like fake clocks/timers in languages like Go and JavaScript.
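Reusing the hypothetical Cache interface from the first sketch, a fake like that might look something like this (TTLs are ignored to keep it short; a fake clock could drive expiry):

```go
package cachetest

import (
	"context"
	"sync"
	"time"
)

// InMemoryCache is a real, if minimal, implementation of Cache. It is
// usable both in tests and in something like a development server.
type InMemoryCache struct {
	mu   sync.Mutex
	data map[string]string
}

func NewInMemoryCache() *InMemoryCache {
	return &InMemoryCache{data: make(map[string]string)}
}

func (c *InMemoryCache) Get(ctx context.Context, key string) (string, error) {
	c.mu.Lock()
	defer c.mu.Unlock()
	return c.data[key], nil
}

func (c *InMemoryCache) Set(ctx context.Context, key, value string, ttl time.Duration) error {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.data[key] = value
	return nil
}
```

A test then calls the handler with this fake and asserts that `Get` returns the value it expects, rather than asserting which methods were called and with what.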

Despite the pitfalls, I mostly prefer to just use the actual implementations where possible, and for this I like testcontainers. Most webserver projects I write/work on naturally require a container runtime for development for other reasons, and testcontainers is glue that can use that existing container runtime setup (be it Docker or Podman) to pretty rapidly bootstrap test or dev service dependencies on demand. With a little bit of manual effort, you can make it so that your normal test runner (e.g. `go test ./...`) runs tests normally and automatically skips anything that requires a real service dependency when no Docker socket is available. (Though obviously, in a real setup, you'd also want a way to force those tests to be enabled, so that you can hopefully avoid an oopsie where CI isn't actually running your tests due to a regression.)
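With testcontainers-go, that skip-or-run arrangement might look roughly like this. The Docker-socket check and the FORCE_CONTAINER_TESTS variable are simplified stand-ins made up for the sketch, not anything the library prescribes:

```go
package cachetest

import (
	"context"
	"fmt"
	"os"
	"testing"

	"github.com/testcontainers/testcontainers-go"
	"github.com/testcontainers/testcontainers-go/wait"
)

func TestWithRealRedis(t *testing.T) {
	// Simplified availability check: skip when there is no Docker socket,
	// unless a (made-up) env var forces the test to run anyway, so CI can't
	// silently skip everything after a regression.
	if _, err := os.Stat("/var/run/docker.sock"); err != nil && os.Getenv("FORCE_CONTAINER_TESTS") == "" {
		t.Skip("no container runtime detected; set FORCE_CONTAINER_TESTS=1 to run anyway")
	}

	ctx := context.Background()
	redisC, err := testcontainers.GenericContainer(ctx, testcontainers.GenericContainerRequest{
		ContainerRequest: testcontainers.ContainerRequest{
			Image:        "redis:7-alpine",
			ExposedPorts: []string{"6379/tcp"},
			WaitingFor:   wait.ForListeningPort("6379/tcp"),
		},
		Started: true,
	})
	if err != nil {
		t.Fatal(err)
	}
	t.Cleanup(func() { _ = redisC.Terminate(ctx) })

	host, err := redisC.Host(ctx)
	if err != nil {
		t.Fatal(err)
	}
	port, err := redisC.MappedPort(ctx, "6379")
	if err != nil {
		t.Fatal(err)
	}

	addr := fmt.Sprintf("%s:%s", host, port.Port())
	_ = addr // point the real Redis-backed Cache implementation at addr and exercise it
}
```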