Hacker News

maerF0x0 · yesterday at 6:06 PM · 3 replies

> The first is manual testing. If you haven’t seen the code do the right thing yourself, that code doesn’t work. If it does turn out to work, that’s honestly just pure chance.

Depending on exactly what the author meant here, I disagree. Our first and default tool should be some form of lightweight automated testing. It's explicit (it serves as a form of spec and documents how to use the software), it's repeatable (manual testing is done once, and its result is invalidated moments later), and its cost per minute of effort is more or less the same (most companies have engineers do the testing, and engineers are expensive).
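To make "lightweight" concrete, here's a minimal sketch of the kind of test I mean. Everything in it is hypothetical; `slugify` and its module path are made up for illustration:

    # test_slugify.py -- hypothetical example; slugify() is a made-up function
    from myapp.text import slugify

    def test_slugify_lowercases_and_hyphenates():
        # Doubles as a spec and a usage doc: this is how the function is called.
        assert slugify("Hello World") == "hello-world"

    def test_slugify_strips_punctuation():
        assert slugify("Hello, World!") == "hello-world"

A test like this runs in milliseconds on every change; the equivalent manual check has to be redone by hand each time.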

Yes. There will be exceptions and exceptional cases. This author is not talking about exceptions and neither am I. They're not an interesting addition to this conversation.


Replies

IMTDb · yesterday at 6:20 PM

> Our first and default tool should be some form of lightweight automated testing

Manual verification isn't about skipping tests; it's about validating what to test in the first place.

You need to see the code work before you know what "working" even means. Does the screen render correctly? Does the API return sensible data? Does the flow make sense to users? Automated tests can only check what you tell them to check. If you haven't verified the behavior yourself first, you're just encoding your assumptions into test cases.
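A contrived Python sketch of that failure mode; the function, SKU, and price are all hypothetical:

    # The developer assumed the pricing service returns dollars, so the
    # test encodes that assumption. If the real service returns cents,
    # the test stays green and the bug ships anyway.
    def get_price(sku):
        return 19.99  # stand-in mirroring the assumption, not the real API

    def test_price_lookup():
        assert get_price("SKU-123") == 19.99  # passes, proves nothing

Only looking at a real response first would have surfaced the mismatch.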

I'd take "no tests, but I verified it works end-to-end" over "full test coverage, but never checked if it solves the actual problem" every time. The first developer is focused on outcomes. The second is checking boxes.

Tests are crucial: they preserve known-good behavior. But you have to establish what "good" looks like first, and that requires human judgment. Automate the verification, not the discovery. So our first and default tool remains manual verification.

codeviking · yesterday at 6:24 PM

I'm a big fan of lightweight, automated tests. Despite that, I still default to manual verification. Usually I do both.

Automated tests omit a certain type of feedback that I think remains important to the development loop. Automation doesn't care about a poor UX; it only verifies what you tell it to.

For instance, I regularly contribute to a CLI that's widely used at $WORK. I can easily write tests that assert the correctness of a command's I/O. Yet if I actually use the command I'm changing, usually as part of verifying my changes, I tend to discover usability issues that the tests would happily ignore, and fixing them makes the program more pleasant to use.
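As a sketch of the kind of I/O test I mean, assuming a hypothetical `mycli` with a `list` subcommand:

    import subprocess

    def test_list_outputs_json():
        result = subprocess.run(
            ["mycli", "list", "--format=json"],  # hypothetical CLI and flag
            capture_output=True, text=True,
        )
        # Pins the I/O contract...
        assert result.returncode == 0
        assert result.stdout.lstrip().startswith("[")
        # ...but says nothing about whether the flags are discoverable or
        # the error messages readable -- exactly what manual use surfaces.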

Also, there are certainly cases where automation isn't worth the cost, perhaps because the resulting tests are complex or brittle. I've often found UI tests to fall into this category (but maybe I'm doing them wrong).

Because of these things, I think manual testing is the right default. Automated tests should also exist, but manual tests should _always_ be part of the process.

tech-ninja · yesterday at 6:17 PM

I disagree. No company, no matter its size, will have E2E or integration tests for all of its features; it's just not feasible.

Unless you are working on a tiny change in a highly tested part of the code, you should be manually testing your code and/or adding some tests.