Hacker News

zenoprax · today at 3:36 AM

> witr is successful if users trust it during incidents.

> This project was developed with assistance from AI/LLMs [...] supervised by a human who occasionally knew what he was doing.

This seems contradictory to me.


Replies

zephyreon · today at 4:51 AM

The last bit

> supervised by a human who occasionally knew what he was doing.

seems to be in jest, but I could be wrong. If it were omitted or clearly flagged as sarcasm, I would feel a lot better about the project overall. As long as you're auditing the LLM's outputs and doing a decent code review, I think it's reasonable to trust this tool during incidents.

I’ll admit I did go straight to the end of the readme to look for this exact statement. I appreciate they chose to disclose.

pranshuparmar · today at 11:53 AM

Fair enough! That line was meant tongue-in-cheek, while also being transparent about LLM usage. Rest assured, the LLMs were assistants, not authorities.

solarkraft · today at 5:39 AM

Not to me. It just has to demonstrably work well, which is entirely possible with a developer focused on outcomes rather than process (though hopefully they cared a bit about process/architecture too).

Retr0id · today at 7:56 AM

Regardless of code correctness, it's easy enough for malware to spoof process relationships.
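For example (a minimal sketch in Python, not anything taken from witr itself): a classic double fork is enough for a process to shed its real parent. The grandchild gets reparented to PID 1 (or the nearest subreaper), so any tool that reconstructs ancestry from the live process tree sees init as the launcher.

    import os, sys, time

    # Sketch only: double-fork so the grandchild is orphaned and reparented
    # to PID 1 (or a subreaper). Ancestry read from the live process tree
    # no longer shows who actually started it.
    pid = os.fork()
    if pid > 0:
        os.waitpid(pid, 0)   # original parent reaps the intermediate child
        sys.exit(0)

    if os.fork() > 0:
        os._exit(0)          # intermediate child exits immediately

    # Grandchild: PPID is now 1 (or a subreaper), not the real launcher.
    print(f"pid={os.getpid()} ppid={os.getppid()}")
    time.sleep(60)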

guywithahat · today at 5:28 AM

I agree; the LLM probably has a much better idea of what's happening than any human.