> witr is successful if users trust it during incidents.
> This project was developed with assistance from AI/LLMs [...] supervised by a human who occasionally knew what he was doing.
This seems contradictory to me.
Fair enough! That line was meant tongue‑in‑cheek, and to be transparent about LLM usage. Rest assured, they were assistants, not authorities.
Not to me. It just has to demonstrably work well, which is entirely possible with a developer focused on outcomes rather than process (though hopefully they cared a bit about process and architecture too).
Regardless of code correctness, it's easy enough for malware to spoof process relationships.
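For instance, here's a minimal sketch (Linux assumed, the fake name is purely illustrative) of the classic double-fork trick: the grandchild gets reparented to PID 1, so anything that reconstructs ancestry by walking the parent chain after the fact sees init rather than the real invoker:

```c
/* Minimal sketch, Linux assumed: erase true ancestry via a double fork,
 * then masquerade under a kernel-thread-looking name. */
#include <stdio.h>
#include <unistd.h>
#include <sys/prctl.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();
    if (pid < 0) { perror("fork"); return 1; }
    if (pid > 0) {                /* the real invoker reaps the child and exits */
        waitpid(pid, NULL, 0);
        return 0;
    }
    setsid();                     /* new session, detach from the terminal */
    pid = fork();                 /* second fork: the session leader exits... */
    if (pid < 0) _exit(1);
    if (pid > 0) _exit(0);
    /* ...so the grandchild is orphaned and the kernel reparents it to
     * PID 1 (or the nearest subreaper). The original parent chain is gone. */
    prctl(PR_SET_NAME, "kworker/0:7");   /* illustrative fake comm name */
    printf("pid=%d ppid=%d\n", getpid(), getppid());
    pause();                      /* hang around so you can inspect it */
    return 0;
}
```

Real-time hooks (auditd, eBPF exec tracing) record ancestry at fork/exec time, before the reparenting happens, which is roughly what it takes to be robust against this.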
I agree; the LLM probably has a much better idea of what's happening than any human.
The last bit
> supervised by a human who occasionally knew what he was doing.
seems to be in jest, but I could be wrong. If it were omitted or clearly flagged as sarcasm, I would feel a lot better about the project overall. As long as you’re auditing the LLM’s outputs and doing a decent code review, I think it’s reasonable to trust this tool during incidents.
I’ll admit I went straight to the end of the README to look for this exact statement. I appreciate that they chose to disclose.