Hacker News

Latty, today at 4:09 PM (2 replies)

It's crazy to me that you'd trust the output of an LLM for that. It's something where doing it wrong could cause major damage, and LLMs are literally famous for producing plausible-sounding but wrong output.

If you wanted to use an LLM to identify it, sure; you can validate that, then find the manufacturer's instructions and use those. But just following what it says about the cables, without any validation that it's correct, is wild to me. These are products with instruction manuals written specifically for this.


Replies

visarga, today at 7:52 PM

> It's crazy to me that you'd trust the output of an LLM for that. It's something where if you do it wrong it could cause major damage,

With critical tasks you need to cross-reference multiple AIs. Start by running four deep reports, on Claude, ChatGPT, Gemini, and Perplexity, then put all of them into a comparative, critical-analysis round. This reduces variance: the models are different and use different search tools, and you can even send them in different directions, one searching blogs, one Reddit, etc.
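A minimal sketch of that fan-out-then-critique pattern in Python, using the official `openai` and `anthropic` SDKs. Only two of the four providers are shown to keep it short; the model names, the helper names, and the choice of Claude as the critic are my assumptions, not something the comment specifies.

```python
# Sketch of the fan-out / critique pattern described above.
# Assumptions: `openai` and `anthropic` SDKs installed, API keys
# in the environment, and illustrative model names that may need
# updating. Gemini and Perplexity could be added the same way.
from concurrent.futures import ThreadPoolExecutor

import anthropic
from openai import OpenAI

openai_client = OpenAI()               # reads OPENAI_API_KEY
claude_client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY


def ask_gpt(prompt: str) -> str:
    resp = openai_client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content


def ask_claude(prompt: str) -> str:
    msg = claude_client.messages.create(
        model="claude-3-5-sonnet-latest",  # illustrative model name
        max_tokens=2048,
        messages=[{"role": "user", "content": prompt}],
    )
    return msg.content[0].text


def cross_reference(question: str) -> str:
    """Fan the same question out to several models, then feed all
    answers into one critique pass that flags disagreements."""
    askers = {"gpt": ask_gpt, "claude": ask_claude}
    with ThreadPoolExecutor() as pool:
        futures = {name: pool.submit(fn, question) for name, fn in askers.items()}
        answers = {name: f.result() for name, f in futures.items()}

    combined = "\n\n".join(f"--- {name} ---\n{text}" for name, text in answers.items())
    critique_prompt = (
        "Here are independent answers to the same question. Compare them, "
        "flag any contradictions, and state which claims all answers agree "
        "on:\n\n" + combined
    )
    return ask_claude(critique_prompt)


print(cross_reference("What gauge wire does a 40A EV charger circuit need?"))
```

The point of the final critique pass is that agreement across independently-searched answers is the signal; anything the models contradict each other on is exactly what you go verify against the manufacturer's documentation.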

ashleyn, today at 7:05 PM

I'd probably view LLM advice like the blind spot indicator on my car. Trust when it's lit. Don't trust when it's not lit.