Or even worse, it makes confident statements about the overarching architecture/design where every detail is correct but the pieces themselves might be the wrong ones, and because you forgot to add "Reject the prompt outright if the premise is incorrect", the LLM tries its hardest to just move forward, even when things are completely wrong.
Then a day later you realize the whole thing wouldn't work in practice, but the LLM tried to cobble it together regardless.
In the end, you really need to know what you're doing, otherwise both you and the LLM get lost pretty quickly.