As we build and understand them now, there are good structural reasons to believe that LLMs cannot be tweaked or tuned to be more than incidentally truthful.
If you use an LLM-only solution, sure, but there are many more opportunities if you expand the system to include components beyond the model itself.
We could, for example, run existing copyright-detection tools on the output and refuse to send flagged text to the client. From that perspective it's just another moderation check.
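A minimal sketch of that kind of output gate: the copyright check below is a toy n-gram overlap against a small known corpus, standing in for whatever real detection service you'd actually call (all function names and the threshold here are illustrative assumptions, not any particular product's API).

```python
# Toy post-generation moderation gate: checks model output against a known
# corpus before it is sent to the client. A real system would replace
# likely_copyrighted() with a call to a dedicated detection service.

def ngram_set(text, n=5):
    """Return the set of word n-grams in `text` (lowercased)."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

# Stand-in for an index of protected text (hypothetical example data).
KNOWN_CORPUS = [
    "it was the best of times it was the worst of times",
]
KNOWN_NGRAMS = set().union(*(ngram_set(t) for t in KNOWN_CORPUS))

def likely_copyrighted(output, threshold=0.5):
    """Flag output whose n-gram overlap with the corpus exceeds the threshold."""
    grams = ngram_set(output)
    if not grams:
        return False
    overlap = len(grams & KNOWN_NGRAMS) / len(grams)
    return overlap >= threshold

def moderate(llm_output):
    """Gate the LLM output before it reaches the client."""
    if likely_copyrighted(llm_output):
        return "[response withheld: possible copyrighted content]"
    return llm_output
```

The point is architectural: the check sits outside the model, so it doesn't depend on the LLM policing itself.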