But you can find that information without an LLM? And why do you trust an LLM to give it to you, versus all the other ways of getting the same information, including higher-trust ways of communicating the desired outcome, like screenshots?
Why are we assuming that just because the prompt gets a response, the output is correct? That level of trust is an attack surface in and of itself.