So your solution to prevent LLM misuse is to prevent LLM misuse? That's like saying "you can solve SQL injections by not running SQL-injected code".
Isn't that exactly what stopping SQL injection involves? No longer executing random SQL code.
Same thing would work for LLMs- this attack in the blog post above would easily break if it required approval to curl the anthropic endpoint.
Isn't that exactly what stopping SQL injection involves? No longer executing random SQL code.
Same thing would work for LLMs- this attack in the blog post above would easily break if it required approval to curl the anthropic endpoint.