I thought of adding a similar LLM to an AI evals teaching site so users could interact with it directly, but I was concerned about nudging users into a prompt-injection-friendly pattern.