If you could influence the LLM's actions so easily, what would stop it from equally being influenced by prompt injection from the data being processed?
What you need is more fine-grained control over the harness.