
sdesol · 01/20/2025

To make a long story short, you can manipulate LLM responses in my chat app (I want this for testing/cost reasons), so it's not safe to trust the LLM-generated code. I guess I could make it possible to not execute any modified LLM responses.
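
A minimal sketch of that idea, with hypothetical names and assuming a Node/TypeScript backend where responses are keyed by a message id: hash each response as it arrives from the model, and refuse to eval anything whose content no longer matches that hash.

```typescript
import { createHash } from "node:crypto";

// Hash of each response as it arrived from the LLM, keyed by message id.
const originalHashes = new Map<string, string>();

function sha256(text: string): string {
  return createHash("sha256").update(text).digest("hex");
}

// Record the untouched response when it first comes back from the model.
function recordLlmResponse(messageId: string, content: string): void {
  originalHashes.set(messageId, sha256(content));
}

// Hypothetical helper: pull the first fenced code block out of a markdown response.
function extractCodeBlock(markdown: string): string {
  const match = markdown.match(/```(?:\w+)?\n([\s\S]*?)```/);
  return match ? match[1] : "";
}

// Only eval code from responses that still match what the model produced.
function maybeExecute(messageId: string, content: string): void {
  const original = originalHashes.get(messageId);
  if (original === undefined || original !== sha256(content)) {
    console.warn("Response was edited (or never recorded); skipping execution.");
    return;
  }
  // eval is still risky even for unmodified responses; shown here only because
  // it is what the discussion is about.
  eval(extractCodeBlock(content));
}
```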

However, if the chat app were designed to be used by a single user, evaling would not be an issue.