Hacker News

rat9988 01/20/2025

I thought he was hinting at using eval.


Replies

sdesol 01/20/2025

To make a long story short: in my chat app, LLM responses can be manipulated (I want this for testing/cost reasons), so it's not safe to trust the LLM-generated code. I guess I could refuse to execute any modified LLM responses.
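
One way to enforce that (a minimal sketch, not the app's actual code): tag each response with an HMAC at the moment it arrives from the LLM, and refuse to eval anything whose tag no longer verifies. The key handling and function names below are assumptions for illustration.

    import { createHmac, timingSafeEqual } from "node:crypto";

    // Hypothetical signing key -- in a real app this would come from config.
    const SIGNING_KEY = process.env.RESPONSE_SIGNING_KEY ?? "dev-only-secret";

    // Compute an HMAC tag over the raw response text.
    function sign(code: string): string {
      return createHmac("sha256", SIGNING_KEY).update(code).digest("hex");
    }

    // Tag each response as it comes back from the LLM.
    function tagResponse(code: string): { code: string; tag: string } {
      return { code, tag: sign(code) };
    }

    // Eval only if the tag still verifies; a response edited after
    // generation fails the check and is never executed.
    function evalIfUnmodified(r: { code: string; tag: string }): unknown {
      const expected = Buffer.from(sign(r.code), "hex");
      const actual = Buffer.from(r.tag, "hex");
      if (expected.length !== actual.length || !timingSafeEqual(expected, actual)) {
        throw new Error("Response modified after generation; refusing to eval");
      }
      return eval(r.code); // still only as safe as the model's own output
    }

Even with that check, eval is only as trustworthy as the unmodified model output itself, which is why the single-user caveat below still applies.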

However, if the chat app were designed to be used by a single user, evaling would not be an issue.