Congrats on launching. I've noticed that reusing prompts unchanged across different LLM providers degrades performance. Have you seen how developers handle these "translations"? Maybe your eval framework has data that points to best practices.
Yeah, this is something we've heard as well. There's no dedicated feature for it right now, but we did ship an agent in local dev that helps people improve their prompts.