Doing the same. Though I wish there were some kind of optimization of text generated by an LLM for consumption by another LLM. Just mentioning that it's for an LLM instead of human consumption yields no observably different results.