> And we do not need gigawatts and gigawatts for this use case anyway. A small local model or batched inference of a small model should do just fine.
I guess I'm a dinosaur, but I think emailing the friend to ask what they're actually up to would be even better than involving an LLM to imagine it.
Asynchronous human-to-human communication is a pretty solved problem.
A commonly cited use case of LLMs is scheduling travel, so being able to pretend to be somebody somewhere else is surely important for incentivizing actually going somewhere!