Fair point. I just think that example is better off running on a cron job than using compute from LLM inference (though that cost will likely become negligible over time anyway).