Wouldn't trust AI to run a TODO list, especially weak models. They can hallucinate tasks, forget reminders, etc.
LLMs are stateless. But given an actual database of task-shaped items and some work, I could see the potential.
With a canonical source of truth, and set input/output expectations, the potential blast radius is quite small.
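A minimal sketch of what that could look like, assuming a hypothetical setup: tasks live in an ordinary SQLite table the model never touches directly, and the model's reply is only applied if it parses as a strictly validated command. A hallucinated task id or malformed output becomes a rejected no-op rather than corrupted state.

```python
import json
import sqlite3

# Canonical source of truth: a plain tasks table, not the model's memory.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE tasks (id INTEGER PRIMARY KEY, title TEXT, done INTEGER DEFAULT 0)")
db.execute("INSERT INTO tasks (title) VALUES ('buy milk'), ('file taxes')")

ALLOWED_ACTIONS = {"complete", "add"}  # the only state changes the model may request

def apply_model_output(raw: str) -> bool:
    """Validate a model reply against a strict schema before touching the DB.

    Anything malformed or out-of-scope is rejected, keeping the blast
    radius small. Returns True only if state actually changed."""
    try:
        cmd = json.loads(raw)
    except json.JSONDecodeError:
        return False
    if not isinstance(cmd, dict) or cmd.get("action") not in ALLOWED_ACTIONS:
        return False
    if cmd["action"] == "complete":
        cur = db.execute("UPDATE tasks SET done = 1 WHERE id = ?", (cmd.get("id"),))
        return cur.rowcount == 1  # hallucinated id updates nothing -> failure
    title = cmd.get("title")
    if not isinstance(title, str) or not title.strip():
        return False
    db.execute("INSERT INTO tasks (title) VALUES (?)", (title.strip(),))
    return True

# Stand-ins for model replies: only the valid command changes state.
assert apply_model_output('{"action": "complete", "id": 1}') is True
assert apply_model_output('{"action": "complete", "id": 999}') is False  # hallucinated task
assert apply_model_output('drop all the tasks') is False                 # malformed output
```

The DB stays authoritative either way; the model is just a suggestion source whose suggestions must survive validation.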