I recognize this part:
> I don’t recall what happened next. I think I slipped into a malaise of models. 4-way split-paned worktrees, experiments with cloud agents, competing model runs and combative prompting.
You’re trying to have the LLM solve a problem you don’t really know how to solve yourself, and when that fails you devolve into semi-random prompting in the hope that it’ll eventually succeed. This approach has two problems:
1. It’s not systematic. There’s no way to tell if you’re getting any closer to success. You’re just trying to get the magic to work.
2. When you eventually give up after however many hours, you haven’t succeeded, you have nothing to build on, and you’ve learned nothing. Those hours were completely wasted.
Contrast this with doing the work yourself from the start. You might still give up, but you’d understand the source code base better, perhaps the relationship between Perl and TypeScript too, and you might have some basics ported over that you could build on later.