I'm on the same page. Do people not analyze the problems themselves? Are they just copy/pasting their entire ticket description into Claude Code and having it iterate until they land on something that works?
I don't get it.
That'd be crazy. The agent has a skill configured to fetch ticket descriptions from Jira by itself. Copy-pasting feels like manual labor.
Not what I do. I reformulate the ticket description so that the purpose, and as many details about the solution as possible, are clear from the start. Then I tell Opus to research the relevant parts of the codebase and what needs to be done, and write its findings to a research.md file. I review that file, answer any open questions, and hash out more detail wherever things seem fuzzy. Once the research is sound, I ask Opus to produce a plan.md document that lists all the changes to be made as actionable steps (possibly broken into phases). Then I let Sonnet execute the steps one by one while I quickly review the changes as we go.
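For the curious, the loop can even be driven from a script. A minimal sketch, assuming the claude CLI's --model and -p (non-interactive) flags; the prompts and the TICKET.md name are illustrative, not my exact setup:

```python
import subprocess

def run_claude(model: str, prompt: str) -> None:
    # Non-interactive Claude Code run; -p prints the result and exits.
    subprocess.run(["claude", "--model", model, "-p", prompt], check=True)

# 1. Research: Opus explores the codebase and writes down what it finds.
run_claude("opus", "Research the parts of the codebase relevant to TICKET.md "
                   "and what needs to be done; write your findings to research.md.")

# 2. Human review happens here: answer open questions, firm up fuzzy parts.

# 3. Plan: turn the vetted research into actionable steps.
run_claude("opus", "Based on research.md, write plan.md listing every change "
                   "as an actionable step, possibly grouped into phases.")

# 4. Execute: the cheaper model works through the plan step by step.
run_claude("sonnet", "Execute the steps in plan.md one at a time, stopping "
                     "after each step so the diff can be reviewed.")
```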
> Are they just copy/pasting their entire ticket description into Claude Code and having it iterate until they land on something that works?
"Their ticket" = that was AI generated. After which they will wait their AI generated PR be checked by an automated AI QA that will validate against the AI generated spec.
It feels like an important metric of "corporate AI adoption" should be how effective the human is at steering the AI.
IF THE HUMAN ISN'T EFFECTIVE, THE HUMAN NEEDS TO GO.
You should.
If it manages to produce a working solution, then great! Why would you waste your time on it?
If it fails, that's also great! You prove your value by solving the ticket yourself, which makes a nice example of a human still prevailing over the AI (joke: AI companies might be interested in buying such examples).
(All assuming your time costs more than the token spend. Totally different story if your wage is less than the token cost.)
Actually no. We ask business analysts to supply documentation for whole products. We use AI to analyze that documentation, and after that we use AI to create tasks in Jira, which the business analysts then review.
After that we use AI to translate the tasks to a more technical view.
After that we use AI to implement the tasks.
After that we use AI to review the tasks.
After that a human QA tests the tasks.
If all is good, the code is merged and lands in production.
And yes, we burn a lot of tokens, but the process is very fast: it takes months instead of years.
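The pipeline above, as a toy sketch; run_ai and human_gate are hypothetical stand-ins, not a real API. The point is which stages are AI and where the human gates sit:

```python
def run_ai(instruction: str, artifact: str) -> str:
    # Stand-in for an LLM/agent call against your docs, Jira, or repo.
    return f"[AI output of: {instruction}]"

def human_gate(artifact: str, reviewer: str) -> str:
    # Stand-in for a human sign-off; the pipeline blocks until approved.
    print(f"{reviewer} reviews: {artifact}")
    return artifact

docs  = "product documentation supplied by business analysts"
tasks = run_ai("analyze the docs and create Jira tasks", docs)
tasks = human_gate(tasks, reviewer="business analyst")  # gate 1
spec  = run_ai("translate the tasks into a technical view", tasks)
code  = run_ai("implement the tasks", spec)
code  = run_ai("review the implementation", code)       # AI reviewing AI
code  = human_gate(code, reviewer="QA")                 # gate 2: human QA
# If all is good, the code is merged and lands in production.
```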
> Are they just copy/pasting their entire ticket description into Claude Code and having it iterate until they land on something that works?
There's also the pattern of spinning up an army of agents to solve problems. A human writes a plan. One agent elaborates on it. Another reviews it and makes changes. Another splits it into tasks and delegates them to multiple agents, who make the changes. Yet another agent reviews those changes, and on and on, all working around the clock.
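As a toy sketch (the agent() helper and the role names are hypothetical stand-ins for whatever framework is in use):

```python
from concurrent.futures import ThreadPoolExecutor

def agent(role: str, work: str) -> str:
    # Stand-in for a real agent invocation (Claude Code, etc.).
    return f"[{role} output for: {work}]"

plan = "human-written plan"
plan = agent("elaborator", plan)        # one agent fleshes the plan out
plan = agent("reviewer", plan)          # another reviews it and makes changes
tasks = [f"task {i}: {plan}" for i in range(4)]  # a third splits it into tasks

# Delegate the tasks to a pool of worker agents making changes in parallel.
with ThreadPoolExecutor() as pool:
    changes = list(pool.map(lambda t: agent("worker", t), tasks))

final = agent("change reviewer", "; ".join(changes))  # and on and on...
```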
> Are they just copy/pasting their entire ticket description into Claude Code and having it iterate until they land on something that works?
That is exactly what they are doing, yes