Hacker News

My AI-Assisted Workflow

65 points by maiobarbero, today at 7:08 AM | 47 comments | view on HN

Comments

crustycoder, today at 10:19 AM

He's also missed a major step, which is to feed your skill into the LLM and ask it to critique it. After all, it's the LLM that's going to act on it, so asking it to assess first is kinda important. I've done that for his skills; here's the assessment:

==========

  Bottom line
  Against the agentskills.io guidance, they look more like workflow specs than polished agent skills.
  The largest gap is not correctness. It is skill design discipline:

  - stronger descriptions,
  - lighter defaults,
  - less mandatory process,
  - better degraded-mode handling,
  - clearer evidence that the skills were refined through trigger/output evals.

  Skill           Score/10
  write-a-prd          5.4
  prd-to-issues        6.8
  issues-to-tasks      6.0
  code-review          7.6
  final-audit          6.3
==========

LLM metaprogramming is extremely important. I've just finished an LLM-assisted design doc authoring session where the LLM's own recommendation was "Don't use an LLM for that part, it won't be reliable enough".

didibear77, today at 11:49 AM

This looks a lot like the [BMad Method](https://github.com/bmad-code-org/BMAD-METHOD)

gbrindisi, today at 8:35 AM

This is pretty much a spec driven workflow.

I do something similar, but my favorite step is the first: /rubberduck, to discuss the problem with the agent, which the command instructs to help me frame and validate it. Hands down the most impactful piece of my workflow, because it gets me to the right level of clarity, and I can use it for non-coding tasks too.
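In Claude Code, a command like this would live as a markdown prompt file under `.claude/commands/`. A hypothetical sketch of what such a /rubberduck command file might contain (the filename and wording are my guesses, not gbrindisi's actual command; `$ARGUMENTS` is Claude Code's placeholder for the text typed after the command):

```markdown
<!-- .claude/commands/rubberduck.md — hypothetical reconstruction -->
You are my rubber duck. I will describe a problem: $ARGUMENTS

Do NOT try to solve it yet. Instead:
1. Ask clarifying questions until the problem statement is unambiguous.
2. Restate the problem in your own words and ask me to confirm.
3. Challenge my assumptions and point out missing constraints or edge cases.
4. Only once I confirm the framing, summarize the validated problem statement
   so it can be carried forward into a PRD or spec.
```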

After that it's the usual: write PRDs, specs, and tasks, then build, then verify the output.

I started with one of the spec frameworks and eventually simplified everything down to the bone.

I do feel it's working great, but some days I fear a lot of this might still be too much productivity theater.

yanis_t, today at 11:35 AM

The spec-driven approach is fun. I wonder at what point, if ever, we'll start committing only the specs into the git repo, while the actual code gets generated.

Obviously we're not there yet because of price, context, and non-determinism, but it's a nice area to experiment with.

lbreakjai, today at 11:06 AM

My workflow is quite similar, but it's leveraging Notion instead of markdown files.

https://github.com/tessellate-digital/notion-agent-hive

The main reason is we're already using Notion at work, and I wanted something where I could easily add/link to existing documents.

Sample size of one, but I've noticed a considerable improvement after adding a "final review" step, going through the plan and looking at the whole code change, over a naive per-task "implement-review" cycle.

nDRDY, today at 8:47 AM

Here's mine: code to spec until I get stuck -> search Google for the answer -> scan the Gemini result instead of going to StackOverflow.

Bossie, today at 9:29 AM

My workflow is also highly inspired by Matt's skills, but I'm leveraging Linear instead of Github.

/grill-me (back-and-forth alignment with the LLM) --> /write-a-prd (creates a project under an initiative in Linear) --> /prd-to-issues (creates issues at the project level). I'm making use of the blockedBy utility when registering the issues. They land in the 'Ready for Agent' status.

A scheduled project-orchestrator then picks up issues with this status, leveraging subagents. A HITL (human in the loop) status is set on the ticket when anything needs my attention. I consider the code to be the 'what', so I let the agent(s) update the issues with the 'how' and the 'why'. All on a Claude Code Max subscription.
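For reference, registering issues and blockedBy relations like this goes through Linear's GraphQL API (the `issueCreate` and `issueRelationCreate` mutations). A minimal sketch that only builds the request bodies; the input field names are my best-effort reading of Linear's public schema, the 'Ready for Agent' state id is hypothetical, and the direction of the `blocks` relation (which issue is `issueId` vs `relatedIssueId`) is an assumption worth verifying:

```python
import json

LINEAR_API = "https://api.linear.app/graphql"  # Linear's GraphQL endpoint

# Mutation names come from Linear's public GraphQL schema; the input
# field names below should be double-checked against it.
CREATE_ISSUE = """
mutation($input: IssueCreateInput!) {
  issueCreate(input: $input) { success issue { id identifier } }
}
"""

CREATE_RELATION = """
mutation($input: IssueRelationCreateInput!) {
  issueRelationCreate(input: $input) { success }
}
"""


def issue_payload(team_id: str, project_id: str, title: str, state_id: str) -> dict:
    """Request body for creating one issue directly in a given workflow
    state, e.g. the id of a 'Ready for Agent' state (hypothetical)."""
    return {
        "query": CREATE_ISSUE,
        "variables": {"input": {
            "teamId": team_id,
            "projectId": project_id,
            "title": title,
            "stateId": state_id,
        }},
    }


def blocked_by_payload(blocked_id: str, blocker_id: str) -> dict:
    """Request body for a 'blocks' relation: blocker_id blocks blocked_id.
    (Which side is issueId vs relatedIssueId is an assumption to verify.)"""
    return {
        "query": CREATE_RELATION,
        "variables": {"input": {
            "issueId": blocker_id,
            "relatedIssueId": blocked_id,
            "type": "blocks",
        }},
    }


if __name__ == "__main__":
    # These bodies would be POSTed to LINEAR_API with an
    # 'Authorization: <api key>' header.
    print(json.dumps(blocked_by_payload("issue-2-id", "issue-1-id"), indent=2))
```
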

Some notes:

- write-a-prd is knowledge compression and thus some important details occasionally get lost

- The UX for the orchestrator flow is suboptimal. Waiting for this actually: https://github.com/mattpocock/sandcastle/issues/191#issuecom...

- I might have to implement a simplify + review + security audit, call it a 'check', to fire at the end of the project. Could be in the form of an issue.

fpauser, today at 11:33 AM

> What is AI actually good at? Implementation.

AI is good at generating a lot of spaghetti code.

tim-projects, today at 10:46 AM

I automated a lot of this with a tool I wrote - https://github.com/tim-projects/tasks-ai

It's not perfect by any means, but it does the job, and fast. My code quality and output have increased from using it.

cg-enterprise, today at 11:22 AM

Did you compare your flow to superpowers/GSD?

troupo, today at 11:41 AM

I just use /brainstorming from https://github.com/obra/superpowers/tree/main

Then I tell it to write a high-level plan, and then run subagents to create detailed plans from each of the steps in the high-level one. All plans must include the what, the why, and the how.

Works surprisingly well, especially for greenfield projects.

You have to manually review the code though. No amount of agentic code review will fix the idiocy LLMs routinely produce.

throwatdem12311, today at 10:57 AM

Congratulations, you reinvented spec-kit.

zkmon, today at 9:03 AM

Congrats! You just rediscovered something called the waterfall model.

hansmayer, today at 8:57 AM

No kids, don't put yourself through this suffering. If you have to invest this much deliberate effort to sort of make it work, while you still handle the most tedious and boring parts yourself, then what is the point? Let's hold the LLM vendors to their word: they promised intelligent machines that would work so well they'd cause mass unemployment. Why on earth do we have to work around the LLMs to make them work? What is the point? Where is my nation of datacenter PhDs, or my PocketPhD, depending on which CEO's misleading statement one quotes?

imiric, today at 8:28 AM

Why is everyone compelled to write one of these articles? Do they think that their workflow is so unique that they've unlocked the secret to harnessing the power of a pattern generator? Every single one of these reads like influencer vomit.

My workflow hasn't changed since 2022: 1. Send some data. 2. Review response. 3. Fix response until I'm satisfied. 4. Goto 1.

pydry, today at 9:20 AM

>What is AI actually good at? Implementation. What is it genuinely bad at? Figuring out what you actually want

I've found it to be pretty bad at both.

If what you're doing is quite cookie-cutter, though, it can do a passable job of figuring out what you want.

progx, today at 8:12 AM

My AI-Results

consomida, today at 10:37 AM

[dead]

slopinthebag, today at 9:06 AM

[dead]