I need a version of this which swears loudly when an assumption it made turns out to be wrong, with the volume/passion/verbosity correlated with how many tokens it's burned on the incorrect approach.
Any chance you could add a video showcasing the plugin? I don't have any agentic app but I would love to see an example of what it does!
Marvelous!
Next innovation in this space should be the robotic arm that issues a dope-slap to the developer for writing crappy/buggy/insecure code.
I wish the agents could hear me when I have to suffer through their code!
I tried it but all I hear is a choir of angels, is it broken?
Please add Minecraft hurt sound effects for when my project fails to build, the linter fails, a segfault occurs, etc.
Unneeded when using local models, as every workload produces a novel pattern of coil whine from the GPU.
I wonder if it emits orgasmic moans when working with a particularly pleasurable codebase.
Please stop ascribing emotion to code that passably resembles speech.
These things do not think, nor feel, nor dream. We're cratering the world's economy because people can't stop trying to fuck the computer they stuck googly eyes on.
the scan catches surface stuff. funnier signal would be tracking when the agent reads the same file 3 times in a row, or deletes what it just wrote. you can hear the frustration in the access pattern.
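That access-pattern idea can be sketched. Everything here is hypothetical: the plugin doesn't expose a tool-call log in this shape, and the event names and threshold are made up for illustration.

```python
from collections import deque

def frustration_events(tool_calls, repeat_threshold=3):
    """Scan a (hypothetical) agent tool-call log, given as a list of
    (action, path) tuples, for 'frustration' patterns:
    - the same file read `repeat_threshold` times in a row
    - a file deleted immediately after it was written
    """
    events = []
    recent_reads = deque(maxlen=repeat_threshold)
    last_written = None
    for action, path in tool_calls:
        if action == "read":
            recent_reads.append(path)
            # Window full and every entry is the same file: a re-read loop.
            if (len(recent_reads) == repeat_threshold
                    and len(set(recent_reads)) == 1):
                events.append(("re-read", path))
        else:
            # Any non-read breaks the consecutive-read streak.
            recent_reads.clear()
        if action == "delete" and path == last_written:
            events.append(("write-then-delete", path))
        if action == "write":
            last_written = path
    return events
```

Feeding this the log `[("read", "a.py")] * 3 + [("write", "b.py"), ("delete", "b.py")]` would flag both patterns; how you map those events onto groan volume is left to the reader.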
this is wtfs per minute but now with AI! :all_the_things!:
https://www.osnews.com/story/19266/wtfsm/
I would really love to know whether the groaning decreases or increases the more "agentic" (agent-written) the codebase is.
Does this actually relate to the code quality being observed by the agent? The readme isn't very clear on that, IMO. I have some projects I'd love to try this out on, but only if I'm getting an accurate representation of the LLM's suffering.
That sounds like farts https://www.youtube.com/watch?v=m7mYzSZdNPE
From a quick look, this doesn't have the model evaluate code quality, but it runs a heuristic analysis script over the code to determine the groan signal. Did I miss something? Why not leave it to the model to decide the quality of the code?
Is somebody going to give you money to do this?
Why? I don't understand the objective of this.
Honestly, I don't care about Opus 4.7. This is the true evolution of agentic coding.
I made a robot that screams : https://www.reddit.com/r/ProgrammerHumor/comments/7rl2a0/i_m...
In the absence of real productive use cases for AI agents, I guess plugins to anthropomorphise them further will have to do.
Maybe I'm the person who yells at clouds, but I find the personification of LLMs, for lack of a better, less strong word, horrific.
Hi Hacker News, I'm Andrew, the CTO of Endless Toil.
Endless Toil is building the emotional observability layer for AI-assisted software development.
As engineering teams adopt coding agents, the next challenge is understanding not just what agents produce, but how the codebase feels to work inside. Endless Toil gives developers a real-time signal for complexity, maintainability, and architectural strain by translating code quality into escalating human audio feedback.
We are currently preparing our pre-seed round and speaking with early-stage investors who are excited about developer tools, agentic engineering workflows, and the future of AI-native software teams.
If you are investing in the next generation of software infrastructure, we would love to talk.