Pretty nice. I've been using LLMs to generate various Python and JS tools for wrangling data in ontology engineering.
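To give a sense of what those tools look like: most of them are small rdflib scripts that load an ontology and query or reshape it. A minimal sketch of that pattern (the file name and query here are illustrative, not code from OntoGSN):

    # Minimal sketch of the kind of data-wrangling helper I mean.
    # File name and query are illustrative placeholders.
    from rdflib import Graph

    g = Graph()
    g.parse("my_ontology.ttl", format="turtle")  # hypothetical local copy

    # A typical first step when exploring any ontology:
    # list every class together with its label.
    q = """
    PREFIX owl:  <http://www.w3.org/2002/07/owl#>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?cls ?label WHERE {
      ?cls a owl:Class ;
           rdfs:label ?label .
    }
    """
    for cls, label in g.query(q):
        print(cls, label)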
More recently, I've found a lot of benefit from using the extended thinking mode in GPT-5 and -5.1. It tends to provide a fully functional and complete result from a zero-shot prompt. It's as close as I've gotten to pair programming with a (significantly) more experienced coder.
One working example of that (with roughly 30-50% of my own coding, reprompting, and review) is my OntoGSN [1] research prototype. After a couple of weeks of work, it can handle the various integration, reasoning, and extension needs of people working in assurance, at least as I understood those needs. It's a human-AI collaboration I'm particularly proud of.
[1] Playground at w3id.org/OntoGSN/