My experience has been:
* If I don't know how to do something, LLMs can get me started really fast. Basically, it distills the time taken to research something down to a fraction of what it would otherwise be.
* If I know something well, I find myself guiding the LLM toward the best decisions. I haven't reached the point of completely letting go and trusting it, because the LLM doesn't make good long-term decisions.
* When working alone, I see the biggest productivity boost from AI; that's where I actually get things done.
* When working in a team, LLMs are not useful at all and can sometimes be a bottleneck. Not everyone uses LLMs the same way, and sharing context as a team is way harder than it should be. People don't want to collaborate, and people can't communicate properly.
* So for me, solo engineers and really small teams benefit the most from LLMs. Larger teams and organizations will struggle because there's simply too much human overhead to overcome. This matches what I'm seeing in posts these days.
The future of work is fewer human team members and way more AI assistants.
I think companies will need fewer engineers but there will be more companies.
Now: 100 companies who employ 1,000 engineers each.
What we are transitioning to: 1,000 companies who employ 100 engineers each.
What will happen in the future: 100,000 companies who employ 1 engineer each.
Same total number of engineers (100,000), spread across far more companies.
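The "same number of engineers" claim only holds if company count and team size scale inversely, so that their product stays constant. A quick sanity check of that arithmetic (figures are illustrative, chosen so every scenario totals 100,000 engineers):

```python
# Head-count arithmetic for the three scenarios: (label, companies, engineers per company).
# Figures are illustrative assumptions, adjusted so the product stays constant.
scenarios = [
    ("now",        100,     1_000),  # 100 companies x 1,000 engineers each
    ("transition", 1_000,   100),    # 1,000 companies x 100 engineers each
    ("future",     100_000, 1),      # 100,000 companies x 1 engineer each
]

for name, companies, per_company in scenarios:
    total = companies * per_company
    print(f"{name}: {companies:,} companies x {per_company:,} = {total:,} engineers")
```

Each line prints a total of 100,000 engineers; only the distribution across companies changes.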
We are about to enter an era of explosive software production, driven not by big tech but by small companies. And I don't think this will be limited to the software industry; I expect it to apply to every industry.
> LLMs can get me started really fast. Basically, it distills the time taken to research something

> the LLM doesn't make good long-term decisions
What could possibly go wrong with using something you know makes bad decisions as the basis for learning something new?
It's like a dietician, when a client asks how to cook the recommended meals, telling them to go watch the McDonald's kitchen staff.
To me the biggest benefit of LLMs has always been as a learning tool, be it for general queries or "build this so I can get an idea of how it works and get started quickly". There are so many little things you need to know when trying anything new.
I suspect the real breakthrough for teams won't be better raw models, but better ways to make the "AI-assisted thinking" legible and shareable across the group, instead of trapped in personal prompt histories.