5 years ago: ML-auto-complete → You had to learn coding in depth
Last Year: AI-generated suggestions → You had to be an expert to ask the right questions
Now: AI-generated code → You should learn how to be a PM
Future: AI-generated companies → You must learn how to be a CEO
Meta-future: AI-generated conglomerates → ?
Recently I realized that instead of just learning technical skills, I need to learn management skills: project management, time management, writing specifications, setting expectations, writing tests, and in general orchestrating an entire workflow. And I think this will only keep shifting to higher levels of the management hierarchy. For example, in the future we may have AI models that can one-shot an entire platform like Twitter. At that point the question is less about how to handle a database and more about how to handle several AI-generated companies!
While we're at the project manager level now, in the future we'll be at the CEO level. It's an interesting thing to think about.
I've never understood this train of thought. When working in teams and for clients, people always have questions about what we have created. "Why did you choose to implement it like this?" "How does this work?" "Is X possible to do within our timeframe/budget?"
If you become just a manager, you don't have answers to these questions. You can just ask the AI agent for the answer, but at that point, what value are you actually providing to the whole process?
And what happens when, inevitably, the agent responds to your question with "You're absolutely right, I didn't consider that possibility! Let's redo the entire project to account for this"? How do you communicate that to your peers or clients?
If AI gets to be this sophisticated, what value would you bring to the table in these scenarios?
The moment we have a true AGD (artificial general developer), we’ll also have an AGI that can serve equally well as a CEO. Where humans sit at that point will no longer be a question of intellectual skill differentiation among humans.
I'd advise caution with this approach. One thing I see a lot of people get wrong about AI is the expectation that they no longer need to understand the tools they're working with: "I can just focus on the business end." This seems true, but it isn't. It's actually more important to have a deep understanding of how the machine works, because if the AI is doing things you don't understand, you run a severe risk of putting yourself in a very bad situation: insecure applications or servers, code whose failure modes are catastrophic edge cases you won't catch until they're a problem, data loss or leakage.
If anything, managing the project, writing the spec, setting expectations, and writing tests are things LLMs are incredibly well suited for. Getting their work 'correct', and not merely 'functional enough that you don't know the difference', is where they struggle.
One-shot doesn't mean what you think it means.

One-shot means you provide one full question/answer example (from the same distribution) in the context given to the LLM.
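To make the distinction concrete, here's a minimal sketch of how the example count defines zero-/one-/few-shot prompting. The helper name and prompt format are illustrative, not any specific library's API:

```python
def build_prompt(task: str, examples: list[tuple[str, str]]) -> str:
    """Build a prompt with k worked Q/A examples before the actual task.

    k == 0 -> zero-shot, k == 1 -> one-shot, k > 1 -> few-shot.
    """
    parts = []
    for question, answer in examples:
        # Each example is a complete question/answer pair from the
        # same distribution as the task we want the model to solve.
        parts.append(f"Q: {question}\nA: {answer}")
    # The actual task goes last, with the answer left for the model.
    parts.append(f"Q: {task}\nA:")
    return "\n\n".join(parts)

# One-shot: exactly one complete Q/A example in the context.
one_shot = build_prompt(
    "Translate 'good night' to French.",
    examples=[("Translate 'hello' to French.", "bonjour")],
)

# Zero-shot, by contrast, includes no examples at all.
zero_shot = build_prompt("Translate 'good night' to French.", examples=[])
```

So "one-shot an entire platform" in the original post is really using the colloquial sense ("get it right on the first attempt"), not the ML sense.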
> more about how to handle several AI generated companies!
The cost of a model capable of running an entire company will be multiples of the market cap of the company it is capable of running.
No, no companies and no CEOs. Just a user. It's like the Star Trek replicator: food replication. No, you are not a chef, not a restaurant manager, not an agrifarm CEO, just a user who orders a meal. So yes, you will need "skills" to specify the type of meal, but nothing beyond that.
>While we're at the project manager level now, in the future we'll be at the CEO level.
This is the kind of half-baked thought that seems profound to a certain kind of tech-brained poster on HN but, upon further consideration, makes absolutely zero sense.