Wrong way to look at it.
Generally there are two types of human intelligence: simulation and pattern lookup (technically simulation still relies on pattern lookup, but at a much lower level).
Pattern lookup is basically what LLMs do. Humans memorize maps of tasks -> solutions and statistically interpolate that knowledge to perform a particular task. This works well enough for the vast majority of people, which is why LLMs are seen as such a big help: they effectively expand your pattern library.
Simulation-type intelligence is able to break a task down into its core components, understand how each component interacts, and predict outcomes into the future, without prior knowledge of the task.
For example, assume a task of cleaning the house:
Pattern lookup would rely on learned experience taught by parents, plus past experience cleaning houses, to choose an action. You would probably use a duster and a generic cleaner to wipe surfaces, and vacuum the floors.
Simulation-type intelligence would understand how much dirt and dust there is and how it behaves. For example, instead of a duster, one would realize that a wet towel can gather dust, without ever having seen this done before.
Here is the kicker: pattern-type intelligence is actually much harder to attain, because it requires really good memorization, which is pretty much genetic.
Simulation-type intelligence is attainable by anyone: it requires a much smaller set of patterns to memorize. The key factor is changing how you think about the world, which requires realigning your values. If you start to value low-level understanding, you naturally develop this kind of intelligence.
For example, what would it take for you to completely take your car apart, figure out how every component works, and put it back together? Many of you have a garage, the tools, and the money to spend on a cheap car, so doing this in your spare time is practical. It will give you the ability to buy an older used car, do all the maintenance and repairs on it yourself, and have something that works well for a lower price, while also giving you a monetizable skill.
Furthermore, LLMs can't reason by simulation. You can get close with agentic frameworks, but all of those are manually coded and have limits, and we aren't close to a generic framework for an agent that can look up information, run internal models of how things would work, and so on.
So finally, when it comes to competing: if you choose to stick to pattern-based intelligence and you lose your job to someone who can use LLMs better, that's your fault.
At the longest timescale, humans aren't the best at either.
I have yet to see a compelling argument demonstrating that humans have some special capability that could never be replaced.