It means trying to figure out how to build an intelligence always loses to mindlessly brute-forcing problems with more compute.
Unless compute isn't unlimited, at which point you need other ideas.
Well, it means that thus far trying to build an intelligence has lost out to brute-forcing it with more compute. There is nothing in particular that suggests this is infinitely scalable.
They have researchers working for insane salaries just so they don't go to another frontier lab and share their ideas. If you think it is just "mindless brute force," you don't understand anything. The idea is that the most effective methods are the ones that scale, but those methods are then limited by the compute available.
It's not mindless brute-forcing: the details of the architecture, data, and training strategy still matter a lot (if you gave a modern datacenter to an AI researcher from the '60s, they wouldn't get an LLM very quickly). The bitter lesson is that you should focus on adjusting your techniques so that they can take advantage of processing power to learn more about the problem themselves, instead of trying to hand-craft half the solution yourself to "help" the part that's learning.
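A toy sketch of that contrast (my own illustration, not from Sutton's essay): a hand-crafted guess for a line fit is frozen at whatever the "expert" baked in, while a general learning method (here, plain gradient descent) keeps improving as you spend more compute on it. All names and numbers below are made up for the example.

```python
import random

random.seed(0)

# Toy problem: noisy samples of y = 2x + 1.
data = [(x, 2 * x + 1 + random.gauss(0, 0.1))
        for x in (i / 10 for i in range(-20, 21))]

def loss(w, b):
    """Mean squared error of the line y = w*x + b on the data."""
    return sum((w * x + b - y) ** 2 for x, y in data) / len(data)

def learn(steps, lr=0.05):
    """General method: gradient descent. More compute (steps) -> better fit."""
    w, b = 0.0, 0.0
    for _ in range(steps):
        gw = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
        gb = sum(2 * (w * x + b - y) for x, y in data) / len(data)
        w -= lr * gw
        b -= lr * gb
    return w, b

# Hand-crafted "expert" guess, fixed once and unable to improve:
handcrafted = loss(1.8, 0.9)

# The learned solution improves as more compute is spent:
cheap = loss(*learn(10))
expensive = loss(*learn(1000))
```

The point isn't that the learner is smarter on day one (`cheap` may well be worse than `handcrafted`); it's that only the learner has a knob that turns compute into quality.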