I’m starting to think “The Bitter Lesson” is a clever-sounding way to throw shade at people who failed to nail it on their first attempt. Usually engineers build much more technology than they actually end up needing; the extras shed off with time and experience (and often you end up building them again from scratch). It’s not clear to me that starting from “just build something that scales with compute” would get you closer to the perfect solution, even if, as you approach it, you do indeed make it possible to throw more compute at the problem.
That said, the hand-coded nature of tokenization certainly seems in dire need of a better solution, something that can be learned end to end. And it looks like we are getting closer with every iteration.
I'm starting to think that half the commenters here don't actually know what "The Bitter Lesson" is. It's purely a statement about the history of AI research, made in a very short essay by Rich Sutton: http://www.incompleteideas.net/IncIdeas/BitterLesson.html

It's not some general statement about software engineering for all domains, but a very specific statement about AI applications. It's an observation that the previous generation's careful algorithmic work to solve an AI problem ends up being obsoleted by this generation's brute-force approach using more computing power. It's something that's happened over and over again in AI, and has happened several times even since 2019, when Sutton wrote the essay.
The bitter lesson says more about medium-term success at publishable results than it does about genuine scientific progress or even success in the market.
The Bitter Lesson is specifically about AI. Restated, the lesson is that over the long run, methods that leverage general computation (brute-force search and learning) consistently outperform systems built on extensive human-crafted knowledge. Examples: chess, Go, speech recognition, computer vision, machine translation, and on and on.