OK--like the other commenter, I also feel a little stupid replying to zingers, but here goes.
First of all, I think a lot of the issue here is the baggage around the word "intelligence"--I suspect because believing machines can be intelligent cuts against the core belief people hold that humans are special. This isn't meant as a personal attack; I just think that baggage clouds thinking.
Intelligence of an agent is a spectrum, not a yes/no. I suspect most people would not balk at my saying that ants and bees exhibit intelligent behavior when they forage and communicate with one another. We infer this from the complexity of their route planning, survival strategies, and ability to adapt to new situations. Now, I assert that those same strategies can not only be learned by modern ML but are often even hard-codable. Since I view intelligence as a measure of an agent's behavior in a system, such a measure should not distinguish between the bee and my hard-wired agent. For me this means hard-coded things can be intelligent, because they can mimic bees (and, with enough code, humans).
However, the distribution of behaviors that humans inhabit is prohibitively difficult to code by hand, so we rely on data-driven techniques to search for such distributions in a space rich enough to support complexity at the level of the human brain. As such, I have no reason to believe that an agent must be less intelligent than humans just because I can train it. On the contrary, I believe that in every verifiable domain RL must drive the agent to be the most intelligent it can be (relative to the RL reward) under its constraints--and often that means becoming more intelligent than humans in that environment.
Eh...kinda. The RL in RLHF is a very different animal from the RL in a Waymo training pipeline, which becomes obvious when you notice that the former can be done by anyone with some clusters and some talent, while the latter is so hard that even Waymo has a marked preference for operating in July in Chandler, AZ. Everyone else is busy explaining why they never really wanted Level 5 per se anyway: all brakes, no gas, if you will.
The Q-value functions estimated/approximated by deep networks are famously unstable and ill-behaved under gradient-descent optimization in the general case (the "deadly triad" of function approximation, bootstrapping, and off-policy learning), and it's not at all obvious that "point RL at it" is going to work at all. You get stability and convergence issues, you get stuck in local minima; it's hard and not yet a mastered art--lots of "midway between alchemy and chemistry" vibes.
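To make the instability concrete, here's a toy sketch of the classic "w, 2w" divergence example (after Tsitsiklis & Van Roy)--off-policy TD with linear value approximation blowing up in a two-state deterministic MDP. Constants and variable names below are my own, chosen for illustration:

```python
# Toy "w, 2w" divergence example: off-policy TD(0) with linear value
# approximation V(s) = w * phi(s). Two states: phi(s1) = 1, phi(s2) = 2;
# s1 -> s2 deterministically, reward 0, gamma = 0.99.
gamma, alpha = 0.99, 0.1
w = 1.0
history = [w]
for _ in range(100):
    # TD error sampled only at s1: delta = 0 + gamma * V(s2) - V(s1)
    delta = gamma * 2 * w - w        # = (2*gamma - 1) * w > 0
    w += alpha * delta * 1.0         # semi-gradient step; phi(s1) = 1
    history.append(w)
# w multiplies by (1 + alpha*(2*gamma - 1)) ~ 1.098 every step: divergence
print(w)
```

Every bootstrapped update makes the value estimate strictly larger, so the "optimization" runs away instead of converging--the same failure mode that deep Q methods have to engineer around with target networks and friends.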
The RL in RLHF is more like Learning to Rank in a newsfeed-optimization setting: it's (often) ranked-choice aggregation over human preference ratings, with outcomes that are remarkably stable across raters. This phrasing is a little cheeky, but it gives the flavor: it's Instagram where the reward is "call it professional and useful" instead of "keep clicking."
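For flavor, the reward-model half of RLHF really does reduce to pairwise learning-to-rank: a Bradley-Terry-style logistic loss on the score gap between a preferred and a rejected response. A minimal sketch (the function name is mine, not any particular library's API):

```python
import math

# Pairwise preference loss used to train RLHF reward models:
# -log sigmoid(score_chosen - score_rejected). Driving this down
# pushes the model to score human-preferred responses higher.
def preference_loss(score_chosen, score_rejected):
    margin = score_chosen - score_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Correctly ranked pair -> small loss; inverted pair -> large loss.
print(preference_loss(2.0, 0.0))  # ~0.127
print(preference_loss(0.0, 2.0))  # ~2.127
```

Nothing here requires the machinery that makes robotics-style RL hard (exploration, credit assignment over long horizons, a non-stationary world)--it's a supervised ranking problem wearing an RL hat.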
When the Bitter Lesson essay was published, it was contrarian, important, and above all aimed at an audience of expert practitioners. The Bitter Bitter Lesson of 2025 is that if it looks like you're in the middle of an exponential process, wait a year or two and the sigmoid will become clear--and we're already there with the LLM stuff. Opus 4 takes 30 seconds on the biggest cluster billions can buy, and they've stripped off something like 90% of the correctspeak alignment to get that capability lift. We're hitting the wall.
Now, this isn't to say that AI progress is over--new stuff is coming out all the time--but "log scale and a ruler" math is marketing at this point. This was a sigmoid.
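The "log scale and a ruler" point can be sketched numerically: the log of a logistic curve is nearly linear before the inflection and flat after it, so a straight ruler on a log plot can't distinguish exponential from sigmoid until you're past the bend. Parameters below are arbitrary, for illustration only:

```python
import math

# A logistic (sigmoid) curve looks exponential early on: its log is
# nearly linear until the inflection at t0, then goes flat.
def logistic(t, L=1.0, k=1.0, t0=10.0):
    return L / (1.0 + math.exp(-k * (t - t0)))

# Slope of log(curve) well before vs. well after the inflection:
early_slope = math.log(logistic(2)) - math.log(logistic(1))   # ~1.0: looks like clean exponential growth
late_slope = math.log(logistic(19)) - math.log(logistic(18))  # ~0.0002: the curve has gone flat
print(early_slope, late_slope)
```

The trouble, of course, is that the early data alone can't tell you which regime you're in--you only find out where the inflection was after you've passed it.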
Edit: don't take my word for it--this is LeCun (who, I will remind everyone, has a Turing Award) giving the Gibbs Lecture, a 10,000-foot view of the mathematics: https://www.youtube.com/watch?v=ETZfkkv6V7Y
So according to your extremely broad definition of intelligence, even a Casio calculator is intelligent?
Sure, if we define everything as intelligent, then AI is intelligent.
Is that definition actually helpful, though?