
ViewTrick1002 | yesterday at 4:36 PM

Now try a different language. My take is that it's heavy RL tuning that fixes these "gotchas", since the underlying model can't do it on its own.

OpenAI is working on ChatGPT the application and its ecosystem. They have transitioned from model building to software engineering: RL tuning and integrating various services to solve the problems the model can't handle on its own. Make it feel smart rather than be smart.

This means that as soon as you hit a problem that takes you outside the guided experience, you get the raw model again, which fails when it encounters these "gotchas".

Edit: here's an example where we see a heavily RL-tuned experience in English, where a whole load of context on how to solve the problem is injected, while the Swedish prompt for the same word fails.

https://imgur.com/a/SlD84Ih
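The tokenizer gotcha behind letter-counting questions can be sketched in a few lines. This is a toy illustration, not a real tokenizer: the vocabulary and the token splits for "strawberry" and Swedish "jordgubbe" are invented for the example. The point is that the model operates on opaque token IDs, not characters, so counting letters requires knowledge the IDs don't carry.

```python
# Toy greedy tokenizer (invented vocabulary, for illustration only).
# A model never sees the characters of a word, only the token IDs,
# which is why "how many r's are in X?" is a classic gotcha.
TOY_VOCAB = {
    "straw": 101, "berry": 102,   # assumed split of "strawberry"
    "jord": 201, "gubbe": 202,    # assumed split of Swedish "jordgubbe"
}

def toy_tokenize(word, vocab):
    """Greedy longest-match split against the toy vocabulary."""
    tokens, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):
            if word[i:j] in vocab:
                tokens.append(vocab[word[i:j]])
                i = j
                break
        else:
            raise ValueError(f"no token for {word[i:]!r}")
    return tokens

# What the model "sees" for each word: opaque integer IDs.
print(toy_tokenize("strawberry", TOY_VOCAB))  # [101, 102]
print(toy_tokenize("jordgubbe", TOY_VOCAB))   # [201, 202]

# Counting letters is trivial on the character string...
print("strawberry".count("r"))  # 3
# ...but [101, 102] carries no character-level information, so the
# model must have memorized the spelling or been tuned to work it out.
```

Since English and Swedish words split into entirely different tokens, RL tuning that teaches the model to handle the English case says nothing about the Swedish one, which is consistent with the screenshots above.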


Replies

ACCount37 | yesterday at 9:10 PM

You can tell it "be careful about the tokenizer issues" in Swedish and see how that changes the behavior.

The only thing that this stupid test demonstrates is that LLM metacognitive skills are still lacking. Which shouldn't be a surprise to anyone. The only surprising thing is that they have metacognitive skills, despite the base model training doing very little to encourage their development.
