
numpad0 · today at 6:39 PM

Personally, I don't bother prompting LLMs in Japanese, AT ALL, since I'm functional enough in English (a low bar, as my comment history shows) and because they behave a lot stupider otherwise. Japanese is always the extreme example for everything, but yes, it would be believable to me if merely normalizing input by translating it to English first just worked.

What would be interesting, then, would be to find out what the composite function of translator + executor LLMs looks like. These behaviors make me wonder whether modern transformer LLMs are actually ELMs, English Language Models. Because otherwise there'd be, like, dozens of fully functional, 100% pure-French-trained LLMs, and there aren't.
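To make the composition concrete, here's a toy Python sketch of what I mean, answer(x) = executor(translate(x)). Both inner functions are hypothetical stubs standing in for real models, not any actual API:

    # Toy sketch of the composite: answer(x) = executor(translate(x)).
    # Both inner functions are hypothetical placeholders, not real library calls.

    def translate_to_english(text: str) -> str:
        # Stand-in for a dedicated MT step (or a cheap LLM translation call).
        return text  # stub: pretend the input came back as English

    def run_executor(prompt: str) -> str:
        # Stand-in for the English-centric "executor" model.
        return f"(model output for: {prompt})"  # stub

    def answer(query: str) -> str:
        # Normalize the input into English first, then let the main model work.
        return run_executor(translate_to_english(query))

    print(answer("東京の天気は?"))

If translating first really does recover most of the lost capability, then the interesting question is what this pipeline's failure modes look like compared to prompting the model natively.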