This was long before. Google had conversational LLMs before ChatGPT (though, as I recall, they weren't as good), and they declined to productize them. There was a sense at the time that you couldn't productize anything with truly open-ended content generation, because you couldn't guarantee it wouldn't say something problematic.
See Meta's Galactica project for an example of what Google was afraid would happen: https://www.technologyreview.com/2022/11/18/1063487/meta-lar...
I'm having a hard time believing this, or at least understanding the decision (I don't doubt your account, just Google's reasoning). Why wouldn't they just continue R&D on it rather than drop it entirely?
Many products we use every day start out unsafe in their early stages. Why would this be any different?
And why allow the paper to be published?