HN submissions from that era have plenty of examples, but it's worth remembering these models were released as "look at this somewhat cool and potentially useful stuff" rather than marketed as tools, the way LLMs are today.
https://news.ycombinator.com/item?id=21454273 / https://news.ycombinator.com/item?id=19830042 - OpenAI Releases Largest GPT-2 Text Generation Model
HN search for GPT between 2018 and 2020 turns up lots of results and lots of discussion: https://hn.algolia.com/?dateEnd=1577836800&dateRange=custom&...
Wild how many people were predicting the AI slop back then, but it was dismissed as unlikely beyond some trolls.
I still think of "The Unreasonable Effectiveness of Recurrent Neural Networks" and related writings.
http://karpathy.github.io/2015/05/21/rnn-effectiveness/