
snek_case · yesterday at 3:00 PM

You can work on building LLMs that use less compute and run locally as well. There are some pretty good open models, and they could probably be made even more computationally efficient.
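As a rough illustration of the kind of efficiency work involved, here is a minimal sketch of symmetric int8 weight quantization, one of the main techniques local LLM runtimes use to shrink models; the values and function names are illustrative, not from any particular model or library:

```python
# Sketch of symmetric int8 quantization: store weights as 1-byte integers
# plus one float scale, instead of 4-byte floats (~4x smaller in memory).

def quantize_int8(weights):
    """Map float weights to int8 values in [-127, 127] plus a scale."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 values."""
    return [v * scale for v in q]

weights = [0.12, -0.53, 0.98, -0.07]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
print(q, scale)
```

The dequantized values only approximate the originals, which is the usual trade: a small accuracy loss in exchange for a model that fits in far less RAM and runs on consumer hardware.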