Hacker News

impulser_ · yesterday at 9:17 PM

Local models are always going to be useless unless compute gets significantly cheaper, and it isn't. TSMC might literally run out of capacity to build any consumer compute product.

Once compute constraints ease up, you will see much larger models. The reason LLM progress seems to have stalled a bit is that there just isn't enough compute.

You have more people using AI, which requires more compute; you want to build larger models, which also requires more compute; and you have limited compute. What do you do?