> It proves that modern LLMs can run without Python, PyTorch, or GPUs.
Did we need any proof of that?
I guess llama.cpp isn't quite as popular as I had assumed.
A bizarre claim like that is what you get when you let an LLM write the README without reading it first.
The performance numbers are interesting to know, though. Apparently it's 1-3 tokens/second.
I imagine so regarding GPUs, right? If this is a legitimate project, then doesn't it provide a proof of concept for the performance constraints that relate to them? Couldn't the environmentally concerned take this as an indicator that the technology can progress without relying on as much energy as is potentially spent now? Shouldn't researchers in the industry be thinking of ways to prevent the future capabilities of the technology from outrunning the capacity of the infrastructure?
I know very little about AI, but these are the things that come to mind for me here.
Python and PyTorch both call out to C libraries anyway… I don't get what he means by "proving LLMs can run without Python and PyTorch" at all. It seems like they don't understand the basic fundamentals here…
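To make that point concrete, here's a rough way to check it yourself (a sketch assuming a standard PyTorch install; the exact output depends on your build): the heavy lifting in PyTorch is compiled C++/native code, and Python is just the front end dispatching into it.

```python
# Rough check that PyTorch's core ops are compiled native code, not Python.
# (Illustrative only; exact output varies by PyTorch version and build.)
import torch

# torch.add is a C++-backed binding, not a Python-level function:
print(type(torch.add))          # e.g. <class 'builtin_function_or_method'>

# Build info lists the C++ compiler, BLAS backend, CUDA support, etc.
# that PyTorch was compiled with:
print(torch.__config__.show())
```

So "running an LLM without Python or PyTorch" mostly means skipping the Python front end, not escaping native code; the kernels were never Python to begin with.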