Hacker News

w4yai · yesterday at 7:35 PM · 5 replies

> It proves that modern LLMs can run without Python, PyTorch, or GPUs.

Did we need any proof of that ?


Replies

jdefr89 · yesterday at 8:15 PM

Python and PyTorch both call out to C libraries under the hood… I don’t get what they mean by “proving LLMs can run without Python and PyTorch” at all. It seems like they don’t understand the basic fundamentals here…
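(The point that CPython routinely dispatches to native code can be checked directly: functions from C extension modules have no Python source for `inspect` to retrieve. A minimal sketch using the stdlib `math` module as a stand-in for PyTorch's C++ kernels:)

```python
import inspect
import math

# math.sqrt is a builtin implemented in C; inspect.getsource has no
# Python source to return for such objects and raises TypeError.
try:
    inspect.getsource(math.sqrt)
    implemented_in_c = False
except TypeError:
    implemented_in_c = True

print("math.sqrt implemented in C:", implemented_in_c)
```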

jasonjmcghee · yesterday at 7:55 PM

I guess llama.cpp isn't quite as popular as I had assumed.

christianqchung · yesterday at 10:04 PM

A bizarre claim like that would be what happens when you let an LLM write the README without reading it first.

skybrian · yesterday at 7:39 PM

Knowing the performance is interesting. Apparently it's 1-3 tokens/second.
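(For context on how such a figure is measured: decode throughput is just tokens generated divided by wall-clock time over sequential forward passes. A hypothetical timing helper — the `time.sleep` below is a stand-in for a real CPU forward pass of about 0.5 s per token, i.e. roughly 2 tokens/s:)

```python
import time

def tokens_per_second(step_fn, n_tokens=4):
    """Run n_tokens sequential decode steps and return the rate."""
    start = time.perf_counter()
    for _ in range(n_tokens):
        step_fn()  # one forward pass produces one token
    return n_tokens / (time.perf_counter() - start)

# stand-in for a CPU forward pass taking ~0.5 s per token
rate = tokens_per_second(lambda: time.sleep(0.5))
print(f"{rate:.1f} tokens/s")
```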

tolerance · yesterday at 7:59 PM

I imagine so regarding GPUs, right? If this is a legitimate project, then doesn’t it provide a proof of concept for the performance constraints that relate to them? Couldn’t the environmentally concerned take this as an indicator that the technology can progress without relying on as much energy as is potentially spent now? Shouldn’t researchers in the industry be thinking of ways to keep the future capabilities of the technology from outrunning the capacity of the infrastructure?

I know very little about AI but these are things that come to mind here for me.
