Hacker News

rent0pat · yesterday at 8:22 PM

I've looked at the "only 18,935 lines of code" Python code and it made me want to poke my eyes out. I'm not sure what the point of this extreme code golfing is.


Replies

mellosouls · today at 6:25 AM

From the instructions for potential contributors:

No code golf! While low line count is a guiding light of this project, anything that remotely looks like code golf will be closed. The true goal is reducing complexity and increasing readability, and deleting \ns does nothing to help with that.

https://github.com/tinygrad/tinygrad?tab=readme-ov-file#cont...

xiphias2 · yesterday at 11:58 PM

I think it's an amazing experiment.

You can look at the PyTorch code base and understand a small local function instantly, but if I had to gain a deep understanding of either PyTorch (with all its kernel code and low-level code, the whole CUDA code base, plus the LLVM compilation code) or tinygrad, I would pick tinygrad in an instant.

The code looks hard because what it is doing is hard, but all its layers can be debugged.
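As a rough illustration of that debuggability (a minimal sketch, assuming tinygrad's documented Tensor API and its DEBUG environment variable; the file name example.py is just made up), you can watch what it generates for a trivial op:

    # example.py -- run as `DEBUG=2 python example.py` for per-kernel stats;
    # higher DEBUG levels dump the generated kernel code.
    from tinygrad import Tensor

    a = Tensor([1.0, 2.0, 3.0])
    b = (a * 2 + 1).realize()  # force the lazy graph to compile and run
    print(b.numpy())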

Barrin92 · yesterday at 8:38 PM

Yes, it's really crazy. If people think you're exaggerating, look at this:

https://github.com/tinygrad/tinygrad/blob/master/tinygrad/co...

This is one of the reasons I think obsessing over lines of code is one of the most counterproductive metrics; it always produces code like this.

webdevver · today at 12:50 AM

Down with large line counts!

LLMs have done two things for us: they've made lines of code a commodity, and, arguably even more important, they've made understanding an arbitrary number of lines of code a commodity too.

With this in hand, large codebases that were designed in the primordial era of requiring human cognition to grok are embarrassingly antiquated.

"umm what does this mean?" -> right click -> explain with AI -> "got it".

I literally reddit-soyjacked when I saw "tackle LLVM removal". I would argue that LLVM was/is the most well thought out and well-architected piece of software that exists today... but in the age of AI, who could possibly care now? When I say "well thought out" and "well-architected", that is in the context of human eyeballs. But who needs human eyeballs when we have much more powerful computer eyeballs that grok code way better and way faster than we do?

Now, the massive line count of projects like LLVM, Linux, and Chrome becomes much harder to defend in the context of code re-use. Why drag in millions of lines of code when you can get an LLM to write just the bits you care about? LLVM has a ton of optimisations that are all well and good, but how many of them actually move the needle on HPC platforms that do an awful lot of instruction re-ordering and re-scheduling anyway?

Could we get away with 5% of the codebase to do "just the bits we care about"? In the past the answer might have been "well yes, but it's still a lot of (manual) coding", but now, why not? Let an LLM loose on the design doc and fill in the gaps!
