Hacker News

spion · yesterday at 10:15 PM

Has anyone measured whether doing things with AI leads to any learning? One way to do this is to measure whether subsequent related tasks show improvements in time-to-functional-result, as a % improvement, with and without AI. Additionally, two more datapoints can be taken: with-AI -> without-AI, and without-AI -> with-AI.


Replies

somethingsome · yesterday at 11:51 PM

I'm only a single data point, but some years ago I spent a whole year working through a mathematics book above my level at the time. It was painful and I only grasped parts of it.

I went through the same book again this year, this time spending a lot of time questioning an LLM about concepts I couldn't grasp: copy-pasting sections of the book and asking it to rewrite them for my understanding, asking for quick visualization scripts for concepts, asking it to give me corrected examples and concrete examples, to link several chapters together, etc.

It was still painful, but in 2 months (~8-10h a day) I covered the book in far more detail than I ever could years ago.

Of course, I still had some memory of the content from back then, and I was better prepared, having studied other things in the meantime. Also, the model sometimes gives bad explanations and bad links, so you must stay really critical of the output (the same goes for plotting code).

But I missed a lot of deep insights years ago, and now, after two months, everything is perfectly clear.

The ability to create instant plots for concepts I was trying to learn was invaluable, then asking the model to twist the plot, change the data, use some other method, compare methods, etc.
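A hypothetical sketch of the kind of quick visualization script described above (the topic, Taylor polynomials of sin(x), is my illustrative choice, not something the commenter names):

```python
# Illustrative example of an LLM-style "instant plot" for a math concept:
# how Taylor polynomials of sin(x) converge as more terms are added.
from math import factorial

import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen; no display needed
import matplotlib.pyplot as plt

x = np.linspace(-np.pi, np.pi, 400)

def taylor_sin(x, n_terms):
    """Partial sum of the Taylor series of sin(x) around 0."""
    return sum((-1) ** k * x ** (2 * k + 1) / factorial(2 * k + 1)
               for k in range(n_terms))

fig, ax = plt.subplots()
ax.plot(x, np.sin(x), "k", linewidth=2, label="sin(x)")
for n in (1, 2, 3, 5):
    ax.plot(x, taylor_sin(x, n), label=f"{n} term(s)")
ax.set_ylim(-2, 2)
ax.legend()
fig.savefig("taylor_sin.png")

# Maximum error on [-pi, pi] shrinks rapidly as terms are added
errors = {n: float(np.max(np.abs(taylor_sin(x, n) - np.sin(x))))
          for n in (1, 2, 3, 5)}
print(errors)
```

From here, "twisting the plot" is just a follow-up prompt: change the interval, compare against a Chebyshev fit, plot the error on a log scale, and so on.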

Note: for every part, once I finally grasped it, I rewrote it in my own notes and style, and often asked the model to critique my notes and improve them a bit. But all the concepts I wrote down, I truly understand deeply.

Of course, this is not coding, but for learning at least, LLMs were extremely helpful for me.

From this experiment I would say at least a 6x speedup.

epolanski · yesterday at 11:53 PM

Honestly I feel I have never learned as much as I do now.

LLMs remove quite a lot of fatigue from my job. I am a consultant/freelancer, but even as an employee, large parts of my job were not writing the code, but taking notes and jumping from file to file to connect the dots. Or trying to figure out the business logic of some odd feature. Or the endless googling for answers buried deep inside some GitHub issue, or figuring out some advanced regex or Unix tool pattern. Or writing plans around the business logic and implementation changes.

LLMs removed the need for most of that, which means I'm less fatigued when it comes to reading code and focusing on architectural and product work. I can experiment more, and I have the mental energy to do some leetcode/codewars exercises where, incidentally, I'll also learn things by comparing my solution to others' that I can then apply back to my own code. I am less bored and fatigued by the details, and I can spend more time focusing on the design.

If I want to learn about some new tool or database, I'm less concerned with the details of setting it up, exploring its features, or reading outdated, poorly written docs, when I can clone the entire project into a git subtree and give the source code to the LLM, which can answer me by reading the signatures, implementations, and tests.
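The subtree workflow mentioned above could look roughly like this (the URL and prefix are placeholders, not from the comment):

```shell
# Vendor a library's full source into the current repo so an LLM agent
# can read its implementations and tests directly from the working tree.
# --squash keeps the vendored history as a single commit.
git subtree add --prefix=vendor/somelib \
    https://github.com/example/somelib.git main --squash

# Later, pull upstream updates into the same prefix:
git subtree pull --prefix=vendor/somelib \
    https://github.com/example/somelib.git main --squash
```

Unlike a submodule, a subtree places the actual files in your repository, so any tool (or LLM) that reads the working tree sees the dependency's source without extra setup.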

Honestly, LLMs remove so much mental fatigue that I've been learning a lot more than I ever have. Yet naysayers will conflate LLMs as a tool with some lovable crap vibecoding; I don't get it.