This stands to reason. If you need the answer to a question, and you can either get it directly or spend time researching it, you're going to learn much more with the latter approach than the former. You may be disciplined enough to do more research even when the answer is handed to you, but most people will not do that, and most companies are not interested in that; they want quick, 'efficient', 'competitive' solutions. They aren't considering the long-term downside to this.
> If you need the answer to a question, and you can either get it directly, or spend time researching the answer, you're going to learn much more with the latter approach than the former.
Why not force everyone to start from first principles then?
I think learning is tied to curiosity, and curiosity is not tied to the difficulty of research
i.e. give a curious person a direct answer and they will go on to ask more questions; give an incurious person a direct answer and they won't
We all stand on the shoulders of giants, and that is a _good_ thing, not bad
Forcing us to forgo the giants and claw our way up to their height may have benefits, but in my eyes it is a far less effective way to build knowledge
The compounding force of knowledge is awesome to behold, even if it can be scary
*but most people will not do that*
LLMs will definitely be a technology that widens the knowledge gap at the same time that it improves access to knowledge. Just like the internet.
30 years ago people dreamed about how smart everyone would be with humanity's knowledge instantly accessible. We've had Wikipedia for a while, but what's the take-up rate of this infinite amount of information? Most people prefer to scroll rage-bait videos on their phones (content that doesn't give them knowledge or even make them feel better; it just makes them angry).
Of course it's amazing to hear, every once in a while, about the guy in Pakistan who maintains a vim plugin by coding on his phone... or whatever else the internet enables for people who suddenly have access to this stuff. But that's not an effect on all humans on average; it's an effect on the few people who finally have a chance to take advantage of these tools.
In a YouTube interview I heard a physicist say that LLMs are helping physics research simply because any physicist out there can now ask graduate-level questions about currently published papers. That is, they have access to knowledge that would have been hard to come by before, and knowledge gets shared across sub-domains of physics just by asking ChatGPT.
> They aren't considering the long term downside to this.
This echoes sentiments from the 2010s centered around hiring. Companies generally don't want to hire junior engineers and train them: it's an investment with a risk of no return for the company doing the training. Basically, you take your senior engineers away from projects so they can train the juniors, and then the juniors have the skills and credentials to get a job elsewhere. Your company ends up in the hole, with a negative ROI for hiring the junior.
Tragedy of the commons. Same thing today, different mechanism. Are we going to end up with a shortage of skilled software engineers? Maybe. IMO, the industry is so incredibly wasteful in how engineers are allocated and what problems they are told to work on that it can probably deal with shortages for a long time, but that's a separate discussion.
That's why I mostly use ChatGPT with platonic questions like:
- given context c, I tried ideas a, b, and c. Were there other options that I missed?
- based on this plan, do you see any efficiencies I'm missing?
etc etc
I'm not seeking answers; I'm trying to avoid costly dead ends.
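For illustration, here's a minimal sketch of that pattern using the OpenAI Python SDK (the model name, scenario, and prompt wording are all made up for the example, not anything specific from this thread):

```python
# Minimal sketch: ask for overlooked options, not for "the answer".
# Assumes `pip install openai` and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY automatically

# Frame the question around what was already tried, so the model surveys
# the option space instead of handing back a single solution.
prompt = (
    "Context: deduplicating ~10M short strings with limited RAM.\n"
    "I tried: (a) sort-then-scan, (b) an in-memory hash set, (c) a bloom filter.\n"
    "Were there other options I missed? List them with trade-offs; "
    "don't pick one for me."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

The reply then works as a map of dead ends to avoid, not a substitute for doing the work.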
>you can either get it directly, or spend time researching the answer, you're going to learn much more with the latter
A LOT of the time the things I ask LLMs for are to avoid metaphorically wading through a garbage dump looking for a specific treasure. Filtering through irrelevant data and nonsense to find what I'm looking for is not personal development. What the LLM gives back is often a much better jumping-off point for looking through traditional sources of information.
Sure, if I spend one hour researching a problem versus asking AI for 10 seconds, I will almost always learn more in the hour. But if I spend an hour asking AI questions on the same subject, I believe I can learn far more than by reading for an hour. The analogy might be comparing a lecture to a one-on-one tutoring session. Education needs to evolve to keep up with the tools that students have at their disposal.
I thought I saw somewhere that learning is specifically better when you are wrong, provided the feedback is rapid enough. That is, "guess and check" is the quickest path to learning.
Specifically, asking a question and getting an answer is not a general path to learning. Being asked a question and answering it is, somewhat regardless of whether you are correct.
I don't know if I agree here. When I ask an LLM a question it always leads to a whole lot of other questions with responses tailored to my current level of understanding. This usually results in a much more effective learning session than reading a bunch of material that I might not retain anyway because I'm scanning it looking for my answers.
I think you put your finger on it with the mention of discipline. I find AI tools quite useful for giving me a quick outline of things I want to play with or get up to speed on fast, but not necessarily get too invested in. But if you find yourself so excited by a particular result that it sets your imagination whirling, it might be time to switch out of generative mode and use the AI as a tutor to deepen your actual understanding, ideally in combination with books or other static learning resources.
Actually, for most things (below PhD research level) you will learn more from the first approach. Getting the answer directly means you can use the rest of the "free" time to integrate the new knowledge into prior knowledge and rehearse the information into long-term memory.
We have accounts from the ancient Greeks of the old school's attitude towards writing (Socrates' complaint in Plato's Phaedrus, for example). In the deep past they maintained an oral tradition, and scholars were expected to memorize everything. They saw writing and reading as a crutch that was ruining the youth's memory.
We now stand at the edge of a new epoch, with reading being replaced by AI retrieval. There is concern that AI is a crutch and that the youth will be weakened.
My opinion: valid concern. No way to know how it turns out. No indication yet that use of AI is harming business outcomes. The meta-argument that "AGI will cause massive social change" is probably true.