Hacker News

mmcdermott · 10/02/2024 · 2 replies

There is a disconnect somewhere. When I read online, I hear about how GenAI/LLMs will replace programmers and office workers. When I go to work, setting aside the general buzz, I mostly hear the question of how we can apply GenAI/LLMs at all.

Maybe this is a reflection of local conditions, I'm not sure, but the truly revolutionary changes never seemed to require a solution in search of a problem. It was immediately clear what you could do with assembly-line automation, or the motor car, or the printing press.


Replies

eru · 10/02/2024

Electricity famously took perhaps twenty years before people slowly figured out how to re-organise factories around it. Hence the delayed impact on productivity figures.

To elaborate: in the bad old days you had one big engine, eg a steam engine, that drove shafts and belts all around the factory. There was a lot of friction, and it was dangerous, so you had to carefully design your factory around these constraints. That's the era of multi-story factories: you used the third dimension to cram more workstations closer to your prime mover.

With electricity, even if you have to generate your own, you just need cables, and you can install a small electric motor for every task at every workstation. Your factory layout becomes a lot more flexible, and you can optimise for eg material flow through the factory and for cost. That's when factories became mostly sprawling one-story buildings.

I simplify, but figuring all of that out took time.

jerf · 10/02/2024

I think it's a lot of little things. There are a lot of people very motivated to keep presenting not just AI in general, but the AI we have in hand right now, as the next big thing. We've got literally trillions of dollars of wealth tied up in maintaining that story. It makes for great news articles that grab eyeballs in an attention economy. And the prospect of monetary savings has the asset-owning class salivating.

But I think a more subtle, harder-to-see aspect, that may well be bigger than all those forces, is a general underestimation of how often the problem is knowing what to do rather than how. "How" factors in, certainly, in various complicated ways. But "what" is the complicated thing.

And I suspect that's what will actually gas out this current AI binge. It isn't just that these tools don't know "what"... it's that they can in many cases make it harder to learn "what", because the user is so busy with "how". That classic movie quote, "Your scientists were so preoccupied with whether they could, they didn't stop to think if they should", may take on a new dimension of meaning in an AI era: you were so concerned with how to do the task, and with letting the computer do all the thinking, that you didn't consider whether it was what you should be doing at all.

Also, I'm sure a lot of people will read this as me claiming AI can't learn what to do. Actually, no, I don't claim that. I'm talking about the humans here. Even if AI can get better at "what", if humans get too used to not thinking about it and don't even use the AI tool properly, AI is a long way from being able to fill in that deficit.