
hbarka · today at 5:09 AM

I share the sentiment here about LLMs helping to surface personal tacit knowledge, and at the same time there was a popular post[1] yesterday about cognitive debt when using AI. It's hard not to agree with both ideas.

[1] https://news.ycombinator.com/item?id=46712678


Replies

llIIllIIllIIl · today at 6:02 AM

I guess it depends on how people interact with LLMs. Cognitive debt may be acquired when people `talk` with machines, asking personal questions, like asking what to reply to a text from a friend, etc.

It may be different when people `command` LLMs to do particular actions. In the end, this community, probably more than most, understands that an LLM is nothing more than advanced auto-complete with a natural language interface instead of Bash.

> Write me an essay about birds in my area

That will later be presented as a human's own work, compared to

> How does this codebase charge customers?

when a person needs to add trials to an existing billing system.

The latter will, after (many) prompts, result in deterministic code that a person can validate for correctness (whether they actually will is another question).