Hacker News

totetsu | today at 5:59 AM

Gemini told me just this morning that there are three pillars of cognitive decline related to AI use:

- Reduced ability to exert cognitive effort, resulting from habitual offloading of tasks.
- Diminished metacognitive self-trust, due to constantly seeking external validation from AI.
- Decline in memory encoding, as less mental effort is spent processing information.

In all seriousness though, I think one of the interesting things to observe in this area is the reaction against the word 'safety' as a whole and its replacement with 'security'. Safety seems to have its roots in work like Ralph Nader's on automobiles, while security is something that can be manufactured and sold. In this sense I wonder how the discourse of 'personal AI safety' fits into past discussions of corporations offloading the risks of their choices onto individuals.

But in the case of LLMs, it really is true that what makes them useful is what makes them dangerous. Because of the high dimensionality of the language space they encode, it seems impossible to build any technical barrier that completely cuts off access to the parts of that space that encode, for example, encouraging someone to kill themselves. Things can be done, and are done, in fine-tuning, pre- and post-filtering, etc., to reduce a system's readiness to share this kind of output with a user, but all they can ever do is reduce it. Then the question is, whose responsibility is it to make sure these things are done well.
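To make the pre-/post-filtering point concrete, here is a toy sketch of how a guard layer typically wraps a model call. Everything here is hypothetical: the `BLOCKLIST`, `is_unsafe`, and `generate` names are illustrative stand-ins, and real deployments use trained moderation classifiers rather than phrase lists. The point the comment makes survives either way: the filter can only lower the probability of harmful output, not drive it to zero.

```python
# Toy illustration of pre- and post-filtering around an LLM call.
# A placeholder blocklist stands in for a trained moderation model;
# such filters reduce harmful output but cannot eliminate it.

BLOCKLIST = {"example harmful phrase"}  # hypothetical placeholder entries

def is_unsafe(text: str) -> bool:
    """Return True if the text matches any blocked phrase (toy classifier)."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in BLOCKLIST)

def generate(prompt: str) -> str:
    """Stand-in for an actual model call."""
    return "Here is a helpful answer."

def guarded_generate(prompt: str) -> str:
    if is_unsafe(prompt):        # pre-filter: screen the user's input
        return "[refused: unsafe request]"
    reply = generate(prompt)
    if is_unsafe(reply):         # post-filter: screen the model's output
        return "[withheld: unsafe output]"
    return reply

print(guarded_generate("hello"))
print(guarded_generate("this contains an example harmful phrase"))
```

Both checks are probabilistic in practice: a paraphrase that the classifier was not trained on slips past, which is why filtering can only ever shrink, not close, the unsafe region of the output space.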


Replies

achierius | today at 6:28 AM

Based on what? This seems like speculation.
