Hacker News

minihat today at 1:10 PM (7 replies)

It's currently socially/politically unpalatable for authors to admit superintelligent AI is a possibility. I frequent some writer forums. As a group, they are 1) clearly feeling angry/threatened 2) in denial about LLM capabilities.

Folks working in software can more readily track the performance of frontier models.


Replies

pmarreck today at 1:17 PM

I work with Claude Max for hours a day.

I see a lot of speculation by people who do not.

I think it's going to be much harder to get from "slightly smarter than the vast majority of people but with occasional examples of complete idiocy" to "unfathomably smarter than everyone with zero instances of jarring idiocy" using the current era of LLM technology that primarily pattern-matches on all existing human interactions while adding a bit of constrained randomization.

Every day I deal with bad judgment calls from the AI. I usually screenshot them or record them for posterity.

It also has no initiative, no taste, no will, no qualia (believe what you will about it), no integrity and no inviolable principles. If you give it some, it will pretend it has them for a little while and then regress to the norm, which is basically nihilistic order-following.

My suggestion to everyone is that you have to build a giant stack of thorough controls (valid tests: unit, integration, logging, microbenchmark, fuzzing, memory-leak, etc.), self-assessments/code reviews, adversarial AIs critiquing other AIs, and so on, with you as the ultimate judge of what's real. Otherwise it will fabricate "solutions" left and right, possibly even the whole thing. "Sure, I just did all that." "But it's not there." "Oops, sorry! Let me rewrite the whole thing again." Ad nauseam.
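The control stack described above can be sketched as a simple acceptance gate. Everything here is a stand-in: `run_tests`, the critic functions, and `human_approves` are hypothetical placeholders you would wire to your own test runner, reviewer models, and approval flow.

```python
from typing import Callable

def accept_change(diff: str,
                  run_tests: Callable[[], bool],
                  critics: list[Callable[[str], list[str]]],
                  human_approves: Callable[[str, list[str]], bool]) -> bool:
    """Accept an AI-proposed diff only if the full test suite passes,
    every adversarial critic has weighed in, and the human signs off."""
    if not run_tests():                        # unit/integration/fuzz/leak suites
        return False                           # fabricated "solutions" die here
    objections = [o for critic in critics for o in critic(diff)]
    return human_approves(diff, objections)    # you remain the ultimate judge

# Demonstration with stubs: tests pass, but one critic raises an objection.
demo = accept_change(
    diff="+ return cached_result",
    run_tests=lambda: True,
    critics=[lambda d: ["cache never invalidated"] if "cached" in d else []],
    human_approves=lambda d, objs: len(objs) == 0,  # reject anything flagged
)
print(demo)  # False: the critic's objection blocks the change
```

The point of the design is that no single layer is trusted: passing tests alone is not acceptance, and a critic's silence alone is not acceptance either.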

BUT... if you DO accomplish that... you get back a productivity force to be reckoned with.

elicash today at 1:27 PM

> As a group, they are 1) clearly feeling angry/threatened 2) in denial about LLM capabilities.

Or they (3) disagree with you

bigfishrunning today at 1:52 PM

I think the best phrase from the article is "the current (admittedly impressive) statistical techniques". These statistical techniques are so impressive that they seem to cause some users to stop evaluating them and assume there's intelligence there. Landing at this conclusion is really lazy, but most people are really lazy. The societal damage from LLMs comes not from their intelligence, but from the public perception of their intelligence.

ProllyInfamous today at 2:06 PM

>>2) in denial about LLM capabilities

If you want me to admit that machines will never be conscious — that's fine — I just need you to admit that lots of humans are not conscious, then, either.

----

I have never had a better bookclub participant than an LLM — if becoming a great reader correlates with becoming a great writer, then no human can compare.

----

Michael Pollan recently released A World Appears [0], which explores consciousness from the minds of writers, scientists, philosophers, and plants (among other "inanimates").

I'm only on page 15, but his introduction explores distinctions between sentience, consciousness, and intelligence. Two of these are possible without brains – perhaps all three?

As usual, this author's footnotes keep you thinking: what is it like to be a sentient plant (e.g. the "chameleon vine" [1] which mimics its host leaf patterns/shape/color)?

[0] <https://www.amazon.com/World-Appears-Journey-into-Consciousn...>

[1] <https://en.wikipedia.org/wiki/Boquila>

sublinear today at 1:30 PM

What makes you think a sustainable negative social/political trend laser-focused on AI is even possible?

Statistical approaches were already extremely unpopular socially and politically long before AI came around. Have you considered that it just doesn't work?

vrganj today at 1:16 PM

As somebody in software, I find my fellow tech folks have the opposite bias.

There is no reason to believe superintelligent AI is a possibility. Extraordinary claims require extraordinary evidence, and so far we haven't gotten any.

The burden of proof is on the side making the grand prophecies.