AI Megathread
-
@dvoraen This is one of my favourite reads lately: https://ludic.mataroa.blog/blog/i-will-fucking-piledrive-you-if-you-mention-ai-again/
-
@Hobbie I love that one.
Friend quoted me this recently:
“It’s like ChatGPT has read everything on the internet, and kind of vaguely remembers some of it and is willing to make up the rest.”
There are so many documented instances of LLMs making up nonsense: citing books that don’t exist, inventing fake lawsuit citations, misrepresenting articles written by journalists, fabricating biographical details. The code they spit out is often garbage (or, worse, wrong in subtle ways). And that’s not even touching on all the random stupidity where one tells people to put glue on their pizza or incorporate poison into their recipes.
The whole GenAI industry is most likely just a big bubble built on a con.
-
People call this “hallucination”. I think we should stop letting them assign a new name to an existing phenomenon. The LLM is malfunctioning. It is saying things that are wrong. It is failing to do what it was designed to do.
-
“They’re designed to produce statistically probable sentences.”
Exactly. Sometimes what is statistically probable is also correct: asking “What is the capital of France?” will most likely get you “Paris”, because Paris has a high statistical association with “capital of France”.
But this methodology is inherently unreliable for stating facts. An LLM might confidently assert that George Washington cut down a cherry tree, just because there’s a strong association between Washington and that story, even though historians largely consider it a myth. Elon Musk associated with Teslas + Teslas associated with car crashes + Elon Musk associated with a car crash leads to an LLM erroneously asserting that Elon Musk died in a fatal Tesla crash. Sure, it’s a statistically probable sentence, but it’s just not true. The LLM doesn’t know whether something is true, and it doesn’t care.
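You can see the failure mode in a toy sketch. This is a deliberately crude, hypothetical frequency-based next-word predictor with made-up counts (the phrases and numbers are invented for illustration, not real corpus data): it always emits the most statistically probable continuation, and it has no mechanism at all for checking whether the resulting claim is true.

```python
# Toy sketch of "statistically probable sentence" generation.
# The counts below are invented for illustration only.
from collections import Counter

# Hypothetical co-occurrence counts harvested from some corpus.
corpus_counts = {
    "the capital of France is": Counter({"Paris": 9500, "Lyon": 120, "Nice": 80}),
    "George Washington cut down a": Counter({"cherry tree": 700, "fence": 30}),
}

def predict_next(prompt: str) -> str:
    """Return the continuation seen most often after `prompt` in the corpus."""
    return corpus_counts[prompt].most_common(1)[0][0]

print(predict_next("the capital of France is"))       # happens to be true
print(predict_next("George Washington cut down a"))   # propagates a myth
```

Same mechanism both times: the first answer happens to be factual, the second repeats a legend, and nothing in the code distinguishes the two cases.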
-
@Faraday said in AI Megathread:
The LLM doesn’t know whether something is true, and it doesn’t care.
I know this may seem like a quibble, but I feel it’s an important distinction: it can’t do either of those things, because it’s not intelligent. It’s a very fancy word predictor; it can’t think, it can’t know, it can’t create.
-
As more and more time passes and AI becomes more and more widespread all I can think is, “When do we declare our Butlerian Jihad?” Because I’m kinda over all this stuff already lol.
-
The AI problem is now so bad that people are trying to defeat it with… more AI. I don’t even know. What is life.
What I do know is that if someone fed my poses into ChatGPT or similar, I’d be pretty pissed off about it.
-
Is it sad that I’ve met people whose natural writing and RP are so bad that I incorrectly assumed they were LLMs?
-
Spotted in the wild; it made me lol at least as much as it made me groan.