@Jynxbox said in AI Megathread:
@Pavel said in AI Megathread:
@Faraday said in AI Megathread:
The LLM doesn’t know whether something is true, and it doesn’t care.
I know this may seem like a quibble, but I feel it’s an important distinction: It can’t do either of those things, because it’s not intelligent. It’s a very fancy word predictor, it can’t think, it can’t know, it can’t create.
(This very obvious rant is not directed to anyone in particular but I still feel like it needed to be said…)
Neither can a computer. It can’t think, it can’t know, it can’t create any more than an AI can. It can’t get nearly as close as AI can.
This seems like a strange separation: AI is run on computers. A computer is simply a larger tool that you can run all sorts of smaller tools on.
A computer can know some things, if you program it to. If you write a calculator, you're encoding immutable facts about how numbers work into it.
If GenAI successfully reports that 1+1=2, all it's really telling you is that a lot of people on the internet have said so. It's been fed a massive pile of random shit, noticed that the text "1+1" is very often followed by "=2", and it hands you the statistically probable continuation. It's giving you a plausible-sounding sentence, not a calculation. Because of that, it's ridiculously, laughably easy to manipulate.
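To make that concrete, here's a deliberately silly toy sketch in Python. A real LLM uses learned weights over billions of tokens, not a literal lookup like this (the corpus and function name here are made up purely for illustration), but the spirit is the same: return whichever continuation showed up most often, with no idea what any of it means.

```python
from collections import Counter

# Hypothetical toy "training data" -- just strings people have written.
corpus = ["1+1=2", "1+1=2", "1+1=2", "1+1=3", "1+1=window"]

def most_probable_continuation(prefix, corpus):
    # Count how each matching line continues after the prefix,
    # then return the most frequent continuation. No arithmetic anywhere.
    continuations = Counter(
        line[len(prefix):] for line in corpus if line.startswith(prefix)
    )
    return continuations.most_common(1)[0][0]

print(most_probable_continuation("1+1=", corpus))  # "2" -- only because it's popular
```

Swap in a corpus where most lines end in "=3" and it will confidently hand you 3. That's the manipulation problem in miniature.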
The calculator on your computer knows that 1+1=2 because it knows what 1 is, it knows what addition is, and it knows how to add two 1s together. Computers are very good at following strict rules and working within them when they're programmed to do so. And computers are very good at analyzing and iterating, and people have written really effective automation and AI tools (of the non-generative variety) on top of that over the years.
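For contrast, the calculator route, again as a minimal sketch: the answer comes from a rule, not a popularity contest, so no amount of internet text claiming otherwise changes the result.

```python
# Minimal sketch of the "rules" route: addition is defined, not predicted.
def add(a: int, b: int) -> int:
    return a + b

print(add(1, 1))  # 2, because that's what integer addition means
```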
But yes: as you said, computers can’t produce raw creation. Which is kind of the point being made.