@Pavel said in AI Megathread:
The only way you can truly tell if writing is LLM generated and not simply a style you’ve come to associate with LLM is to be comparative.
This is not true; people who are very familiar with AI-generated text can identify it accurately 90% of the time without any access to ‘comparative’ sources.
@Aria said in AI Megathread:
Anything I write professionally would almost certainly be pegged as written by AI,
@Pavel said in AI Megathread:
various institutions are using flawed heuristics – be they AI-driven or meatbrain – to judge whether something is written by an LLM
The fear that human-written content will be flagged as AI is mostly overblown. People who are not familiar with AI are not good at detecting it, but when you see stats about AI detection tools being “highly inaccurate”, that statistic almost always refers to AI text not being flagged (evasion), not to false positives. Various studies have found commercial AI detector tools to have very low false-positive rates: GPTZero identified human content correctly 99.7% of the time, Pangram also identified human content correctly over 99% of the time, and Originality.ai did slightly less well at just over 98%.
If we take these numbers at face value, the odds that someone familiar with AI output flags a piece of writing as suspect, runs it through two different commercial AI detectors, and both of them mark it as AI when it was in fact human-written are in the neighborhood of 0.002% (assuming the two detectors’ errors are roughly independent). You’re more likely to die in a given year than to have this happen to you. I’m personally comfortable with that level of risk.
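The back-of-envelope math behind that 0.002% figure can be sketched as follows. The 0.3% rate for GPTZero comes from the 99.7% figure above; Pangram’s rate is an assumption (the stat only says “over 99%” correct, so ~0.7% is a guess that happens to reproduce the estimate), and independence between the two detectors is also assumed rather than established:

```python
# Combined false-positive estimate for two AI detectors, assuming their
# errors are independent (an assumption, not a given).

gptzero_fpr = 0.003   # 99.7% of human text correctly identified
pangram_fpr = 0.007   # assumed; the post only says "over 99%" correct

# Probability that BOTH detectors wrongly flag the same human-written text
both_wrong = gptzero_fpr * pangram_fpr

print(f"{both_wrong:.6f}  ({both_wrong * 100:.4f}%)")
# → 0.000021  (0.0021%)
```

If the detectors’ mistakes are correlated (e.g., both trip on the same unusually formulaic human prose), the real combined rate would be higher than this product.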
The odds of someone unfamiliar with AI output accusing you off the cuff of AI use and being wrong about it are about 50%. So. You know. Watch out for that one.