AI Megathread
-
@Yam
I think that some amount of mistakes in any system is acceptable. Nothing is flawless. To me, the barrier that a system needs to clear is “better than any alternative”. With AI detectors, we’ve already seen that unassisted people only get it right 50-60% of the time. Certain detectors are performing at a level where less than 1% of results are false positives. That seems better.
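To put rough numbers on that comparison (the submission count and the 55% midpoint below are purely illustrative assumptions of mine, not data from anywhere):

```python
# Back-of-the-envelope sketch of the rates quoted above.
# Assumptions (illustrative only): 180 genuinely human-written submissions;
# "people unassisted get it right 50-60% of the time" read as ~55% accuracy
# on human-written work; detector false positive rate taken as 1%.

human_written = 180
human_accuracy = 0.55   # assumed midpoint of the 50-60% figure
detector_fpr = 0.01     # "less than 1% of results are false positive"

flagged_by_people = round(human_written * (1 - human_accuracy))
flagged_by_detector = round(human_written * detector_fpr)

print(f"Human reviewers: ~{flagged_by_people} human-written pieces wrongly flagged as AI")
print(f"Detector:        ~{flagged_by_detector} human-written pieces wrongly flagged as AI")
# -> roughly 81 vs 2 under these made-up numbers
```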
@Faraday said in AI Megathread:
say you have a self-driving car. Are you OK if it gets into an accident 1 out of every 100 times you drive it?
There were about 6 million auto accidents in 2022. If self-driving cars, extrapolated to the whole population, had caused only 5 million accidents, that would be better.
@Faraday said in AI Megathread:
Say you have a facial recognition program that law enforcement leans heavily on. Are you OK if it mis-identifies 1 out of every 100 suspects?
If this facial recognition program does a better job than humans, yes, I am okay with it. Humans are notoriously poor eyewitnesses.
Eyewitness misidentification has been a leading cause of wrongful convictions across the United States. It has played a role in 70% of the more than 375 wrongful convictions overturned by DNA evidence. In Indiana, 36% of wrongful convictions have involved mistaken eyewitness identification.
@Pavel said in AI Megathread:
but a sort of “humans relying on authorities instead of thinking” problem
There are cases when humans should rely on authorities instead of thinking. No one is advocating for completely disconnecting your brain while making any judgment, but authoritative sources can and should play a key role in decision-making.
-
@Trashcan said in AI Megathread:
There were about 6 million auto accidents in 2022. If self-driving cars, extrapolated to the whole population, had caused only 5 million accidents, that would be better.
Lol man, I have to agree. I realize that we’re generally anti-generative AI in art/writing here but I’ll be honest, if the computer drives the car better than my anxious ass, I’ll ride along.
-
@Trashcan said in AI Megathread:
There were about 6 million auto accidents in 2022. If self-driving cars, extrapolated to the whole population, had caused only 5 million accidents, that would be better.
Making cities walkable would be far better than throwing more money into the abyss that cities become when they’re overrun by self-driving cars.
-
@Yam said in AI Megathread:
if the computer drives the car better than my anxious ass, I’ll ride along.
That’s a big “if” though, and is the crux of my argument.
@Trashcan said in AI Megathread:
If this facial recognition program does a better job than humans, yes, I am okay with it. Humans are notoriously poor eyewitnesses.
The difference is that many people know that humans are notoriously poor eye witnesses. Many people trust machines more than they trust other humans, even when said machines are actually worse than the humans they’re replacing. That’s the psychological effect I’m referring to.
-
@Jumpscare said in AI Megathread:
@Trashcan said in AI Megathread:
There were about 6 million auto accidents in 2022. If self-driving cars, extrapolated to the whole population, had caused only 5 million accidents, that would be better.
Making cities walkable would be far better than throwing more money into the abyss that cities become when they’re overrun by self-driving cars.
Unfortunately, tech bros would rather reinvent band-aid solutions over and over again instead of actually working to improve the future.
-
@Jumpscare Walkable cities is a whole 'nother can of worms.
-
@Jumpscare Totally, but don’t let the perfect be the enemy of good.
-
@Trashcan It is part of ‘Turnitin’, which is pretty widely used. I have no idea if it’s one of the ones you’ve listed here, or which one, if it is.
Part of what’s exasperating about it is that it doesn’t give me any clue as to why it’s tagging segments as “likely AI generated”, so even if I don’t spot anything that makes it seem likely to be wrong, what possible use is it?
It would be ironic to the point of being grotesque in the context of a class where I spend the whole time saying, “Why do you believe that?” and “Prove it,” and “Where’s the evidence?” and “Does that research methodology work? Do you think the results mean what the researchers say they mean? Did the newspaper report say it means what the researchers said it means?” and so on. After all that, am I gonna roll up to a student and say, “Hey, a computer program using semi-secret methodology to detect AI says you cheated, so did you?”
I get @Faraday’s comments about people trusting computers in a weird way, but I guess I don’t share that, because I feel like I may as well draw tarot cards and just say anybody who gets an inverted swords card cheated.
-
@Gashlycrumb how did you know about the method I used to grade papers when I was a TA?
-
@Trashcan said in AI Megathread:
No one is advocating for completely disconnecting your brain while making any judgment
I know that. You know that. But people are idiots and will entirely defer to an authority. Education is always ten years behind technology, and laws are fifteen years behind that.
-
While that ChatGPT Wikipedia guide is sort of okay, here are some really obvious tells that I’ve noticed.
“It’s not x. It’s y.”
“It’s x. And that matters.”
ChatGPT also likes the words “weight” and “pressure” a lot.
Here’s a style example:
It writes like this. Short sentences. And then starting a sentence with a conjunction. That’s why each paragraph has weight, and that matters.
-
@InkGolem said in AI Megathread:
“It’s not x. It’s y.”
The reason that this—or anything else GenAI does—comes up frequently is because the algorithms are recognizing patterns in actual writing. It doesn’t make this stuff up out of thin air.
Now yes, sometimes it uses those constructs in the wrong place / wrong way and that can be a tell. Or you can find it in weird places, like an email from your friend. But the construct itself isn’t a tell of AI use. The AI is just copying what it sees.
For example, from a few book sources in the 1900s:
It’s not about dieting. It’s about freedom from the diet.
It’s not a newspaper. It’s a public responsibility.
It’s not being moved. It’s simply joy.
It’s not love. It’s something higher than love.
It took about 60 seconds to find these and a zillion other instances on a Google Ngram search of published literature.
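If you want to poke at the same thing yourself without the Ngram interface, a quick regex over any plain-text corpus will turn the construct up; the sketch below just reuses the quotes above as a stand-in sample rather than a real corpus:

```python
import re

# The construct under discussion: "It's not X. It's Y."
pattern = re.compile(r"It's not ([^.]+)\. It's ([^.]+)\.")

# Stand-in corpus: the examples quoted above. Point this at any text file instead.
sample = """
It's not about dieting. It's about freedom from the diet.
It's not a newspaper. It's a public responsibility.
It's not being moved. It's simply joy.
It's not love. It's something higher than love.
"""

for x, y in pattern.findall(sample):
    print(f"not: {x!r}  ->  but: {y!r}")
```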
-
I’m not trying to start anything, but a lot of this description is kind of why I shied away from Berem (aside from losing all my time for anything outside of work, family, and sleep).
No shade for anyone using AI in code; to be clear, I don’t know if that’s the case here. Also, I’m not even sure if they actually used AI in the setting writing, but it hits all of the hallmarks described above, including the use of emojis.


And the text itself:
The Town’s Growth
Berem was carved from wild woods and stubborn fields. The founders built it where river met forest, shielded by cliffs and watched over by a shimmering lake. They poured their remaining magic and wealth into it: a stout Town Hall to govern, a Marketplace to trade, an Inn for wanderers, and a Temple so Berem’s spirit would be forever honored.
Word soon spread across Mystara’s Known World — not of a bustling metropolis, but of a place where young heroes could find their first footing. Here, a novice could earn coin protecting caravans, clearing nearby ruins, or rooting out trouble in the thick woods. Berem’s Tavern became famous for its worn quest board, where calloused hands posted cries for help — and new legends were born.
The town’s location remains a curiosity. Some maps mark Berem near Darokin, others whisper it lies close to the edges of the Five Shires, or in the misty borderlands beyond Karameikos. Berem itself cares little for such speculations. It belongs to the world, and to those bold enough to find it.
A Beacon for Adventurers
Berem today remains what it has always been:
- A starting point for wanderers, outcasts, and dreamers.
- A resting place for those who need healing or hope.
- A memorial to courage — not grand, but real.
Its harbor welcomes merchant ships. Its smithy forges the blades that will one day sing in distant halls. Its temple bells toll for the fallen and the triumphant alike. And every so often, beneath the light of the high tower’s beacon, a new band of adventurers sets forth — hearts full of hope, blades newly sharpened — carrying the spirit of Berem into the wide, wild world.
They say if you stand quietly at the town’s entrance when the mist rolls in, you might just hear Berem’s laughter on the breeze — urging you onward.
I’m not sure why I had such a visceral reaction to it. Generally speaking I don’t care if people use AI in code, and I will usually have a kind of ‘meh’ reaction to seeing AI art (especially the piss-filter images on the NPC pages on the Berem site), but the writing just took me entirely out of it.
If I had to guess, it’s because I can see my own motivations for using AI code or something like that to help facilitate a location where I can tell a specific story. If you use AI for everything, then I wonder what the point of having a writing-based game is, or why you would create a unique location in a world at all. Berem is not, to my knowledge, part of the Mystara canon, so if you have an idea for a cool little adventurer town, why not write it up yourself? Idk. This also is likely not completely on topic, since most of the discussion is on AI in the real world rather than in our corner of it.
-
@somasatori Yeah. It looked like AI to me, and it’s not the only game out there that looked like it had AI content. I don’t personally care as much about code, probably bc I’m not a coder, but the content is the heart and soul of it for me, and it sucks to see.
-
Yeah, that absolutely reads as LLM slop to me. BOooo
-
Yeah, I would personally never call myself a coder, just an extreme hobbyist (also my knowledge of code starts at Evennia/python and ends at Evennia/python, with a very minor knowledge of TinyMUX/MUSH functions from 13 years ago thanks to Cobalt doing a code class back then). The setting and story are where a game really lives and breathes for me. I admittedly have a million ideas for games, but those ideas often have very specific stories, locations, characters, etc., which/who I will generally feel the need to write out, for better or worse. I personally don’t think this is an abnormal perspective for MUSH developers, especially the ones who are driving the setting, or the story if something like a metaplot exists. Obviously we can’t all make an Arx or write a bunch of setting lore like Empire or some of the WoD projects. However, even if you’re just coming in with a new game that’s completely barebones, aren’t you at least interested in seeing what happens based on how the world is developed by your players? That probably requires some investment in the creation process.
I’m reminded of a comment I read, or maybe one that was highlighted by one of the writing YouTubers I watch, where someone was saying that LLMs allow writers to bypass having to work out the prose to get to the plot. This defeats the purpose of writing, in my opinion. I have like no physical artist bone in my body, so it could be that I’m reacting the same way my artist friends do when they see AI art.
-
Though this non-MUSH-related thing also annoys me about AI and the techbro insistence that it must be in absolutely every product:

I typo’d “b and”. No one calls a watch band a “watch B&”. This is a very “Hello, Fellow Kids” reaction, because if I had to guess, it’s assuming that I’m using some sort of slang.
-
@Tez said in AI Megathread:
I don’t personally care as much about code, probably bc I’m not a coder, but the content is the heart and soul of it for me, and it sucks to see.
FWIW I’m a professional software engineer and I am 100% fine with people using AI to help write small amounts of MU* code, like CSS tweaks, as long as they’re comfortable with the fact that it might be weird and flaky code that’s extra hard to troubleshoot.
I would absolutely not build an entire codebase from an AI vibe-codey foundation, but that’s not at all an ethical stance, just a “wow holy shit that’s going to be impossible to maintain, please just use Ares or Evennia or something and read some guides” stance. Maybe in a few years it’ll be a 100% technically viable option and turn back into an ethical question, but by then I’ll probably be automated out of a job anyway, whee.
AI slop in the content of a game will make me sadface, though.
Btw, I’ve absolutely had people ask me professionally if I used ChatGPT to write something, and it made me so deeply offended that now I deliberately swing my writing the other way and refuse to overly polish it. You’re just gonna get HELLA ADVERBS and run-on sentences and informal grammar from me, so there. Embrace my squishy human foibles.