AI In Poses
-
@Yam said in AI In Poses:
There is a human that’s in charge of disinviting people to games.
I’m not only talking about staff using AI detectors to ban people, I’m also talking about people running each others’ poses through AI detectors.
If your human gut is saying that something is AI generated, that’s one thing. I just don’t trust these AI detector tools. Everything I’ve seen about them from experts tells me that the fundamental way they work is flawed, and I’ve seen enough drama around false-positives that I don’t want anything to do with them. YMMV.
-
I very strongly disagree that these detectors should be discounted. Don’t rely on the free one Google suggests with no other checks, obviously, but the hobby needs to be protected from this slop, at whatever cost. It’s an inherently vulnerable hobby, where everyone is agreeing to submit their actual writing to another person in a scene for reaction. For fun!
It’s not just that LLM writing is hollow, bloated, and uninteresting (though it is), it’s also a breach of that fundamental contract to use it. I have always told people if they aren’t in it to have fun AND make fun for their scene partners, they need a new hobby. If you are using LLMs you are not engaged and you certainly aren’t trying to be fun to rp with.
-
@Yam said in AI In Poses:
Has ANYONE gotten banned, not suspected, BANNED, for use of LLM in poses/profiles/etc when they HAVEN’T used it?
This is the only thing that concerns me. I’m a FOOL and was tricked by at least 1 AI app that slipped through. Sorry to catzilla for having to RP with this AI person for a week; I recall you lamenting.
At least it made me aware that people are doing copy and paste of AI in their writings/backgrounds/etc.
I can’t recall if their poses were actually AI but everything in their profiles was.
And then looking back at it (before the website went kablooey) I was like, this is so obviously AI, how did I not know?

-
@Faraday The tools work OK from what I’ve seen (as has been mentioned on other threads, Turnitin has modules for this). What they do is flag something as possibly AI - the human still has to do some legwork. With essays, for example, you might follow up with a mini viva. MU*s have a pretty low bar: the cost of being booted off the game is…you don’t get to play on that game. That’s all.
-
Oh also, people using GenAI in this way absolutely should be shunned. It is crass slop that is fucking up a lot of things, and the sooner the bubble bursts the slightly less screwed we will all be. There are really specific use cases for GenAI - but it is expensive tech, and those use cases don’t need everyone to be using it. Hopefully they’ll get onto AI that actually learns at some point.
-
@catzilla said in AI In Poses:
At least it made me aware that people are doing copy and paste of AI in their writings/backgrounds/etc.
I have legit seen people copy AI slop into their backgrounds and forget to take the AI chatter out, so at the end it’s like
Would you like a more dramatic, literary option next?
It would be extremely funny if it was on purpose, but I’m very sure it wasn’t.
@hellfrog said in AI In Poses:
It’s not just that LLM writing is hollow, bloated, and uninteresting (though it is), it’s also a breach of that fundamental contract to use it. I have always told people if they aren’t in it to have fun AND make fun for their scene partners, they need a new hobby. If you are using LLMs you are not engaged and you certainly aren’t trying to be fun to rp with.
+1000.
Also +1000 to @Faraday’s point that you cannot trust AI detectors to detect AI. They just absolutely are not trustable, and I think human intuition of “wait, this writing feels wordy and bland and disconnected from what’s actually happening in the scene” is both more accurate and more useful right now, because if a person isn’t using AI but does sound wordy and bland and disconnected from the scene, that’s still worth checking in about. (Mostly the disconnected “are you even reading this scene” part. I too am sometimes bland and wordy all by my human self, but at least I’m blandly and wordily responding to the scene.)
-
Regardless of whether I agree (morally, ethically, whatever) with the use of AI, out in the real world I can understand it: You want to make a buck, get a grade, or otherwise achieve something that’s difficult with as little effort as possible. I get that.
But… creativity and writing are the entire goddamn point(s) of the kind of RP we do. If you want to use Grammarly or something like that to catch typos and comma placement, that’s totally fine, but to use an LLM to do the creative bit is so alien an idea to me that I’d probably never even suspect a person of doing it. I’d probably just think they’re boring, or ESL, or ESL and boring.
-
@Clarion said in AI In Poses:
Also +1000 to @Faraday’s point that you cannot trust AI detectors to detect AI. They just absolutely are not trustable, and I think human intuition of “wait, this writing feels wordy and bland and disconnected from what’s actually happening in the scene” is both more accurate and more useful right now, because if a person isn’t using AI but does sound wordy and bland and disconnected from the scene, that’s still worth checking in about.
On a personal level I agree, and I’m guided primarily by my intuition as I try to navigate this stuff. When I pull up a detector it’s to have a second sanity check. Because what else have we got, ya know? The idea of relying purely on my feels is worse to me, because I think it opens the door to people getting way too confident about being The Em Dash Police with people they haven’t played with much, while being totally blind to their buddy whose writing style suddenly morphs into nine-paragraph purple sycophancy. It also feels very easy to invalidate when someone says, ‘Well it looks fine to me! Who cares lol’ I mean…maybe it is fine? And plenty of people don’t care, which is all well and good and they can figure that out for themselves. But I do care! And I am trying to navigate this stuff.
I also don’t think players should be getting into ‘nu-uh’ fights about this among themselves while staff is just hands off. I increasingly want a game to be very clear on what its stance on AI is because players policing this themselves is a fucking nightmare. If staff doesn’t allow LLM writing, players should respect that, and potential LLM posers should imo be reported so staff can sort it out with whatever process they manage to cobble together. If a game straight-up doesn’t care they should say they straight-up don’t care and people should leave it be, and then players can decide for themselves whether the environment is OK with them.
-
OK genuine question because I couldn’t find anything with a quick search and was too lazy to dive deep, but…
Don’t AI detectors also use LLMs? And couldn’t they then be training on the stuff they’re scanning?
If so, by putting poses into them, I could potentially be using other peoples’ RP to feed the very machine I hate so much.
-
@Faraday
This is a question about the individual service, not the entire category. For instance, Pangram’s policy:
Pangram does not train generalized AI models like ChatGPT, and our AI detection technology is based off of a large, proprietary dataset that doesn’t include user submitted content.
We train an initial model on a small but diverse dataset of approximately 1 million documents comprised of public and licensed human-written text. The dataset also includes AI-generated text produced by GPT-4 and other frontier language models. The result of training is a neural network capable of reliably predicting whether text was authored by human or AI.
If you refuse to use any technology that relies on machine learning, algorithms, or neural networks, regardless of the specifics, then obviously that is your prerogative, but you are going to have a hard time using the internet at all.
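For anyone curious what “training a model to predict human vs. AI” looks like in the abstract, here’s a minimal sketch of that general idea: a supervised classifier fit on labeled human-written and AI-generated text. To be clear, this is not Pangram’s actual system (they describe a neural network trained on roughly a million documents); the tiny dataset, the scikit-learn pipeline, and every example string below are invented for illustration.

```python
# Toy sketch of the general approach described above: a supervised
# classifier trained on labeled human-written and AI-generated text.
# NOT any vendor's real detector; the data and pipeline are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: 1 = AI-generated, 0 = human-written.
texts = [
    "The tavern was a tapestry of sound and shadow, each patron a thread.",
    "She shoved the door open and nearly tripped over the cat. Again.",
    "In conclusion, the ancient forest whispered its timeless secrets.",
    "He grinned, cards hidden badly, and dared anyone to call his bluff.",
]
labels = [1, 0, 1, 0]

# A simple bag-of-words classifier stands in for the real neural network.
detector = make_pipeline(TfidfVectorizer(), LogisticRegression())
detector.fit(texts, labels)

# Score a new pose: the probability the model assigns to "AI-generated".
pose = "The moonlight danced across the cobblestones like liquid silver."
print(detector.predict_proba([pose])[0][1])
```

The point of the sketch is just that these tools are pattern-matchers trained on examples, which is also why they inherit whatever blind spots their training data has.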
-
@Trashcan said in AI In Poses:
If you refuse to use any technology that relies on machine learning, algorithms, or neural networks regardless of the specifics then obviously that is your prerogative but you are going to have a hard time using the internet at all.
Yes, that would be ridiculous, and is not even remotely close to anything I’ve said. I have literally worked on ML software to identify cancer cells on digital pathology scans and categorize COVID risks. My objection is to LLMs trained on material without compensation or consent, designed to replace creative folks with crappy knockoffs.
I just asked if someone knew off-hand how these detector tools worked because I couldn’t find the info quickly myself. Not all of them in existence, but the prominent ones at least.
-
@Third-Eye said in AI In Poses:
I increasingly want a game to be very clear on what its stance on AI is because players policing this themselves is a fucking nightmare.
Yeah I’d agree, this is something that staff should be on top of, one way or another. Any situation that dips into accusation drama would be a player problem and should be dealt with accordingly.
-
@Yam Something like “Using LLMs/AI for any contributions to the game, including but not limited to backgrounds, descriptions, wiki images, poses, etc, is a bannable offence. Being a dick if you suspect someone of using LLMs/AI is also a bannable offence.”
-
@Third-Eye said in AI In Poses:
When I pull up a detector it’s to have a second sanity check. Because what else have we got, ya know?
Yeah I totally get that impulse. My concern is rooted in psychology more than the technology itself.
Here’s a different example that maybe illustrates my point better: Grammar checkers. They are sometimes useful and very often completely, utterly wrong. As a professional writer, I have the skill to sift through the chaff to find the suggestions that are actually correct and useful. But the teen homeschoolers I work with don’t. If I hadn’t taken the time to teach them why one should be skeptical of the suggestions from a grammar checker, it would be completely understandable for them to just be like: “Well, this thing obviously knows more than me; I should do what it says.” (Here’s a neat video essay about the problems with someone who doesn’t know grammar well using Grammarly, btw)
So I’m not saying “never use grammar checkers because they suck and have no value”. I’m just saying that they don’t work well enough to be relied upon, and anybody who uses them needs to be well aware of their limitations. This just doesn’t happen when you’re a layperson whose only info is their marketing hype.
Now that’s grammar checkers, where we have a tangible baseline to compare against (e.g., the CMS style guide). Plagiarism detectors are the same. It’ll tell me: “Hey, this seems like it’s ripping off (this article)” and I can go look at the article and decide if it’s right.
With AI detectors, you don’t have that capability. You just have to take their word for it. If it lines up with your vibe, you’re likely to take that as confirmation even if it’s wrong. If it doesn’t line up with your vibe, you have no way to tell whether it’s wrong or you’re wrong.
I also have concerns about the fundamental way these detectors work. GPTZero analyzes factors like “burstiness”. Yes, sometimes AI writing has low burstiness because it’s overly uniform. But sometimes human writing has low burstiness too, and sometimes AI writing can be massaged to make it burstier.
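In case “burstiness” sounds like marketing mystery-meat: it’s usually described as how much sentence length and structure (or per-sentence perplexity) vary across a passage. Here’s a toy sketch of just the sentence-length half of that idea; it’s purely illustrative, not GPTZero’s actual metric, and the example passages are invented.

```python
# Rough sketch of one "burstiness" signal: variation in sentence length.
# Real detectors also look at per-sentence perplexity under a language
# model; this is only the length-variation part, as a toy illustration.
import re
import statistics

def sentence_length_burstiness(text: str) -> float:
    """Return the standard deviation of sentence lengths (in words)."""
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    return statistics.stdev(lengths)

# Uniform sentences -> low burstiness; mixed short/long -> higher.
uniform = "The rain fell softly. The night was quiet. The city slept on."
varied = ("It rained. All night the gutters roared and the old sign over "
          "the bar swung loose on one hinge, banging. Quiet now.")
print(sentence_length_burstiness(uniform), sentence_length_burstiness(varied))
```

Which is exactly why the metric is gameable in both directions: a human writing in an even, measured cadence scores “uniform,” and an LLM prompted to vary its rhythm scores “bursty.”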
These tools are new, and there hasn’t been a lot of sound research into the subject (even that big article from U of Chicago was a “working paper” that hasn’t been peer-reviewed, as far as I can tell). Their methodology might suck or it might be brilliant, but until more folks have reproduced the research, it can’t be taken as gospel.
-
@Faraday said in AI In Poses:
So I’m not saying “never use grammar checkers because they suck and have no value”. I’m just saying that they don’t work well enough to be relied upon, and anybody who uses them needs to be well aware of their limitations. This just doesn’t happen when you’re a layperson whose only info is their marketing hype.
Make no mistake, I agree with this, and I agree with it more the more I play with the detectors, both the one I value enough to pay actual money for and the various other ones out there. I think grammar checkers are one of the better comparisons that’ve come up, because I both use them and frequently just ignore what they’re highlighting.
Gonna be real real here, and apologies if this sounds dismissive to the people who say they can’t tell or don’t care about this stuff. I honestly do trust my gut more at this point, especially if my gut is confirmed by three or four other acquaintances of mine who I think have a really good sense of this stuff. My experience with false positives has been pretty minimal and limited to the kinds of text I’ve concluded the detectors aren’t well-equipped to read. But the false negatives, and they happen not-infrequently, drive me a little batty. Because at that point I feel like I just have to accept probably being lied to by this person I’m playing with and shrug my shoulders. But, idk, maybe that’s not the worst thing in the world.
I also really cringe at ‘vibes man’ becoming the way to figure this out, though, because I see some people spot ‘AI’ and I think they’re wrong, have terrible instincts, and are fixating on stuff I don’t think is relevant.
-
@Third-Eye said in AI In Poses:
I also really cringe at ‘vibes man’ becoming the way to figure this out, though, because I see some people spot ‘AI’ and I think they’re wrong, have terrible instincts, and are fixating on stuff I don’t think is relevant.
Just to be clear on my stance - I can absolutely believe that there are people whose gut is worse than the detectors, and people whose gut is better than the detectors. I’m just critiquing the detectors in isolation and the danger of someone who already has a bad gut relying on them.
Much the same stance I have with self-driving cars, incidentally. They are definitely better than the worst drivers, and worse than the best drivers. But that aside, they are nowhere near reliable enough that I would trust myself or my loved ones to their care.
-
@Tez said in AI In Poses:
I might not run things through a plagiarism checker, but I literally have seen people steal descriptions from other people and reuse them on other games. (@Roz for example. Someone stole her character desc from Arx and tried to use it on Concordia. As I recall, the player was disciplined. I am not sure if they were banned.)
I have been accused of stealing a desc I wrote bc someone first saw it being used by a person who stole it.
-
@Clarion said in AI In Poses:
They just absolutely are not trustable, and I think human intuition of “wait, this writing feels wordy and bland and disconnected from what’s actually happening in the scene” is both more accurate and more useful right now, because if a person isn’t using AI but does sound wordy and bland and disconnected from the scene, that’s still worth checking in about.
Wordy, bland, and disconnected? Shit. Now I’m starting to think that they trained the AI on my poses.
-
@Third-Eye said in AI In Poses:
I also really cringe at ‘vibes man’ becoming the way to figure this out, though, because I see some people spot ‘AI’ and I think they’re wrong, have terrible instincts, and are fixating on stuff I don’t think is relevant.
The number of times I’ve seen people on social media assert that something is “clearly AI” simply because it is a thing they themselves have never said or seen is astonishing. I can’t imagine it being any better when it’s something important, like the RP they’re presently having.
-
Aight so we can’t use tools to check, and we can’t use our guts to check, and we apparently can’t use both to check. What the fuck do we do, lie back and think of England? Hope for structural change in society? Assume the doofus that wrote like a chimpanzee 1 pose ago mustered the will and intelligence to get their shit together for this poetry contest?