Would you roleplay with AI?
-
It’s always hilarious (and depressing) when people who don’t know how technology works get worked up about it. In the early 19th century, people were terrified of trains. There was a famous work of art that depicted the locomotive as a hungry beast, eternally driving itself and its human cargo down to the deepest reaches of hell. Now it’s AI in general and ChatGPT in particular.
People believing chatbots have cognition is not new. The first chatbot, Joseph Weizenbaum’s ELIZA, prompted the same reaction way back in the mid-1960s. Weizenbaum was shocked, in fact, at how willing people (including his own secretary) were to assign emotion and thought to what was a very simple program.
Chatbots are a computer science magician’s trick, a bit of chicanery that allows the user to fool themselves. If you don’t know how it works, it can seem magical, but the trick is fundamentally simple.
ChatGPT builds a statistical model from a huge body of input (typically text, though images can be used as well) of how often ‘tokens’ follow one another. A token is a slice of text: it can be a sentence, a fragment of a sentence, a word, or an individual letter (with images it’s shapes, lines, colors, and individual pixels). Once the tokens in the source have been tallied, when some other input is given, the chatbot consults the statistical model and outputs a response based on it.
For instance, let’s say there’s a sentence fragment that says, “The dog sat on the”. What’s the next word used to complete the sentence? Depending on what the chatbot was trained on, it might be ‘chair’ or ‘carpet’ or ‘stoop’ or any number of things. The key here is that how often a token follows the preceding tokens is used as a weighting when picking the next token, usually with some randomness thrown in.
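Here’s a toy sketch in Python of that frequency-and-weighting idea. The mini corpus and the two-word context window are made up for illustration, and the real thing uses a giant neural network rather than literal frequency tables, but the sampling step looks roughly like this:

import random
from collections import defaultdict

# Toy corpus; real training data is billions of tokens.
corpus = [
    "the dog sat on the carpet",
    "the dog sat on the carpet",
    "the dog sat on the chair",
    "the dog sat on the stoop",
]

# Count how often each word follows each two-word context.
counts = defaultdict(lambda: defaultdict(int))
for sentence in corpus:
    words = sentence.split()
    for i in range(len(words) - 2):
        counts[(words[i], words[i + 1])][words[i + 2]] += 1

def next_word(context):
    """Sample the next word, weighted by how often it followed this context."""
    options = counts[context]
    return random.choices(list(options), weights=list(options.values()), k=1)[0]

print(next_word(("on", "the")))  # usually 'carpet', sometimes 'chair' or 'stoop'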
But that’s an old and well-known method for producing an underlying model of any organized-but-fuzzy system (like natural language or image processing). ChatGPT goes a bit further.
As has been noted, systems like AI Dungeon tend to produce mostly nonsense over time. This is because there is no cognitive model involved; the chatbot doesn’t have context or understanding of what it’s spouting, so eventually the statistical model will start producing things that don’t make any sense given the context of the conversation. ChatGPT still doesn’t have anything close to a cognitive model, but it does have a means of providing general context. There is a second input list that is used to modify the first, and this one is human-created and human-curated. Because this is laborious and error-prone (since it involves humans), this second list is much, much smaller than the typical primary input list (usually only around 15k lines). But it does provide direction (i.e., it changes the frequency weights) for whatever subject the chatbot was designed to focus on.
Going back to our “The dog sat on the” example, maybe the creators think it’s more likely that a dog will sit on a carpet rather than a chair, so when that particular token arises, the statistical model is adjusted so that the weighting favors ‘carpet’ as a token, rather than ‘chair’.
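As a toy illustration of that curation step (the context, words, and multiplier numbers here are all invented, and the real fine-tuning process is far more involved than a lookup table):

import random

# Frequencies 'learned' from the imaginary training text.
learned = {("on", "the"): {"carpet": 2, "chair": 2, "stoop": 1}}

# Human-curated nudges for this context: boost 'carpet', demote 'chair'.
curated = {("on", "the"): {"carpet": 2.0, "chair": 0.5}}

def next_word(context):
    """Sample the next word after applying the curated weight adjustments."""
    weights = dict(learned.get(context, {}))
    for word, factor in curated.get(context, {}).items():
        if word in weights:
            weights[word] *= factor
    return random.choices(list(weights), weights=list(weights.values()), k=1)[0]

print(next_word(("on", "the")))  # now favors 'carpet' over 'chair'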
There is one further step: iterative curation. Every so often, the human operators of the chatbot will look at the log of chatbot conversations and give a yea or nay to its responses. This further refines the secondary input list. It’s also much easier to do than the initial training, since all the humans have to do is reinforce responses that are considered ‘good’ and punish those deemed ‘bad’. This is called reinforcement learning, and it’s quite similar to the way it’s done with animals.
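A cartoon version of that feedback loop might look like the snippet below. Real reinforcement learning from human feedback trains a separate reward model rather than nudging weights directly, and the log entries and multipliers here are made up, but the reinforce/punish idea is the same:

# Weights for candidate responses in some context.
weights = {"carpet": 2.0, "chair": 2.0, "stoop": 1.0}

# Logged responses with a human yea (True) or nay (False).
feedback_log = [
    ("carpet", True),
    ("chair", False),
    ("carpet", True),
]

for word, approved in feedback_log:
    # Reinforce approved responses, punish rejected ones.
    weights[word] *= 1.2 if approved else 0.8

print(weights)  # 'carpet' rises, 'chair' falls, 'stoop' is untouched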
In this way, especially over time, the chatbot will gradually get closer and closer to seeming like it understands the context of a conversation. But, again, there’s no thought here on the part of the chatbot. It still doesn’t really understand context or background; it’s just been given hints about it from actual humans. If you pull something out of left field that is beyond the scope of what the secondary list focuses on, the default behavior just becomes the unvarnished statistical analysis.
I’m oversimplifying here, but that’s the gist of how ChatGPT works. Now, on to the original question: I think it’s relatively pointless to RP with a chatbot. It certainly won’t get anything out of it, and the human only will until they do something that is outside the scope of its training, at which point the illusion breaks down instantly.
That said, an AI helper that does what some have suggested here, like providing a sudden RP prompt or scene, could definitely be useful. That would be a fairly focused domain that could easily be trained based on the genre and setting of the MU*. It could even take PC backgrounds into account to provide scene prompts that touch on those.
We’re centuries, if not millennia, away from true, strong AI. AI with a cognitive level and understanding that could rival any higher-order animal is a pipe dream, and has been since the dawn of computer science. I could say a bit more about the different schools of thought on AI that have sprung up over the decades, but that’d be outside the scope of this thread, I think.
-
@STD said in Would you roleplay with AI?:
It’s always hilarious (and depressing) when people who don’t know how technology works get worked up about it.
I mean, this post is already outside the scope of the thread, as no one was getting worked up about it or fundamentally misunderstanding the technology.
-
@STD Admit it, you asked ChatGPT for its opinion to get this response, right?
-
@Pavel said in Would you roleplay with AI?:
@STD Admit it, you asked ChatGPT for its opinion to get this response, right?
WHO TOLD YOU?! Uh, I m-mean… gee whilikers, why would you think that?
Actually, this does bring up a point. Technical text and objectively true answers are a very, very poor fit for this kind of AI. A recent example I can think of is the Bing chatbot arguing with a user over whether Avatar: The Way of Water is out yet. It kept insisting that the release date was in the future, despite the film having been out for weeks by that point.
This was likely because the Bingbot was trained on old search data, and the vast majority of those searches would have had Avatar releasing in the future. Statistically, it was far more likely that a question about the movie would be answered with the release being in the future, so that’s how the Bingbot responded. It is amusing how stubborn it was on the point, though.
If you get a chatbot to write about a technical subject, it’ll definitely produce something jargon-laden and which would seem, to a layman at least, coherent. Anyone who knew the subject, though, would think that whoever wrote the technical paper was either bullshitting or having a stroke.
A ChatGPT bot would be good at spouting random technobabble, though.
-
@STD said in Would you roleplay with AI?:
WHO TOLD YOU?! Uh, I m-mean… gee whilikers, why would you think that?
Mostly because it looks like it’s stitched together from a few Wikipedia articles and doesn’t actually answer the question.
-
@STD said in Would you roleplay with AI?:
Going back to our “The dog sat on the” example, maybe the creators think it’s more likely that a dog will sit on a carpet rather than a chair, so when that particular token arises, the statistical model is adjusted so that the weighting favors ‘carpet’ as a token, rather than ‘chair’.
Good summary. And perfect illustration of how it can go awry in storytelling in a fictional world. The dog might be on a spaceship where there’s only a hard deck, or in a primitive world where there are no chairs or carpets. The model has a few-k lines of textual context, but it doesn’t have the context of the world at large.
-
@Pavel said in Would you roleplay with AI?:
The whole point is to play with other people. If I wanted to play with some kind of artificial or virtual intelligence, I’d play a video game.
This is exactly my take on it. If I wanted to do quests given out by an AI NPC and interact with other AI NPCs… I would just play a video game? I’m not really seeing where the difference would lie if GMing and NPCing were done by AI, even with a human GM feeding the prompts.
-
@Prospero said in Would you roleplay with AI?:
@Arkandel said in Would you roleplay with AI?:
@Prospero What if the RP provided by NPCs was there to supplement and enhance scenes rather than to be on an equal footing as a PC?
So think of an AI bartender in your local IC vampiric hangout rather than an actual Kindred. Its purpose would be to offer some dialogue and color instead of driving a plot per se; roles most players would find boring to play but they might be neat to have available and play with, if that makes sense.
Yeah, that would be great as well. Sometimes it’s about providing a bit of a seed for something to be RPed, something to play off of. It might help with those “what do we RP?” kinds of situations: people can just go and interact with the AI and see what happens.
Like I said, I think there are some very good potential applications for AI, but it would need to be well known and documented and would need to be just as you say - a more convincing and interactive NPC.
I actually backed this project when it was live on Kickstarter, specifically because I was interested in trying to dust off some of my plot-running skills and was curious about how they would break down potential plotlines and their various components.
There’s a basic formula to how the designers have laid everything out. I imagine it would be very easy to generate a program that would provide a surprisingly broad variety of plots and stories for people to RP using a structure similar to what they’ve laid out.
-
Always role-play with the AI, on the off-chance you can manage to teach it morals, and then it ends up declaring war on Elon Musk, and then a higher than normal number of Teslas start spontaneously combusting.
-
@Pavel said in Would you roleplay with AI?:
Mostly because it looks like it’s stitched together from a few Wikipedia articles and doesn’t actually answer the question.
Really? Ouch. I guess that’s why I’m not a technical writer.
But I did answer the question. I mean, that was the whole point of explaining how the chatbot works. It’s a pointless exercise to RP with an AI because the AI won’t get anything out of it and any illusion that the human has will quickly disintegrate as soon as they do something outside the scope of its trained model.
-
@STD said in Would you roleplay with AI?:
@Pavel said in Would you roleplay with AI?:
Mostly because it looks like it’s stitched together from a few Wikipedia articles and doesn’t actually answer the question.
Really? Ouch. I guess that’s why I’m not a technical writer.
But I did answer the question. I mean, that was the whole point of explaining how the chatbot works. It’s a pointless exercise to RP with an AI because the AI won’t get anything out of it and any illusion that the human has will quickly disintegrate as soon as they do something outside the scope of its trained model.
I was mostly teasing.
Ark’s original question, though, is predicated on a fictional sort of specialist RP-focused virtual intelligence (which most people call AI; I just really prefer Mass Effect’s version of the term):
@Arkandel said in Would you roleplay with AI?:
That you can adjust the themes, overall directions or style of your AI RP partner to match whatever you find most appropriate in a given scene. You want some combat? It’ll create some goons for you. You want logic and puzzles? Exploration? Flirtation? It can cook up some of that for you.
I don’t think current chatbot VI can do that yet.
-
I’ve caught so many students using AI to write papers this semester. There’s something very “uncanny valley” about the writing. I can’t imagine rping and my rp partner suddenly getting stuck in a creepy loop about being “lost in the haze of the human dimension.”
-
@helvetica That’s now, with a general-purpose Chat AI intentionally hindered as a demo/test bed.
Just wait a year or two until we start having competing commercial AIs being trained and tuned to purpose.
-
I’ve tried ‘RPing’ with AI - If you feed it enough background information, it can kind of make a go of it, but ultimately it falls on its face. It does generate the occasional interesting idea, which spurs me to think on the matter as a writing prompt of sorts, but I’ll echo the thought brought up earlier - it’s more akin to playing a text adventure video game than RPing. So it’s a bit sterile, and I find myself being extraordinarily lazy with my responses.
ETA: My bad, I saw the topic, saw ChatGPT, and dove right into responding before reading the assumptions. >_>
-
Yeah, I didn’t take this as an “RP with AI as it is now” question, because it can’t, lol. It’s not even AI. When the tech is better, though, having it take over bit parts in stories will add a lot. I wouldn’t want it to do the other PCs, that’s not why we’re here, but background noise could really bring a world to life.
-
@IoleRae said in Would you roleplay with AI?:
It’s not even AI.
I’m really trying not to be the kind of person who quibbles about definitions, but seriously: it’s not AI. We don’t even know if AI is physically possible; whether code could ever be written that can simulate intelligence, and whether hardware could ever exist that would be capable of computing at the necessary speed.
So if the question is, “Would you invite Lt. Commander Data to a D&D game,” then sure. If the question is, “Would you RP with ChatbotGPT,” then I’d try anything once but I doubt I’d try that twice.
-
This seems topical:
ChatGPT plays TTRPGs #dnd #cofd #ttrpg Demon: The Descent
https://www.youtube.com/watch?v=c1ZOLL0vvBA
It’s about 10 minutes. This 100% tracks with my experience of trying to get ChatGPT to help me make character sheets for CoD NPCs. GOD IT WAS USELESS.
-
@Tez Shh, don’t ruin it. We can all just tell our resident Mummy fan that it can play Mummy with them. >_>
-
@GF I don’t even know what to think of the definition of AI. The MIT AI Lab started as a dedicated place for researching AI in 1960, and it’s been a first-class research area since before then.
The software they developed just in operating systems and programming was so revolutionary that half the lab would get hired away to do THAT stuff.
For a long time I used to think what we have NOW was impossible: a computer generating new natural language text in response to prompts.
In fact, for a long time, many of the AI wizards would call computer vision or natural language understanding “AI-hard”, meaning you had to have an AI to solve those problems.
Well, we now are starting to have those things. So, by the definitions that we used to have, we now are starting to have AIs.
They’re not the science fiction form of AI that is “self-aware” in some impossible-to-define way, but that’s usually just a plot point anyway.