Don’t forget we moved!
https://brandmu.day/
MU Peeves Thread
-
I have given serious thought recently to having ChatGPT or similar fill out chargen apps for me. I just can’t bring myself to do it. I’ll RP until I pass out but write a background? Going on month 4 and no progress.
-
@Babs I’ve definitely used these virtual intelligence tools to come up with concepts or seeds for me to expand on. Not a whole backstory or character, but bits and pieces I can use to write something about - otherwise I fall into the trap of making the same character for the tenth time.
-
I would rather read the worst background in the world than a ChatGPT app and I can’t imagine I’m alone. It fills me with despair.
-
A ChatGPT app can’t even come close to the appreciation I have for the person who apped into a modern-day horror game with a character whose dark backstory is that he killed the dinosaurs.
-
@Third-Eye said in MU Peeves Thread:
I do make use of AI detection tools, both free versions and a paid one I subscribed to after Too Many Of These Incidents For Me, but mostly they’re confirmation for me when something feels REALLY off. There’s tons of AI-generated stuff that’s actually been edited or was just a touch-up on something a human wrote that I don’t and would never notice. I associate LLMs with being over-long and flat yet also weirdly effusive, mainly, but it’s usually not ‘bad’ writing, as such. It’s weird because it’s not ‘written’ at all, it’s word-generated-after-word. ‘Simple’ language that’s also repetitive due to that generation is probably the rhythm of it that twigs me the most. This article mentions it and some other tells.
https://readwrite.com/how-to-tell-if-something-is-written-by-chatgpt/
IDK, I think there’s also a ‘scales falling from your eyes’ quality when you know this stuff is becoming widespread (presuming it bothers you, I guess). Once you actually start to look for it, you start to see it when it’s obvious, and a lot of the time people don’t bother not to make it obvious.
AI detection tools are only really useful when you’re operating in a culture where use of generative AI is considered inappropriate in the first place. That said, there’s an appropriate and an inappropriate use of LLMs in the context of creative writing, and at least in my own view, it has very little to do with how obvious it is. The most inappropriate use is to rely on it to do all of the creative parts for you, without really forming an idea of what it should be writing before it does. This is usually worse than merely obvious; it’s unreadable. What it produces is trash, and it’s not worth the time to try to absorb it.
The two more appropriate uses that I can think of are, one, invoking its hallucinatory impulse to brainstorm with you before you write the actual prose of one of the ideas it gives you, or perhaps something derivative of that. That use case doesn’t require much elaboration, so I’ll move on to the other: expounding upon what you already have. In this case it can function almost like having a personal assistant.
An example of the latter case: I write a monthly TTRPG-and-variety magazine, and we use a system that’s entirely homebrewed. It’s inspired by WoD, Diablo 2, Magic: the Gathering, and several other things, not so much in terms of setting but in terms of game design. One of the things we do when we release a new setting is we write a variety of new “Proficiencies,” which analogize to Skills in CofD/WoD terms. These Proficiencies are specific to the setting we’re in. We want these Proficiencies to be written in a certain format, which is consistent between settings, but have the style of writing change from one setting to the next.
We hand-write short versions of these Proficiencies. We instruct the LLM to write a long-form version of each one in that format. We also instruct it not to use the same phraseology over and over again, relying on its context window of previously generated Proficiencies for clues as to which phraseology and sentence structure to avoid repeating. This works beautifully for us. Most of the Proficiencies come out pretty damn good.
All of this leads to the other two factors in appropriate use of LLMs in the context of creative writing. They are:
Actually paying attention. Generative AI can produce some great stuff and make one way more productive as an author or as an illustrator. However, it can also produce a lot of trash. This means you have to proofread the LLM output, examine the Stable Diffusion dump. It also means you’re going to have to fix the parts of it that are flawed in some way or other. Don’t sit there fiddling with the prompt for 30min when you can just fix the output by hand in 30sec. Edit the sixth finger out with your tablet.
The other factor is honesty. Don’t pretend you aren’t using LLMs or generated illustrations. If you are open about what you’re doing, it’s harder for people to fault you for it. Don’t want to read content that might have been generated by an AI? Don’t buy this product. Not all of what we publish is made by an AI, and basically none of it is made exclusively by an AI, but we’re not going to go through the trouble of labeling which parts are which or to what extent a certain part relied on AI, because the philosophy we’re working with is that it’s the results, not the means, that matter. What we aim to put to print are things worth reading; things worth looking at. And that’s irrespective of how it was made. We explicitly published this point in our April 2023 issue here, with this spread (left and right page):
We also run articles about prompt-writing and scripting LLMs to do things like chargen for you given a ruleset and a concept, where it is guided, step by step, through each part of the process and relies on its context window to consider the choices it already made while making future choices that are consistent with the concept and with the already-chosen options. This technique has proven quite effective. Another one we wrote is on fine-tuning Stable Diffusion based on a particular artist’s body of work. We may run one on fine-tuning local LLMs, but they aren’t very good yet, so we’re holding off on that.
The above issue in particular paid my mortgage that month. Since then, our editing has gotten way better. We’re – and it is a we since I’m not the only person here – of the view that AI-generated content is the future, but that it will also never replace human editorial vision. It’s here to stay, and so are we humans.
-
I’m interested in that magazine-slash-newsletter.
-
@Narrator
Of course using AI tools is expedient. At this point, everyone involved in the conversation recognizes two things: AI tools are expedient to use and they stand to make the people creating and using them a lot of money. No one is arguing these points. I’m glad you’re being transparent about your business practices, but the complaint in this thread is about people who are not being transparent. To use your Whole Foods analogy, you may not care where your eggs come from and that’s fine. Keep buying your 5-dozen egg trays. However, if someone thinks they’re buying the pasture-raised free range eggs and when they open the carton, they are, in fact, the same eggs from the factory farm, that just ain’t right.
-
@Trashcan said in MU Peeves Thread:
@Narrator
Of course using AI tools is expedient. At this point, everyone involved in the conversation recognizes two things: AI tools are expedient to use and they stand to make the people creating and using them a lot of money. No one is arguing these points. I’m glad you’re being transparent about your business practices, but the complaint in this thread is about people who are not being transparent. To use your Whole Foods analogy, you may not care where your eggs come from and that’s fine. Keep buying your 5-dozen egg trays. However, if someone thinks they’re buying the pasture-raised free range eggs and when they open the carton, they are, in fact, the same eggs from the factory farm, that just ain’t right.
Yeah. Well, that’s an issue of false advertising, which is a form of fraud. So there’s definitely a basis for outrage there, or at a minimum irritation, depending on how much it really matters.
If you want a real critique of generative AI, it’s that the out-of-the-box solutions right now aren’t very good. If your goal is to make something cool and interesting, they’ll produce something formulaic to the point of being dull. Quite a bit of effort goes into wrangling and contorting them to produce that interesting thing, which requires editorial vision.
You can still automate away the drudgery of production, though, if you do go through that trouble. That’s for certain. And it’s also definitely not going away no matter how angry people get about it.
@Tez said in MU Peeves Thread:
@Pavel Honestly I’m just impressed. You have to really work at it to get ChatGPT to come up with anything smutty, after aggressively reassuring it that all parties are consenting to the joyous experience.
This is because the entire GPT line of LLMs is designed to be inoffensive. OpenAI’s content moderation philosophy is basically that the produced text must be safe according to an HR or PR executive. This means no graphic violence. It means no sex. It means no -isms or -phobias. And so on. Its output comes off as sanitized, because it IS sanitized. This is great if you’re a corporation that wants to automate job rejection letters that feel somewhat personalized without running afoul of anti-discrimination law, but it’s awful if you’re a novelist trying to make it take the broad strokes of your plot and fill in the blanks, because everything interesting in the human experience is all but guaranteed to piss hall monitor bureaucrats off.
There are LLMs that can be run locally on a consumer-grade GPU, like the downloadable GGUF models. However, in order to use them effectively for anything other than an HR-safe purpose, you need to fine-tune them on the kind of content you want. If you want smut, you’ll need to fine-tune on bodice rippers. If you want violence, you’ll have to fine-tune on military science fiction or horror. If you want -isms or -phobias, you’ll have to train on /pol/ or your uncle’s bowling alley GC. If you want some mix of all of the above, you’ll need to train on all of the above.
There is a similar issue with commercial image generation models like Midjourney and DALL-E, where they refuse to make explicit violence or pornography, and to an extent don’t like making women with certain body types because they’re too titillating. If you’re hoping to make an avatar of a woman with a Christina Hendricks build, that’s an uphill battle. It’s so frustrating that in our circle we’ve taken to calling Midjourney “Midjanny,” since it acts as the fun police.
-
I am happy to concede that AI can be fun and useful in appropriate contexts. This is not one of those contexts. I do not want to rp with AI.
-
@Narrator said in MU Peeves Thread:
@Third-Eye said in MU Peeves Thread:
I do make use of AI detection tools, both free versions and a paid one I subscribed to after Too Many Of These Incidents For Me, but mostly they’re confirmation for me when something feels REALLY off. There’s tons of AI-generated stuff that’s actually been edited or was just a touch-up on something a human wrote that I don’t and would never notice. I associate LLMs with being over-long and flat yet also weirdly effusive, mainly, but it’s usually not ‘bad’ writing, as such. It’s weird because it’s not ‘written’ at all, it’s word-generated-after-word. ‘Simple’ language that’s also repetitive due to that generation is probably the rhythm of it that twigs me the most. This article mentions it and some other tells.
https://readwrite.com/how-to-tell-if-something-is-written-by-chatgpt/
IDK, I think there’s also a ‘scales falling from your eyes’ quality when you know this stuff is becoming widespread (presuming it bothers you, I guess). Once you actually start to look for it, you start to see it when it’s obvious, and a lot of the time people don’t bother not to make it obvious.
AI detection tools are only really useful when you’re operating in a culture where use of generative AI is considered inappropriate in the first place.
Yeah, and I think the point of this specific conversation is that a lot of people want it to be culturally inappropriate in this hobby. There are already games that explicitly disallow the use of generative AI in writing, and so sometimes they have to use tools to help confirm suspicions.
The above issue in particular paid my mortgage that month. Since then, our editing has gotten way better. We’re – and it is a we since I’m not the only person here – of the view that AI-generated content is the future, but that it will also never replace human editorial vision. It’s here to stay, and so are we humans.
So AI can replace writers, just not editors???
@Narrator said in MU Peeves Thread:
@Trashcan said in MU Peeves Thread:
@Narrator
Of course using AI tools is expedient. At this point, everyone involved in the conversation recognizes two things: AI tools are expedient to use and they stand to make the people creating and using them a lot of money. No one is arguing these points. I’m glad you’re being transparent about your business practices, but the complaint in this thread is about people who are not being transparent. To use your Whole Foods analogy, you may not care where your eggs come from and that’s fine. Keep buying your 5-dozen egg trays. However, if someone thinks they’re buying the pasture-raised free range eggs and when they open the carton, they are, in fact, the same eggs from the factory farm, that just ain’t right.
Yeah. Well, that’s an issue of false advertising, which is a form of fraud. So there’s definitely a basis for outrage there, or at a minimum irritation, depending on how much it really matters.
It was pretty much the whole point of this particular conversation thread on this particular board: this is MU Peeves, it’s about people using LLM on a MU, not LLM in general.
If you want a real critique of generative AI, it’s that the out-of-the-box solutions right now aren’t very good. If your goal is to make something cool and interesting, they’ll produce something formulaic to the point of being dull. Quite a bit of effort goes into wrangling and contorting them to produce that interesting thing, which requires editorial vision.
You can still automate away the drudgery of production, though, if you do go through that trouble. That’s for certain.
Is “the drudgery of production” here…writing? Writing words in our writing hobby?
And it’s also definitely not going away no matter how angry people get about it.
ChatGPT doesn’t need you veering off in a conversation just to shill for it.
@Tez said in MU Peeves Thread:
@Pavel Honestly I’m just impressed. You have to really work at it to get ChatGPT to come up with anything smutty, after aggressively reassuring it that all parties are consenting to the joyous experience.
This is because the entire GPT line of LLMs is designed to be inoffensive. OpenAI’s content moderation philosophy is basically that the produced text must be safe according to an HR or PR executive. This means no graphic violence. It means no sex. It means no -isms or -phobias. And so on. Its output comes off as sanitized, because it IS sanitized. This is great if you’re a corporation that wants to automate job rejection letters that feel somewhat personalized without running afoul of anti-discrimination law, but it’s awful if you’re a novelist trying to make it take the broad strokes of your plot and fill in the blanks, because everything interesting in the human experience is all but guaranteed to piss hall monitor bureaucrats off.
There are LLMs that can be run locally on a consumer-grade GPU, like the downloadable GGUF models. However, in order to use them effectively for anything other than an HR-safe purpose, you need to fine-tune them on the kind of content you want. If you want smut, you’ll need to fine-tune on bodice rippers. If you want violence, you’ll have to fine-tune on military science fiction or horror. If you want -isms or -phobias, you’ll have to train on /pol/ or your uncle’s bowling alley GC. If you want some mix of all of the above, you’ll need to train on all of the above.
There is a similar issue with commercial image generation models like Midjourney and DALL-E, where they refuse to make explicit violence or pornography, and to an extent don’t like making women with certain body types because they’re too titillating. If you’re hoping to make an avatar of a woman with a Christina Hendricks build, that’s an uphill battle. It’s so frustrating that in our circle we’ve taken to calling Midjourney “Midjanny,” since it acts as the fun police.
Buddy, it was just one person making a joke, it doesn’t need paragraphs of explanation.
-
Please no LLM proselytizing in this MU forum, jfc. Get it out of here.
-
@Tez said in MU Peeves Thread:
@Tez said in MU Peeves Thread:
I am begging game-runners to stop using ChatGPT.
I AM BEGGING PLAYERS TO STOP USING CHATGPT.
I’LL STOP USING IT WHEN GAMES STOP ASKING FOR DESCS
Disclaimer: If it wasn’t obvious, this is a joke. I use it to get ideas for a desc and then actually write one, as much as I absolutely loathe writing them.
-
@Testament said in MU Peeves Thread:
@Tez said in MU Peeves Thread:
@Tez said in MU Peeves Thread:
I am begging game-runners to stop using ChatGPT.
I AM BEGGING PLAYERS TO STOP USING CHATGPT.
I’LL STOP USING IT WHEN GAMES STOP ASKING FOR DESCS
This is a solved problem if you recycle the same PBs and the same @descs over and over again!
-
Using ChatGPT in online role-playing games can enhance the gaming experience by creating dynamic and responsive non-player characters (NPCs). Unlike pre-scripted NPCs, ChatGPT provides unique and varied responses based on player interactions, making the game world more immersive and engaging.
ChatGPT also assists game masters (GMs) by generating creative content on the fly, such as detailed descriptions, side quests, and resolving unexpected player actions. This support allows GMs to focus more on the overarching narrative and players’ enjoyment, inspiring new ideas and directions for the game.
Moreover, incorporating ChatGPT can make RPGs more accessible by guiding players who may struggle with improvisation or dialogue. This lowers the barrier to entry for new players and encourages a diverse range of participants, fostering a more inclusive and supportive gaming community. Overall, ChatGPT enriches the narrative depth and flexibility of RPGs, creating captivating and dynamic gaming experiences.
-
@watno
I got a LOL out of the joke of what looks like an AI-generated post defending AI. The comedy has many layers.
-
@Testament I will write every desc for you forever if you want
You still have to do the drudgery of editing my filthy human writing tho, and it will be a lot of work
-
@watno Alright fine, I’ll RP with the chat bot and turn it into a Mummy and/or EDGELORD DARKSIDE WRONGFUN-obsessed intelligence so when skynet takes over, things are gonna get EXTRA weird!
-
@Roz said in MU Peeves Thread:
Yeah, and I think the point of this specific conversation is that a lot of people want it to be culturally inappropriate in this hobby. There are already games that explicitly disallow the use of generative AI in writing, and so sometimes they have to use tools to help confirm suspicions.
Those tools are basically useless. They have a success rate of about 60%, which is to say 40% of the time they give either a false positive or a false negative. They’re only marginally better than using a magic 8-ball. You’re wasting GPU cycles by relying on them.
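To put rough numbers on why ~60% accuracy is nearly useless, here’s an illustrative Bayes calculation. The 10% base rate, and treating the 60% figure as both sensitivity and specificity, are assumptions made for the sake of the example, not measured values.

```python
def posterior_ai(base_rate, sensitivity, specificity):
    """P(text is AI-written | detector flags it), via Bayes' rule."""
    flagged_if_ai = sensitivity * base_rate                  # true positives
    flagged_if_human = (1 - specificity) * (1 - base_rate)   # false positives
    return flagged_if_ai / (flagged_if_ai + flagged_if_human)

# A 60%-accurate detector with 1 in 10 apps actually AI-written:
p = posterior_ai(base_rate=0.10, sensitivity=0.60, specificity=0.60)
# p ≈ 0.143, i.e. a flagged app is still human-written about 6 times in 7
```

In other words, under those assumptions a positive result barely moves you past the base rate you started with.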
Also, the “MU*” segment of RPG players, what’s left of it, is notoriously backward in culture. Your culture needs fixing.

I watched MUDs, which had legitimately sophisticated game design, get gradually replaced by MUSHes, which are basically just glorified Telnet chat rooms. It was done because they wanted to get rid of the Game part of Roleplaying Game, especially when it came to game design. Through the 1990s, the conceit was that there was an automated GM that managed the world and did all the rules accounting involved in that. You could treat the rules almost like a strategy game, especially when it came to combat. More advanced MUDs had committees designed to better balance this element of the game and to add new features, to make the game richer. There was the potential to gradually build up a system so sophisticated that it basically was an automated GM, in all respects.

But that was discarded in favor of World of Darkness MUSHes where most of the game is in the character application process and in making sure you never offend the right clique in the OOC foyer, and not in the scenes your character is in.
LLMs have a similar potential to function as an automated GM, although there are some issues with the prose they generate, most notably a lack of something called “perplexity.” The current thinking is that the programmer feeds it all of the facts of the gamestate, all of the game rules, and what the players intend to attempt in that scenario, and the LLM uses its advanced reading comprehension to determine which rules to invoke, which gets returned to the programmer. They then evaluate the outcome of those invoked rules and feed the outcome to the LLM, asking it to write some prose to describe what happens. That workflow has already been implemented in some limited contexts; it just needs work and refinement.
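That two-pass loop (comprehension pass, deterministic resolution, narration pass) can be sketched like so. `llm` and `resolve` are hypothetical stand-ins: the former for any completion function, the latter for the ordinary rules-engine code that handles the dice and bookkeeping.

```python
def gm_turn(gamestate, rules, player_intent, llm, resolve):
    """Two-pass automated-GM turn: the LLM picks which rules apply,
    ordinary code resolves them, then the LLM narrates the outcome."""
    # Pass 1: reading-comprehension step: which rules does this invoke?
    pick_prompt = (
        f"Game state: {gamestate}\n"
        f"Available rules: {rules}\n"
        f"The player intends to: {player_intent}\n"
        "List the names of the rules that apply, comma-separated."
    )
    invoked = [name.strip() for name in llm(pick_prompt).split(",")]
    # Resolution stays in deterministic code: dice, modifiers, bookkeeping.
    outcome = resolve(invoked, gamestate)
    # Pass 2: narration step: turn the mechanical outcome into prose.
    prose = llm(
        f"Outcome of the attempt: {outcome}\n"
        "Describe what happens in two sentences of in-scene prose."
    )
    return outcome, prose

# Demo wiring with stand-ins for the model and the rules engine.
_replies = iter([
    "melee attack, dodge",
    "The blade glances off the raider's pauldron. He staggers back a step.",
])
def _fake_llm(prompt):
    return next(_replies)
def _fake_resolve(invoked, state):
    return {"invoked": invoked, "success": False}

outcome, prose = gm_turn(
    {"pc_hp": 7},
    ["melee attack", "dodge", "grapple"],
    "strike the raider",
    _fake_llm,
    _fake_resolve,
)
```

Keeping resolution outside the model is the design choice that matters: the LLM only reads and narrates, so the rules stay deterministic and auditable.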
And you’re doing what people did with MUDs again, here. Instead of seeing something with all of this potential and going, “How could we improve this? How could we identify its flaws and mitigate or remove them?” you think, “How can we make sure this never gets utilized at all?” It’s not conservative thought, it’s literally regressive thought.
So AI can replace writers, just not editors???
To an extent, yeah. An LLM needs direction and the right context, and there are still flaws in most of these models; most notably, there’s a balancing act between reading comprehension and logic on one hand, and being able to display the kind of high perplexity a human writer does on the other.
Is “the drudgery of production” here…writing? Writing words in our writing hobby?
RPGs entail a lot more than just “writing.” Remember the “game” part of “roleplaying game”? That thing MUSHers gradually excised from MUDs because it might entail a little bit of logic, a little bit of math? That’s pretty key here. “Writing” is one aspect of many in RPGs. There’s also game balance, mechanical flavor, simulation accuracy, ensuring the choices players make are consequential, game depth (players having more than one avenue that could lead to a “win”) over game complexity (having a bunch of rules to memorize), and the list goes on.

Calling RPGs a “writing hobby” really tells me quite a bit about what you think the whole point of this is. It also probably explains the contempt I recall for a certain class of players I met in the 1990s and 2000s, who didn’t have the best English skills but displayed a lot of creativity and cunning in the warfare/combat modes of MUDs back then, and why they were gradually pushed out in the 2000s and were basically gone by 2010. It also explains why what’s left of the MU* sphere has a dearth of good programmers.

As I demonstrated above, this isn’t just a “writing hobby.” It’s also a design hobby and an engineering hobby. It’s a hobby for people who know how things work and want to make things.
You got rid of the grognards and the math geeks and now all you have are people who think RPGs are about purple prose and pronouns.
ChatGPT doesn’t need you veering off in a conversation just to shill for it.
By “ChatGPT” do you mean “LLMs”? They aren’t all interchangeable. But like I said: it’s not going away no matter how mad you get about it.
Buddy, it was just one person making a joke, it doesn’t need paragraphs of explanation.
My guy, to the crowd who thinks that it’s actually a better use of time and resources trying to detect LLMs so you can squelch them than it is to fiddle with them and make them serve your purposes better, it absolutely does need paragraphs of explanation.