AI PBs
-
-
If you need more proof that generative AI is super bad, take some advice from one of its initial investors:
https://futurism.com/openai-investor-chatgpt-mental-health
IT WILL DRIVE YOU INSANE.
-
@MisterBoring Technological Darwinism.
-
So AI isn’t so much coming for my job as it is creating more work for my colleagues. I see.
-
@MisterBoring said in AI PBs:
Are there any other options that might represent a truly ethical source of PB art?
My tabletop players have made HeroForge minis of their PCs. They haven't actually downloaded them or had them 3D printed or anything, just taken screenshots of the miniature online. So it's free, and I don't think anybody minds. Mind you, they do sometimes buy stuff from HeroForge, too.
-
@Gashlycrumb said in AI PBs:
@MisterBoring said in AI PBs:
Are there any other options that might represent a truly ethical source of PB art?
My tabletop players have made HeroForge minis of their PCs. They haven't actually downloaded them or had them 3D printed or anything, just taken screenshots of the miniature online. So it's free, and I don't think anybody minds. Mind you, they do sometimes buy stuff from HeroForge, too.
I have also seen people use things like BG3 and Cyberpunk’s character creators in a similar fashion for VTT.
(And it’s becoming more common for games to release character creators for free in advance of the game’s release.)
-
I’ve actually been looking at using Unreal Engine’s Metahuman Creator to make PBs, but my abilities in that engine are… awful.
-
So AI isn’t so much coming for my job as it is creating more work for my colleagues. I see.
Here’s another article about it more generally. I find it kind of funny but also deeply sad, considering that these are probably just… what, undiagnosed narcissists who have badly needed intervention for years? And now they’ve got Fancy Autocorrect endlessly validating them into psychosis.
-
@Wizz There’s a dark part of my psyche that hopes it leads one of them to this:
-
@Wizz Yeah, I was being a bit flippant, but some colleagues of mine have already reported (anecdotally, their study is as yet unpublished) an increase in cases of delusion being “fed” by hallucinating LLMs that supposedly confirm the patients’ unreal beliefs.
Alas, most research I’ve seen thus far has been on clinical applications of LLMs rather than their clinical impact.