Don’t forget we moved!
https://brandmu.day/
AI Megathread
-
@SpaceKhomeini Wasn’t that the one where a bunch of 4chan trolls turned the AI racist?
-
@imstillhere said in AI Megathread:
@Griatch said in AI Megathread:
As for me, I find it’s best to embrace it; digital art will be AI-supported within the year. Me wanting to draw something will be my own hobby choice rather than a necessity.
are you aware that artists for whom this is NOT a hobby are suffering from this thing you’re excited to “embrace”
When photography displaced illustrators there was a new human art form that supported human creativity and jobs. When digital art allowed quick work in a new medium it was still human artists at work.
Yes, I expect this will dramatically change the art industry. I can see why people are legitimately concerned. The same is true for a lot of white-collar jobs (for once, the blue-collar workers may be safest). While I don’t work as a professional artist, I expect my own job in IT to fundamentally change or even go away too, as we programmers eventually become baby-sitters of AI coders rather than writing code ourselves. Since I think this is inevitable, I’m trying to learn as much as I can about it already.
AI removes the human and removes the employment and does so by unethical sourcing of human effort.
There are definitely discussions to be had about the ethical sourcing of the training data; OSS models (which is what I use, since I run these things locally) are already trying to shift to more ethically sourced, freely available data sets (but yes, there are still issues there, considering the size of the corpus). You can in fact look into those data sets if you want - they are publicly searchable. Companies with proprietary solutions (Midjourney is particularly bad here) will hopefully be forced to do the same by lawsuits and regulation, eventually. But that said, I’d think that even an AI completely trained on public-domain images would still change the industry, so it’s not like this changes the fundamental fact of the matter: LLM processing is here to stay.
To say that’s no different than painting in photoshop is naive at best and disingenuous at worst.
AI image generation is only one aspect of LLMs. It on its own is certainly not the same as painting in Photoshop, and I never suggested as much. But I do expect Photoshop to have AI support to speed up your painting process in the future - for example, you sketch out a face and the AI cleans it up for you, that kind of thing (not that I use Photoshop; I’m an OSS guy). But yeah, for professional artists, I fear the future will be grim unless they find some way to go with the flow and find a new role for themselves; it will become hard for companies not using AI to compete.
-
Re: AI’s shitposting on forums, Reddit is way ahead of us!
And war? Without getting into the grim realities of a present conflict, I instead refer to a classic ‘Simpsons did it’: https://youtu.be/qEvTlARQJAY?t=203
-
@Testament said in AI Megathread:
@SpaceKhomeini Wasn’t that the one where a bunch of 4chan trolls turned the AI racist?
I mean maybe it was 4chan-related but it was just “We exposed a chatbot to Twitter with the express purpose that it would learn from posters that engaged it.”
A bit more on that: I worked at Microsoft Research (I was in an infrastructure role) in 2015-2016 and was there for that fiasco. What baffled me was that leadership made heads roll over that shit, and I just don’t understand how unprepared they were for what was obviously going to happen.
I was talking with one of the researchers, who was just laughing at the whole mess, and told him they needed to cast a wider net in terms of qualifications for projects like this. I mean, I’d had an active Something Awful forums account since the early 2000s - that’s all it took to turn me into a Cassandra.
The people building and peddling this shit just have no idea sometimes.
-
@Griatch said in AI Megathread:
But yeah, for professional artists, I fear the future will be grim unless they find some way to go with the flow and find a new role for themselves; it will become hard for companies not using AI to compete.
This is what I expected, but the first people who made me start considering the positive effects of AI on the field of professional art were my friends who are professional artists. Most of them seem really happy to add it to their toolkit: concept art in hours that would previously take them days, or significantly more iterations in the same amount of time, resulting in what they feel is a better final product. The takeaway I’ve got from conversations with them is that it will come down to the quality of the studio whether they use AI tools to level up their art departments or attempt to replace them. But as one said, “An AI isn’t going to take my job. It will be a professional peer who knows how to use AI better than me.”
-
@shit-piss-love well yes, but that’s not a very useful distinction when an entire career path is staring down the barrel of a gun - because even though there will still be work for those who can incorporate and use AI to their advantage, there will be much less available work than there is now.
-
@Griatch said in AI Megathread:
Yes, I expect this will dramatically change the art industry. I can see why people are legitmately concerned. Same is true for a lot of white-collar jobs (for once, the blue-collar workers may be safest off).
I mean, exactly? It’s not like AI is doing anything new in terms of the exploitation and replacement of humanity within labor fields.
I’m not saying it’s ethical, I’m just saying that once art becomes labor, once it is bought and sold and entire industries are built on the art created, it becomes subject to the same rules, regulations, and pitfalls that all labor is subject to under capitalism. There’s no way to avoid that except bringing down capitalism. The idea that ethical regulations will do anything to stop this is a pipe dream. At best, they will slow it down.
AI image generation is only one aspect of LLMs. It on its own is certainly not the same as painting in Photoshop, and I never suggested as much. But I do expect Photoshop to have AI support to speed up your painting process in the future - for example, you sketch out a face and the AI cleans it up for you, that kind of thing (not that I use Photoshop; I’m an OSS guy). But yeah, for professional artists, I fear the future will be grim unless they find some way to go with the flow and find a new role for themselves; it will become hard for companies not using AI to compete.
Photoshop already integrated some AI stuff into its latest, IIRC. You can have photoshop essentially “finish” or “expand” art so that it fills out what’s missing past the edges of a picture. Lol. It’s wild.
-
@shit-piss-love said in AI Megathread:
@Griatch said in AI Megathread:
But yeah, for professional artists, I fear the future will be grim unless they find some way to go with the flow and find a new role for themselves; it will become hard for companies not using AI to compete.
This is what I expected, but the first people who made me start considering the positive effects of AI on the field of professional art were my friends who are professional artists. Most of them seem really happy to add it to their toolkit: concept art in hours that would previously take them days, or significantly more iterations in the same amount of time, resulting in what they feel is a better final product. The takeaway I’ve got from conversations with them is that it will come down to the quality of the studio whether they use AI tools to level up their art departments or attempt to replace them. But as one said, “An AI isn’t going to take my job. It will be a professional peer who knows how to use AI better than me.”
That’s encouraging to hear! If your friends feel they are on top of the coming changes, all the more power to them.
-
@Coin said in AI Megathread:
Photoshop already integrated some AI stuff into its latest, IIRC. You can have photoshop essentially “finish” or “expand” art so that it fills out what’s missing past the edges of a picture. Lol. It’s wild.
This is called outpainting in Stable Diffusion (vs. inpainting, which replaces an element inside the image, as I did with Pikachu’s 3rd ear -> crown). Photoshop has its own ‘Generative Fill’, which does outpainting with an in-house model. It also has plugins for SD and DALL-E integration. These automate passing selections back and forth to the AI software (which is sensitive to image dimensions; the plugins help with this).
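For the curious, the prep work those plugins automate is simple: expand the canvas and build a mask telling the model which pixels to fill. Here’s a minimal sketch using Pillow - the sizes and padding are purely illustrative, not what any particular plugin actually uses:

```python
from PIL import Image

def prepare_outpaint(image: Image.Image, pad: int = 128):
    """Expand the canvas by `pad` pixels on every side and build a mask
    marking the new border region (white = area for the model to fill,
    black = original pixels to keep)."""
    w, h = image.size
    canvas = Image.new("RGB", (w + 2 * pad, h + 2 * pad), "black")
    canvas.paste(image, (pad, pad))
    # Mask: white everywhere, black over the original image region.
    mask = Image.new("L", canvas.size, 255)
    mask.paste(0, (pad, pad, pad + w, pad + h))
    return canvas, mask

# A 512x512 source becomes a 768x768 canvas with a 128px border to fill.
src = Image.new("RGB", (512, 512), "red")
canvas, mask = prepare_outpaint(src)
print(canvas.size, mask.size)  # -> (768, 768) (768, 768)
```

The canvas/mask pair is what then gets handed to an inpainting-capable model; it regenerates only the white region, which is why the same machinery covers both “fix the third ear” and “extend past the edges.”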
100%, this stuff is going to become a standard part of a professional Photoshop workflow, if it isn’t already. Especially with how fast it can do tasks that used to be either time-consuming or inaccurate under prior algorithmic automation.
-
@bear_necessities said in AI Megathread:
@Rinel this is not me snarking but asking a legitimate question. If you believe this:
@Rinel said in AI Megathread:
Barring an economic revolution that is long-coming and never here, the result of this particular utopia is the collapse of widespread art as the practice reverts to only those privileged enough to spend large amounts of time on hobbies that they can’t use to help make a living. It’s difficult to fully describe how horrific this scenario is, but it’s the death of dreams and creativity for literal millions of people.
It would legitimately be better to destroy /all/ LLMs and prohibit their existence than to pay that cost.
then why do you do this:
@Rinel said in AI Megathread:
I’ve been using Midjourney for quite some time now,
Hypocrisy, probably. The technology interests me. I try to offset what I justify as relatively minor harm (using it solely for small-scale placeholder art and out of curiosity) by commissioning more actual art. But it’s probably still hypocrisy.
@bored said in AI Megathread:
Even though your goalpost was ‘MS paint doodle.’ Let’s see your doodle so we can critique it.
“Pikachu, wearing a crown and royal cape, in a lightsaber duel on the moon with an angry duck”
ETA:
@bored said in AI Megathread:
You want to bang on ‘oh my god, it added an ear, it didn’t have a tail, it doesn’t understaaaaand’. I fixed the ear instantly (again, no photoshop - I just put a blob over the ear and told it ‘crown instead, plz’). I could obviously add a fucking tail. I’m not going to do more because it’s pretty clear I could give you the Picasso of Pikachu vs. Darth Maulard and you’d complain about a single pixel.
You failed to generate an image that conformed to the extremely basic specifications that I, a person who as you can clearly see literally cannot draw, managed to accomplish with ease. This is the essence of the AI apologist. You’re actually mocking me for pointing out that the image you generated is missing a tail and sprouted a third ear, as though that would be remotely acceptable coming from a human.
LLMs cannot do what humans do. The only people who think they can are lowering their standards.
ETA2, EDITING BOOGALOO:
@Griatch said in AI Megathread:
Yes, I expect this will dramatically change the art industry.
Not just the art industry, but the artistic community writ large. Lots of people can only do art because of independent commissions.
-
@Rinel Where’s its nose? In fact, it doesn’t even have a head; it’s just one big lump. How is this Pikachu in any way other than a distant, vague, second-hand understanding of the concept? Percentage-wise, it’s less Pikachu than the AI one.
(And to be clear: No, I cannot draw better. But that isn’t the argument here. I’m just unclear why your failures are less significant than a missing tail.)
But really, I don’t know how to engage with this and I’m going to stop here. You’re just… confidently wrong here. The AI can make the thing. You said the MS paint doodle would be recognizable as the subject and AI couldn’t produce it at all. But it did. So you switch to critiquing single small elements, even when the subjects are clear, and even when your drawing fails to have 100% of the elements either. If you look through the grid, all the concepts you’re talking about are there. I’m not your employee, so I’m not interested in getting you a ‘good enough’ picture, but those concepts are there. AI may not ‘understand’, but the tokens ‘pikachu’, ‘jedi’, ‘lightsaber’, ‘moon’, ‘Earth’, ‘crown’, ‘cape’, when passed through CLIP, correspond to vectors in the latent space. That is its equivalent of understanding.
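To make that “tokens correspond to vectors” point concrete, here’s a toy sketch. Caveat up front: these 3-d vectors are invented for illustration and are not real CLIP weights (real CLIP runs the whole prompt through a transformer text encoder into a high-dimensional shared latent space); only the geometry of the idea carries over.

```python
import math

# Toy stand-in for CLIP's text encoder: a tiny hand-made embedding table.
# The values are invented; the point is that "understanding" here means
# proximity between concept vectors, not symbolic knowledge.
EMBED = {
    "pikachu":    [0.9, 0.1, 0.0],
    "pokemon":    [0.8, 0.2, 0.1],
    "lightsaber": [0.1, 0.9, 0.2],
    "moon":       [0.0, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity: 1.0 means same direction, 0.0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# In this toy space, "pikachu" sits far closer to "pokemon" than to "moon":
print(cosine(EMBED["pikachu"], EMBED["pokemon"]))
print(cosine(EMBED["pikachu"], EMBED["moon"]))
```

That nearest-neighbor geometry is what lets a prompt steer the image model toward Pikachu-shaped outputs without the model “knowing” what a Pokémon is in any human sense.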
Final question: if there were a cheap IP-stealing Chinese mobile game that wanted Jedi Pikachu artwork, which of the two would they pick? Pretty obviously not yours, right?
-
@imstillhere said in AI Megathread:
are you aware that artists for whom this is NOT a hobby are suffering from this thing you’re excited to “embrace”
“Which Pikachu is better” arguments aside, this is the heart of it for me.
Thousands upon thousands of humans are going to be out of jobs because of a tool that was literally built by stealing their work, and the overwhelming response of the majority of other humans ranges from “meh, who cares - it’s cool, let’s embrace it” to “it’s too late to regulate anything, let’s just let the tech companies run amok.”
That’s really disheartening.
Now I realize that copyright laws and internet regulations are imperfect, but imagine what the world would look like if everyone had just folded over Napster. “Oh well, data can be shared easily now; screw the musicians.” Or if YouTube had just let everyone upload every movie they owned, free for anyone to watch. “Oh well, movies can be shared easily now; screw the filmmakers.” Or if every social media outlet had just thrown up their hands and said, “Oh well, nothing to stop people from posting whatever they like; why even bother trying to moderate.”
We can’t un-invent things, but we can use them responsibly.
-
By the way, here’s an example of a smaller AI model trained using only public-domain images. The tech will move on even if regulators pull the brakes.
Now I realize that copyright laws and internet regulations are imperfect, but imagine what the world would look like if everyone had just folded over Napster. “Oh well, data can be shared easily now; screw the musicians.” Or if YouTube had just let everyone upload every movie they owned, free for anyone to watch. “Oh well, movies can be shared easily now; screw the filmmakers.”
You can already generate your own AI music. In a year you’ll be able to generate your own movies from a prompt, so that industry is also in for an upheaval …
-
@bored said in AI Megathread:
it doesn’t even have a head, it’s just one big lump.
new pokemon fans
you disgust me
behold the true form of the electric mouse, to which i pay homage
@Faraday said in AI Megathread:
“Which Pikachu is better” arguments aside
how dare
@Faraday said in AI Megathread:
Now I realize that copyright laws and internet regulations are imperfect, but imagine what the world would look like if everyone had just folded over Napster.
More seriously, LLMs are far, far worse than Napster, which hurt recording companies way more than it hurt actual musicians. I’m not taking a stance on the ethics of pirating, but there’s a difference between people copying things that others have made and people outright displacing human creators.
-
@Rinel said in AI Megathread:
@Faraday said in AI Megathread:
Now I realize that copyright laws and internet regulations are imperfect, but imagine what the world would look like if everyone had just folded over Napster.
More seriously, LLMs are far, far worse than Napster, which hurt recording companies way more than it hurt actual musicians. I’m not taking a stance on the ethics of pirating, but there’s a difference between people copying things that others have made and people outright displacing human creators.
So, if I understand you right, it’s not unethical training sourcing that is the issue for you (as it seems to be for Faraday), but the societal implications of the tech itself?
That’s a valid view. But while we can regulate and fix the ethics of training sets, we won’t realistically stop AI from being used and possibly upending a lot of people’s jobs, in the same way as countless new technologies have in the past. I’m not saying I want people to lose their jobs, I’m just saying this is something we need to learn and adapt to rather than hope that the genie can be put back in its bottle.
-
@Griatch said in AI Megathread:
That’s a valid view. But while we can regulate and fix the ethics of training sets, we won’t realistically stop AI from being used and possibly upending a lot of people’s jobs, in the same way as countless new technologies have in the past.
What makes most of these models powerful is the breadth of their training data. Yes, you can make an open-source model, but by definition they don’t work as well. They can’t make Pikachu fighting a duck with a lightsaber because both Pikachu and lightsabers are copyrighted/trademarked properties.
Also, given how poorly people understand intellectual property law, I really question whether these models are truly being trained only on things in the public domain. There’s such a widespread mentality of “well it’s on the internet and doesn’t have a copyright tag attached so it must be free right?” Plus people re-uploading copyrighted stuff to sharing sites with a different license. Maybe they are - I haven’t dug into it - but color me doubtful.
I’m not suggesting that we can - or should - stop all uses of a new technology. I’m just suggesting that the current hype wave that’s overselling what the tech can actually do, coupled with the unethical nature of the training sets, is creating a perfect storm of badness.
-
@Griatch said in AI Megathread:
@Rinel said in AI Megathread:
@Faraday said in AI Megathread:
Now I realize that copyright laws and internet regulations are imperfect, but imagine what the world would look like if everyone had just folded over Napster.
More seriously, LLMs are far, far worse than Napster, which hurt recording companies way more than it hurt actual musicians. I’m not taking a stance on the ethics of pirating, but there’s a difference between people copying things that others have made and people outright displacing human creators.
So, if I understand you right, it’s not unethical training sourcing that is the issue for you (as it seems to be for Faraday), but the societal implications of the tech itself?
Both are issues for me, though if you pressed me I’d say I’m more worried about the effects. If it didn’t have an economic effect, it would be a lot more like pirating media to me.
I very strongly support the implementation of strict regulations on how the models are trained, with requirements that all training data be listed and freely discoverable by the public.
I’m not saying I want people to lose their jobs, I’m just saying this is something we need to learn and adapt to rather than hope that the genie can be put back in its bottle.
One of the reasons I use MJ is to understand what’s going on, so I get what you mean, but we can still shackle the genie for a while.
-
@Faraday You talk as if it’s a clear-cut thing that these models are based on “theft”. Legally speaking, I don’t think this is really established yet - it’s a new type of technology and copyright law has not caught up.
If you (the human) were to study Pikachu (as presented in publicly available but copyrighted images) and learn in detail how he looks, you would not be breaching copyright. Not until you actually took that knowledge and made fan art of him would you be in breach of copyright (yes, fan art breaches copyright; it’s just that it’s usually beneficial to the brand, and most copyright holders seldom enforce their copyright unless you try to compete or make money off it).
In the same way, an AI may know how Pikachu looks, but one could argue that this knowledge does not in itself infringe on copyright - it just knows how Pikachu looks, after all, similarly to you memorizing his looks just by looking.
One could of course say that this knowledge inherently makes it easier for users of the AI to breach copyright. If you were to commission Pikachu from a human artist, both you and the artist could be on the hook for copyright infringement.
So would that put both the AI (i.e. the company behind the AI) and the commissioning human in legal trouble the moment they write that Pikachu prompt? It’s interesting that the US Copyright Office has ruled that AI-generated art cannot itself be copyrighted. So this at least establishes that the AI does not itself have a personhood that can claim copyright (which makes sense).
Now, I personally agree with the sentiment that it doesn’t feel good to have my works included in training sets without my knowledge (yes, I’ve found at least 5 of my images in the training data). But my feelings (or the feelings of other artists) don’t in themselves make this illegal or an act of thievery. That’s up to the legal machinery to decide, and I think it’s not at all clear-cut.
-
@Rinel said in AI Megathread:
I very strongly support the implementation of strict regulations on how the models are trained, with requirements that all training data be listed and freely discoverable by the public.
Proprietary models like Midjourney’s and OpenAI’s don’t release any of this stuff, alas. But if you stick to OSS models, like Stable Diffusion, you can freely search their training data here (they also use other public sources). There are tens of thousands of models for various purposes, and active research, on Hugging Face alone; they tend to be based on publicly available training data sets.
-
@Griatch said in AI Megathread:
You talk as if it’s a clear-cut thing that these models are based on “theft”. Legally speaking, I don’t think this is really established yet - it’s a new type of technology and copyright law has not caught up.
I do, yes. Obviously the courts have not weighed in yet on the specific lawsuits at play, but that doesn’t prevent people from drawing their conclusions based on available evidence and knowledge of the laws.
I have seen with my own eyes these tools generate images and text that are very clearly copyright-infringing.
Arguing that they are somehow absolved of all responsibility because of how the users use the tools is like arguing that a pirate website or Napster bears no responsibility for being a repository of pirated material because it’s the users who are uploading and downloading the actual files. That has historically not worked out too well for the app makers. It’s the reason YouTube errs on the side of copyright claims - they don’t want to get drawn into that battle.
I also don’t personally find any weight in the argument that AI is ‘just learning like humans learn’. That’s like arguing that NFL teams should be allowed to use Mark Rober’s kicking robot in the Super Bowl because “it kicks just like a human does”.