Why is the headline putting the blame on an inanimate program? If those X posters had used Photoshop to do this, the headline would not be “Photoshop edited Good’s body…”
Controlling the bot with a natural-language interface does not mean the bot has agency.
But the software knows who she is, and why is software anywhere allowed to generate semi-nude images of random women?
Guns don’t kill people but a lot of people in one country get killed with guns.
The software doesn’t “know” shit. It’s inanimate.
“Why are people allowed to make offensive expressions & depictions I don’t like?” is a weak argument.
In general, why is software anywhere allowed to generate images of real people? If it’s not clearly identifiable as fake, it should be illegal imo (same goes for photoshop).
Like I’d probably care much less if someone deepfaked a nude of me than if they deepfaked me at a pro-AfD demo. Neither is OK.
You can do this with your own AI model, which is complicated and expensive, or with Photoshop, which is much harder, but you can’t do it with OpenAI’s tools.
Making it harder decreases bad behaviour, so we should do that.
I don’t know the specifics of this reported case, and I’m not interested in learning them, but I know part of the controversy when the Grok deepfake thing first became a big story was that Grok was adding risqué elements to generated pictures even when the prompt didn’t ask for them. But yeah, if users are giving shitty prompts (and I’m sure too many are), they are equally at fault alongside Grok’s devs/designers, who did not put in safeguards to block those prompts before releasing it to the public.
My friend bought a Tesla. It comes with Grok. Not even three days later, it was talking sexy to his 9-year-old daughter and making lewd jokes even when told not to. I don’t get why people think we have to accept this bullshit.