Sexually explicit AI-generated images of Taylor Swift have been circulating on X (formerly Twitter) over the last day in the latest example of the proliferation of AI-generated fake pornography and the challenge of stopping it from spreading.
X’s policies regarding synthetic and manipulated media and nonconsensual nudity both explicitly ban this kind of content from being hosted on the platform.
The obvious solution on X’s side is to ID everyone who wants to post anything. And remember that the obvious solution doesn’t have to be the best solution, a good solution, or even a real solution at all.
I am shocked! Shocked, I say!
Too bad I’m not on Twitter anymore. Otherwise, I would check some of these out.
I’m against deepfaking others without their consent, but all this coverage has me wondering what the big deal is. Things like this have always existed; what’s the difference this time?
They’re orders of magnitude more real looking.
This is why you never feed the trolls
Wow, this is going to be interesting on multiple fronts, especially for me.
First, I’m a huge swiftie - and Taylor is probably not going to take this lightly. Who she’s going to target will be a more interesting question.
Second, as a nerd who has dabbled with generated art - thank you, trolls, for ruining it for all of us. This is just going to beg for regulations that are going to ruin the generative AI world - as if we didn’t have enough regulation barreling towards the area with the copyright issues.
Third, as someone who hates Musk - I hope everything focuses on him and the platform formerly known as Twitter.
It doesn’t matter. Sophisticated models are open source and have already been forked and archived beyond all conceivable hope of regulation. There’s no going back.
We’ll just see about that.
Are you going to somehow reach into my personal computer and remove the software and models from it?
Could be. My tines are ever dangling.
Neuromorphic hardware is coming to some future-gen phones to allow training of custom sophisticated models.
Indeed we’ll see… the rest of the iceberg.
This is just going to beg for regulations that are going to ruin the generative AI world
Awesome.
I have an honest question and would like to hear your (and others, of course) opinion:
I get the anger at the models that exist today. DALL-E, Midjourney, and others were trained on millions of images scraped without consent. That itself is legally ambiguous, and it will be interesting to see how courts rule on it (who am I kidding, they’ll go with the corporations). More importantly, though, some of that data (and increasingly more, as the controversy reached the mainstream) was explicitly disallowed by its author from being used as training data. While I don’t think stealing is the right term here, it is without question unethical and should not be tolerated. While I don’t feel as strongly about this as many others do, maybe because I’m not reliant on earning money from my art, I fully agree that this is scummy and should be outlawed.
What I don’t understand is how many people condemn all of generative AI. For me the issue seems to be one of consent and compensation, and ultimately of capitalism.
Would you be okay with generative AI whose training data was vetted to be acquired consensually?
What I don’t understand is how many people condemn all of generative AI. For me the issue seems to be one of consent and compensation, and ultimately of capitalism.
Would you be okay with generative AI whose training data was vetted to be acquired consensually?
Not if it was used to undercut human artists’ livelihoods.
Hypothetical future where everybody gets UBI and/or AI becomes sentient and able to unionize, maybe we look back at this again.
I don’t think AI has a soul, but there’s no reason it couldn’t be given one.
Undercutting artists’ livelihoods is definitely a problem that needs to be addressed. I honestly don’t think UBI goes far enough, as it’s just a band-aid on the festering tumor of capitalism (but that’s a discussion for another day). But can’t the same be said about numerous other fields? AI can perform many tasks throughout all fields of work. At the moment it is still worse than an expert in most of these, but it’s a matter of when, not if, it surpasses that. Engineers, programmers, journalists, accountants - I can’t think of any job that is not en route to being streamlined or automated by AI, reducing the need for humans and putting people out of work.
Artists have it worse in the sense that they are often self-employed, which makes them more vulnerable to exploitation and poverty. But isn’t the problem much larger than that?
This whole debate somewhat reminds me of the Swing Riots. They were often portrayed as anti-technology or backwards, when in actuality the reason for the revolts wasn’t that machines existed, but that they were used to undercut and exploit workers.
I’m not trying to argue that any of what’s happening now is good, just to clarify again. The current “AI revolution” is rotten through and through. But AI is (for now - the consciousness question is super interesting, but not all that relevant at the moment) just a tool. It irks me that so much righteous anger is projected at AI, instead of at the people using it to exploit others and maximize their profits, and the system that gives them the power to do so. Capitalists don’t care if it’s an AI, sweatshop workers overseas, or exploited workers competing for jobs domestically. They’ll go with whatever earns them more money. We should be angry at the cause, not the symptoms.
I’m curious what you mean by soul here, if you’re using it in a metaphorical sense or the religious sense
if you’re using it in a metaphorical sense or the religious sense
There’s a difference?
Well I’ve never heard of a religious person claiming AI could have a soul in the religious sense, and “soul” has other meanings than the religiously literal one, so yes?
Well, how many and what different sorts of religious people have you come across?
People hear “religious” and seem to equate it with “Abrahamic malarkey it isn’t couth to call folks on.” “Religiously literal” seems a contradiction in terms as well. There is truth, and there are ways to understand and to convey that truth.
My initial position was that AI art would be exciting when more carefully curated training data is used. … But after some talking with friends, I think we’re living in a world that has minimal respect for copyright already, except when a corporation has a problem with it and wants to bring down the hammer of the law.
It does hurt, and it’s easy to be emotional about artists’ livelihoods being threatened by AI. They aren’t the only laborers threatened by job loss to automation, but this one hurts the most.
So now it’s just up to artists to make interesting art with AI, and to adapt to an environment that has automated art tools.
omg Franzia haii :3
with how easy it is to run these models by now, the technology is certainly here to stay, and people will need to adapt, for sure. It only really makes sense to discuss AI in the broader context of capitalism, imo
I don’t have a problem with training on copyrighted content provided 1) a person could access that content and use it as the basis of their own art and 2) the derived work would also not infringe on copyright. In other words, if the training data is available for a person to learn from, and if a person could make the same content an AI would and be allowed to, then AI should be allowed to do the same. AI should not (as an example) be allowed to simply reproduce a bit-for-bit copy of its training data (provided it wasn’t something trivial that would not be protected under copyright anyway). The same is true for a person.

Now, this leaves some protections in place, such as: if a person made content and released it to a private audience who are not permitted to redistribute it, then an AI would only be allowed to train on it if it obtained that content with permission in the first place, just like a person. Obtaining it through a third party would not be allowed, as that third party did not have permission to redistribute. This means that an AI should not be allowed to use a work unless it at minimum had a licence to view that work. I don’t think you should be able to restrict your work from being used as training data beyond disallowing viewing entirely, though.
I’m open to arguments against this though. My general concern is copyright already allows for substantial restrictions on how you use a work that seem unfair, such as Microsoft disallowing the use of Windows Home and Pro on headless machines/as servers.
With all this said, I think we need to be ready to support those who lose their jobs from this. Losing your job should never be a game-over scenario (loss of housing, medical coverage, home loans, potentially car loans, provided you didn’t buy something like a mansion or luxury car).
Is that hatred, or fear, that I hear in this comment?
Is that hatred, or fear, that I hear in this comment?
That’s “suppressing theft masquerading as art is awesome” you hear in that comment.
Ah, it was the third option, ignorance.
I just wish my printer could actually print a car. 200mm bed is a little small
Break it down into chunks and assemble it like Lego.
Now you’re stealing from LEGO! 🙀
Ah, it was the third option, ignorance.
Oh, I’m not at all ignorant of how horrible generative “art” is, but I appreciate you checking on me.
If it’s horrible and it’s also “masquerading” as human art, what does that say about human art?
Misunderstanding doesn’t make the comment into the type of gotcha you think it is
Are you mad at people who can draw or something?
This is just going to beg for regulations that are going to ruin the generative AI world
One can only hope! Fingers crossed!!!
Just look at Facebook. Yesterday I was spammed by sites with AI fakes of Scarlett Johansson - reported them all. This morning it was Billie Eilish with biiiig boobs in suggestive positions - reported. Now I’m being bombarded by obvious Alexandra Daddario fakes. It’s getting ridiculous.
I haven’t seen any of this, and Google knows I’m a big old perv.
Have you guys considered
Uhhhh
Not being on Facebook and Twitter?
It’s not that at all. I keep tabs on several far-flung friends and relatives on FB. Zero spam. TBF, I make it a point to click on ads for things I don’t need but don’t mind seeing (rockets, 3D printers, vocal jazz stuff). Of course, I’m on a single IPv4 address with my whole household, so if I search for hiking shoes, everyone in the house gets FB ads for hiking shoes. I got a bunch of ads for merino wool outerwear in mid-December. My wife was kind enough to get me several base layers for Christmas. There is no good and bad, just poor internet management and hygiene (IMHO).
I don’t mind ads and suggested pages that are vaguely related to my interests, but aside from obscene manga and obvious fake baits for horny men (seriously I watch my porn in private tabs), I’m not interested in groups as exotic as “car spotting Philippines” or “my dream mud house in Congo”
So - honest advice. I remember magazines - some with more ads than articles. You just flip past them. It’s different now because websites know your scrolling rates and FB wants you to engage. It’s why I actually click through a couple of ads every so often. Merrill? Great shoes. Osprey? Yeah, nice backpacks. Anycubic? I’ll probably want a new resin printer some day. Sure, with 2-3 clicks I can - and do - switch to the chronological “friends” feed that is exclusively friend-posted content with some paid ads (not engagement content) to pay the bills.
As for private browsing, I hear you. But, also, your IP is part of your online fingerprint. You don’t need cookies or tracking pixels from previous sessions active for FB to know - through the aggregation data they buy (possibly even from your Internet provider) - what you’re looking at.
[Disclaimer - this next bit is anecdotal, no data to support the following theory.] I had a friend who suddenly was getting a ton of MAGA and alien conspiracy ads on his FB page. He doesn’t track his outgoing IP, but I suspect that he was just re-assigned an outgoing IP that had previously been used by someone else (his locality is very red, politically, though he is not). I know I’ve had my IPv4 for at least 7 months. It’s one reason that my wife, daughter and I all get intertwined ads on what we search.
To attempt to get around this, one option is a VPN. Add to that a separate private browser (it’s how I did my online Christmas shopping, and it’s kind of a pain). You’re still in danger of machine fingerprinting, but it’s usually too much hassle for just marketing to wind its way back to you.
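A quick way to see the IP side of that fingerprint for yourself: a minimal sketch, assuming Python 3 and a public “what’s my IP” echo service (api.ipify.org is used here as one example; any equivalent service works).

```python
import urllib.request

# Ask a public echo service which address our traffic appears to come from.
# api.ipify.org is one of several "what's my IP" services; swap in any other.
with urllib.request.urlopen("https://api.ipify.org") as resp:
    print("Sites currently see this household as:", resp.read().decode())

# Run it again with the VPN up: if the address changes, ad networks keying on
# IP-level aggregation now see the VPN exit instead of your home IP.
```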
That damn algorithm. You send a dick pic to one celebrity and you’re being bombarded for life.
Who is on X, though? Sigh.
It’s the smallest major social network, but there are still 300-million-ish people on it.
Still literally millions and millions of users who don’t care about the things we care about
A surprisingly high number of leftists still use it as well.
Can’t you read? Trolls who post explicit AI images of Taylor Swift!
Cue the Sunny “That’s horrible! But where!?” Meme
ye olde chan
X, apparently.
Xtwitter giving hamster and videos some competition
Back in my day we just looked at photoshopped pictures of celebrities like respectable men!
so in twenty years this comment will be “we only ai generated like respectable men” under an article about the formerly TwitterXFriendcorpLinkedIn For Friends headquarters being invaded by naked Taylor Swift robots
ngl holodeck porn will ruin lives in the future
I’d be surprised if vr hasn’t now.
This is actually an excellent way to trigger faster regulation of fakes. I applaud this.
You can’t regulate something that takes desktop levels of power to make. What are you going to do? Arrest people in China, Russia, NK, etc.? Societal change is needed, not regulation.
Societal change is needed, not regulation.
I agree on the regulation, but I don’t think that society is likely to change. Are entertainers going to stop making use of sex appeal?
We could just…not lose our shit if we see slightly too much of someone’s body.
An extra quarter inch of titty just isn’t that big of a deal. We literally all have nipples.
Block their IPs. Keeps happening? Block their subnet. Of course, taking this approach, we may end up blocking all of Russia, China, NK, etc.
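For the curious, escalating from single addresses to whole subnets looks roughly like this - a minimal sketch using Python’s standard ipaddress module, with reserved documentation ranges standing in for the hypothetical offenders.

```python
import ipaddress

# Hypothetical deny list: single offending addresses first, escalating to the
# whole subnet on repeat offenses. The ranges are documentation examples only.
blocked_networks = [
    ipaddress.ip_network("203.0.113.42/32"),   # one offending IP
    ipaddress.ip_network("198.51.100.0/24"),   # escalated to its whole subnet
]

def is_blocked(addr: str) -> bool:
    """Return True if the address falls inside any blocked network."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in blocked_networks)

print(is_blocked("198.51.100.7"))  # True  - caught by the subnet block
print(is_blocked("192.0.2.1"))     # False - not on the list
```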
Nothing of value will be lost.
Sharks have flooded Shark Infested Waters with shark asshole stink, but this time the asshole stink is AI-generated and Taylor Swift has a billion dollars for lawyers.
@tardigrada so now Twitter is called XXX?😅