This is messed up tbh. Using AI to undress people, especially kids, shouldn’t even be technically possible, let alone this easy.
It’s technically possible because AI doesn’t exist. The LLMs we do have exist, and they have no idea what they’re doing.
It’s a database that can parse human language and put pixels together from a request. It has no concept of child pornography; it’s just putting symbols together in patterns it learned before that happen to form a child pornography picture.
smh, back in my day we just cut out pictures of the faces of women we wanted to see naked and glued them on top of (insert goon magazine of choice)
AI has existed since 1956
Not AI as most people think of it; I guess I should have cleared that up.
AI as we currently have it is little more than a specialized database
This is a lot of words to basically say the developers didn’t bother to block illegal content. It doesn’t need to ‘understand’ morality for the humans running it to be responsible for what it produces.
Yeah, how hard is it to block certain keywords from being added to the prompt?
We’ve had lists like that since the ’90s. Hardly new technology. Even prevents prompt hacking if you’re clever about it.
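For what it’s worth, here’s roughly what such a filter looks like: a minimal sketch in Python with placeholder terms, not anyone’s actual moderation code. The normalization step and the way it sits in front of the prompt pipeline are assumptions, and as the reply below notes, a bare blocklist only catches the obvious cases.

```python
# Minimal sketch of a keyword blocklist for prompt filtering (placeholder terms only).
# The normalization rules and the pipeline hook are assumptions for illustration;
# real moderation stacks combine lists like this with trained classifiers and review.
import re
import unicodedata

# Hypothetical entries; a real deployment maintains and audits this list.
BLOCKED_TERMS = {"blockedterm", "anotherblockedterm"}

def normalize(text: str) -> str:
    """Lowercase, strip accents, and drop spacing/punctuation so trivial
    obfuscation like 'b.l.o.c.k.e.d  t e r m' still matches the list."""
    text = unicodedata.normalize("NFKD", text).encode("ascii", "ignore").decode()
    return re.sub(r"[^a-z0-9]", "", text.lower())

def is_blocked(prompt: str) -> bool:
    """Return True if any blocklisted term appears in the normalized prompt."""
    normalized = normalize(prompt)
    return any(term in normalized for term in BLOCKED_TERMS)

# Usage: reject the request before it ever reaches the image model.
if is_blocked("some user prompt"):
    raise ValueError("Prompt rejected by content filter")
```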
Neither of you is wrong. LLMs are wild, uncaged animals. You’re asking why we didn’t make a cage, and they’re saying we don’t even know how to make one yet.
So, why are we letting the dangerous feral beast roam around unchecked?
Because being irresponsible is financially rewarding. There’s no downside. Just golden parachutes
We as a society have failed to implement those consequences. When the government refused, we should have taken up the mantle ourselves. It should be a mark of great virtue to have the head of a CEO mounted over your fireplace.
Okay, I’ll take Zuckerberg over the TV if I can place used dildos in his mouth from time to time. Elon, on the other hand, might frighten the cat.
Eh, no?
It’s really, REALLY hard to know what content is, and to identify actual child porn even remotely accurately, even with AI
I feel like our relationship to it is also quite messed up.
AI doesn’t actually undress people; it just draws a naked body. It’s an artistic representation, not an X-ray. You’re not getting actual nudes in this process, and the AI has no clue what the person looks like naked.
Now, such images can be used to blackmail people because, again, our culture hasn’t quite caught up with the fact that any nude image can absolutely be an AI-generated fake. When it does, however, I fully expect creators of such things to be seen as odd creeps spreading their fantasies around, and any nude imagery to be seen as fake by default.
It’s not an artistic representation; it’s worse. It’s algorithmic, and to that extent it actually has a pretty good idea of what a person looks like naked based on their picture. That’s why it’s so disturbing.
Calling it an invasion of privacy is a stretch, the same way calling copyright infringement “theft” is.
Yeah, they probably fed it a bunch of legitimate on/off content, as well as stuff from people who used to make “nudes” from celebrity photos with sheer or skimpy outfits as a creepy hobby.
Also, CSAM in training data definitely is a thing
Honestly, I’d love to see more research on how AI CSAM consumption affects consumption of real CSAM and rates of sexual abuse.
Because if it does reduce them, it might make sense to intentionally use datasets already involved in previous police investigations as training data. But only if there’s a clear reduction effect with AI materials.
(Police have already used some materials, with victims’ consent, to crack down on CSAM-sharing platforms in the past.)
The idea is that to generate CSAM, harm was done to obtain the training data. That’s why it’s bad.
That would be true if children were abused specifically to obtain the training data. But what I’m talking about is using data that already exists, taken from police investigations and other sources. Of course, it also requires the victims’ consent (once they are old enough), as not everyone will agree to have materials of their abuse proliferate in any way.
Police have already used CSAM, with victims’ consent, to better impersonate CSAM platform admins in investigative operations, leading to arrests of more child abusers and of those sharing the materials around. While controversial, this came as a net benefit, as it reduced the number of avenues for CSAM sharing and the number of people able to do so.
The case with AI is milder, as it requires minimal human interaction, so no one would need to re-watch the materials as long as the victims are already identified. It’s enough for the police to contact victims, get their agreement, and feed the data into the AI without releasing the source. With enough data, AI could improve image and video generation, driving more viewers away from real CSAM and reducing rates of abuse.
That is, if it works this way. There’s a glaring research hole in this area, and I believe it is paramount to figure out whether it helps. Then we could decide whether to include already-produced CSAM in the data, or whether adult data is sufficient to make it good enough for the intended audience to make the switch.
You’re going to tell me that there’s no corporation out there that would pay to improve its model with fresh data without asking questions about where that data came from?
No.
Why though? If it does reduce consumption of real CSAM and/or real life child abuse (which is an “if”, as the stigma around the topic greatly hinders research), it’s a net win.
Or is it simply a matter of spite?
Pedophiles don’t choose to be attracted to children, and many have trouble keeping everything at bay. Traditionally, those of them looking for the least harmful release went for real CSAM, but that is obviously extremely harmful in its own right - just a bit less so than going out and raping someone. Now that AI materials are appearing, they may offer the safest of the highly graphic outlets we know of, with the least harm done to children. Without them, many pedophiles will revert to traditional CSAM, increasing the number of victims needed to cover the demand.
As with many other things, the best we can hope for here is harm reduction. Hardline policies do not seem to be effective enough, as people continuously find ways to propagate CSAM, and pedophiles continuously find ways to access it and leave no trace. So, we need to think of ways to give them something that will make them choose AI over real materials. This means making AI better, more realistic, and at the same time more diverse. Not for their enjoyment, but to make them switch to something better and safer than what they currently use.
I know it’s a very uncomfortable kind of discussion, but we don’t have a magic pill to eliminate it all, and so must act reasonably to prevent what we can prevent.
Because with harm reduction as the goal, the solution is never “give them more of the harmful thing.”
I’ll compare it to the problems of drug abuse. You don’t help someone with an addiction by giving them more drugs; you don’t help them by throwing them in jail just for having an addiction; you help them by making it safe and easy to get treatment for the addiction.
Look at what Portugal did in the early 2000s to help mitigate the problems associated with drug use, treating it as a health crisis rather than a criminal one.
You don’t arrest someone for being addicted to meth, you arrest them for stabbing someone and stealing their wallet to buy more meth; you don’t arrest someone just for being a pedophile, you arrest them for abusing children.
“This means making AI better, more realistic, and at the same time more diverse.”
No, it most certainly does not. AI is already being used to generate explicit images of actual children. Making it better at that task is the opposite of harm reduction; it makes creating new victims easier than ever.
Acting reasonably to prevent what we can prevent means shutting down the CSAM-generating bot, not optimizing and improving it.
Or we could like…not
The images already exist, though, and if they can be used to prevent more real children from being abused…
It’s definitely a tricky moral dilemma, like using the results of Unit 731 to improve our treatment of hypothermia.
They do exist, and that can’t be undone, but the people depicted in them have not consented to being used as training data. I get the ends; I just don’t think that makes the means ethically okay. And maybe the ends aren’t okay either, to be fair, at least not without research on the subject, however one might even conduct that.
Idk, calling it ‘art’ feels like a reach. At the end of the day, it’s using someone’s real face for stuff they never agreed to. Fake or not, that’s still a massive violation of privacy.