• nullroot@lemmy.world · 2 days ago

    It’s not an artistic representation; it’s worse. It’s algorithmic, and to that extent it has a pretty good idea of what a person looks like naked based on their picture. That’s why it’s so disturbing.

    • bookmeat@lemmynsfw.com · 1 day ago

      Calling it an invasion of privacy is a stretch, in the same way that calling copyright infringement theft is a stretch.

    • aesthelete@lemmy.world · 2 days ago

      Yeah, they probably fed it a bunch of legitimate on/off content, as well as stuff from people who used to make “nudes” from celebrity photos with sheer/skimpy outfits as a creepy hobby.

        • Allero@lemmy.today · 2 days ago

          Honestly, I’d love to see more research on how AI CSAM consumption affects consumption of real CSAM and rates of sexual abuse.

          Because if it does reduce them, it might make sense to intentionally use datasets already involved in previous police investigations as training data. But only if there’s a clear reduction effect with AI materials.

          (Police have already used some materials, with victims’ consent, to crack down on CSAM-sharing platforms in the past.)

          • bookmeat@lemmynsfw.com · 1 day ago

            The idea is that to generate CSAM, harm was done to obtain the training data. That’s why it’s bad.

            • Allero@lemmy.today · 24 hours ago

              That would be true if children were abused specifically to obtain the training data. But what I’m talking about is using data that already exists, taken from police investigations and other sources. Of course, it also requires the victims’ consent (once they are old enough), as not everyone will agree to have materials of their abuse proliferate in any way.

              Police have already used CSAM, with victims’ consent, to better impersonate CSAM platform admins in investigative operations, leading to the arrests of more child abusers and of those sharing the materials. While controversial, this was a net benefit, as it reduced the number of avenues for CSAM sharing and the number of people able to do so.

              The case with AI is milder, as it requires minimal human interaction, so no one would need to re-watch the materials as long as the victims are already identified. It would be enough for the police to contact victims, get their agreement, and feed the data into the AI without releasing the source. With enough data, AI could improve image and video generation, driving more viewers away from real CSAM and reducing rates of abuse.

              That is, if it works this way. There’s a glaring research gap in this area, and I believe it is paramount to figure out whether it helps. Then we could decide whether to include already-produced CSAM in the data, or whether adult data alone is sufficient to make the output good enough for the intended audience to switch.

              • bookmeat@lemmynsfw.com · 17 hours ago

                You’re going to tell me that there’s no corporation out there that would pay to improve its model with fresh data without asking questions about where that data came from?

                • Allero@lemmy.today · 10 hours ago

                  I think such matters should be kept strictly out of corporate hands, or be conducted under total oversight.

            • Allero@lemmy.today · 23 hours ago

              Why, though? If it does reduce consumption of real CSAM and/or rates of real-life child abuse (which is an “if”, as the stigma around the topic greatly hinders research), it’s a net win.

              Or is it simply a matter of spite?

              Pedophiles don’t choose to be attracted to children, and many have trouble keeping their urges at bay. Traditionally, those looking for the least harmful release went for real CSAM, which is obviously extremely harmful in its own right - just a bit less so than going out and raping someone. Now that AI materials exist, they may offer the safest of the graphic outlets we know of, with the least harm done to children. Without them, many pedophiles will revert to traditional CSAM, increasing the number of victims needed to cover the demand.

              As with many other things, the best we can hope for here is harm reduction. Hardline policies do not seem to be effective enough: people continuously find ways to propagate CSAM, and pedophiles continuously find ways to access it without leaving a trace. So we need to think of ways to give them something that will make them choose AI over real materials. This means making AI better, more realistic, and at the same time more diverse. Not for their enjoyment, but to make them switch to something better and safer than what they currently use.

              I know it’s a very uncomfortable discussion, but we don’t have a magic pill to eliminate it all, so we must act reasonably to prevent what we can.

              • VoteNixon2016@lemmy.blahaj.zone · 17 hours ago

                Because with harm reduction as the goal, the solution is never “give them more of the harmful thing.”

                I’ll compare it to the problems of drug abuse. You don’t help someone with an addiction by giving them more drugs; you don’t help them by throwing them in jail just for having an addiction; you help them by making it safe and easy to get treatment for the addiction.

                Look at what Portugal did in the early 2000s to help mitigate the problems associated with drug use, treating it as a health crisis rather than a criminal one.

                You don’t arrest someone for being addicted to meth; you arrest them for stabbing someone and stealing their wallet to buy more meth. You don’t arrest someone just for being a pedophile; you arrest them for abusing children.

                > This means making AI better, more realistic, and at the same time more diverse.

                No, it most certainly does not. AI is already being used to generate explicit images of actual children. Making it better at that task is the opposite of harm reduction; it makes creating new victims easier than ever.

                Acting reasonably to prevent what we can prevent means shutting down the CSAM-generating bot, not optimizing and improving it.

                • Allero@lemmy.today · 12 hours ago

                  To me, it’s more like the Netherlands giving out free syringes and needles so that drug users at least wouldn’t contract something from used ones.

                  To be clear: granting any and all pedophiles access to therapy would be of tremendous help. I think it must be done. But there are two issues remaining:

                  1. Barely any government will scrape together enough money to fund such programs now that therapy is astronomically expensive.
                  2. Even then, plenty of pedophiles will keep consuming CSAM, legally or not. There must be some incentive for them to choose the AI-generated option, which is at least less harmful than the alternative.

            • village604@adultswim.fan · 2 days ago

              The images already exist, though, and if they can be used to prevent more real children from being abused…

              It’s definitely a tricky moral dilemma, like using the results of Unit 731 to improve our treatment of hypothermia.

              • NιƙƙιDιɱҽʂ@lemmy.world · 1 day ago

                They do exist, and that can’t be undone, but those depicted in them have not consented to be used in training data. I get the ends; I just don’t think that makes the means ethically okay. And maybe the ends aren’t either, to be fair; we can’t know without research on the subject, however one might conduct that.

                • Allero@lemmy.today · 24 hours ago

                  Victims’ consent is paramount. I thought I made it clear enough when I gave the example of police investigations asking for that, but apparently not.

                  In any case, this is exactly why I’d like to see more research done on the topic. First things first, we need to know whether it even works. For now, it’s a wild guess that it probably would.