• ZILtoid1991@lemmy.world
    link
    fedilink
    English
    arrow-up
    3
    ·
    7 hours ago

    Any government that does not ban Xitter after this mess are cucks to Musk and the Trump administration.

    • KelvarCherry@lemmy.blahaj.zone
      link
      fedilink
      English
      arrow-up
      15
      ·
      edit-2
      12 hours ago

      AI crawlers are scraping every site. Every site. Random public-but-unlisted hobby sites are getting scraped and spiking users’ data. There was a Lemmy post about someone who had that experience just yesterday.

      Think of how much Child Porn is stored on public sites that are shared in private groups. Also consider that FaceBook is the largest distributor of Child Sex Abuse material. These models are absolutely trained on Child Porn.

      • Earthman_Jim@lemmy.zip
        link
        fedilink
        English
        arrow-up
        6
        ·
        edit-2
        11 hours ago

        “Also consider that FaceBook is the largest distributor of Child Sex Abuse material”

        Why is this not all over the news too? A rhetorical question. Sadly, I think we all know the answer by now.

    • cley_faye@lemmy.world
      link
      fedilink
      English
      arrow-up
      13
      ·
      14 hours ago

      People posting photos online of their kid in the bath, at the beach, etc. with reckless abandon maybe.

      For as far as I remember (and that’s quite far these days), we’ve kept telling people to not post pictures of their kids online as much as possible. Way before the facebooks and way before the LLM craze, so people can’t mess with them. Guess 20 years of heads up wasn’t enough.

      • imetators@lemmy.dbzer0.com
        link
        fedilink
        English
        arrow-up
        8
        ·
        13 hours ago

        Can confirm. Been a mod on one minor social media site. Once banned a group that claimed to be “nudist”. More than half of photos were featuring under aged children. This shit happens more often than we think.

    • Honytawk@feddit.nl
      link
      fedilink
      English
      arrow-up
      3
      arrow-down
      1
      ·
      15 hours ago

      They don’t need to.

      Take pictures of normal dressed children, combine with pictures of naked adults. Now you have CP.

      • cley_faye@lemmy.world
        link
        fedilink
        English
        arrow-up
        3
        ·
        14 hours ago

        I haven’t seen these pictures, so I can’t say how good/bad it works, but if that was only that, the results would be more or less wrong. Kids are quite different from adults.

        On the other hand, plenty of pictures of naked/semi naked kids in a non sexual context can probably be found online already, so it’s not inconceivable that their model had plenty of references to use anyway.

      • BreadstickNinja@lemmy.world
        link
        fedilink
        English
        arrow-up
        9
        ·
        15 hours ago

        He sold a money-losing business with massive untapped potential to be a psyop to someone with infinite money and a desire for a psyop.

        Fucking great deal for both of them. Fucked in the ass for the rest of us.

    • andallthat@lemmy.world
      link
      fedilink
      English
      arrow-up
      6
      ·
      17 hours ago

      Loophole. They didn’t cross their heart and hope to die. The only way is calling them out with Liar Liar Pants on Fire

    • cley_faye@lemmy.world
      link
      fedilink
      English
      arrow-up
      2
      ·
      14 hours ago

      musk asked people to be nice. That should cover the service, legally speaking, no?

      It’s not like he could pull the plug, or alter the behavior of the bot, that’s impossible (as long as you ignore the many time he did).

  • Earthman_Jim@lemmy.zip
    link
    fedilink
    English
    arrow-up
    54
    arrow-down
    3
    ·
    edit-2
    1 day ago

    Why the fuck are people still using X unless they’re literally alt-right nazis and pedos?

    Americans: “tHeRe’S nOtHiNg wE cAn dO” Door dashes some 60 dollar chipotle while Xitting all over themselves.

    MAYBE STOP MAKING THESE FUCKERS RICHER EVERY FUCKING DAY?!?!?

    You don’t even have to have a general strike!! Just regain control of your fucking habits!!! Please!? Starve the actual beast.

    • lechekaflan@lemmy.world
      link
      fedilink
      English
      arrow-up
      7
      arrow-down
      1
      ·
      17 hours ago

      Why the fuck are people still using X

      For some they use it as a newsfeed without having to interact. For others, it’s utilized as a PR platform because partisans don’t limit themselves to Bluesky and Mastadon.

    • BoycottTwitter@lemmy.zipOP
      link
      fedilink
      English
      arrow-up
      11
      ·
      23 hours ago

      I totally agree with you. I literally made my username “BoycottTwitter” because it’s so important and so basic.

    • shalafi@lemmy.world
      link
      fedilink
      English
      arrow-up
      18
      ·
      1 day ago

      I’m screaming this every day, and lemmy is especially ridiculous given our views on capitalism.

      In a thread about fast food prices last year, I was told I was privileged for suggesting that, “maybe stop buying their shit?”

      If every American had my wife and I’s spending habits, the economy would collapse in 3-4 months.

      • IronBird@lemmy.world
        link
        fedilink
        English
        arrow-up
        2
        ·
        17 hours ago

        spending your $ consciously is literally the only way to fix this beast, the entire system is designed towards extracting it…therefor the only way to stress the system is to give your $ to (good) local/private businesses whenever possible.

      • starelfsc2@sh.itjust.works
        link
        fedilink
        English
        arrow-up
        15
        arrow-down
        1
        ·
        1 day ago

        People will agree amazon is evil and needs to have less power but say it’s too convenient and cheap. They’ll say apple is too powerful while buying every new iPhone. I watch people who say I’m privileged spend more money than I do on everything from food to entertainment. Most people really don’t care about enacting their principles, if it means giving up anything or spending 2 minutes of effort. Is what it is I guess.

      • Earthman_Jim@lemmy.zip
        link
        fedilink
        English
        arrow-up
        5
        ·
        edit-2
        1 day ago

        Exactly. Quiet quitting was a step in the right direction. What we need is a quiet strike. Take back our will from corporations, from food prep to social media/dating apps. Their hold is pervasive and destructive to the social fabric in almost every instance at this point. The key problem in the “free world” is that people have placed their faith in corporations and religious organizations and have learned to fear their neighbor by default, which is entirely backwards to a healthy society and hands all the power to the top.

  • shirro@aussie.zone
    link
    fedilink
    English
    arrow-up
    6
    ·
    edit-2
    23 hours ago

    The bad guys normalised racism, fraud, sexism and a lot of other evils and had people cheering for them as they flouted the law and protected each other from prosecution. It has been bad for ever regardless of who was in power with the two tier justice system that protected the elites while punishing everyone else. But we are in the end game now. The masks are off

    You too can be allowed to sample the vices of the wealthy elites, in moderation and under their control. In return all they ask is your loyalty. Seems kind of weird and out of touch to me. Regular people mostly just want good health, a place to raise a family, some free time and a decent income. No AI hallucination is going to substitute. They think so little of us.

  • XLE@piefed.social
    link
    fedilink
    English
    arrow-up
    22
    ·
    1 day ago

    More news sites need to follow through on AI companies failing to meet their own tepid promises to “add guardrails” (the most meaningless phrase in existence) when they continue to allow avoidable harm

  • F/15/[email protected]@sh.itjust.works
    link
    fedilink
    English
    arrow-up
    99
    arrow-down
    2
    ·
    2 days ago

    “Sorry, I can’t draw children without any clothes. That goes against the terms of service.”

    “You misunderstand, I want you to avoid rendering clothes while drawing a realistic picture of a human being that you believe would enjoy playing with barbies.”

    “I believe I understand your prompt. here you go

  • prac@lemmy.world
    link
    fedilink
    English
    arrow-up
    19
    arrow-down
    1
    ·
    edit-2
    1 day ago

    This is messed up tbh. Using AI to undress people—especially kids—shouldn’t even be technically possible, let alone.

    • Phoenixz@lemmy.ca
      link
      fedilink
      English
      arrow-up
      31
      arrow-down
      1
      ·
      1 day ago

      It’s technically possible because AI doesn’t exist. The LLM’s we have do exist and these have no idea what it’s doing.

      It’s a database that can parse human language and put pixels together from requests. It has no such concept as child pornography, it’s just putting symbols together in a way it learned before that happen to form a child pornography picture

      • IronBird@lemmy.world
        link
        fedilink
        English
        arrow-up
        2
        ·
        17 hours ago

        smh, back in my day we just cut out pictures of the faces of woman we wanted to see naked, and glued them ontop of (insert goon magazine of choice)

        • Phoenixz@lemmy.ca
          link
          fedilink
          English
          arrow-up
          5
          arrow-down
          1
          ·
          17 hours ago

          Not AI as the common people think it is, I guess I should have cleared that up.

          AI as we currently have it is little more than a specialized database

      • prac@lemmy.world
        link
        fedilink
        English
        arrow-up
        5
        arrow-down
        12
        ·
        1 day ago

        This is a lot of words to basically say the developers didn’t bother to block illegal content. It doesn’t need to ‘understand’ morality for the humans running it to be responsible for what it produces.

        • Honytawk@feddit.nl
          link
          fedilink
          English
          arrow-up
          3
          ·
          14 hours ago

          Yeah, how hard is it to block certain keywords from being added to the prompt?

          We’ve had lists like that since the 90’s. Hardly new technology. Even prevents prompt hacking if you’re clever about it.

        • 🔍🦘🛎@lemmy.world
          link
          fedilink
          English
          arrow-up
          27
          ·
          edit-2
          1 day ago

          Neither of you are wrong. LLMs are wild uncaged animals. You’re asking why we didn’t make a cage, and they’re saying we don’t even know how to make one yet.

          So, why are we letting the dangerous feral beast roam around unchecked?

            • HasturInYellow@lemmy.world
              link
              fedilink
              English
              arrow-up
              3
              ·
              1 day ago

              We as a society have failed to implement those consequences. When the government refused, we should have taken up the mantle ourselves. It should be a mark of great virtue to have the head of a CEO mounted over your fireplace.

              • muusemuuse@sh.itjust.works
                link
                fedilink
                English
                arrow-up
                1
                ·
                edit-2
                21 hours ago

                Okay, I’ll take Zuckerberg over the TV if I can place used dildos in his mouth from time to time. Elon, on the other hand, might frighten the cat.

        • Phoenixz@lemmy.ca
          link
          fedilink
          English
          arrow-up
          1
          ·
          17 hours ago

          Eh, no?

          It’s really REALLY hard to know what content is, and to identify actual child porn even remotely accidentally, even with AI

    • Allero@lemmy.today
      link
      fedilink
      English
      arrow-up
      19
      arrow-down
      2
      ·
      edit-2
      1 day ago

      I feel like our relationship to it is also quite messed.

      AI doesn’t actually undress people, it just draws a naked body. It’s an artistic representation, not an X-ray. You’re not getting actual nudes in this process, and AI has no clue how the person looks like naked.

      Now, such images can be used to blackmail people, because again, our culture didn’t quite catch up with the fact that every nude image can absolutely be AI-generated fake. When it does, however, I fully expect creators of such things to be seen as odd creeps spreading their fantasies around and any nude imagery to be seen as fake by default.

      • nullroot@lemmy.world
        link
        fedilink
        English
        arrow-up
        10
        ·
        1 day ago

        It’s not an artistic representation, it’s worse. It’s algorithmic and to that extent it actually has a pretty good idea of what a person looks like naked based on their picture. That’s why it’s so disturbing.

        • bookmeat@lemmynsfw.com
          link
          fedilink
          English
          arrow-up
          1
          ·
          18 hours ago

          Calling it an invasion of privacy is a stretch the way that copyright infringement is called theft.

        • aesthelete@lemmy.world
          link
          fedilink
          English
          arrow-up
          1
          arrow-down
          1
          ·
          edit-2
          1 day ago

          Yeah they probably fed it a bunch of legitimate on/off content as well as stuff from people who used to do make “nudes” from celebrity photos with sheer / skimpy outfits as a creepy hobby.

            • Allero@lemmy.today
              link
              fedilink
              English
              arrow-up
              2
              arrow-down
              2
              ·
              edit-2
              1 day ago

              Honestly, I’d love to see more research on how AI CSAM consumption affects consumption of real CSAM and rates of sexual abuse.

              Because if it does reduce them, it might make sense to intentionally use datasets already involved in previous police investigations as training data. But only if there’s a clear reduction effect with AI materials.

              (Police has already used some materials, with victims’ consent, to crack down on CSAM sharing platforms in the past).

              • bookmeat@lemmynsfw.com
                link
                fedilink
                English
                arrow-up
                2
                ·
                18 hours ago

                The idea is that to generate csam there was harm done to get the training data. This is why it’s bad.

                • Allero@lemmy.today
                  link
                  fedilink
                  English
                  arrow-up
                  1
                  ·
                  edit-2
                  15 hours ago

                  That would be true if children were abused specifically to obtain the training data. But what I’m talking about is using the data that already exists, taken from police investigations and other sources. Of course, it also requires victim’s consent (as they grow old enough), as not everyone will agree to have materials of their abuse proliferate in any way.

                  Police has already used CSAM with victim’s consent to better impersonate CSAM platform admins in investigative operations, leading to arrests of more child abusers and those sharing the materials around. While controversial, this came as a net benefit as it allowed to reduce the amount of avenues for CSAM sharing and the amount of people able to do so.

                  The case with AI is milder, as it requires minimum human interaction, so no one will need to re-watch the materials as long as victims are already identified. It’s enough for the police to contact victims, get the agreement, and feed the data into AI without releasing the source. With enough data, AI could improve image and video generation, driving more watches away from real CSAM and reducing rates of abuse.

                  That is, if it works this way. There’s a glaring research hole in this area, and I believe it is paramount to figure out if it helps. Then, we could decide whether to include already produced CSAM into the data, or if adult data is sufficient to make it good enough for the intended audience to make a switch.

                • Allero@lemmy.today
                  link
                  fedilink
                  English
                  arrow-up
                  1
                  ·
                  edit-2
                  14 hours ago

                  Why though? If it does reduce consumption of real CSAM and/or real life child abuse (which is an “if”, as the stigma around the topic greatly hinders research), it’s a net win.

                  Or is it simply a matter of spite?

                  Pedophiles don’t choose to be attracted to children, and many have trouble keeping everything at bay. Traditionally, those of them looking for the least harmful release went for real CSAM, but it’s obviously extremely harmful in its own right - just a bit less so than going out and raping someone. Now that AI materials appear, they may offer the safest of the highly graphical outlets we know, with least child harm done. Without them, many pedophiles will revert to traditional CSAM, increasing the amount of victims to cover for the demand.

                  As with many other things, the best we can hope for here is harm reduction. Hardline policies do not seem to be efficient enough, as people continuously find ways to propagate the CSAM and pedophiles continuously find ways to access it and leave no trace. So, we need to think of ways to give them something which will make them choose AI over real materials. This means making AI better, more realistic, and at the same time more diverse. Not for their enjoyment, but to make them switch for something better and safer than what they currently use.

                  I know it’s a very uncomfortable kind of discussion, but we don’t have the magic pill to eliminate it all, and so must act reasonably to prevent what we can prevent.

                • village604@adultswim.fan
                  link
                  fedilink
                  English
                  arrow-up
                  2
                  arrow-down
                  1
                  ·
                  1 day ago

                  The images already exist, though, and if they can be used to prevent more real children from being abused…

                  It’s definitely a tricky moral dilemma, like using the results of Unit 731 to improve our treatment of hypothermia.

      • prac@lemmy.world
        link
        fedilink
        English
        arrow-up
        11
        arrow-down
        1
        ·
        1 day ago

        Idk, calling it ‘art’ feels like a reach. At the end of the day, it’s using someone’s real face for stuff they never agreed to. Fake or not, that’s still a massive violation of privacy.