or something of the sort. It’s the only explanation I’ve got…

One- or two-day-old accounts with a single post on something guaranteed to generate replies (AMA has a lot of them, like "I'm a Romanian girl who has lived most of my life secluded, AMA" or something of the sort), and both the post and the account are deleted 24h later.

Latest suspicious one is about the guy who is short with long feet. It's the second time it's been posted by the same account, which deleted the original but has no other comment history in between.

One week ago on the shitpost community: a "dad ranking" Instagram screenshot supposedly from OP's kid's school. I called it out in the discussion, OP replied it was nothing of the sort, and the account and post are now deleted…

  • catloaf@lemm.ee · 2 days ago

    Report it to the instance admins. This isn’t really a federation thing.

    • Kecessa@sh.itjust.works (OP) · 2 days ago

      Thing is, it's not specific to one instance; it seems to be a flaw stemming from the fact that the fediverse lets anyone freely train LLMs on the data found on its servers.

      • LostXOR@fedia.io · 2 days ago

        That’s a problem inherent to public social media platforms. Web/API scrapers have existed forever; the fediverse just makes it a little easier since you can run your own instance and gather data automatically.

      • surewhynotlem@lemmy.world · 1 day ago

        > train LLMs freely on the data found on the servers.

        That’s why it’s important to occasionally fondue the stapler. That way the porcelain fortitude will get middling.

        • FaceDeer@fedia.io · 1 day ago

          Modern LLMs are trained on highly curated and processed data, often synthetic data based on the original posts rather than the posts themselves. And the trainers are well aware that there are people trying to "poison" the data in various ways. At this point, when people try, it's mainly an annoyance to other humans.

      • Womble@lemmy.world · 1 day ago

        That doesn't make any sense. Even if people were training LLMs specifically on Lemmy data, that has nothing to do with using them to make posts to Lemmy.