• Tolstoshev@lemmy.world · 1 year ago

    It was ever thus:

    A lie gets halfway around the world before the truth has a chance to get its pants on.

    Winston Churchill

  • Buffalox@lemmy.world · 1 year ago

    And trust me, these generated images are getting scarily good.

    I have to agree; I would not be able to spot a single one of them as fake. They look convincingly authentic, IMO.

    • Flying Squid@lemmy.worldOP · 1 year ago

      Stalin famously ordered people he had killed erased from photos.

      Imagine what current and future autocratic regimes will be able to achieve when they want to rewrite their histories.

        • Carrolade@lemmy.world · 1 year ago

          Probably just because some people really like Stalin, and have become convinced his accounts are the truthful ones and everyone else lies about him.

          • Buffalox@lemmy.world · 1 year ago

            That’s a scary thought!! But all kinds of crazy exist, and I mean people have to be literally crazy to want to live under a regime like the one Stalin made.

        • fuckwit_mcbumcrumble@lemmy.world · 1 year ago

          “Photoshopping” something bad has existed for a long time at this point. AI-generated images don’t really change anything, other than the entire photo being fake instead of just a small section.

          • TheFriar@lemm.ee · 1 year ago

            I’d disagree. It now takes zero know-how to convincingly create a false image. And it takes zero work. So where one photo would take one person a decent amount of time to convincingly pull off, now one person can create 100 images or more in that time, each one a potential time bomb that will go off when it starts getting passed around as evidence of something. And there are countless bad actors on the internet trying to cause a ruckus. This just increased their chances of succeeding at least 100-fold, and opened access to many, many others who might do it accidentally, for a joke, or who always wanted to make waves but didn’t have the Photoshop skills necessary.

          • Flying Squid@lemmy.worldOP · 1 year ago

            It changes a lot. Even good Photoshop skills would not produce images like the ones shown in the article.

            • Aniki 🌱🌿@lemm.ee · 1 year ago

              Yeah, some of these would be like 100-layer creations if someone were doing it themselves in Photoshop – it would take a professional or near-professional level of skill.

          • uienia@lemmy.world · 1 year ago

            The ease and speed with which AI photos can be created, at a quality most photoshoppers could only dream of, does very much change everything.

      • Cosmic Cleric@lemmy.world · 1 year ago

        Honestly, it looks like the picture on the left is fake, like the guy was inserted into it. Just look at his outline, compared with the rest of the background.

        (I’m no Stalin fan, just commenting on the picture itself.)

  • hamid@lemmy.world · 1 year ago

    The past we know is a carefully crafted and curated story, not at all accurate as it is. It is valuable to learn and understand, but also to be skeptical. I don’t really think widespread forgery changes that. Historiography is a very important field.

    Any serious historical research will have to verify that the physical copies exist or existed in a documented way to be admitted as evidence. This is called chain of custody and is already required.

  • NeoNachtwaechter@lemmy.world · 1 year ago

    AI is creating fake everything, and that means problems, problems, problems everywhere…

    For decades, IT people and scientists have dreamed about using AI for good things. But now AI has become so much better at creating fake things than good ones :-(

    • Carrolade@lemmy.world · 1 year ago

      It’s not really a new problem, people were doing it with their imaginations and stories long before AI came around. The tools of the digital age simply amplified the effect. Healthy skepticism is still the solution, that hasn’t changed.

      It’ll never actually go away, though. Of all the possible ways of looking at any given situation, the vast majority will always be inaccurate. Fiction simply outnumbers nonfiction. Wrong answers outnumber correct answers.

      So, the adjustment has to be inside of us, and again, it’s always been necessary. This isn’t fundamentally new.

      • uienia@lemmy.world · 1 year ago

        The new thing is the scale at which fake content is being created. In the very near future, most internet content will be fake, including history. That is not something that has happened before.

        The current AI situation is completely unprecedented.

        • Carrolade@lemmy.world · 1 year ago

          I would disagree. I think if we go back even a few centuries, we find that virtually nobody had a firm grasp on historical fact, since the printing press had not been invented yet and archaeological techniques did not exist.

        • djnattyp@lemmy.world · 1 year ago

          I mean, maybe it has happened before in history, but someone changed it via AI and we just don’t know…

  • Cosmic Cleric@lemmy.world · 1 year ago

    From the article…

    The real danger lies in those images that are crafted with the explicit intention of deceiving people — the ones that are so convincingly realistic that they could easily pass for authentic historical photographs.

    Fundamentally, at a meta level, the issue is: are people allowed to use AI to deceive other people?

    Should all realistic AI-generated things be labeled as such?
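
    For illustration, here’s a rough sketch of what such a label could even look like at the file level, and of how trivially it can be stripped. This is Python with Pillow; the “ai_generated” key is invented for this example and isn’t part of any real labeling standard such as C2PA:

    ```python
    # Hypothetical sketch: an "AI-generated" label stored as a PNG text chunk.
    # Requires Pillow (pip install Pillow). The "ai_generated" key is made up
    # for illustration; it is not part of any real labeling standard.
    from PIL import Image
    from PIL.PngImagePlugin import PngInfo

    def label_as_ai(src: str, dst: str) -> None:
        """Save a copy of the image carrying an AI-generated marker."""
        img = Image.open(src)
        meta = PngInfo()
        meta.add_text("ai_generated", "true")
        img.save(dst, pnginfo=meta)

    def is_labeled_ai(path: str) -> bool:
        """Check for the marker; PNG images expose text chunks via .text."""
        img = Image.open(path)
        return getattr(img, "text", {}).get("ai_generated") == "true"

    def strip_label(src: str, dst: str) -> None:
        """Re-encoding without metadata silently discards the label."""
        Image.open(src).save(dst)  # no pnginfo passed => marker is gone
    ```

    Adding the label is the easy part; the catch is that it only survives until someone re-encodes the file, which is exactly the enforcement problem raised below.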

    • Cosmic Cleric@lemmy.world · 1 year ago

      For the worse.

      Not necessarily.

      But we’re going to have to deal with the basic issue of deceiving someone with AI, and whether any AI-generated thing should be labeled as such.

      Basically, a legislative fix, and not just a free-market free-for-all.

      • Couldbealeotard@lemmy.world · 1 year ago

        How do you enforce labelling when there will never be a way to reliably test whether something was AI generated?

        Basic is not a word that fits the situation.

        • Cosmic Cleric@lemmy.world · 1 year ago

          How do you enforce labelling when there will never be a way to reliably test whether something was AI generated?

          If the icon is not there and it’s then determined that the image is AI generated, as happened with that British royal family picture the other day; crowd-sourced enforcement.

          • Couldbealeotard@lemmy.world · 1 year ago

            I am not understanding you. Or perhaps you’re not understanding me.

            Firstly, the British royal family photo was not AI generated.

            If you can’t find a way to test whether something is AI generated, who decides what is or isn’t AI generated?

  • LockheedTheDragon@lemmy.world · 1 year ago

    When I read the title, I sarcastically thought, “Oh no, why is AI deciding to create fake historical photos? Is this the first stage of the robot apocalypse?” I find the title mildly annoying because it puts the blame on the tool and ignores that people are using it to do bad things. A lot of discussions about AI do this. It’s like people want to avoid the fact that how people are using and training the tool is the issue.

    • Flying Squid@lemmy.worldOP · 1 year ago

      Isn’t the tool part of the issue? If you sell bomb-making parts to someone who then blows up a preschool with them, aren’t you in some way culpable for giving them the tool to do it? Even if you only intended it to be used in limestone quarries?

      • WhatAmLemmy@lemmy.world · 1 year ago

        That really depends on whether the bomb-making part is specific to bombs, and whether their purchase of that item could be considered legitimately suspicious. Many over-the-counter products have the potential to be turned into bombs with enough time or effort.

        If a murderer uses a hammer, do you think the hardware store they purchased the hammer from should be liable?

        You can make crude chemical weapons by mixing bleach with other household items. Should the supermarket be liable for people who use their products in ways they never intended?

        • kromem@lemmy.world · 1 year ago

          Exactly this, many times over.

          Most tools with legitimate uses also have unethical uses.

      • Grimy@lemmy.world · 1 year ago

        Everything needed to make a bomb can be found at your local Walmart. Nobody blames the gas companies when something gets molotoved.

      • Grangle1@lemm.ee · 1 year ago

        I would say the supplier is culpable if the tool supplied is made for the purpose of the harm intended or if the supplier is giving the tool to the person who does the harm with the explicit intent for that person to use it for that harm. For example, giving someone an AK-47 to shoot someone or a handgun/rifle with the intent that the user shoot someone with it. If the supplier gives someone a tool to use for one legit purpose but the user uses it for a harmful purpose instead, I don’t think you can blame the supplier for that. For example, giving someone a knife to cut food with, and then the user goes and stabs someone with it instead. That’s entirely on the user and nobody else.

          • Grangle1@lemm.ee · 1 year ago

            To clarify, instead of intent a better word may be knowledge. If the supplier knows that the user is going to use the tool for harm but gives the tool to the user anyway, then the supplier shares culpability. If the supplier does not (reasonably) know, either through invincible ignorance (the supplier could not reasonably know) or the user’s deception (lying to the supplier), then the supplier is not culpable.

      • 4am@lemm.ee · 1 year ago

        Maybe if the tool’s singular purpose was for killing. I think guns might be a better metaphor there. Explosives have legitimate uses and if you took the proper precautions to vet your customers then it’d be hard to blame you if someone convincingly forged credentials, for example.

    • plz1@lemmy.world · 1 year ago

      I just listened to the Criminal podcast on that recently. Fascinating cultural moment.

  • Gakomi@lemmy.world · 1 year ago

    Does that AI think that Indian people are monkeys or something? Because there is a photo there where it clearly made their faces look more in line with a monkey’s.