• Mac@mander.xyz · 14 hours ago (edited)

    ??
    You’re specifically making claims about me in your comment. “Source?” for those claims.

    Maybe you’ve become so reliant on AI that you can’t read and understand comments anymore? Put this exchange into ChatGPT and have it explain it to you.

    • FaceDeer@fedia.io · 14 hours ago

      Okay, so how do you go about the process of fact-checking every news article you read?

      • Mac@mander.xyz · 13 hours ago (edited)

        You’re never going to believe this: I can take an article at face value because it’s not being routed through a slop generator when I read it.

        Whether or not a source can be believed to be true is not within the scope of this thread.

        • FaceDeer@fedia.io · 13 hours ago

          Right, you take the article at face value. So exactly as I originally said:

          you sure are relying on just believing whatever you read without any checking whatsoever.

        • FaceDeer@fedia.io · 14 hours ago

          Okay, we’ve established how you don’t do it. So how do you go about the process of fact-checking every news article you read?

            • FaceDeer@fedia.io · 13 hours ago

              For every news article you read?

              That’s the point here. AI can allow for tedious tasks to be automated. I could have a button in my browser that, when clicked, tells the AI to follow up on those sources to confirm that they say what the article says they say. It can highlight the ones that don’t. It can add notes mentioning if those sources happen to be inherently questionable - environmental projections from a fossil fuel think tank, for example. It can highlight claims that don’t have a source, and can do a web search to try to find them.

              These are all things I can do myself by hand, sure. I do that sometimes when an article seems particularly important or questionable. It takes a lot of time and effort, though. I would much rather have an AI do the grunt work of going through all that and highlighting problem areas for me to potentially check up on myself. Even if it makes mistakes sometimes, that’s still going to give me a far more thoroughly checked and vetted view of the news than the existing process.

              Did you look at the link I gave you about how this sort of automated fact-checking has worked out on Wikipedia? Or was it too much hassle to follow the link manually, read through it, and verify whether it actually supported or detracted from my argument?
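
              A minimal sketch of the automation described above, assuming a naive link-per-sentence claim extractor and a hypothetical verify_claim placeholder in place of the model call; it is an illustration of the idea, not an existing browser feature or library.

              ```python
              # Illustrative sketch of the "fact-check button" workflow described above.
              # extract_claims() is a crude heuristic; verify_claim() is a stand-in for a
              # model/API call, not a real tool.
              import re
              from dataclasses import dataclass
              from typing import Optional

              @dataclass
              class CitedClaim:
                  text: str                    # sentence taken from the article
                  source_url: Optional[str]    # first link cited in that sentence, if any

              def extract_claims(article_html: str) -> list:
                  """Split the article into rough sentences and pair each with its first link."""
                  claims = []
                  for sentence in re.split(r"(?<=[.!?])\s+", article_html):
                      link = re.search(r'href="([^"]+)"', sentence)
                      text = re.sub(r"<[^>]+>", "", sentence).strip()
                      if text:
                          claims.append(CitedClaim(text, link.group(1) if link else None))
                  return claims

              def verify_claim(claim: CitedClaim) -> str:
                  """Placeholder: a real tool would fetch the source and ask a model whether
                  it actually supports the claim, then label it SUPPORTED or CONTRADICTED."""
                  if claim.source_url is None:
                      return "UNSOURCED"      # candidate for an automatic web search
                  return "NEEDS_REVIEW"       # awaiting the (omitted) model check

              def report(article_html: str) -> None:
                  # Print one labelled line per claim, like the highlights described above.
                  for claim in extract_claims(article_html):
                      print(f"[{verify_claim(claim)}] {claim.text}")

              if __name__ == "__main__":
                  report('Emissions fell 40% last year, according to '
                         '<a href="https://example.org/report">the report</a>. '
                         'Critics dispute the methodology.')
              ```

              Run as written, the sourced example sentence prints as NEEDS_REVIEW and the unsourced one as UNSOURCED; the real value would come from the omitted model step that checks whether the cited page supports the claim.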

              • asudox@lemmy.asudox.dev · 13 hours ago

                I think you’ll like Digg. All of the features you love are there. Why don’t you try it out? It’s the pinnacle of innovation (AI) there. I even heard Sam Altman is there, thank god!

                AIs summarize posts and moderate the platform. Oh, literally utopia!

                Friendly reminder: Don’t forget to try out OpenAI’s new AI browser. It literally does what you described.

            • Mac@mander.xyz · 13 hours ago

              Don’t fall for their redirect. This thread is about them trusting “AI”.