The California Supreme Court will not prevent Democrats from moving forward Thursday with a plan to redraw congressional districts.

Republicans in the Golden State had asked the state’s high court to step in and temporarily block the redistricting efforts, arguing that Democrats — who are racing to put the plan on the ballot later this year — had skirted a rule requiring state lawmakers to wait at least 30 days before passing newly introduced legislation.

But in a ruling late Wednesday, the court declined to act, writing that the Republican state lawmakers who filed the suit had “failed to meet their burden of establishing a basis for relief at this time.”

  • jordanlund@lemmy.world (mod) · 2 days ago

    Reported as “AI Slop Post”

    but a) we don’t have a rule against that.

    and b) OP clearly noted they used Copilot to generate it; they aren’t trying to pass it off as their own.

    I’m actually OK with this. Obviously we’ll remove AI-generated ARTICLES that get posted, same as we’d remove videos and such, but in a comment? Clearly noted as AI? I think I’m OK with that.

    If y’all WANT a rule about it, hit me up. I’ll bring it up with the other mods and admins.

    • ToastedPlanet@lemmy.blahaj.zone · 4 hours ago

      I’ve got three arguments for why you should make a rule against LLM comments, even ones clearly marked as AI. I’m going to say LLM rather than AI, because large language models are what we’re actually dealing with here.

      First, LLMs aren’t a reliable source of information, especially for recent events. They regurgitate training data through weights calibrated during training, and those weights produce output that can look accurate for the topic, especially numbers, while still being wrong. For recent events they lack the relevant data entirely, because it wasn’t in the data set they were trained on. Until that data is added, the model is answering a question it doesn’t actually know anything about, for lack of a better phrasing. These are commonly known limitations of the LLMs we’re discussing.
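
      To make that concrete, here’s a toy sketch of why sampling from frozen, learned weights can produce a confident-looking figure that is simply wrong. The token table and probabilities are completely made up for illustration; no real model works off a four-entry table like this.

```python
import random

# Made-up "learned" probabilities for the token that follows a prompt like
# "turnout in the special election was". They were frozen at training time,
# so nothing that happened after the model's cutoff can ever appear here.
next_token_probs = {
    "52%": 0.40,
    "48%": 0.35,
    "61%": 0.20,
    "37%": 0.05,
}

def sample_next_token(probs: dict) -> str:
    """Pick one token at random, weighted by the learned probabilities."""
    tokens = list(probs)
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

for _ in range(3):
    print("Model says turnout was:", sample_next_token(next_token_probs))
# Every answer looks specific and confident, but none of them is tied to the
# actual result, because the actual result isn't in the "training data".
```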

      If people start using LLMs to argue, the comment sections are going to fill up with pages of made-up LLM garbage. LLMs can generate more misinformation than anyone can keep up with debunking, especially at the moments when misinformation does the most damage, like the weeks leading up to the November 4th special election in California.

      I find it unlikely that all of the statistics the LLM listed, without sources, are accurate. But even setting that aside, if a user were to respond by feeding that comment into an LLM of their own, it’s unlikely the model would keep those numbers consistent. Those errors would compound the longer a discussion between two LLMs went on.
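
      Here’s an equally rough sketch of the compounding problem: two LLMs trading a figure back and forth, each turn carrying some chance of silently garbling it. The starting figure and the error rate are invented purely for illustration.

```python
import random

def restate(value: float, error_rate: float = 0.3) -> float:
    """Repeat a figure, but occasionally drift it, the way a model can
    swap or 'round' a number it never actually looked up."""
    if random.random() < error_rate:
        return round(value * random.uniform(0.8, 1.2), 1)
    return value

true_figure = 52.0            # stand-in for whatever the real statistic is
quoted = true_figure
for turn in range(1, 11):     # ten rounds of two LLMs quoting each other
    quoted = restate(quoted)
    print(f"turn {turn:2d}: quoted figure = {quoted}")
# After a few turns the quoted figure can wander well away from 52.0,
# and neither "participant" has any way to notice.
```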

      At best this all wastes people’s time and Lemmy becomes an extension of the LLM misinformation machine. At worst it becomes an attack vector: bad actors fill comment sections with LLM discussions that promote one viewpoint and bury the rest. Knowing the comments are LLM-generated doesn’t solve these problems on its own.

      Second, we shouldn’t want to automate thinking. Tools are supposed to save time while retaining agency. My laptop saves me the time of mailing you a letter and waiting for the response, but it doesn’t deny me agency when it does this: I still decide what I value and how that gets communicated to you. The LLM saved OP’s time, if all OP wanted was text that looks correct at a glance, but it took away OP’s agency to think.

      Facts and data, purportedly accurate, are assembled into a structure that delivers a central point, but none of it is done with OP’s agency. It’s not OP’s thoughts or values being delivered to any of us. It’s not even a position held for the sake of debate. It’s the LLM affirming back the position it was handed in the prompt, because that’s what the LLMs we have access to do. It’s like shouting into a cave and getting the echo back.

      We aren’t getting what we want faster with LLM content; we’re being denied it. The LLM takes away our ability to have a discussion with each other. Anyone using an LLM to think for them is, by definition, not participating in the discussion. No one can have a conversation, argument, or debate with this OP, because even though OP posted the comment, OP didn’t write it. For lack of a better analogy, I might as well be having a discussion with a parrot.

      What are we doing on this website if we’re all going to roll out our LLMs and have them talk to each other for us? We could all just open two windows side by side and copy and paste prompts back and forth, without needing a decentralized social media site as the middleman. The whole point of social media, and of Lemmy, is to talk to other people.

      Third, do you really want to volunteer to moderate LLM content? ChatGPT prose gets repetitive and it can never come up with anything new. I would not want to be stuck reading that all day.

        • jordanlund@lemmy.world (mod) · 4 hours ago

        I can definitely see the argument. OTOH, if someone actually owns up to it and says something on the order of “I dunno, so I asked ChatGPT and it says…”

        I think the admission/disclosure model is fine, AND it actually opens up the discussion of “OK, here’s why ChatGPT is wrong…”, which is a healthy discussion to have.

        But I can definitely bring it up with the group and see what people think!

          • ToastedPlanet@lemmy.blahaj.zone · 3 hours ago (edited)

          The issue is the scale: one comment can be fact-checked in under an hour; thousands, not so much.

          Also, it’s not purely about accuracy. I want to be having discussions with other humans. Not software.

          Thanks for bringing this up to the group, I appreciate it! edit: typo

            • jordanlund@lemmy.world (mod) · 3 hours ago

            Scale is always a problem, and if someone is using it to spam, we’d ban it for spam.

              I see a LOT of generative spam posts; those get removed with a quickness, but it’s because of the spam, not because it’s generated.

              Discussion is open now; so far it’s leaning toward “hey, as long as they disclose it…”, which still leaves us room to remove undisclosed generated comments.

            But then you have the trap of “Well, how do you prove it if they don’t disclose it?” 🤔 There really is no LLM detector yet.

              • ToastedPlanet@lemmy.blahaj.zone · 2 hours ago (edited)

                Bots could be used to spam LLM comments, but users can effectively act as manual bots with an LLM assisting them.

                “There really is no LLM detector yet.”

                Unless the prompter goes out of their way to obfuscate the text manually, which sort of defeats the purpose, LLM output tends to be very samey. The generated text would stand out if multiple users were running the same or even similar prompts. And OP’s stands out even without the admission.

                edit: to clarify, I mean stand out to the human eye; human mods would have to be the ones removing the comments. A rough sketch of the kind of check a human mod might use is below.
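
                As a very rough illustration of “samey” in practice, here’s a naive near-duplicate check a human mod could run over a thread. It is emphatically not an LLM detector (there isn’t a reliable one); the comment texts and the threshold are made up, and a person would still make every removal decision.

```python
from difflib import SequenceMatcher
from itertools import combinations

# Invented example comments; in practice these would come from the thread.
comments = [
    "Redistricting is a complex issue with many stakeholders to consider.",
    "Redistricting is a complex issue involving many stakeholders to consider.",
    "I just think the 30-day rule argument was always a long shot.",
]

def similarity(a: str, b: str) -> float:
    """Ratio of matching characters between two comments (0.0 to 1.0)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

for (i, a), (j, b) in combinations(enumerate(comments), 2):
    score = similarity(a, b)
    if score > 0.85:  # arbitrary threshold; a human still makes the call
        print(f"comments {i} and {j} read suspiciously alike ({score:.2f})")
```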