The California Supreme Court will not prevent Democrats from moving forward Thursday with a plan to redraw congressional districts.

Republicans in the Golden State had asked the state’s high court to step in and temporarily block the redistricting efforts, arguing that Democrats — who are racing to put the plan on the ballot later this year — had skirted a rule requiring state lawmakers to wait at least 30 days before passing newly introduced legislation.

But in a ruling late Wednesday, the court declined to act, writing that the Republican state lawmakers who filed the suit had “failed to meet their burden of establishing a basis for relief at this time.”

  • techt@lemmy.world · 2 days ago

    The issue is that you didn’t confirm anything the text-prediction machine told you before posting it as confirmation of someone else’s point, and then slid into a victimized, self-righteous position when pushed back on. One of the worst things about how we treat LLMs is comparing their output to humans’: they are not, figuratively or literally, the culmination of all human knowledge, and the one fault they share with humans is failing to check the validity of their answers. To use an LLM responsibly, you have to already know the answer to what you’re asking and be able to fact-check the response. If you don’t do that, then the way you use it is wrong. It’s good for programming, where correctness is governed by a small set of rules, or for discovering patterns where we are limited, but don’t treat it as a source of knowledge when it constantly crosses its wires.

      • techt@lemmy.world · 2 days ago

        You have yet to suggest or confirm otherwise, so my point stands: your original post is unhelpful and non-contributive.

        • melsaskca@lemmy.ca · 1 day ago

          I read the post and it was not unhelpful. My concern is that we are starting to use the magic 8-ball too much. Pretty soon we won’t be able to distinguish good information from bad, regardless of the source.

          • techt@lemmy.world · 21 hours ago

            Yeah, I feel you. I don’t think the content is necessarily bad, but LLM output posing as a factual post needs, at a bare minimum, to include the sources the bot used to synthesize its response, and, ideally, a statement from the poster that they checked and verified against all of them. As it stands, no one except the author has any means of checking any of that; it could be entirely made up, and very likely is misleading. All I can say is it sounds good, I guess, but a vastly more helpful response would have been a simple link to a reputable source article.