The California Supreme Court will not prevent Democrats from moving forward Thursday with a plan to redraw congressional districts.
Republicans in the Golden State had asked the state’s high court to step in and temporarily block the redistricting efforts, arguing that Democrats — who are racing to put the plan on the ballot later this year — had skirted a rule requiring state lawmakers to wait at least 30 days before passing newly introduced legislation.
But in a ruling late Wednesday, the court declined to act, writing that the Republican state lawmakers who filed the suit had “failed to meet their burden of establishing a basis for relief at this time.”
If you couldn’t be bothered to think or write for yourself, why would you think anyone would be bothered to read that?? It’s literally just pollution.
Now I know how liberal gun owners feel. Very rarely do I disagree with the left's platform, but y'all opting to dismiss one of the most powerful tools ever given to mankind is something you do at your peril.
It has its faults just like humans do, but it is literally the culmination of all human knowledge. It’s Wikipedia for nearly everything at your fingertips.
Perhaps the way y’all use it is wrong. It’s not meant to make the decisions for you; it’s a tool to get you 80% of the way there quickly, and then you do the last mile of work yourself.
Anywho, the premise stands. Democrats have more leverage to use gerrymandering if they do choose it, though I wish we weren’t in a place where they had to go with a nuclear option that threatens US democracy even more.
The issue is that you didn’t confirm anything the text prediction machine told you before posting it as confirmation of someone else’s point, and then slid into a victimized, self-righteous position when pushed back upon.

One of the worst things about how we treat LLMs is comparing their output to humans: they are not, figuratively or literally, the culmination of all human knowledge, and the only fault they share with humans is a failure to check the validity of their answers. To use an LLM responsibly, you have to already know the answer to what you’re asking and be able to fact-check the response. If you don’t do that, then the way you use it is wrong. It’s good for programming, where correctness is a small set of rules, or for discovering patterns where we are limited, but don’t treat it like a source of knowledge when it constantly crosses its wires.
Your premise is incorrect - you are inferring that I did not confirm the output.
You have yet to suggest or confirm otherwise, so my point stands that your original post is unhelpful and non-contributive.
I read the post and it was not unhelpful. My concern is that we are starting to use the magic 8-ball too much. Pretty soon we won’t be able to distinguish good information from bad, regardless of the source.
Yeah I feel you. I don’t think the content is necessarily bad, but LLM output posing as a factual post at a bare, bare minimum needs to also include the sources that the bot used to synthesize its response. And, ideally, a statement from the poster that they checked and verified against all of them. As it is now, no one except the author has any means of checking any of that; it could be entirely made up, and very likely is misleading. All I can say is it sounds good, I guess, but a vastly more helpful response would have been a simple link to a reputable source article.
People just don’t like reading slop from lying machines. It’s really just that simple.
Polluting a chat thread with slop is just a rude thing to do. Nobody likes sloppers.
Please define slop. Please provide examples of LLM-generated text that you do not consider slop.