

Conservatism triggers extreme reactions.
Anthropic didn’t lose their lawsuit. They settled. Also, that was about their admission that they pirated zillions of books.
From a legal perspective, none of that has anything to do with AI.
Company pirates books -> gets sued for pirating books. The company settles with the plaintiffs.
It had no legal impact on training AI with copyrighted works or what happens if the output is somehow considered to be violating someone’s copyright.
What Anthropic did with this settlement is attack their Western competitor: OpenAI, specifically. Because Google already settled with the Authors Guild over their book scanning project more than a decade ago.
Now OpenAI is likely going to have to pay the Authors Guild too, even though they haven’t come out and openly admitted that they pirated books.
Meta is also being sued for the same reason but they appear to be ready to fight in court about it. That case is only just getting started though so we’ll see.
The real, long-term impact of this settlement is that it just became a lot more expensive to train an AI in the US (well, the West). Competition in China will never have to pay these fees and will continue to offer their products to the West at a fraction of the cost.
As a Democrat, I say don’t redact any of them! If there are Democrats in there, everyone needs to know. Everyone needs to know all the people involved!
Stop being pussies and use this as a proper stepping stone to remove your rivals in the GOP! That’s how villain organizations are supposed to work!
Also, stuff that gets mis-labeled as AI can be just as dangerous. Especially when you consider that the AI detection might use such labels to train itself. So someone whose face is weirdly symmetrical might get marked as AI and then have a hard time applying for jobs, purchasing things, getting credit, etc.
I want to know what counts as AI. Using AI to remove the background in an image, or just to remove someone standing in the background, is technically generative AI, but that’s something you can do in any photo editor anyway with a bit of work.
Meh. Nothing in this article is strong evidence of anything. They’re only looking at a tiny sample of data and wildly speculating about which entry-level jobs are being supplanted by AI.
As a software engineer who uses AI, I fail to see how AI can replace any given entry-level software engineering position. There’s no way! Any company that does that is just asking for trouble.
What’s more likely is that AI is making senior software engineers more productive, so they don’t need to hire more developers to assist them with the more trivial/time-consuming tasks.
This is a very temporary thing, though. As anyone in software can tell you: Software only gets more complex over time. Eventually these companies will have to start hiring new people again. This process usually takes about six months to a year.
If AI is causing a drop in entry-level hiring, my speculation (which isn’t as wild as in the article, since I’m actually there on the ground using this stuff) is that it’s just a temporary blip while companies work out how to take advantage of the slightly enhanced productivity.
It’s inevitable: They’ll start new projects to build new stuff because now—suddenly—they have the budget. Then they’ll hire people to make up the difference.
This is how companies have worked since the invention of bullshit jobs. The need for bullshit grows with productivity.
AI adds too many details. When a person draws an anime/cartoon character, they will usually put in minimal details or they’ll simply paste the character onto an existing background (that could’ve been drawn by a different artist).
AI doesn’t have human limitations so it’ll often add a ton of unnecessary details to a given scene. This is why the most convincing AI-generated anime pictures are of one or two characters in a very simple setting (e.g. a plain street/sidewalk) or even a white or gradient background.
Humans can tell when art was put together by different artists, such as when the background is in a completely different style. AI doesn’t differentiate like that and will make the entire image using the exact style given by the prompt. So it’ll all look like it was “drawn” in the same exact style… Even though anime/cartoons IRL aren’t that uniform.
Incorrect. No court has ruled in favor of any plaintiff bringing a copyright infringement claim against an AI LLM. Here’s a breakdown of the current court cases and their rulings:
https://www.skadden.com/insights/publications/2025/07/fair-use-and-ai-training
In both cases, the courts have ruled that training an LLM with copyrighted works is highly transformative and thus, fair use.
The plaintiffs in one case couldn’t even come up with a single iota of evidence of copyright infringement (from the output of the LLM). This—IMHO—is the single most important takeaway from the case: Because the only thing that really mattered was the point where the LLMs generate output. That is, the point of distribution.
Until an LLM is actually outputting something, copyright doesn’t even come into play. Therefore, the act of training an LLM is just like I said: A “Not Applicable” situation.
This just proves he’s closer to soulless undead than a living being.
Training an AI is orthogonal to copyright since the process of training doesn’t involve distribution.
You can train an AI with whatever TF you want without anyone’s consent. That’s perfectly legal fair use. It’s no different than if you copy a song from your PC to your phone.
Copyright really only comes into play when someone uses an AI to distribute a derivative of someone’s copyrighted work. Even then, it’s really only the end user who is capable of doing such a thing, by uploading the output of the AI somewhere.
Republicans: “Google keeps blocking our emails!” Google: “Yep. Stop sending spam!”
Zawinski’s law: Every program attempts to expand until it can read mail. Those programs which cannot expand are replaced by ones which can.
This is just the modern equivalent: Intra-site messaging.
This assumes his deportation figures are accurate. I doubt they’re deporting 750 people/day.
In order to deport lots and lots of people, you actually need other undocumented folks to tattle on the actual bad guys. Except when you deport everyone (including US citizens), you don’t get those tattlers anymore. Instead, you get neighborhoods and sometimes entire cities’ worth of people who will not help ICE in the slightest.
Linux users: “See what we mean?”
Windows users: “La la la! I can’t hear you! Losing my data is clearly better than having to learn something new!”
Ya know, there’s an entire organization within the US government whose whole job is to prevent this kind of discrimination (that isn’t actually happening).
The CFPB
Which organization has Trump (and the Republicans in general) tried to get rid of over and over again?
The CFPB
Management: “Perfect!”
Orgasm tokens and body paint markers for drawing faces.
I didn’t think people laughed at that. More like, “humans can be so evil.”
The other reaction—if you’re a conservative—is: “Yeah, we should do stuff like that here!”
Do it! Let’s set a precedent where it’s normal to charge former presidents with crimes!
Let’s see which charges stick when put before a jury…
To be fair, JD Vance doesn’t really know what a “fact” is. It’s not that he’s willfully ignorant; he doesn’t have the mental capacity to understand.