Father, Hacker (Information Security Professional), Open Source Software Developer, Inventor, and 3D printing enthusiast

  • 1 Post
  • 203 Comments
Joined 3 years ago
Cake day: June 23rd, 2023


  • The mistakes it makes depend on the model and the language. GPT5 models can make horrific mistakes, though, where they randomly remove huge swaths of code for no reason. Every time it happens I’m like, “what the actual fuck?” Undoing the last change and trying again usually fixes it though 🤷

    They all make horrific security mistakes quite often. Though that’s probably because they’re trained on human code that is *also* chock full of security mistakes (former security consultant, so I’m super biased on that front haha).



  • You want to see someone using, say, VS Code to write something with, say, Claude Code?

    There’s probably a thousand videos of that.

    More interesting: I watched someone who was super cheap trying to use multiple AIs to code a project because he kept running out of free credits. Every now and again he’d switch accounts and use up those free credits.

    That was an amazing dance, let me tell ya! Glorious!

    I asked him which one he’d pay for if he had unlimited money and he said Claude Code. He has the $20/month plan but only uses it in special situations because he’ll run out of credits too fast. $20 really doesn’t get you much with Anthropic 🤷

    That inspired me to try out all the code assist AIs and their respective plugins/CLI tools. He’s right: Claude Code was the best by a HUGE margin.

    Gemini 3.0 is supposed to be nearly as good but I haven’t tried it yet so I dunno.

    Now that I’ve said all that: I am severely disappointed in this article because it doesn’t say which AI models were used. In fact, the study authors don’t even know what AI models were used. So it’s 430 pull requests of random origin, made at some point in 2025.

    For all we know, half of those could’ve been made with the Copilot gpt5-mini that everyone gets for free when they install the Copilot extension in VS Code.



  • Good games are orthogonal to AI usage. It’s possible to have a great game that was written with AI using AI-generated assets. Just as much as it’s possible to have a shitty one.

    If AI makes creating games easier, we’re likely to see 1000 shitty games for every good one. But at the same time we’re also likely to see successful games made by people who had great ideas but never had the capital or skills to bring them to life before.

    I can’t predict the future of AI but it’s easy to imagine a state where everyone has the power to make a game for basically no cost. Good or bad, that’s where we’re heading.

    If making great games doesn’t require a shitton of capital, the ones who are most likely to suffer are the rich AAA game studios. Basically, the capitalists. Because when capital isn’t necessary to get something done anymore, capital becomes less useful.

    Effort builds skill but it does not build quality. You could put in a ton of effort and still fail or just make something terrible. What breeds success is iteration (and luck). Because AI makes iteration faster and easier, it’s likely we’re going to see a lot of great things created using it.




  • “Finish high school, get a job, get married, have kids, go to church. Those are all in your control.” -Ben Shapiro

    …all within the comfort of your parents’ home (if they even have one big enough for you, your wife, and your kids). Because that’s all they can afford.

    Also: WTF does going to church have to do with anything‽ That’s not going to land you a good job (blessed are the poor)! There are no marketable skills to be learned from going to church either (unless you want to socialize with pedophiles and other sex offenders in order to better understand Trump’s social circle; see: “pastor arrested”).



  • Data centers typically use closed-loop cooling systems, but those do still lose a bit of water each day that needs to be replaced. It’s not much—compared to the size of the data center—but it’s still a non-trivial amount.

    A study recently came out (it was discussed extensively on the Science VS podcast) that said a long conversation with an AI chatbot (e.g. ChatGPT) could use up to half a liter of water—in the worst-case scenario.

    This statistic has been used in the news quite a lot recently, but it’s a bad statistic: that water figure counts the water used by the power plant (for its own cooling). That’s typically water from ponds and the like built right alongside the power plant (your classic “cooling pond”). So it’s not as if the data centers are using 0.5L of fresh water that could be going to people’s homes.

    For reference, the actual data center water usage is 12% of that 0.5L: 0.06L of water (for a long chat). Also remember: This is the worst-case scenario with a very poorly-engineered data center.
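    Back-of-the-envelope, using the study’s numbers as reported above (a minimal sketch; the variable names are mine):

    ```python
    # Headline figure: 0.5 L per long chat, worst case,
    # power plant cooling + data center combined.
    total_water_l = 0.5
    data_center_share = 0.12  # fraction the data center itself actually consumes

    print(f"{total_water_l * data_center_share:.2f} L")  # -> 0.06 L per long chat
    ```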

    Another relevant stat from the study: generating images uses much less energy/water than chat. However, generating videos uses an order of magnitude more than both (combined).

    So if you want the lowest possible energy usage of modern, generative AI: Use fast (low parameter count), open source models… To generate images 👍



  • The power use from AI is orthogonal to renewable energy. From the news, you’d think that AI data centers have become the number one cause of global warming. Yet, they’re not even in the top 100. Even at the current pace of data center buildouts, they won’t make the top 100… ever.

    AI data center power utilization is a regional problem specific to certain localities. It’s a bad idea to build such a data center in certain places but companies do it anyway (for economic reasons that are easy to fix with regulation). It’s not a universal problem across the globe.

    Aside: I’d like to point out that the fusion reactor designs currently being built and tested were created using AI. Much of the advancements in that area are thanks to “AI data centers”. If fusion power becomes a reality in the next 50 years it’ll have more than made up for any emissions from data centers. From all of them, ever.



  • It’s even more complicated than that: “AI” is not even a well-defined term. Back when Quake 3 was still in beta (“the demo”), id Software held a competition to develop “bot AIs” that could be added to a server so players would have something to play against while they waited for more people to join (or you could have players-vs-bots-style matches).

    That was over 25 years ago. What kind of “AI” do you think was used back then? 🤣

    The AI hater extremists seem to be in two camps:

    • Data center haters
    • The AI-is-killing-jobs crowd

    The data center haters are the strangest, to me. Because there’s this default assumption that data centers can never be powered by renewable energy and that AI will never improve to the point where it can all be run locally on people’s PCs (and other, personal hardware).

    Yet every day there’s news suggesting that local AI is performing better and better. It seems inevitable—to me—that “big AI” will go the same route as mainframes.




  • We learned this lesson in the 90s: If you put something on the (public) Internet, assume it will be scraped (and copied and used in various ways without your consent). If you don’t want that, don’t put it on the Internet.

    There are all sorts of clever things you can do to prevent scraping, but none of them are 100% effective and all have negative tradeoffs.
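    For the curious, the standard mechanism is robots.txt, and it’s purely advisory, which is exactly why it’s not 100% effective. A minimal sketch using Python’s stdlib (example.com is a placeholder; GPTBot is OpenAI’s crawler user-agent):

    ```python
    # A well-behaved crawler checks robots.txt before fetching anything.
    # A rude one simply skips this step; nothing enforces it.
    from urllib import robotparser

    rp = robotparser.RobotFileParser()
    rp.set_url("https://example.com/robots.txt")  # placeholder site
    rp.read()

    # Returns False if the site disallows GPTBot
    # ("User-agent: GPTBot" / "Disallow: /" in robots.txt).
    print(rp.can_fetch("GPTBot", "https://example.com/some-page"))
    ```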

    For reference, the big AI players aren’t scraping the Internet to train their LLMs anymore. That creates too many problems, not the least of which is making yourself vulnerable to poisoning. If an AI is scraping your content at this point, it’s either amateurs or they’re just indexing it like Google would (or both) so the AI knows where to find it without having to rely on third parties like Google.

    Remember: Scraping the Internet is everyone’s right. Trying to stop it is futile and only benefits the biggest of the big search engines/companies.



  • It’s not a shame. Have you tried this? Try it now! It only takes a minute.

    Test a bunch of images against ChatGPT, Gemini, and Claude. Ask each one whether the image was AI-generated. I think you’ll be surprised.

    Gemini is the current king of that sort of image analysis but the others should do well too.

    What do you think the experts use? LOL! They’re going to run an image through the exact same process the chatbots would use, plus some additional steps if they didn’t find anything obvious on the first pass.
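    If you want to script that first pass yourself, here’s a minimal sketch using the OpenAI Python SDK (the model name, prompt, and filename are just placeholder assumptions; any vision-capable model works):

    ```python
    # Ask a multimodal model whether an image looks AI-generated.
    # Assumes OPENAI_API_KEY is set in the environment.
    import base64
    from openai import OpenAI

    client = OpenAI()

    def check_image(path: str) -> str:
        # Encode the local image as a base64 data URL for the API.
        with open(path, "rb") as f:
            b64 = base64.b64encode(f.read()).decode()
        resp = client.chat.completions.create(
            model="gpt-4o",  # placeholder: any vision-capable model
            messages=[{
                "role": "user",
                "content": [
                    {"type": "text",
                     "text": "Was this image AI-generated? Point out any telltale artifacts."},
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/png;base64,{b64}"}},
                ],
            }],
        )
        return resp.choices[0].message.content

    print(check_image("suspect.png"))  # hypothetical filename
    ```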