Just to clarify: this is not my Substack; I’m just sharing it because I found it insightful.

The author describes himself as a “fractional CTO” (no clue what that means, don’t ask me) and advisor. His clients asked him how they could leverage AI, so he decided to experience it for himself. From the author (emphasis mine):

I forced myself to use Claude Code exclusively to build a product. Three months. Not a single line of code written by me. I wanted to experience what my clients were considering—100% AI adoption. I needed to know firsthand why that 95% failure rate exists.

I got the product launched. It worked. I was proud of what I’d created. Then came the moment that validated every concern in that MIT study: I needed to make a small change and realized I wasn’t confident I could do it. My own product, built under my direction, and I’d lost confidence in my ability to modify it.

Now when clients ask me about AI adoption, I can tell them exactly what 100% looks like: it looks like failure. Not immediate failure—that’s the trap. Initial metrics look great. You ship faster. You feel productive. Then three months later, you realize nobody actually understands what you’ve built.

  • dejected_warp_core@lemmy.world · +18 · 3 hours ago

    To quote your quote:

    I got the product launched. It worked. I was proud of what I’d created. Then came the moment that validated every concern in that MIT study: I needed to make a small change and realized I wasn’t confident I could do it. My own product, built under my direction, and I’d lost confidence in my ability to modify it.

    I think the author just independently rediscovered “middle management”. Indeed, when you delegate the gruntwork under your responsibility, those same people are who you go to when addressing bugs and new requirements. It’s not on you to effect repairs: it’s on your team. I am Jack’s complete lack of surprise. The idea that you can rely on AI to do nuanced work like this and arrive at the exact correct answer is naive at best. I’d be sweating too.

    • BarneyPiccolo@lemmy.today · +1 · 13 minutes ago

      I don’t know shit about anything, but it seems to me that the AI already thought it gave you the best answer, so going back to the problem for a proper answer is probably not going to work. But I’d try it anyway, because what do you have to lose?

      Unless it gets pissed off at being questioned, and destroys the world. I’ve seen more than a few movies about that.

      • Evotech@lemmy.world · +1 · 31 seconds ago

        You are in a way correct. If you keep sending the context of the “conversation”, it will reinforce its previous implementation. But once you start a new conversation (meaning you don’t give it any previous chat history), it’s essentially a new AI.

        With a new random seed, if you ask it to look for mistakes and so on, it will happily tell you that the last implementation was all wrong and here’s how to fix it.

        It’s like a Minecraft world: the same seed will get you the same map every time. With AIs it’s the same thing, ish. Start a new conversation or ask a different model (GPT, Google, Claude, etc.) and it will do things in a new way.
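
        To make the seed analogy concrete, here is a minimal sketch in plain Python (the standard “random” module, not any LLM API) of how a fixed seed reproduces the same output and reseeding gives you a fresh “world”:

            import random

            # Same seed -> same "map" (same sequence), every time.
            random.seed(42)
            first_run = [random.randint(0, 99) for _ in range(5)]

            random.seed(42)
            second_run = [random.randint(0, 99) for _ in range(5)]
            assert first_run == second_run  # deterministic under a fixed seed

            # Reseeding from system entropy is the "new conversation":
            random.seed()
            third_run = [random.randint(0, 99) for _ in range(5)]  # almost surely different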

    • theneverfox@pawb.social · +11 · 3 hours ago

      AI isn’t good at changing code, or really even understanding it… It’s good at writing it, ideally 50-250 lines at a time

      • Evotech@lemmy.world · +5/-1 · edited · 3 hours ago

        I’m just not following the mindset of “get AI to code your whole program” and then have real people maintain it? Sounds counterproductive.

        I think you need to write your code for an AI to maintain. Use static code analysers like SonarQube to ensure, as you write it, that the code is maintainable (low cognitive complexity) and that functions are small and well defined.
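
        As a toy illustration of what such an analyser pushes you toward (a hypothetical example of mine, not from SonarQube’s docs): replace nested branching with small, flat, well-named functions.

            from dataclasses import dataclass

            @dataclass
            class Order:
                country: str
                total: float

            # Before: nested branches that score high on cognitive complexity.
            def shipping_cost_nested(order: Order) -> int:
                if order.country == "US":
                    if order.total > 100:
                        return 0
                    else:
                        return 5
                else:
                    if order.total > 100:
                        return 10
                    else:
                        return 20

            # After: small, flat helpers a reviewer (human or AI) can follow at a glance.
            def is_free_shipping(order: Order) -> bool:
                return order.country == "US" and order.total > 100

            def shipping_cost(order: Order) -> int:
                if is_free_shipping(order):
                    return 0
                if order.country == "US":
                    return 5
                return 10 if order.total > 100 else 20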

      • lepinkainen@lemmy.world · +4/-5 · 3 hours ago

        I’ve made full-ass changes on existing codebases with Claude

        It’s a skill you can learn, pretty close to how you’d work with actual humans

  • lepinkainen@lemmy.world · +2/-1 · 3 hours ago

    Same thing would happen if they were a non-coder project manager or designer for a team of actual human programmers.

    Stuff done, shipped and working.

    “But I can’t understand the code 😭”, yes. You were the project manager; why should you?

    • JcbAzPx@lemmy.world · +9 · 2 hours ago

      I think the point is that someone should understand the code. In this case, no one does.

  • Agent641@lemmy.world · +40/-1 · 11 hours ago

    I cannot understand and debug code written by AI. But I also cannot understand and debug code written by me.

    Let’s just call it even.

    • I Cast Fist@programming.dev · +5/-2 · 4 hours ago

      At least you can blame yourself for your own shitty code, which hopefully will never attempt to “accidentally” erase the entire project

  • Suffa@lemmy.wtf · +30/-5 · 12 hours ago

    AI is really great for small apps. I’ve saved so many weekend hours that would otherwise be spent coding a small thing I need a few times; now I can get an AI to spit it out for me.

    But anything big and it’s fucking stupid, it cannot track large projects at all.

      • utopiah@lemmy.world · +4 · 4 hours ago

        FWIW that’s a good question, but IMHO the better question is:

        What kind of small things have you vibed out that didn’t actually exist, or at least that you couldn’t find after a 5-minute search on open source forges like Codeberg, GitLab, GitHub, etc.?

        Because making something quick that kind of works is nice… but why even do so in the first place if it’s already out there, maybe maintained but at least tested?

        • Victor@lemmy.world · +4 · 4 hours ago

          Since you put such emphasis on “better”: I’d still like to have an answer to the one I posed.

          Yours would be a reasonable follow-up question if we noticed that their vibed projects are utilities already available in the ecosystem. 👍

          • utopiah@lemmy.world · +1 · 4 hours ago

            Sure, you’re right, I just worry (maybe needlessly) about people re-inventing the wheel because it’s “easier” than searching, without properly understanding the cost of the entire process.

          • utopiah@lemmy.world · +1/-1 · edited · 3 hours ago

            Open an issue to explain why it’s not enough for you? If you can make a PR that actually implements the things you need, do it?

            My point isn’t to say that everything is already out there and perfectly fits your need, only that a LOT is already out there. If we all re-invent the wheel in our own corners, it’s basically impossible to learn from each other.

            • lepinkainen@lemmy.world · +3 · 2 hours ago

              These are the principles I follow:

              https://indieweb.org/make_what_you_need

              https://indieweb.org/use_what_you_make

              I don’t have time to argue with FOSS creators to get my stuff in their projects, nor do I have the energy to maintain a personal fork of someone else’s work.

              It’s much faster for me to start up Claude and code a very bespoke system just for my needs.

              I don’t like web UIs nor do I want to run stuff in a Docker container. I just want a scriptable CLI application.

              Like I just did a subtitle translation tool in 2-3 nights that produces much better quality than any of the ready made solutions I found on GitHub. One of which was an *arr stack web monstrosity and the other was a GUI application.

              Neither did what I needed in the level of quality I want, so I made my own. One I can automate like I want and have running on my own server.

        • jj4211@lemmy.world · +2/-1 · 3 hours ago

          So if it can be vibe coded, it’s pretty much certainly already a “thing”, but with some awkwardness.

          Maybe what you need is a combination of two utilities, maybe the interface is very awkward for your use case, maybe you have to make a tiny compromise because it doesn’t quite match.

          Maybe you want a little utility to do stuff with media. Now you could navigate your way through ffmpeg and mkvextract, which together handles what you want, with some scripting to keep you from having to remember the specific way to do things in the myriad of stuff those utilities do. An LLM could probably knock that script out for you quickly without having to delve too deeply into the documentation for the projects.
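
          As a rough sketch of that kind of glue script (file names hypothetical; “-map 0:s:0” is ffmpeg’s selector for the first subtitle stream):

              import subprocess
              import sys
              from pathlib import Path

              def extract_first_subtitle(video: Path) -> Path:
                  """Pull the first subtitle stream out of a video via ffmpeg."""
                  out = video.with_suffix(".srt")
                  # -map 0:s:0 = first subtitle stream of the first input file.
                  subprocess.run(
                      ["ffmpeg", "-y", "-i", str(video), "-map", "0:s:0", str(out)],
                      check=True,
                  )
                  return out

              if __name__ == "__main__":
                  extract_first_subtitle(Path(sys.argv[1]))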

            • jj4211@lemmy.world · +1 · 3 hours ago

              It’s certainly a use case that an LLM has a decent shot at.

              Of course, having said that, I gave it a spin with Gemini 3 and it just hallucinated a bunch of crap that doesn’t exist instead of properly identifying capable libraries or frontending media tools…

              But in principle, and on occasion, it can take care of little convenience utilities/functions like that. I continue to have no idea, though, why some people claim to be able to ‘vibe code’ up anything of significance, when it completely screwed up even what I thought was an easy hit…

      • Random Dent@lemmy.ml · +2/-1 · 3 hours ago

        Not OP, but I made a little menu thing for launching VMs, and a script for grabbing trailers for downloaded movies that reads the name of the folder, finds the trailer, uses yt-dlp to grab it, puts it in the folder, and renames it.
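
        Not their actual script, but that trailer grabber could plausibly be a few lines of Python around the yt-dlp CLI (library path and naming scheme made up; “ytsearch1:” tells yt-dlp to download the top search result):

            import subprocess
            from pathlib import Path

            MOVIES = Path("/srv/media/movies")  # hypothetical library root

            for folder in sorted(MOVIES.iterdir()):
                if not folder.is_dir():
                    continue
                # Skip folders that already have a trailer.
                if any(folder.glob(f"{folder.name}-trailer.*")):
                    continue
                # "ytsearch1:" downloads the top YouTube search result.
                subprocess.run(
                    ["yt-dlp", f"ytsearch1:{folder.name} trailer",
                     "-o", str(folder / f"{folder.name}-trailer.%(ext)s")],
                    check=True,
                )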

      • 6nk06@sh.itjust.works · +8 · 7 hours ago

        I’m curious about that too, since you can “create” most small applications with a few lines of Bash, pipes, and all the tools already available on Linux.

  • deathbird@mander.xyz · +26 · 15 hours ago

    I think this kinda points to why AI is pretty decent for short videos, photos, and texts. It produces outputs that one applies meaning to, and humans are meaning-making animals. A computer can’t overlook or rationalize a coding error the same way.

  • pdxfed@lemmy.world · +56 · 18 hours ago

    Great article, brave and correct. Good luck convincing the same leaders who blindly believe in a magical trend for this quarter’s or next quarter’s numbers; they don’t care about things a year away, let alone 10.

    I work in HR and was struck by the parallel with management jobs being gutted by major corps starting in the 80s and 90s during “downsizing”; they either never replaced those roles or offshored them, with the Big 4 telling them it was the future of business. Know who is now providing consultation to them on why they have poor ops, processes, high turnover, etc.? Take $ on the way in, and on the way out. AI is just the next in a long line of smart people pretending they know your business while you abdicate knowing your business or your employees.

    Hope leaders can be a bit braver and wiser this go ’round so we don’t get to a cliff’s edge in software.

    • mirshafie@europe.pub · +3 · 14 hours ago

      Exactly. The problem isn’t moving part of production to some other facility or buying a part that you used to make in-house. It’s abdicating an entire process that you need to be involved in if you’re going to stay on top of the game long-term.

      Claude Code is awesome but if you let it do even 30% of the things it offers to do, then it’s not going to be your code in the end.

  • Unlearned9545@lemmy.world · +48 · 20 hours ago

    Fractional CTO: some small companies benefit from the senior experience of these kinds of executives but don’t have the money or the need to hire one full time. So these executives serve as C-suite for various companies, a fraction of their time each.

  • raspberriesareyummy@lemmy.world · +100/-38 · 21 hours ago

    So there’s actual developers who could tell you from the start that LLMs are useless for coding, and then there’s this moron & similar people who first have to fuck up an ecosystem before believing the obvious. Thanks fuckhead for driving RAM prices through the ceiling… And for wasting energy and water.

    • psycotica0@lemmy.ca · +96 · 19 hours ago

      I can at least kinda appreciate this guy’s approach. If we assume that AI is a magic bullet, then it’s not crazy to assume we, the existing programmers, would resist it just to save our own jobs. Or we’d complain because it doesn’t do things our way, but we’re the old way and this is the new way. So maybe we’re just being whiny and can be ignored.

      So he tested it to see for himself, and what he found was that he agreed with us, that it’s not worth it.

      Ignoring experts is annoying, but doing some of your own science and getting first-hand experience isn’t always a bad idea.

      • 5too@lemmy.world · +44 · 16 hours ago

        And not only did he see for himself, he wrote up and published his results.

      • bassomitron@lemmy.world · +38/-1 · 18 hours ago

        100% this. The guy was literally a consultant and a developer. It’d just be bad business for him to outright dismiss AI without actual hands-on experience with said product. Clients want that type of experience and knowledge when paying a business to give them advice and develop a product for them.

        • raspberriesareyummy@lemmy.world · +2/-15 · 15 hours ago

          Except that outright dismissing snake oil would not at all be bad business. Calling a turd a diamond neither makes it sparkle, nor does it get rid of the stink.

          • fruitycoder@sh.itjust.works · +16 · 11 hours ago

            I can’t just call everything snake oil without some actual measurements and tests.

            Naive cynicism is just as naive as blind optimism

            • raspberriesareyummy@lemmy.world · +1/-14 · 11 hours ago

              I can’t just call everything snake oil without some actual measurements and tests.

              With all due respect, you have not understood the basic mechanic of machine learning and the consequences thereof.

      • raspberriesareyummy@lemmy.world · +1/-13 · 15 hours ago

        Problem is that statistical word prediction has fuck-all to do with AI. It’s not AI, and it never will be. By “giving it a try” you contribute to the spread of this snake oil. And even if someone came up with actual AI, if it used enough resources to impact our ecosystem, instead of being a net positive, and if it was in the greedy hands of billionaires, then using it is equivalent to selling your executioner an axe.

        • jve@lemmy.world · +2/-1 · edited · 3 hours ago

          Terrible take. Thanks for playing.

          It’s actually impressive the level of downvotes you’ve gathered in what is generally a pretty anti-ai crowd.

    • khepri@lemmy.world · +24/-1 · 19 hours ago

      They are useful for doing the kind of boilerplate boring stuff that any good dev should have largely optimized and automated already. If it’s 1) dead simple and 2) extremely common, then yeah an LLM can code for you, but ask yourself why you don’t have a time-saving solution for those common tasks already in place? As with anything LLM, it’s decent at replicating how humans in general have responded to a given problem, if the problem is not too complex and not too rare, and not much else.

      • Lambda@lemmy.ca · +22 · 18 hours ago

        That’s exactly what I so often find myself saying when people show off some neat thing that a code bot “wrote” for them in x minutes after only y minutes of “prompt engineering”. I’ll say: yeah, I could also do that in y minutes of (bash scripting/vim macroing/system architecting/whatever), but the difference is that afterwards I have a reusable solution that I understand, that is automated, that is robust, and that didn’t consume a ton of resources. And as a bonus I got marginally better as a developer.

        It’s funny that if you stuck them in an RPG and gave them an ability to “kill any level 1-x enemy instantly, but don’t gain any xp for it”, they’d all see it as the trap it is, but they can’t see how that’s what AI so often is.

      • raspberriesareyummy@lemmy.world · +5 · 15 hours ago

        As you said, “boilerplate” code can be script-generated, and there are IDEs that already do this, but in a deterministic way, so that you don’t have to proof-read every single line to avoid catastrophic security or crash flaws.
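
        For instance, a deterministic generator can be a few lines of ordinary Python (field spec hypothetical); the same input always produces the same output, so there is nothing new to proof-read on each run:

            # Generate a dataclass from a field spec, deterministically.
            fields = [("name", "str"), ("email", "str"), ("age", "int")]

            lines = ["from dataclasses import dataclass", "", "@dataclass", "class User:"]
            lines += [f"    {name}: {typ}" for name, typ in fields]
            print("\n".join(lines))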

    • InvalidName2@lemmy.zip · +25/-9 · 20 hours ago

      And then there are actual good developers who could or would tell you that LLMs can be useful for coding, in the right context and if used intelligently. No harm, for example, in having LLMs build out some of your more mundane code like unit/integration tests, have it help you update your deployment pipeline, generate boilerplate code that’s not already covered by your framework, etc. That it’s not able to completely write 100% of your codebase perfectly from the get-go does not mean it’s entirely useless.

      • JcbAzPx@lemmy.world · +1 · 2 hours ago

        If it’s boilerplate, copy/paste and find/replace work just as well, without needing data centers in the desert to develop.

      • Soggy@lemmy.world · +31/-1 · 19 hours ago

        Other than that it’s work that junior coders could be doing, to develop the next generation of actual good developers.

        • SreudianFlip@sh.itjust.works · +18/-1 · edited · 19 hours ago

          Yes, and that’s exactly what everyone forgets about automating cognitive work. Knowledge or skill needs to be intergenerational or we lose it.

          If you have no junior developers, who will turn into senior developers later on?

          • pinball_wizard@lemmy.zip · +6 · 18 hours ago

            If you have no junior developers, who will turn into senior developers later on?

            At least it isn’t my problem. As long as I have CrowdStrike, Cloudflare, Windows11, AWS us-east-1 and log4j… I can just keep enjoying today’s version of the Internet, unchanged.

      • raspberriesareyummy@lemmy.world · +1/-9 · 15 hours ago

        And then there are actual good developers who could or would tell you that LLMs can be useful for coding

        The only people who believe that are managers and bad developers.

        • keegomatic@lemmy.world · +6 · 14 hours ago

          You’re wrong, whether you figure that out now or later. Using an LLM where you gatekeep every write is something that good developers have started doing. The most senior engineers I work with are the ones who have adopted the most AI into their workflow, and with the most care. There’s a difference between vibe coding and responsible use.

          • raspberriesareyummy@lemmy.world · +2/-5 · 11 hours ago

            There’s a difference between vibe coding and responsible use.

            There’s also a difference between the occasional evening getting drunk and alcoholism. That doesn’t make an occasional event healthy, nor does it mean you are qualified to drive a car in that state.

            People who use LLMs in production code are - by definition - not “good developers”. Because:

            • a good developer has a clear grasp on every single instruction in the code - and critically reviewing code generated by someone else is more effort than writing it yourself
            • pushing code to production without critical review is grossly negligent and compromises data & security

            This already means the net gain with use of LLMs is negative. Can you use it to quickly push out some production code & impress your manager? Possibly. Will it be efficient? It might be. Will it be bug-free and secure? You’ll never know until shit hits the fan.

            Also: using LLMs to generate code, a dev will likely be violating copyrights of open source left and right, effectively copy-pasting licensed code from other people without attributing authorship, i.e. they exhibit parasitic behavior & outright violate laws. Furthermore the stuff that applies to all users of LLMs applies:

            • they contribute to the hype, fucking up our planet, causing brain rot and skill loss on average, and pumping hardware prices to insane heights.
            • keegomatic@lemmy.world · +1 · 3 hours ago

              We have substantially similar opinions, actually. I agree on your points of good developers having a clear grasp over all of their code, ethical issues around AI (not least of which are licensing issues), skill loss, hardware prices, etc.

              However, what I have observed in practice is different from the way you describe LLM use. I have seen irresponsible use, and I have seen what I personally consider to be responsible use. Responsible use involves taking a measured and intentional approach to incorporating LLMs into your workflow. It’s a complex topic with a lot of nuance, like all engineering, but I would be happy to share some details.

              Critical review is the key sticking point. Junior developers also write crappy code that requires intense scrutiny. It’s not impossible (or irresponsible) to use code written by a junior in production, for the same reason. For a “good developer,” many of the quality problems are mitigated by putting roadblocks in place to…

              1. force close attention to edits as they are being written,
              2. facilitate handholding and constant instruction while the model is making decisions, and
              3. ensure thorough review at the time of design/writing/conclusion of the change.

              When it comes to making safe and correct changes via LLM, specifically, I have seen plenty of “good developers” in real life, now, who have engineered their workflows to use AI cautiously like this.

              Again, though, I share many of your concerns. I just think there’s nuance here and it’s not black and white/all or nothing.

    • jali67@lemmy.zip · +3 · edited · 19 hours ago

      Don’t worry. The people on LinkedIn and tech executives tell us it will transform everything soon!

    • ImmersiveMatthew@sh.itjust.works · +8/-12 · 18 hours ago

      I really have not found AI to be useless for coding. I have found it extremely useful and it has saved me hundreds of hours. It is not without its faults or frustrations, but it really is a tool I would not want to be without.

      • raspberriesareyummy@lemmy.world · +1/-12 · 15 hours ago

        That’s because you are not a proper developer, as proven by your comment. And you create legacy tech that will have a net cost in terms of maintenance or downtime.

        • ImmersiveMatthew@sh.itjust.works · +3/-3 · 12 hours ago

          I am for sure not a coder, as it has never been my strong suit, but I am without a doubt an awesome developer, or I would not have a top-rated multiplayer VR app that is pushing the boundaries of what mobile VR can do.

          The only person who will have to look at my code is me so any and all issues be it my code or AI code will be my burden and AI has really made that burden much less. In fact, I recently installed Coplay in my Unity Engine Editor and OMG it is amazing at assisting not just with code, but even finding little issues with scene setup, shaders, animations and more. I am really blown away with it. It has allowed me to spend even less time on the code and more time imagineering amazing experiences which is what fans of the app care about the most. They couldn’t care less if I wrote the code or AI did as long as it works and does not break immersion. Is that not what it is all about at the end of the day?

          As long as AI helps you achieve your goals and your goals are grounded, including maintainability, I see no issues. Yeah, misdirected use of AI can lead to hard-to-maintain code down the line, but that is why you need a human developer in the loop to ensure the overall architecture and design make sense. Any code base can become hard to maintain if not thought through, be it human- or AI-written.

          • raspberriesareyummy@lemmy.world · +3/-4 · 11 hours ago

            Look, bless your heart if you have a successful app, but success / sales is not exclusive to products of quality. Just look around at all the slop that people buy nowadays.

            As long as AI helps you achieve your goals and your goals are grounded, including maintainability, I see no issues.

            Two issues with that:

            1. what you are using has nothing whatsoever to do with AI, it’s a glorified pattern repeater - an actual parrot has more intelligence
            2. if the destruction of entire ecosystems for slop is not an issue that you see, you should not be allowed anywhere near technology (as by now probably billions of people)
            • ImmersiveMatthew@sh.itjust.works · +2 · 5 hours ago

              I do not understand the point you are making about my particular situation, as I am not making slop. Plus, one person’s slop is another’s treasure. What exactly are you suggesting? The 2 issues you outlined seem like they are directed at someone else, perhaps?

              1. I am calling it AI as that is what it is called, but you are correct, it is a pattern predictor
              2. I am not creating slop but something deeply immersive and enjoyed by people. In terms of the energy used, I am on solar and run local LLMs.
              • raspberriesareyummy@lemmy.world · +1/-1 · 5 hours ago

                I didn’t say your particular application that I know nothing about is slop, I said success does not mean quality. And if you use statistical pattern generation to save time, chances are high that your software is not of good quality.

                Even solar energy is not harvested waste-free (chemical energy and production of cells). Nevertheless, even if it were, you are still contributing to the spread of slop and harming other people. Both through spreading acceptance of a technology used to harm billions of people for the benefit of a few, and through energy and resource waste.

                • ImmersiveMatthew@sh.itjust.works · +1 · 4 hours ago

                  I am sure my code could be better. I am also sure the SDKs I use could be better and the game engine could be better. For what I need, they all work well enough to get the job done. I am sure issues will come up as a result, as they have many times in the past already, even before LLMs helped, but that is par for the course for a developer to tackle.

  • vpol@feddit.uk · +58/-1 · 22 hours ago

    The developers can’t debug code they didn’t write.

    This is a bit of a stretch.

    • Xyphius@lemmy.ca · +46/-1 · 21 hours ago

      Agreed. 50% of my job is debugging code I didn’t write.

      • Evotech@lemmy.world · +2 · 5 hours ago

        I don’t get this argument. Isn’t the whole point that the AI will debug and implement small changes too?

        • Cyber Yuki@lemmy.world · +2 · 3 hours ago

          Think of an interior designer having to re-engineer the columns and load-bearing walls of a masonry construction.

          What are the proportions of cement and gravel for the mortar? What type of bricks to use? Do they comply with the PSI requirements? What caliber should the rebars be? What considerations for the pouring of concrete? Where to put the columns? What thickness? Will the building fall?

          “I don’t know that shit, I only design the color and texture of the walls!”

          And that, my friends, is why vibe coding fails.

          And it’s even worse: Because there are things you can more or less guess and research. The really bad part is the things you should know about but don’t even know they are a thing!

          Unknown unknowns: Thread synchronization, ACID transactions, resiliency patterns. That’s the REALLY SCARY part. Write code? Okay, sure, let’s give the AI a chance. Write stable, resilient code with fault tolerance, and EASY TO MAINTAIN? Nope. You’re fucked. Now the engineers are gone and the newbies are in charge of fixing bad code built by an alien intelligence that didn’t do its own homework and it’s easier to rewrite everything from scratch.

          • Evotech@lemmy.world · +1/-1 · 3 hours ago

            If you need to refactor your program, you might as well start from the beginning.

        • _g_be@lemmy.world · +1 · 29 minutes ago

          Yes, this is what I intended to write but I submitted it hastily.

          It’s like a catch-22: they can’t write code, so they vibe-code, but to maintain vibed code you would need to be able to write code to understand what’s actually happening.

    • funkless_eck@sh.itjust.works · +18 · 21 hours ago

      I mean I was trying to solve a problem t’other day (hobbyist) - it told me to create a

      function foo(bar): await object.foo(bar)

      then in object

      function foo(bar): _foo(bar)

      function _foo(bar): original_object.foo(bar)

      like literally passing a variable between three wrapper functions in two objects that did nothing except pass the variable back to the original function in an infinite loop

      add some layers and complexity and it’d be very easy to get lost

      • theparadox@lemmy.world · +13 · 19 hours ago

        The few times I’ve used LLMs for coding help, usually because I’m curious if they’ve gotten better, they let me down. Last time it was insistent that its solution would work as expected. When I gave it an example that wouldn’t work, it even broke down each step of the function giving me the value of its variables at each step to demonstrate that it worked… but at the step where it had fucked up, it swapped the value in the variable to one that would make the final answer correct. It made me wonder how much water and energy it cost me to be gaslit into a bad solution.

        How do people vibe code with this shit?

      • vpol@feddit.uk · +1 · 13 hours ago

        As a learning process it’s absolutely fine.

        You make a mess, you suffer, you debug, you learn.

        But you don’t call yourself a developer (at least I hope) on your CV.

    • mal3oon@lemmy.world · +1 · 12 hours ago

      I think it highly depends on the skill and experience of the dev. A lot of the people flocking to the vibe-coding hype are not necessarily people who know about coding practices (including code review, etc.), nor are they experienced in directing an AI agent to achieve such goals. The result is the MIT prediction. Although, this will start to change soon.

      • Rooster326@programming.dev · +1 · 19 hours ago

        If you’ve never had to debug code, are you really a developer?

        There is zero chance you have never written a bug, so… who is fixing them?

        Unless you just leave them because you work for Infosys or worse, but then I ask again: are you really a developer?

  • edgemaster72@lemmy.world · +190/-3 · 1 day ago

    Not immediate failure—that’s the trap. Initial metrics look great. You ship faster. You feel productive.

    And all they’ll hear is “not failure, metrics great, ship faster, productive” and go against your advice, because who cares about three months later; that’s next quarter, and line must go up now. I also found this bit funny:

    I forced myself to use Claude Code exclusively to build a product. Three months. Not a single line of code written by me… I was proud of what I’d created.

    Well, you didn’t create it, you said so yourself; not sure why you’d be proud. It’s almost like the conclusion should’ve been blindingly obvious right there.

    • AutistoMephisto@lemmy.world (OP) · +89 · 1 day ago

      The top comment on the article points that out.

      It’s an example of a far older phenomenon: once you automate something, the corresponding skill set and experience atrophy. It’s a problem that predates LLMs by quite a bit. If the only experience gained is with the automated system, the skills are never acquired. I’ll have to find it, but there’s a story about a modern fighter jet pilot not being able to handle a WWII-era Lancaster bomber; they don’t know how to do the stuff that modern warplanes do automatically.

      • drosophila@lemmy.blahaj.zone · +18 · edited · 20 hours ago

        The thing about this perspective is that I think it’s actually overly positive about LLMs, as it frames them as just the latest in a long line of automations.

        Not all automations are created equal. For example, compare using a typewriter to using a text editor. Besides a few details about the ink ribbon and movement mechanisms you really haven’t lost much in the transition. This is despite the fact that the text editor can be highly automated with scripts and hot keys, allowing you to manipulate even thousands of pages of text at once in certain ways. Using a text editor certainly won’t make you forget how to write like using ChatGPT will.

        I think the difference lies in the relationship between the person and the machine. To paraphrase Cathode Ray Dude, people who are good at using computers deduce the internal state of the machine, mirror (a subset of) that state as a mental model, and use that to plan out their actions to get the desired result. People that aren’t good at using computers generally don’t do this, and might not even know how you would start trying to.

        For years ‘user friendly’ software design has catered to that second group, as they are both the largest contingent of users and the ones that needed the most help. To do this, software vendors have generally done two things: try to move the necessary mental processes from the user’s brain into the computer and hide the computer’s internal state (so that it’s not implied that the user has to understand it, so that a user that doesn’t know what they’re doing won’t do something they’ll regret, etc.). Unfortunately this drives that first group of people up the wall. Not only does hiding the internal state of the computer make it harder to deduce, every “smart” feature they add to try to move this mental process into the computer itself only makes the internal state more complex and harder to model.

        Many people assume that if this is the way you think about software you are just an elitist gatekeeper, and that you only want your group to be able to use computers. Or you might even be accused of ableism. But the real reason is what I described above, even if it’s not usually articulated in that way.

        Now, I am of the opinion that the ‘mirroring the internal state’ method of thinking is the superior way to interact with machines, and the approach to user friendliness I described has actually done a lot of harm to our relationship with computers at a societal level. (This is an opinion I suspect many people here would agree with.) And yet that does not mean that I think computers should be difficult to use. Quite the opposite, I think that modern computers are too complicated, and that in an ideal world their internal states and abstractions would be much simpler and more elegant, but no less powerful. (Elaborating on that would make this comment even longer though.) Nor do I think that computers shouldn’t be accessible to people with different levels of ability. But just as a random person in a store shouldn’t grab a wheelchair user’s chair handles and start pushing them around, neither should Windows (for example) start changing your settings on updates without asking.

        Anyway, all of this is to say that I think LLMs are basically the ultimate in that approach to ‘user friendliness’. They try to move more of your thought process into the machine than ever before, their internal state is more complex than ever before, and it is also more opaque than ever before. They also reflect certain values endemic to the corporate system that produced them: that the appearance of activity is more important than the correctness or efficacy of that activity. (That is, again, a whole other comment though.) The result is that they are extremely mind numbing, in the literal sense of the phrase.

      • LOGIC💣@lemmy.world · +50/-2 · 1 day ago

        It’s more like the ancient phenomenon of spaghetti code. You can throw enough code at something until it works, but the moment you need to make a non-trivial change, you’re doomed. You might as well throw away the entire code base and start over.

        And if you want an exact parallel, I’ve said this from the beginning, but LLM coding at this point is the same as offshore coding was 20 years ago. You make a request, get a product that seems to work, but maintaining it, even by the same people who created it in the first place, is almost impossible.

        • Joe@discuss.tchncs.de · +6 · 13 hours ago

          Indeed… Throw-away code is currently where AI coding excels. And that is cool and useful: creating one-off scripts, self-contained modules, automating boilerplate, etc.

          You can’t quite use it the same way for complex existing code bases though… Not yet, at least…

      • ctrl_alt_esc@lemmy.ml · +28 · 1 day ago

        I agree with you, though proponents will tell you that’s by design. Supposedly, it’s like with high-level languages: you don’t need to know the actual instructions in assembly anymore to write a program with them. I think the difference is that high-level language instructions are still (mostly) deterministic, while an LLM prompt certainly isn’t.

        • Scubus@sh.itjust.works · +5 · 21 hours ago

          Yep, that’s the key issue that so many people fail to understand. They want AI to be deterministic, but it simply isn’t. It’s like expecting a human to get the right answer to any possible question; it’s just not going to happen. The only thing we can do is bring AI error rates lower than a human doing the same task, and it is at that point that the AI becomes useful. But even then there will always be the alignment issue and nondeterminism, meaning AI will never behave exactly the way we want or expect it to.

      • Cocodapuf@lemmy.world · +4 · 18 hours ago

        Once you automate something, the corresponding skill set and experience atrophy. It’s a problem that predates LLMs by quite a bit. If the only experience gained is with the automated system, the skills are never acquired.

        Well, to be fair, different skills are acquired. You’ve learned how to create automated systems; that’s definitely a skill. In one of my IT jobs there were a lot of people who did things manually, updating computers and installing software one machine at a time. But when someone figures out how to automate that, pushing the update to all machines in the room simultaneously, that’s valuable, and not everyone in that department knew how to do it.

        So yeah, I guess my point is, you can forget how to do things the old way, but that’s not always bad. Like, so you don’t really know how to use a scythe, that’s fine if you have a tractor, and trust me, you aren’t missing much.

    • boonhet@sopuli.xyz · +15/-3 · 1 day ago

      I forced myself to use Claude Code exclusively to build a product. Three months. Not a single line of code written by me… I was proud of what I’d created.

      Well you didn’t create it, you said so yourself, not sure why you’d be proud, it’s almost like the conclusion should’ve been blindingly obvious right there.

      Does a director create the movie? They don’t usually edit it, they don’t have to act in it, nor do all directors write movies. Yet the person giving directions is seen as the author.

      The idea is that vibe coding is like being a director or architect. I mean that’s the idea. In reality it seems it doesn’t really pan out.

      • rainwall@piefed.social · +17 · 23 hours ago

        You can vibe write and vibe edit a movie now too. They also turn out shit.

        The issue is that an LLM isn’t a person with skills and knowledge. It’s a complex guessing box that gets things kinda right, but not actually right, and it absolutely can’t tell what’s right or not. It has no actual skills or experience or humanity that a director can expect a writer or editor to have.

  • ignirtoq@feddit.online · +126 · 1 day ago

    We’re about to face a crisis nobody’s talking about. In 10 years, who’s going to mentor the next generation? The developers who’ve been using AI since day one won’t have the architectural understanding to teach. The product managers who’ve always relied on AI for decisions won’t have the judgment to pass on. The leaders who’ve abdicated to algorithms won’t have the wisdom to share.

    Except we are talking about that, and the tech bro response is “in 10 years we’ll have AGI and it will do all these things all the time permanently.” In their roadmap, there won’t be a next generation of software developers, product managers, or mid-level leaders, because AGI will do all those things faster and better than humans. There will just be CEOs, the capital they control, and AI.

    What’s most absurd is that, if that were all true, that would lead to a crisis much larger than just a generational knowledge problem in a specific industry. It would cut regular workers entirely out of the economy, and regular workers form the foundation of the economy, so the entire economy would collapse.

    “Yes, the planet got destroyed. But for a beautiful moment in time we created a lot of value for shareholders.”

    • UnspecificGravity@piefed.social · +24 · 23 hours ago

      Yep, and now you know why all the tech companies suddenly became VERY politically active. This future isn’t compatible with democracy. Once these companies no longer provide employment, their benefit to society becomes a big fat question mark.

    • HasturInYellow@lemmy.world · +21 · edited · 23 hours ago

      According to a study, the lower top 10% accounts for something like 68% of cash flow in the economy. Us plebs are being cut out altogether.

      That being said, I think if people can’t afford to eat, things might get bad. We will probably end up a kept population in these ghouls’ fever dreams.

      Edit: I’m an idiot.

      • Prior_Industry@lemmy.world · +10 · 23 hours ago

        Once Boston Dynamics-style dogs and androids can operate independently over a number of days, I’d say all bets are off that we would be kept around as pets.

        I’m fairly certain your Musks and Altmans would be content with a much smaller human population existing to only maintain their little bubble and damn everything else.

      • kreskin@lemmy.world · +8 · edited · 22 hours ago

        Edit: I’m an idiot.

        Same here. Nobody knows what the eff they are doing. Especially the people in charge. Much of life is us believing confident people who talk a good game but don’t know wtf they are doing and really shouldn’t be allowed to make even basic decisions outside a very narrow range of competence.

        We have an illusion of broad meritocracy and accountability in life but its mostly just not there.

    • Randelung@lemmy.world · +11 · 1 day ago

      Also, even if we make it through a wave of bullshit and all these companies fail in 10 years, the next wave will be ready and waiting, spouting the same crap - until it’s actually true (or close enough to be bearable financially). We can’t wait any longer to get this shit under control.

  • Rhoeri@lemmy.world · +30/-17 · 19 hours ago

    AI is hot garbage and anyone using it is a skillless hack. This will never not be true.

    • Joe@discuss.tchncs.de · +12/-4 · 12 hours ago

      While this is a popular sentiment, it is not true, nor will it ever be true.

      AI (LLMs & agents in the coding context, in this case) can serve as both a tool and a crutch. Those who learn to master the tools will gain benefit from them, without detracting from their own skill. Those who use them as a crutch will lose (or never gain) their own skills.

      Some skills will in turn become irrelevant in day-to-day life (as is always the case with new tech), and we will adapt in turn.

      • Rhoeri@lemmy.world · +6/-9 · 12 hours ago

        LLMs exist so that skill-less hacks can pretend to be skilled artists. It’s a shortcut to success.

        • Joe@discuss.tchncs.de · +4/-3 · edited · 12 hours ago

          That this is and will be abused is not in question. :-P

          You are making a leap though.

      • Rhoeri@lemmy.world · +15/-16 · 16 hours ago

        Do you not know the difference between an automated process and machine learning?

        • 5gruel@lemmy.world · +9/-1 · 12 hours ago

          The thing with being cocky is, if you are wrong it makes you look like an even bigger asshole

          https://en.wikipedia.org/wiki/AlphaFold

          The program uses a form of attention network, a deep learning technique that focuses on having the AI identify parts of a larger problem, then piece it together to obtain the overall solution.

            • Suffa@lemmy.wtf · +8/-2 · edited · 12 hours ago

              Cool, now do an environmental impact assessment on the server hosting your instance while you pollute by mindlessly talking shit on the Internet.

              I’ll take AI unfolding proteins over you posting any day.

              • Rhoeri@lemmy.world · +1/-10 · 11 hours ago

                Hilarious. You’re comparing a lemmy instance to AI data centers. There’s the proof I needed that you have no fucking clue what you’re talking about.

                “bUt mUh fOLdeD pRoTEinS,” said the AI minion.

        • nullroot@lemmy.world · +11/-1 · 14 hours ago

          Yes? Machine learning has been huge for protein folding, and not because anyone is stupid; it’s because it’s a task uniquely suited for machine learning, of which there are many. But none of that is what this AI bubble is really about, and even though I find the underlying math and technology fascinating, I share the disdain for how the bulk of it is currently being used.

  • HugeNerd@lemmy.ca · +30 · 22 hours ago

    Computers are too powerful and too cheap. Bring back COBOL, painfully expensive CPU time, and some sort of basic knowledge of what’s actually going on.

    Pain for everyone!

    • Thorry@feddit.org · +17 · 22 hours ago

      Yeah, I think around the Pentium 200 MHz point was the sweet spot. Powerful enough to do a lot of things, but not so powerful that software can be as inefficient and wasteful as it is today.

    • HC4L@lemmy.world · +7 · 21 hours ago

      Be careful what you wish for: with RAM prices soaring, owning a home computer might become less of an option. Luckily we can get a subscription for computing power easily!

      • Omgpwnies@lemmy.world · +9 · 21 hours ago

        I built a new PC early October, literally 2 weeks later RAM prices went nuts… so glad I pulled the trigger when I did