Just want to clarify, this is not my Substack, I’m just sharing this because I found it insightful.

The author describes himself as a “fractional CTO” (no clue what that means, don’t ask me) and advisor. His clients asked him how they could leverage AI, so he decided to experience it for himself. From the author (emphasis mine):

I forced myself to use Claude Code exclusively to build a product. Three months. Not a single line of code written by me. I wanted to experience what my clients were considering—100% AI adoption. I needed to know firsthand why that 95% failure rate exists.

I got the product launched. It worked. I was proud of what I’d created. Then came the moment that validated every concern in that MIT study: I needed to make a small change and realized I wasn’t confident I could do it. My own product, built under my direction, and I’d lost confidence in my ability to modify it.

Now when clients ask me about AI adoption, I can tell them exactly what 100% looks like: it looks like failure. Not immediate failure—that’s the trap. Initial metrics look great. You ship faster. You feel productive. Then three months later, you realize nobody actually understands what you’ve built.

    • BarneyPiccolo@lemmy.today

      I don’t know shit about anything, but it seems to me that the AI already thought it gave you the best answer, so going back to the problem for a proper answer is probably not going to work. But I’d try it anyway, because what do you have to lose?

      Unless it gets pissed off at being questioned, and destroys the world. I’ve seen more than a few movies about that.

      • Evotech@lemmy.world

        You are, in a way, correct. If you keep sending the context of the “conversation” (in the same chat), it will reinforce its previous implementation.

        The way AIs “remember” stuff is that you just give the model the entire thread of context together with your new question. It’s all just text in, text out.

        But once you start a new conversation (meaning you don’t give it any previous chat history), it’s essentially a “new” AI that doesn’t know anything about your project.

        It will have a new random seed, and if you ask it to look for mistakes etc., it will happily tell you that the last implementation was all wrong and here’s how to fix it.

        It’s like a Minecraft world: the same seed gets you the same map every time. With AIs it’s the same thing, ish. Start a new conversation or ask a different model (GPT, Google, Claude, etc.) and it will do things in a new way.
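
        This is easy to see in code: the model itself is stateless, and the client just replays the whole transcript every turn. Here’s a minimal sketch in Python, where call_model is a made-up stand-in for whatever chat-completion API you actually use (the function and the message format are illustrative, not any vendor’s real API):

        # The model has no memory; the client resends the full history each call.
        def call_model(messages: list[dict]) -> str:
            """Hypothetical API call: full transcript in, reply text out."""
            return "..."  # stand-in for the model's generated answer

        history = []

        def ask(question: str) -> str:
            history.append({"role": "user", "content": question})
            reply = call_model(history)  # the ENTIRE thread goes in every time
            history.append({"role": "assistant", "content": reply})
            return reply

        ask("Implement feature X.")
        ask("Any mistakes?")  # sees its own answer above, so it tends to defend it

        history.clear()  # "new conversation": no prior context at all
        ask("Review this implementation of feature X.")  # fresh eyes, new answer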

    • theneverfox@pawb.social

      AI isn’t good at changing code, or really even understanding it… It’s good at writing it, ideally 50-250 lines at a time.

      • Evotech@lemmy.world

        I’m just not following the mindset of “get AI to code your whole program” and then having real people maintain it. Sounds counterproductive.

        I think you need to write your code for an AI to maintain: use static code analysers like SonarQube to ensure the code stays maintainable (low cognitive complexity) and that functions are small and well defined as you write them.
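
        As a toy illustration of the kind of thing SonarQube’s cognitive-complexity rule penalises (exact thresholds depend on your quality profile, and the order/dispatch names below are made up): deep nesting scores badly, and the usual fix is extracting small, flat, well-named functions.

        def dispatch(item):
            # Stub so the sketch runs; stands in for real shipping logic.
            print("shipping", item["name"])

        def ship_order_nested(order):
            # High cognitive complexity: every nested branch adds to the score.
            if order is not None:
                if order["paid"]:
                    if order["items"]:
                        for item in order["items"]:
                            if item["in_stock"]:
                                dispatch(item)

        def ship_order_flat(order):
            # Same behaviour, much lower score: guard clauses plus a small helper.
            if order is None or not order["paid"]:
                return
            for item in order["items"]:
                dispatch_if_available(item)

        def dispatch_if_available(item):
            if item["in_stock"]:
                dispatch(item)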

      • lepinkainen@lemmy.world

        I’ve made full-ass changes on existing codebases with Claude.

        It’s a skill you can learn, pretty close to how you’d work with actual humans.