• Luden@lemmings.world
    link
    fedilink
    arrow-up
    4
    ·
    41 minutes ago

    I am a game developer and a web developer, and I sometimes use AI just to write template code for me so I can get the boilerplate done faster. For the rest of the code, AI is soooo dumb it’s basically impossible to get something that works!

    • Pyr@lemmy.ca
      link
      fedilink
      English
      arrow-up
      1
      ·
      1 minute ago

      Yes, I feel like many people misunderstand AI’s capabilities.

      They think it somehow comes up with the best solution, when really it’s more like lightning: it takes the path of least resistance. It finds whatever works fastest, if it can manage that at all without making something up and then lying that it works.

      It by no means creates elegant and efficient solutions to anything.

      AI is just a tool. You still need to know what you are doing to be able to tell whether its solution is worth anything, and then you still need to be able to adjust and tweak it.

      It’s most useful for giving you an idea of how to do something, by suggesting a method or solution you may not have known about or wouldn’t have considered. Having it test your own stuff or make slight adjustments is useful too.

  • derAbsender@piefed.social
    link
    fedilink
    English
    arrow-up
    6
    ·
    1 hour ago

    Stupid question:

    Are there really no safeguards in the merging process except human oversight?

    Isn’t there some “in review” state where people who want to see the experimental stuff can pull it, and if enough™ people say “this new shit is okay” it gets merged?

    That way the main project doesn’t get poisoned, everyone can still contribute in some way, and those who want to experiment can test the new stuff.

    • Kissaki@feddit.org
      link
      fedilink
      English
      arrow-up
      5
      ·
      edit-2
      57 minutes ago

      Most projects don’t have enough people or external interest for that kind of process.

      It would be possible to build some tooling like that, but standard forges don’t provide it, so it’d feel cumbersome.

      And in the end you’re back at contributors, trustworthiness, and quality control, because testing and reviewing are contributions too. You don’t want just a popularity contest (“I want this”), nor to blindly trust unknown contributors.

    • Little8Lost@lemmy.world
      link
      fedilink
      English
      arrow-up
      1
      ·
      47 minutes ago

      It would be nice to bump up the useful stuff through the community, but even then there could be bot accounts pushing the crap to the top.

  • ZeroOne@lemmy.world
    link
    fedilink
    English
    arrow-up
    35
    arrow-down
    1
    ·
    3 hours ago

    So I guess it is time to switch to a different style of FOSS development?

    The cathedral style, as used by Fossil: to contribute, you have to be manually admitted into the group. It’s a high-trust environment where devs know each other on a first-name basis.

    • ThirdConsul@lemmy.zip
      link
      fedilink
      English
      arrow-up
      3
      ·
      18 minutes ago

      What if I want to contribute to a FOSS project because I’m using it but I don’t want to make new friends?

    • RemADeus@thelemmy.club
      link
      fedilink
      English
      arrow-up
      6
      arrow-down
      1
      ·
      2 hours ago

      That is a wonderful method because it works similarly to the way many Fediverse server administrators admit people for new accounts. This way the slop is immediately filtered out.

      • nightlily@leminal.space
        link
        fedilink
        English
        arrow-up
        3
        ·
        50 minutes ago

        It’s discussed in the Bluesky thread, but the CI costs are too high on GitLab and Codeberg for Godot’s workflow.

      • e8d79@discuss.tchncs.de
        link
        fedilink
        English
        arrow-up
        16
        ·
        2 hours ago

        Codeberg is cool, but I would prefer not having all FOSS projects centralised on another platform. In my opinion, projects the size of Godot should consider running their own infrastructure.

        • JackbyDev@programming.dev
          link
          fedilink
          English
          arrow-up
          4
          ·
          54 minutes ago

          Let’s be realistic. Not everyone is going to move to Codeberg. Godot moving to Codeberg would be decentralizing.

  • lmr0x61@lemmy.ml
    link
    fedilink
    English
    arrow-up
    122
    ·
    10 hours ago

    Damn, Godot too? I know Curl had to discontinue their bug bounties over the absolute tidal wave of AI slop reports… Open source wasn’t ever perfect, but whatever cracks there were are being blown a mile wide by these goddamn slop factories.

    • luciferofastora@feddit.org
      link
      fedilink
      English
      arrow-up
      2
      ·
      44 minutes ago

      Open source wasn’t ever perfect, but whatever cracks there were are being blown a mile wide by these goddamn slop factories.

      This is the perpetual issue, not just with AI: any system will have flaws and weaknesses, but they can generally be papered over with some goodwill and patience…

      Until selfish, immoral assholes come and ruin it for everyone.

      From teenagers using the playground to smoke and burying their cigs in the sand so parents with small children can’t use it any more, to companies exploiting legal loopholes, to AI slop drowning volunteers in obnoxious bullshit: most individual people might be decent, but a single turd is all it takes to ruin the punch bowl.

    • ZILtoid1991@lemmy.world
      link
      fedilink
      English
      arrow-up
      8
      ·
      2 hours ago

      Then get ready for people just making slop libraries, not because they’re dissatisfied with existing solutions (as I was when I made iota, a direct media layer similar to SDL but with better access to some low-level functionality, an OOP-ish design, and a memory-safe language), but just because they can.

      I got a link to a popular rect-packing algorithm pretty quickly after asking in a Discord server. Nowadays I’d be asked to “vibecode it”.

      • Jankatarch@lemmy.world
        link
        fedilink
        English
        arrow-up
        1
        ·
        27 minutes ago

        Can confirm the last part. I’m in uni, and if anyone ever asks a question in the class group chats, the first 5-6 answers will be “ask ChatGPT.”

    • fuck_u_spez_in_particular@lemmy.world
      link
      fedilink
      English
      arrow-up
      12
      ·
      2 hours ago

      Unfortunately it’s a general theme in open source. I’ve lost almost all motivation for programming in my free time because of all these AI slop PRs. It’s kinda sad how that art (among others) is flooded with slop…

  • tabular@lemmy.world
    link
    fedilink
    English
    arrow-up
    164
    ·
    edit-2
    12 hours ago

    Before hitting submit I’d worry I’ve made a silly mistake which would make me look a fool and waste their time.

    Do they think the AI-written code Just Works™? Do they feel so detached from that code that they don’t feel embarrassment when it’s shit? It’s like calling yourself a fiction writer and putting “written by (your name)” on the cover when you didn’t write it, and it’s nonsense.

    • JustEnoughDucks@feddit.nl
      link
      fedilink
      English
      arrow-up
      7
      ·
      5 hours ago

      I would think they’ll have to combat AI code with an AI-code-recognizer tool that auto-flags a PR or issue as AI, so they can simply run through them. If the contributor doesn’t come back to explain the code and show test results proving it works, it gets auto-closed after a week or so.

    • atomicbocks@sh.itjust.works
      link
      fedilink
      English
      arrow-up
      60
      ·
      10 hours ago

      From what I have seen, Anthropic, OpenAI, etc. seem to be running bots that go around submitting updates to open source repos with little to no human input.

      • Notso@feddit.org
        link
        fedilink
        English
        arrow-up
        33
        arrow-down
        2
        ·
        6 hours ago

        You guys, it’s almost as if AI companies are intentionally trying to kill FOSS projects by burying them in garbage code. Sounds like they took a page from Steve Bannon’s playbook by flooding the zone with slop.

    • kadu@scribe.disroot.org
      link
      fedilink
      English
      arrow-up
      123
      arrow-down
      7
      ·
      12 hours ago

      I’d worry I’ve made a silly mistake which would make me look a fool and waste their time.

      AI bros have zero self-awareness and shame, which is why I keep insisting that the best tool for fighting this is making it socially shameful.

      Somebody comes along saying “Oh look at the image I just genera…” and you cut them off with “Looks like absolute garbage, right? Yeah, I know, AI always sucks, imagine seriously enjoying that hahah. So anyway, what were you saying?”

          • k0e3@lemmy.ca
            link
            fedilink
            English
            arrow-up
            10
            ·
            7 hours ago

            Yeah but then their Facebook accounts will keep producing slop even after they’re gone.

        • Tyrq@lemmy.dbzer0.com
          link
          fedilink
          English
          arrow-up
          6
          ·
          6 hours ago

          The data eventually poisons itself when it can do nothing but refer to its own output after however many generations of hallucinated data.

    • Feyd@programming.dev
      link
      fedilink
      English
      arrow-up
      85
      ·
      11 hours ago

      LLM code generation is the ultimate Dunning-Kruger enhancer. People think they’re 10x ninja wizards because they can generate unmaintainable demos.

        • NotMyOldRedditName@lemmy.world
          link
          fedilink
          English
          arrow-up
          15
          ·
          8 hours ago

          Sigh, now in CSI when they enhance a grainy image, the AI will make up a fake face and send them searching for someone who doesn’t exist, or it’ll use the face of someone from the training set and they’ll go after the wrong person.

          Either way, I have a feeling there’ll be some ENHANCE failure episode due to AI.

  • xkbx@startrek.website
    link
    fedilink
    English
    arrow-up
    37
    arrow-down
    2
    ·
    12 hours ago

    Couldn’t you just set up actual AI/LLM verification questions, like “how many r’s in strawberry?”

    Or even just have an AI/manual contribution divide. It wouldn’t stop everything 100%, but it might make the clean-up process easier.

    • SkunkWorkz@lemmy.world
      link
      fedilink
      English
      arrow-up
      2
      ·
      2 hours ago

      Yeah, but that won’t stop people from manually submitting PRs made with AI. A lot of the slop isn’t just automated pull requests but people using AI to find and fix “bugs” without understanding the code at all.

    • CameronDev@programming.dev
      link
      fedilink
      English
      arrow-up
      80
      ·
      12 hours ago

      Those kinds of challenges only work for a short while. ChatGPT has solved the strawberry one already.

      That said, I wish these AI people would just create their own projects and contribute to them. Create a LLM fork of the engine, and go nuts. If your AI is actually good, you’ll end up with a better engine and become the dominant fork.

      • XLE@piefed.social
        link
        fedilink
        English
        arrow-up
        7
        ·
        6 hours ago

        People who submit AI-generated code tend to crumble, or sound incomprehensible, in the face of the simplest questions. Thank goodness this works for code reviews… because if you look at AI CEO interviews, journalists can’t detect the BS.

        • sp3ctr4l@lemmy.dbzer0.com
          link
          fedilink
          English
          arrow-up
          3
          ·
          2 hours ago

          LLMs are magic at everything that you don’t understand at all, and they’re horrifically incompetent at anything you do actually understand pretty well.

      • warm@kbin.earth
        link
        fedilink
        arrow-up
        43
        ·
        12 hours ago

        They don’t want to do it in a corner where nobody can see, they want to push it on existing projects and attempt to justify it.

          • mcv@lemmy.zip
            link
            fedilink
            English
            arrow-up
            7
            ·
            edit-2
            6 hours ago

            Using open source maintainers as free volunteers to check whether your AI coding experiment works.

      • new_guy@lemmy.world
        link
        fedilink
        English
        arrow-up
        22
        ·
        11 hours ago

        There’s a joke in science circles that goes something like this:

        “Do you know what they call alternative medicine that works? Just regular medicine.”

        Good code made by an LLM should be indistinguishable from code made by a human… It would simply be “just code”.

        It’s hard to create a project the size of Godot’s and not have a human in the loop somewhere filtering the slop and trying to create a cohesive code base. At that point they would either be overwhelmed again or the code would be unmaintainable.

        And then we would go full circle and get to the same point described by the article.

        • sp3ctr4l@lemmy.dbzer0.com
          link
          fedilink
          English
          arrow-up
          3
          ·
          2 hours ago

          At the risk of drawing the ire of people…

          … I have a local LLM that I run primarily as a coding assistant, mostly for GDScript.

          I’ve never like, submitted anything as a potential commit to Godot proper.

          But dear lord, the amount of shenanigans I have had to figure out just to get an LLM to even understand GDScript’s syntax and methods properly is… substantial.

          They tend to just default back to using things that work in Python or JS, but… do not work or exist in GDScript.

          Like, one recurring quirk is they keep trying to use a C-style `? :` ternary instead of GDScript’s `x if condition else y` construction.

          That, or they constantly fuck up custom sorting: they’ll either get the syntax wrong, or just hallucinate various kinds of set/array methods and properties that don’t exist in GDScript.
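
          For reference, here’s a minimal sketch of what those two constructions actually look like in GDScript (Godot 4 syntax; the variable names and the dictionary data are just made-up placeholders):

          ```gdscript
          extends Node

          func _ready() -> void:
              # GDScript has no C-style `cond ? a : b` ternary; the equivalent is a
              # Python-style conditional expression.
              var value := 12
              var size_label := "big" if value > 10 else "small"
              print(size_label)  # prints "big"

              # Custom sorting goes through Array.sort_custom() with a comparator
              # Callable that returns true when `a` should sort before `b`.
              var items := [{"score": 3}, {"score": 1}, {"score": 2}]
              items.sort_custom(func(a, b): return a["score"] < b["score"])
              print(items)  # [{ "score": 1 }, { "score": 2 }, { "score": 3 }]
          ```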

          And it’s a genuine struggle to get them to comprehend more than roughly 750 lines of code at a time without confusing themselves.

          It is possible to use an LLM for things like “hey, look at this code, help me refactor it to be more modular” or “standardize this kind of logic into a helper function”… but you basically have to browbeat them with a custom prompt that tells them to stop doing all these dumb, basic things.

          Even if you tell them in conversation “hey, you did this wrong, here’s how it actually works”, it doesn’t matter; keep that conversation going and they will forget it and repeat the mistake… you have to keep it constantly present in the prompt.

          The amount of babysitting, of constantly telling an LLM about the errors it is making, is quite substantial.

          It can be a thing that makes some sense to do in some situations, but it is extremely, extremely far away from ‘Make a game for me in Godot’, or even like ‘Make a third person camera script’.

          You have to break things down into much, much more conceptually smaller chunks.

        • CameronDev@programming.dev
          link
          fedilink
          English
          arrow-up
          20
          ·
          11 hours ago

          They can fork Godot and let their LLMs go at it. They don’t have to use the Godot human maintainers as free slop filters.

          But of course, if they did that, their LLMs would have to stand on their own merits.

    • one_old_coder@piefed.social
      link
      fedilink
      English
      arrow-up
      6
      arrow-down
      1
      ·
      10 hours ago

      You could also ask users to type the words fuck or shit in the description somewhere. LLMs cannot do that AFAIK.

      • Pamasich@kbin.earth
        link
        fedilink
        arrow-up
        2
        ·
        2 hours ago

        I mean, ChatGPT can do it. I just tested it. And if you run your own AI, you can probably remove most such rules anyway.

    • turboSnail@piefed.europe.pub
      link
      fedilink
      English
      arrow-up
      4
      arrow-down
      3
      ·
      8 hours ago

      How about asking it to write a short political speech on climate change, then just counting the number of rhetorical devices and em-dashes? A human dev couldn’t be bothered to write anything fancy or impactful when they just want to submit a bug fix. It would be simple, poorly written, and filled with typos. LLMs try to make it way too impressive and impactful.

      • sp3ctr4l@lemmy.dbzer0.com
        link
        fedilink
        English
        arrow-up
        1
        ·
        edit-2
        1 hour ago

        The funnier thing is when you try to get an LLM to do like, a report on its creators.

        You can keep feeding them articles detailing the BS their company is up to, and it will usually just keep reverting to the company line, despite a preponderance of evidence that said company line is horseshit.

        Like, uh, try to get an LLM to give you an exact number for how much this conversation we’re having will increase RAM prices over a 3-month period.

        What do you think about ~95% of companies implementing ‘AI’ into their business processes reporting a 0 to negative boost to productivity?

        What are the net economic damages of this malinvestment?

        Give it a bunch of economic data, reports, etc.

        Results are usually what I would describe as ‘comical’.

  • bluGill@fedia.io
    link
    fedilink
    arrow-up
    22
    arrow-down
    16
    ·
    12 hours ago

    I’ve been writing a lot of code with AI - for every half hour the AI needs to write the code, I need a full week to revise it into good code. If you don’t do that hard work, the AI is going to overwhelm the reviewers with garbage.

    • Peehole@piefed.social
      link
      fedilink
      English
      arrow-up
      2
      ·
      edit-2
      3 hours ago

      With proper prompting you can let it do a lot of annoying stuff, like refactors, reasonably well. With a very strict linter you can avoid the most stupid mistakes and shortcuts. If I work on a more complex PR it can take me a couple of days to plan it correctly and the actual implementation of the correct plan will take no time at all.

      I think it works for small bug fixes on a maintainable codebase, and it works for writing plans and then implementing them. But I honestly don’t know if it’s any faster than just writing the code myself, it’s just different.

      • fuck_u_spez_in_particular@lemmy.world
        link
        fedilink
        English
        arrow-up
        2
        ·
        2 hours ago

        reasonably well

        Hmm, not in my experience. If you don’t care about code quality you can quickly prototype slop and see if it generally works, but maintainable code? I always fall back to manual coding, and often my code is like 30% of the length of what AI generates, more readable, more efficient, etc.

        If you constrain it a lot it might work reasonably, but then I often think that instead of writing a multi-paragraph prompt, just writing the code might’ve been more effective (long-term, that is).

        plan it correctly and the actual implementation of the correct plan will take no time at all.

        That’s why I don’t think AI really helps that much, because you still have to think and understand (at least if you value your product/code), and that’s what takes the most time, not typing etc.

        it’s just different.

        Yeah, it makes you dumber, because you’re tempted not to think through the problem, and reviewing code is a less effective way of understanding what’s going on than writing it (IME, although especially nowadays being able to review quickly and effectively is a valuable skill).

        • Peehole@piefed.social
          link
          fedilink
          English
          arrow-up
          1
          ·
          8 minutes ago

          Eh, I don’t disagree with you. It’s just the reality for me that I’m now expected to work on much more stuff at the same time because of AI. It’s exhausting, but in my job I have no choice, so I try to make the best of the situation.

          I’ve sure lost a lot of understanding of the details of the codebase, but I do read every line of code these LLMs spit out and manually review all PRs for obvious bullshit. I also think code quality has gotten worse despite me doing everything I can to keep it decent.

      • bluGill@fedia.io
        link
        fedilink
        arrow-up
        16
        arrow-down
        1
        ·
        11 hours ago

        I’m writing code because it is often faster than explaining to the AI how to do it. I’m spending this month seeing what AI can do - it ranges from saving me a lot of tedious effort to making a large mess to clean up.

        • LedgeDrop@lemmy.zip
          link
          fedilink
          English
          arrow-up
          7
          ·
          7 hours ago

          I’ve had better success when using AI agents in repeated but small and narrow doses.

          It’s been kinda helpful in brainstorming interfaces (and I always have to append “… in the most maintainable way possible” to the end of every statement).

          It’s been really helpful in writing unit tests (I follow Test Driven Development), and sometimes it picks up edge cases I would have overlooked.

          I wouldn’t blindly trust any of it, as all too often it’s happy to just disregard any sort of error handling (unless explicitly mentioned, after the fact). It’s basically like being paired up with an over-eager, under-qualified junior developer.

          But, yeah, you’re gonna have a bad time if you prompt it to “write me a Unix operating system in web assembly”.

        • Thorry@feddit.org
          link
          fedilink
          English
          arrow-up
          5
          ·
          7 hours ago

          I totally get it. I’ve been critical about using AI for code purposes at work and have pleaded to stop using it (management is forcing it, less experienced folk want it). So I’ve been given a challenge by one of the proponents to use a very specific tool. This one should be one of the best AI slop generators out there.

          So I spent a lot of time thoroughly writing specs for a task in a way the tool should be able to handle. It failed miserably and didn’t even produce a usable result. So I asked the dude who challenged me to help me refine the specs, tweak the tool, make everything perfect. The thing still failed hard. Supposedly that was because I was forcing the tool into decisions it couldn’t handle, and I should give it more freedom. So we did that; it made up the rules itself and subsequently didn’t follow those rules. Another failure.

          So we split the task into smaller pieces, and it still couldn’t handle it. So we split it up even further, to a ridiculous level, at which point it would definitely be faster to just write the code manually. It’s also no longer realistic, since we pretty much have the end result all worked out and are just coaching the tool to get there. And even then it makes mistakes, has to be corrected all the time, doesn’t follow the specs, doesn’t follow code guidelines or best practices.

          Another really annoying thing is that it keeps changing code it shouldn’t touch; since we’ve made the steps so small, it keeps messing up work it did previously. And the comments it creates are crazy: either just about every line gets a comment and every function gets a whole story, or there are zero comments. As soon as you tell it to limit the comments to where they’re useful, it deletes all of them, even the ones it put in before or that we put in manually.

          I’m ready to give up on the thing and have the use of AI tools for coding limited, if not stopped entirely. But I know how that discussion will go: Oh, you used tool A? No, you should be using tool B, it’s much better. Maybe the tools aren’t there now, but they’re getting better all the time, so we’ll benefit any day now.

          When I hear even experienced devs be enthusiastic about AI tools, I really feel like I’m going crazy. They suck a lot and aren’t useful at all (on top of the thousand other issues with AI), so why do people like them? And why have we bet the entire economy on them?

          • mcv@lemmy.zip
            link
            fedilink
            English
            arrow-up
            5
            ·
            6 hours ago

            I’ve started using it as an interactive rubber duck. When I’ve got a problem, I explain it to the AI, after which it gives a response that I ignore because after explaining it, I figured it out myself.

            AI has been very helpful for finding my way around Azure deploy problems, though. And other complex configuration issues (I was missing a certificate to use az login). I fixed problems I probably couldn’t have solved without it.

            But I’ve lost a lot of time trying to get it to solve complex coding problems. It makes a heroic effort trying to combine aspects of known patterns and algorithms into something resembling a solution, and it can “reason” about how it should work, but it doesn’t really understand what it’s doing.

            • Ænima@lemmy.zip
              link
              fedilink
              English
              arrow-up
              1
              ·
              edit-2
              3 hours ago

              after explaining it, I figured it out myself.

              I use colleagues or people on Discord for this. I get the solution immediately after asking AND those that saw me, or heard me, ask now think I’m an idiot. It’s my neurodivergent kink!

            • addie@feddit.uk
              link
              fedilink
              English
              arrow-up
              2
              ·
              4 hours ago

              Which is strange, because Azure’s documentation is complete dogshit.

              We were trying to solve something at work (send SMTP messages using OAuth authentication, not rocket science) and Azure’s own chatbot kept on making up non-existent server commands, rest endpoints that don’t exist, and phantom permissions that needed to be added to the account.

              Seriously: fuck Azure, fuck Copilot. It made a task that should have taken hours take weeks.

        • Joe@discuss.tchncs.de
          link
          fedilink
          English
          arrow-up
          5
          arrow-down
          3
          ·
          9 hours ago

          You will need more than a month to figure out what it’s good for and what it isn’t, and to learn how to effectively utilize it as a tool.

          If I can properly state a problem, outline the approach I want, and break it down into testable stages, it can be an accelerator. If not, it’s often slop.

          The most valuable time is up front design and planning, and learning how to express it. Next up is the ability to quickly make judgement calls, and to backtrack without getting bogged down.

      • bluGill@fedia.io
        link
        fedilink
        arrow-up
        5
        arrow-down
        1
        ·
        8 hours ago

        That is a question I’m trying to answer. Until I know what AI can do, I can’t have a valid opinion.

        • leftzero@lemmy.dbzer0.com
          link
          fedilink
          English
          arrow-up
          12
          arrow-down
          4
          ·
          7 hours ago

          We know what “AI” can do.

          • Create one of the largest and most dangerous economic bubbles in history.
          • Be a massive contributor to the climate catastrophe.
          • Consume unfathomable amounts of resources like water, destroying the communities that need them.
          • Make personal computing unaffordable (and eventually any form of offline computing; if it’s up to these bastards we’ll end up back with only mainframes and dumb terminals, with them controlling the mainframes).
          • Promote mass surveillance and the constant erosion of privacy.
          • Replace search engines, making it impossible to find trustworthy information on the Internet.
          • Destroy the open web by drowning it in useless slop.
          • Destroy open source by overwhelming the maintainers with unusable slop.
          • Destroy the livelihood of artists and programmers using their own stolen works as training data, without providing a usable replacement for the works they would have produced.
          • Infect any code they touch with so many untraceable bugs that it becomes unusable and dangerous (see Windows updates since they replaced their programmers with Copilot, for instance).
          • Support the parasitic billionaire class and increase the wealth divide even more.
          • Make you look like a monstrous moronic asshole for supporting all that shit.

          It maybe saving you five minutes of coding in exchange for several hours of debugging (either by you or by whoever is burdened with your horrible slop) is not worth being an active contributor to all that monstrous harm to humanity and the world.

    • Seefra 1@lemmy.zip
      link
      fedilink
      English
      arrow-up
      10
      arrow-down
      10
      ·
      10 hours ago

      Not sure why you’re getting downvotes; AI is a good tool when used properly.

      • RalfWausE@feddit.org
        link
        fedilink
        English
        arrow-up
        8
        arrow-down
        9
        ·
        8 hours ago

        It’s not, it’s an abomination that should be wiped off the face of this earth, and its shills should be shunned.

  • zr0@lemmy.dbzer0.com
    link
    fedilink
    English
    arrow-up
    13
    arrow-down
    39
    ·
    7 hours ago

    What people don’t realize is that AI does not write good code unless you tell it to. I’ve been playing a lot with AI doing the writing while I give it specific prompts, but even then it very often changes code that it had no need to touch. And this is the dangerous part.

    I believe the only thing repo owners could do is use AI against AI. Let the blind AI contributors drown in work by constantly telling them to improve the code, and by asking critical questions.

        • mcv@lemmy.zip
          link
          fedilink
          English
          arrow-up
          2
          arrow-down
          1
          ·
          6 hours ago

          It sounds crazy, but it can have impact. It might follow some coding standards it wouldn’t otherwise.

          But you don’t really know. You can also explicitly tell it which coding standards to follow and it still won’t.

          All code needs to be verified by a human. If you can tell it’s AI, it should be rejected. Unless it’s a vibe coding project I suppose. They have no standards.

          • uniquethrowagay@feddit.org
            link
            fedilink
            English
            arrow-up
            7
            ·
            5 hours ago

            But you don’t really know. You can also explicitly tell it which coding standards to follow and it still won’t.

            That’s the problem with LLMs in general, isn’t it? It may give you the perfect answer. It may also give you the perfect sounding answer while being terribly incorrect. Often, the only way to notice is if you knew the answer in the first place.

            They can maybe be used to get a first draft of an email you don’t know how to start. Or to write a “funny” poem for the retirement party of Christine from Accounting that makes you cringe to death on the spot. Yet people treat them like some hyper-competent, all-knowing assistant. It’s maddening.

            • mcv@lemmy.zip
              link
              fedilink
              English
              arrow-up
              2
              ·
              4 hours ago

              Exactly. They’re trained to produce plausible answers, not correct ones. Sometimes they also happen to be correct, which is great, but you can never trust them.

      • zr0@lemmy.dbzer0.com
        link
        fedilink
        English
        arrow-up
        1
        arrow-down
        8
        ·
        6 hours ago

        Obviously you have no clue how LLMs work; it is way more complex than just telling it to write good code. What I was saying is that even with a very good prompt it will make things up and you have to double-check it. However, for that you need to be able to read and understand code, which is not the case for 98% of vibe coders.

        • anon_8675309@lemmy.world
          link
          fedilink
          English
          arrow-up
          1
          ·
          42 minutes ago

          So just don’t use LLMs then. The very issue is that mediocre devs just accept whatever they get and try to PR it.

          Don’t be a mediocre dev.

        • Chais@sh.itjust.works
          link
          fedilink
          English
          arrow-up
          9
          ·
          4 hours ago

          So what you’re saying is in order for “AI” to write good code I need to double check everything it spits out and correct it. But sure, tell yourself that it saves any amount of time.

        • porous_grey_matter@lemmy.ml
          link
          fedilink
          English
          arrow-up
          13
          ·
          edit-2
          6 hours ago

          So what you’re saying is directly contradictory to your previous comment, in fact it doesn’t produce good code even when you tell it to.

    • vane@lemmy.world
      link
      fedilink
      English
      arrow-up
      16
      arrow-down
      1
      ·
      edit-2
      6 hours ago

      You’re absolutely right. I hadn’t realized that I could just tell it to write good code. Thank you, it changed my life.