What people don’t realize is that AI does not write good code unless you tell it to. I’ve been experimenting a lot with letting AI do the writing while I give it specific prompts, but even then it very often changes code it had no need to touch. And that is the dangerous part.
I believe the only thing repo owners could do is use AI against AI. Let the blind AI contributors drown in work by constantly telling them to improve the code, and by asking critical questions.
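To make that concrete, here is a rough sketch of what such a bot could look like: fetch the PR’s diff, have an LLM generate critical review questions, and post them back as a comment. Everything specific here is a placeholder, not a real setup: the repo name, model, and PR number are made up, and it assumes the `requests` and `openai` Python packages plus tokens in the environment.

```python
# Hypothetical "AI against AI" reviewer: fetch a PR diff, ask an LLM for
# critical review questions, and post them back as a comment.
# Assumes a GitHub token with repo scope and an OpenAI API key in the
# environment; repo, model, and PR number below are placeholders.
import os

import requests
from openai import OpenAI

GITHUB_API = "https://api.github.com"
REPO = "owner/repo"  # placeholder
HEADERS = {"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"}

def fetch_diff(pr_number: int) -> str:
    # Requesting the diff media type returns the raw patch text.
    resp = requests.get(
        f"{GITHUB_API}/repos/{REPO}/pulls/{pr_number}",
        headers={**HEADERS, "Accept": "application/vnd.github.diff"},
    )
    resp.raise_for_status()
    return resp.text

def critical_questions(diff: str) -> str:
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; pick whatever is cheap enough
        messages=[
            {"role": "system",
             "content": "You are a skeptical code reviewer. Ask pointed "
                        "questions about correctness, scope creep, and "
                        "unnecessary changes in this diff."},
            {"role": "user", "content": diff[:20000]},  # crude size cap
        ],
    )
    return completion.choices[0].message.content

def post_comment(pr_number: int, body: str) -> None:
    resp = requests.post(
        f"{GITHUB_API}/repos/{REPO}/issues/{pr_number}/comments",
        headers=HEADERS,
        json={"body": body},
    )
    resp.raise_for_status()

if __name__ == "__main__":
    pr = 123  # placeholder PR number
    post_comment(pr, critical_questions(fetch_diff(pr)))
```

You’d wire something like this to a webhook or a scheduled job; the point is that the maintainer spends zero time until the contributor has actually answered the questions.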
Ohhh, that’s what I was missing, just tell it to write good code, of course.
“Okay, ChatGPT. Write me a game that will surpass Metal Gear. And make sure the code is actually good.”
It sounds crazy, but it can have an impact. It might follow some coding standards it wouldn’t otherwise.
But you don’t really know. You can also explicitly tell it which coding standards to follow and it still won’t.
All code needs to be verified by a human. If you can tell it’s AI, it should be rejected. Unless it’s a vibe coding project, I suppose. They have no standards.
That’s the problem with LLMs in general, isn’t it? It may give you the perfect answer. It may also give you the perfect-sounding answer while being terribly incorrect. Often, the only way to notice is if you knew the answer in the first place.
They can maybe be used to get a first draft of an email you don’t know how to start. Or to write a “funny” poem for the retirement party of Christine from Accounting that makes you cringe to death on the spot. Yet people treat them like this hyper-competent, all-knowing assistant. It’s maddening.
Exactly. They’re trained to produce plausible answers, not correct ones. Sometimes they also happen to be correct, which is great, but you can never trust them.
Obviously you have no clue how LLMs work, and it is way more complex than just telling it to write good code. What I was saying is that even with a very good prompt, it will make things up and you have to double-check it. However, for that you need to be able to read and understand code, which is not the case for 98% of the vibe coders.
So just don’t use LLMs then. The very issue is that mediocre devs just accept whatever and try to PR that.
Don’t be a mediocre dev.
Of course. It makes it easy to look like you’ve actually done something smart, but in reality it just causes more work for others. I believe that senior devs and engineers know how and when to use an LLM. But if you are a crypto bro trying to develop an ecosystem from scratch, it will be a huge mess.
It is obvious that we will not be able to stop those PRs, so we need to come up with other means: automation that helps the maintainers save time. I have only seen very few repos using automated LLM actions, and I think the main reason for that is the cost of running them.
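On the cost point, one way to keep it cheap would be gating the LLM behind free heuristics, so most PRs never hit the paid API at all. A minimal sketch; the signals and thresholds here are made up for illustration and would need tuning against a real backlog:

```python
# Hypothetical cheap pre-filter: only escalate a PR to the (paid) LLM
# reviewer when free heuristics flag it as suspicious. The signals and
# thresholds are invented for illustration.
from dataclasses import dataclass

@dataclass
class PullRequest:
    author_prior_merged: int       # merged PRs this author already has here
    files_changed: int
    touches_unrelated_files: bool  # diff strays outside the linked issue
    description_length: int

def needs_llm_review(pr: PullRequest) -> bool:
    score = 0
    if pr.author_prior_merged == 0:
        score += 2  # first-time contributors get more scrutiny
    if pr.files_changed > 20:
        score += 2  # sprawling diffs are a classic slop smell
    if pr.touches_unrelated_files:
        score += 3
    if pr.description_length < 50:
        score += 1  # near-empty descriptions correlate with drive-bys
    return score >= 3  # arbitrary cutoff; tune against your own backlog

# Example: a first-time contributor with a sprawling, off-topic diff
pr = PullRequest(author_prior_merged=0, files_changed=35,
                 touches_unrelated_files=True, description_length=10)
print(needs_llm_review(pr))  # True -> worth spending LLM tokens on
```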
So how would you fight the wave of useless PRs?
So what you’re saying is that in order for “AI” to write good code, I need to double-check everything it spits out and correct it. But sure, tell yourself that it saves any amount of time.
It saves my time. That’s all I need.
So what you’re saying directly contradicts your previous comment: in fact, it doesn’t produce good code even when you tell it to.
👍
You’re absolutely right. I haven’t realized that I can just tell it to write good code. Thank you, it changed my life.