Obviously you have no clue how LLMs work, and it is way more complex than just telling it to write good code. What I was saying is that even with a very good prompt, it will make things up and you have to double-check it. However, for that you need to be able to read and understand code, which is not the case for 98% of the vibe coders.
So just don’t use LLMs then. The real issue is that mediocre devs just accept whatever it outputs and try to PR it.
Don’t be a mediocre dev.
Of course. It makes it easy to appear as if you’ve actually done something smart, but in reality it just creates more work for others. I believe that senior devs and engineers know how and when to use an LLM. But if you are a crypto bro trying to develop an ecosystem from scratch, it will be a huge mess.
It is obvious that we will not be able to stop those PRs, so we need to come up with other means, with automation that helps the maintainers save time. I have only seen very few repos using automated LLM actions, and I think the main reason for that is the cost of running them.
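For what it’s worth, one way to keep that cost down is to gate the LLM behind cheap heuristics so it only runs on PRs that survive a first pass. Here is a minimal sketch, assuming GitHub’s REST API and a GITHUB_TOKEN env var; OWNER, REPO, and the llm_review stub are placeholders I made up, not anything from a real repo:

```
"""Rough PR triage sketch: zero-cost checks first, LLM only where it might pay off."""
import os
import requests

OWNER = "example-org"    # assumption: replace with your org/user
REPO = "example-repo"    # assumption: replace with your repository
API = "https://api.github.com"
HEADERS = {
    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    "Accept": "application/vnd.github+json",
}

def cheap_flags(pr: dict) -> list[str]:
    """Free heuristics that catch a lot of drive-by PRs before any LLM runs."""
    flags = []
    if not (pr.get("body") or "").strip():
        flags.append("empty description")
    if pr.get("title", "").lower().startswith(("update readme", "fix typo")):
        flags.append("low-effort title")
    return flags

def llm_review(diff_url: str) -> str:
    """Placeholder: call your LLM of choice on the diff at diff_url.
    Deliberately left unimplemented so no particular provider API is assumed."""
    raise NotImplementedError

def triage() -> None:
    # List open PRs via the GitHub REST API.
    resp = requests.get(f"{API}/repos/{OWNER}/{REPO}/pulls", headers=HEADERS,
                        params={"state": "open", "per_page": 50})
    resp.raise_for_status()
    for pr in resp.json():
        flags = cheap_flags(pr)
        if flags:
            # Label suspicious PRs instead of paying for an LLM pass on them.
            requests.post(
                f"{API}/repos/{OWNER}/{REPO}/issues/{pr['number']}/labels",
                headers=HEADERS, json={"labels": ["needs-triage"]},
            ).raise_for_status()
            print(f"#{pr['number']}: flagged ({', '.join(flags)})")
        # else: optionally spend LLM budget here, e.g. llm_review(pr["diff_url"])

if __name__ == "__main__":
    triage()
```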
So how would you fight the wave of useless PRs?
So what you’re saying is that in order for “AI” to write good code, I need to double-check everything it spits out and correct it. But sure, tell yourself that it saves any amount of time.
It saves my time. That’s all I need.
So what you’re saying is directly contradictory to your previous comment: in fact, it doesn’t produce good code even when you tell it to.
👍