• 0 Posts
  • 11 Comments
Joined 13 days ago
Cake day: July 15th, 2025

  • Quite frankly, I disagree with almost everything. While em dashes or repeating the question may be legitimate red flags, the rest are not. There is a reason for that.

    LLMs are trained on older material in which the em dash was more common. Since they imitate what they know, that's why we get all those em dashes. Repeating the question is a technique to reduce hallucinations, which is why it is also quite common. Everything else is just how many people write. The average style may sound more academic and less "natural" simply because training on academic papers usually carries more weight than training on blog posts. The rule of three is common in AI output because it's common among humans. Emojis were a thing in corporate messages long before AI. Word choice depends heavily on the writer's culture, including whether they are a native speaker or not. And so on.

    Besides that, one can tweak the style easily. This was generated with AI using a simple prompt:

    AI is reshaping society—transforming how we work, communicate, and solve complex problems. Its potential is immense, but responsible innovation is the key to lasting impact.

    This is the same prompt enhanced with a few tricks:

    AI is seriously leveling up how we connect with audiences and streamline workflows so if you’re not tapping in you’re already behind.

    Nice video, but the truth is that false positives and false negatives are so common that AI-detection techniques alone cannot be trusted without more context.
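    To make the false-positive point concrete, here is a minimal Bayes-rule sketch. All rates here are invented for illustration, not measurements of any real detector:

    ```python
    def p_ai_given_flag(tpr, fpr, prevalence):
        """Probability a flagged text is actually AI-generated (Bayes' rule).

        tpr: detection rate on AI text (true positive rate)
        fpr: rate of wrongly flagging human text (false positive rate)
        prevalence: fraction of all texts that are AI-generated
        """
        flagged_ai = tpr * prevalence
        flagged_human = fpr * (1 - prevalence)
        return flagged_ai / (flagged_ai + flagged_human)

    # A detector with a 90% hit rate and a 10% false-positive rate
    # is no better than a coin flip when only 10% of texts are AI:
    print(round(p_ai_given_flag(0.9, 0.1, 0.1), 2))  # 0.5
    ```

    The takeaway: even a decent detector produces mostly false accusations when AI text is rare in the pool being scanned, which is why a flag alone is not evidence.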


  • JumpyWombat@lemmy.ml to memes@lemmy.world · LMAO
    2 days ago

    The problem is that once it becomes normal to request IDs for porn, the same will be extended to everything else with excuses like "let's do it for the kids" or "if you don't have anything to hide…".

    VPNs… yeah sure, until there's a crackdown on those too.



  • LLMs do not give the correct answer, just the most probable sequence of words based on their training.

    Studies of that kind (and there are hundreds) highlight two things:

    1- LLMs can be incorrect, biased, or give fake information (the so-called hallucinations).

    2- The previous point stems from the training material, proving the existence of bias in society.

    In other words, an LLM recommending lower salaries for women is proof that there is a gender gap.
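    The "most probable sequence" point above can be sketched with a toy table of next-word probabilities. The table, contexts, and numbers are all invented for illustration; a real model scores tens of thousands of tokens with a neural network, but the mechanism is the same:

    ```python
    # Hypothetical next-word probabilities "learned" from training text.
    # If the training data pairs "nurse" with "she" more often, the model
    # reproduces that association: bias in, bias out.
    NEXT_WORD_PROBS = {
        "the nurse said": {"she": 0.7, "he": 0.2, "they": 0.1},
        "the engineer said": {"he": 0.7, "she": 0.2, "they": 0.1},
    }

    def most_probable_next(context):
        """Greedy decoding: return the single most likely next word."""
        candidates = NEXT_WORD_PROBS[context]
        return max(candidates, key=candidates.get)

    print(most_probable_next("the nurse said"))     # she
    print(most_probable_next("the engineer said"))  # he
    ```

    Nothing here "knows" anything about nurses or engineers; the output just mirrors whatever frequencies were in the training data, which is exactly why biased recommendations are evidence of bias in the source material.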