A screenshot of this question was making the rounds last week, but this article covers testing against all the well-known models out there.

It also includes outtakes on the ‘reasoning’ models.

  • Womble@piefed.world · 22 hours ago

    Torture can be a useful way of extracting information if you have a way to instantly verify it, which actually makes it a good analogy to LLMs. If I want to know the password to your laptop and torture you until you give me the correct password and I log in then that works.

    • [deleted]@piefed.world · edited · 21 hours ago

      If you can instantly verify it then you don’t need the torture.

      Getting the person to volunteer the information is proven to be far, far more successful, and being able to instantly verify means you know when you have the answers.

    • JcbAzPx@lemmy.world · 18 hours ago

      In fact it cannot ever be a useful way of extracting information. Even just randomly guessing is a better way to get the information you want than torture.

      • Womble@piefed.world · 14 hours ago

        I’m not saying it’s anything other than morally repugnant, obviously, but in the example of a password with billions or trillions of combinations, where you can check the answers given, torture pretty obviously is better than guessing.

        That’s not a scenario that is ever likely to come up, and it wouldn’t be justifiable even if it did, but pretending it wouldn’t be effective is ridiculous.