• sykaster@feddit.nl · 7 points · 4 days ago

    I asked this question to a variety of LLMs and never had it go wrong once. Is this very old?

    • BootLoop@sh.itjust.works · 15 points · 4 days ago (edited)

      Try “Jerry strawberry”. ChatGPT couldn’t give me the right number of r’s a month ago. I think “strawberry” by itself was either manually fixed or trained in from feedback.
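      For reference, the count being asked for is easy to verify outside the model; a quick check in Python, purely as an illustration:

      ```python
      # Ground-truth check: count the letter "r" in the prompt phrase.
      print("Jerry strawberry".lower().count("r"))  # prints 5: 2 in "Jerry", 3 in "strawberry"
      ```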

    • Ignotum@lemmy.world · 10 points · 4 days ago

      Smaller models still struggle with it, and the large models did too about a year ago.

      It has to do with the fact that the model doesn’t “read” individual letters but groups of letters (tokens), so counting letters is less straightforward.
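      A minimal sketch of what that looks like, using OpenAI's tiktoken library as one example tokenizer; the exact split is shown only as an illustration and varies by model:

      ```python
      import tiktoken  # third-party: pip install tiktoken

      # cl100k_base is the BPE encoding used by GPT-3.5/GPT-4-era models.
      enc = tiktoken.get_encoding("cl100k_base")

      for text in ["strawberry", "Jerry strawberry"]:
          token_ids = enc.encode(text)
          pieces = [enc.decode_single_token_bytes(t).decode("utf-8") for t in token_ids]
          print(f"{text!r} is split into {pieces}")
          # The model receives these multi-letter chunks as opaque IDs, not
          # individual characters, so it can't simply look up how many r's there are.
      ```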

    • Annoyed_🦀 @lemmy.zip · 1 point · 4 days ago

      Seeing how it starts with an apology, it must’ve been told it was wrong about the count. Basically it was bullied into saying this.