cm0002@lemmy.world to memes@lemmy.world · 5 days ago
Just ordinary trust issues... (image, lemmy.sdf.org) · 28 comments
sykaster@feddit.nl · 4 days ago
I asked this question of a variety of LLMs and never had one get it wrong. Is this very old?
gigachad@sh.itjust.works · 4 days ago (edited)
They fixed it in the meantime:

```python
if "strawberry" in token_list:
    return {"r": 3}
```
towerful@programming.dev · 4 days ago
Now you can ask for the number of occurrences of the letter "c" in the word "occurrence".
BootLoop@sh.itjust.works · 4 days ago (edited)
Try "Jerry strawberry". ChatGPT couldn't give me the right number of r's a month ago. I think "strawberry" by itself was either manually fixed or trained in from feedback.
sykaster@feddit.nl · 4 days ago
You're right: ChatGPT got it wrong, but Claude got it right.
Zexks@lemmy.world · 4 days ago
Works for me: 5. "jerry" has 2 r's, "strawberry" has 3.
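For reference, the tally in that reply is easy to verify with plain Python string methods:

```python
# Verify the claimed counts with str.count.
phrase = "jerry strawberry"
print("jerry".count("r"))       # 2
print("strawberry".count("r"))  # 3
print(phrase.count("r"))        # 5 total, matching the reply above
```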
Ignotum@lemmy.world · 4 days ago
Smaller models still struggle with it, and the large models did too about a year ago. It has to do with the fact that the model doesn't "read" individual letters but groups of letters (tokens), so counting letters is less straightforward.
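To illustrate the tokenization point above, here is a minimal sketch using OpenAI's tiktoken library (an assumption on my part; the models discussed in the thread may use different tokenizers). It prints the byte chunks a BPE tokenizer actually hands the model in place of individual letters:

```python
# Minimal sketch: show the BPE tokens a GPT-style model sees for "strawberry".
# Assumes the tiktoken package is installed (pip install tiktoken).
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by GPT-4-era models
for token_id in enc.encode("strawberry"):
    # Tokens are multi-character chunks, not letters, so the model has no
    # direct view of how many times "r" occurs inside each chunk.
    print(token_id, enc.decode_single_token_bytes(token_id))
```

Counting the r's then requires the model to recall the spelling of each token chunk from training, rather than reading the letters off its input.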
Annoyed_🦀@lemmy.zip · 4 days ago
Seeing how it starts with an apology, it must have been told it was wrong about the amount. Basically, it was bullied into saying this.