If you legitimately got this search result, please reach out to your local suicide hotline and make them aware. Google needs to be absolutely sued into the fucking ground for mistakes like these: they exist because Google is trying to make a teensy bit more money, and they absolutely will push at least a few people over the line into suicide.
We must hold companies responsible for bullshit their AI produces.
This seems to not be real (yet) though.
Is this not real? I've done some Googling due diligence and it's been inconclusive. I'd really like to know, because there are starry-eyed salespeople who keep pushing hard for integrating customer-facing AI, and I've been looking for a concrete example of it fucking up in a way that would leave us really liable. This and the "add glue to cheese" one are both excellent examples, but I haven't been able to verify the veracity of either.
This is from the account that spread the image originally: https://x.com/ai_for_success/status/1793987884032385097
Alternate Bluesky link with screencaps (must be logged in): https://bsky.app/profile/joshuajfriedman.com/post/3ktarh3vgde2b
Just so others do not need to click through: they found out it was faked and apologized for spreading fake news.
Thank you, internet sleuth!
I’m not sure how you’d tell unless there is some reputable source that claims they saw this search result themselves, or you found it yourself. Making a fake is as easy as inspect element -> edit -> screenshot.
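A minimal sketch of that kind of edit, runnable from the browser devtools console; the CSS selector and the replacement text here are hypothetical placeholders, not anything taken from a real results page:

```typescript
// Run in the browser devtools console on the page you want to doctor.
// ".snippet-text" and the replacement string are made-up examples.
const snippet = document.querySelector<HTMLElement>(".snippet-text");
if (snippet) {
  snippet.textContent = "Any text you want the screenshot to show";
}

// Or make the whole page directly editable and just type over it:
document.designMode = "on";
```

Either way, a screenshot of the edited page looks just like a real result.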
The stupid decision to add unsanitized AI output to search results is real; these very specific memetic searches that lead to a single Reddit comment seem not to be.
I gotchu on the cheese
My comment - relied on another user’s modified prompt to avoid Google’s incredibly hasty fix
Another thing to notice: I doubt any LLM would say "one Reddit user suggests".
Be depressed
Want to commit suicide
Google it
Gets this result
Remembers comment
Sues
Gets thousands of dollars
Depression cured (maybe)
Lots of dead famous rich people show that money does not cure depression.
Not for everyone, but it would help a lot of people who have depression that was caused primarily by financial stress, working in a job/career that they aren’t passionate about, etc…
Money doesn’t buy happiness but it can help someone who is struggling to meet their basic needs not get stuck in a depressive state. Plus, it can be used in exchange for goods and services that show efficacy against depression.
What kind of goods and services?
Everyone's brain is different. For some, SSRIs might work; for others, SNRIs. While there are claims of cocaine and prostitutes being helpful for some, that's not really scientifically proven, and there are significant health and imprisonment risks. There is, however, strong evidence for certain psychedelics.
TL;DR - Drugs might be helpful for some.
The sibling comment said drugs, which may be effective for some people, but I'd actually just highlight "leisure": being able to afford to explore where your mind takes you is a luxury that pays off massively for your mental health. I have wanderlust and I'm a programmer; sometimes my legs want to move and, with my understanding boss, I can go out into the world and walk along the beaches or through the forest while I ponder problems. This is a huge boon for my mental health and is something most employees can't afford due to monetary stresses and toxic employers.
well at least you’d be suicidal with money!
I pulled the image from a meme channel, so I don't know if it's real or not, but at the same time, this below does look like a legit response
Leaving my chicken for 10 minutes near a window on a warm summer day and then digging in
It’s like sushi… kinda
So you can put raw chicken meat inside your armpit and it’s done? Sounds legit.
If you have a fever.
Slight fever.
…does the chicken’s power level need to be over 9000 in order to be safe to eat?
Turns out AI is about as bad at verifying sources as Lemmy users.
I have read elsewhere that it was faked.
(Edit: meaning the original, with the Golden Gate Bridge)
Should Reddit or Quora be liable if Google used a link instead? AI doesn't need to work 100% of the time; it just needs to be better than what we are using.
What you're focused on is actually the safe harbor provision (Section 230, not the DMCA).
If Reddit says, “We have a platform and some dumbass said to snort granulated sugar” it’s different from Google saying, “You should snort granulated sugar.”
That’s… not relevant to my point at all.
Make it Apple employees in-store and Microsoft forums. If humans give bad advice 10% of the time and AI (or any technological replacement) makes mistakes 1% of the time, you can't point to that 1% as a gotcha.
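To put those hypothetical rates side by side, here is a back-of-the-envelope sketch; the 10% and 1% are the made-up figures from above, and the query volume is an arbitrary number chosen purely for illustration:

```typescript
// Hypothetical error rates from the comment above; not real measurements.
const humanBadAdviceRate = 0.10;
const aiBadAdviceRate = 0.01;
const queriesPerDay = 1_000_000; // arbitrary illustrative volume

// Expected number of bad answers served per day under each rate.
const humanBadAnswers = humanBadAdviceRate * queriesPerDay; // 100,000
const aiBadAnswers = aiBadAdviceRate * queriesPerDay;       // 10,000

console.log(`Humans: ~${humanBadAnswers} bad answers/day`);
console.log(`AI:     ~${aiBadAnswers} bad answers/day`);
```

Both rates are assumptions; the reply below argues that the raw error rate isn't the only thing that matters once Google presents the answer as its own expert reference.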
You're shifting the goalposts, though. Prior to AI, being an expert reference on the internet was expensive and dangerous, since you could potentially be held liable; as such, a lot of topic areas simply lacked expert reference sources. With Gemini, Google has declared itself an expert reference on every topic. It isn't one, and this will end badly for them.
What are you whining about? Hallucination is an inherent part of LLMs as of today, so nothing they output should be trusted with certainty. But deploying them will have more benefits than hiding them from everyone. Take it as an unfinished project and ignore the results if you like. Seriously, it's entirely possible to just ignore the generative results.
“Absolutely sued” my ass
I absolutely agree, and I consider LLM results to be "neat" but never trusted. If I think I should bake spaghetti squash at 350°F, I might ask an LLM, and I'll only go looking for real advice if our suggested temperatures differ.
But some people have wholly bought into the "it's a magic knowledge box" bullshit. You'll see opinions here on Lemmy that generative AI can make novel creations that indicate true creativity, and you'll see opinions from C-level folks, champing at the bit to downsize call centers, that LLMs can replace customer service wholesale. Companies need to be careful about deceiving these users, and those who feed into the mysticism really need to be stopped.
Yeah, I'm not jumping on that bandwagon yet, but I think no one can determine either side of that right now. It makes terrific art, so it's not outside the realm of possibility. The only sensible stance we can take right now is none: just wait and see whether AI art can hold up in the long run.
Taking any serious stance right now, a priori, would be illogical, and I don't understand the fuss people are making. Yes, artists will suffer financially and will therefore limit their time investment and their advancement of the art, but sacrificing or halting the development of AI for them is also not a possibility. So yes, artists are fucked right now, and there is nothing we can do about that right now. Hopefully some UBI, but that's not here yet.
But yes, companies deceiving users and not warning about AI hallucinations is bad. Still, it's not their fault that people believe stupid shit; they always have and always will.
It’s faked.