As comical and memey as this is, it does illustrate the massive flaw in AI today: it doesn't actually understand context or what it's talking about beyond a folder of info on the topic. It doesn't know what a guitar is, so anything it recommends suffers from being sourced in a void, devoid of true meaning.
It's called the Chinese Room, and it's exactly what "AI" is. It recombines pieces of data into "answers" to a "question", despite not understanding the question, the answer it gives, or the pieces it uses.
It has a very, very complex chart of which elements, in which combinations, need to be in an answer to a question containing which elements in which combinations, but that's all it does. It just sticks word barf together based on learned patterns, with no understanding of words, language, context, or meaning.
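To put that "complex chart" framing in concrete terms, here's a toy sketch of the Chinese Room idea: a rule table that maps trigger words to canned answers. It is purely illustrative, nothing like a real LLM's internals, and the rules and answers are made up for the example.

```python
# Toy illustration of the Chinese Room framing: a table of rules mapping
# trigger words to canned answers. It only matches symbols; nothing here
# represents what a "guitar" actually is.
RULES = {
    ("recommend", "guitar"): "The Fender Stratocaster is a popular choice.",
    ("how", "many", "strings"): "Most guitars have six strings.",
}

def chinese_room_reply(question: str) -> str:
    """Return the canned answer whose trigger words all appear in the question."""
    words = {w.strip("?,.!") for w in question.lower().split()}
    for triggers, answer in RULES.items():
        if all(t in words for t in triggers):
            return answer
    return "I have no rule for that."

print(chinese_room_reply("Can you recommend a guitar?"))
# -> The Fender Stratocaster is a popular choice.
```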
Yeah, but the proof was about consciousness, and it's a really bad one IMO.
I mean, we are probably not more advanced than computers, and the proof would indicate that consciousness is needed to understand context, which seems very shaky.
I think it’s kind of strange.
Between quantification and consciousness, we tend to dismiss consciousness because it can’t be quantified.
Why don’t we dismiss quantification because it can’t explain consciousness?
We can understand and poke at one but not the other, I guess. I think so much more energy should be invested in understanding consciousness.
Does anyone really know what a guitar is, completely? Like, I don’t know how they’re made, in detail, or what makes them sound good. I know saws and wide-bandwidth harmonics are respectively involved, but ChatGPT does too.
When it comes to AI, bold philosophical claims about knowledge stated as fact are kind of a pet peeve of mine.
It sounds like you could do with reading up on LLMs in order to know the difference between what they do and what you're discussing.
Dude, I could implement a Transformer from memory. I know what I’m talking about.
You’re the one who made this philosophical.
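For what it's worth, this is roughly what the core of that looks like: a minimal NumPy sketch of single-head scaled dot-product self-attention, with toy shapes and random weights. A real Transformer adds multi-head projections, residual connections, layer norm, and feed-forward layers on top of this.

```python
# Minimal sketch of scaled dot-product self-attention, the core operation
# inside a Transformer block. Toy sizes and random weights for illustration.
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    q, k, v = x @ w_q, x @ w_k, x @ w_v             # project tokens to queries/keys/values
    scores = q @ k.T / np.sqrt(k.shape[-1])         # similarity between every pair of tokens
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over each row
    return weights @ v                              # each token becomes a weighted mix of values

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))                         # 4 tokens, 8-dimensional embeddings
w_q, w_k, w_v = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(x, w_q, w_k, w_v).shape)       # (4, 8)
```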
I don't need to know the details of engine timing, displacement, and mechanical linkages to look at a Honda Civic and say, "That's a car; people use them to get from one place to another. They can be expensive to maintain and fuel, but in my country they're basically required due to poor urban planning and a lack of public transportation."
ChatGPT doesn't know any of that about the car. All it "knows" is that when humans talked about cars, they brought up things like wheels, motors or engines, and transporting people. So when it generates its reply, those words get picked because they are strongly associated with the word "car" in its training data.
All ChatGPT is, is really fancy predictive text. You feed it an input and it generates an output that will sound like something a human would write based on the prompt. It has no awareness of the topics it's talking about. It has no capacity to think or ponder the questions you ask it. It's a fancy lightbulb: instead of light, it outputs words. You flick the switch, words come out, you walk away, and it just sits there waiting for the next person to flick the switch.
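To illustrate the "fancy predictive text" point, here is a deliberately crude sketch: a bigram model that picks the next word purely from co-occurrence counts in its (made-up) training text. A neural LLM is vastly more sophisticated, but the objective has the same flavour: predict the next token from statistical patterns, with no model of meaning.

```python
# Toy "predictive text": pick the next word based only on which words
# followed it in the training text. No representation of meaning anywhere.
import random
from collections import defaultdict

training_text = "cars have wheels and engines . people use cars to get from one place to another ."

tokens = training_text.split()
next_words = defaultdict(list)
for current, following in zip(tokens, tokens[1:]):
    next_words[current].append(following)    # remember what followed each word

def generate(seed: str, length: int = 8) -> str:
    word, output = seed, [seed]
    for _ in range(length):
        candidates = next_words.get(word)
        if not candidates:
            break
        word = random.choice(candidates)      # sample proportionally to observed counts
        output.append(word)
    return " ".join(output)

print(generate("cars"))   # e.g. "cars have wheels and engines . people use cars"
```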
No man, what you’re saying is fundamentally philosophical. You didn’t say anything about the Chinese room or epistemology, but those are the things you’re implicitly talking about.
You might as well say humans are fancy predictive muscle movement. Sight, sound and touch come in, movement comes out, tuned by natural selection. You’d have about as much of a scientific leg to stand on. I mean, it’s not wrong, but it is one opinion on the nature of knowledge and consciousness among many.
I didn’t bring up Chinese rooms because it doesn’t matter.
We know how ChatGPT works on the inside. It's not a Chinese room. Attributing intent or understanding to it is anthropomorphizing a machine.
You can make a basic robot that turns on its wheels when a light sensor detects a certain amount of light. The robot will look like it flees when you shine a light at it. But it does not have any capacity to know what light is or why it should flee light. It will have behavior nearly identical to a cockroach, but have no reason for acting like a cockroach.
A cockroach can adapt its behavior based on its environment; the hypothetical robot cannot.
ChatGPT is much like this robot: it has no capacity to adapt in real time or learn.
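The robot's entire control logic would look something like the sketch below (the threshold and command names are invented for the example); the "fleeing" is just a fixed rule firing.

```python
# Hypothetical light-fleeing robot: a fixed stimulus-response rule with no
# representation of what "light" is and no ability to learn or adapt.
LIGHT_THRESHOLD = 500  # arbitrary sensor value chosen for the example

def control_loop(light_sensor_reading: int) -> str:
    """Map a raw sensor reading directly to a motor command."""
    if light_sensor_reading > LIGHT_THRESHOLD:
        return "drive_away"   # looks like fleeing, but it's only a threshold check
    return "stop"

print(control_loop(800))  # -> drive_away
print(control_loop(100))  # -> stop
```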
Feels reminiscent of stealing an Aboriginal person, dressing them in formal attire, then laughing derisively when the 'savage' can't gracefully handle a fork. What is a brain, if not a computer?
Yeah, that's spicier wording than I'd prefer, but there is a sense that they'd never apply these high standards of understanding to another biological creature.
I wouldn't mind considering the viewpoint on its own, but they present it as if it's an empirical fact rather than a (very controversial) interpretation.
It also doesn't know what is true and what is BS unless it learns from curated sources. Truth needs to be verified and backed by fact; if an AI learns from unverified or unverifiable sources, it's going to confidently repeat what it learned from them, just like an average redditor. That's what makes it dangerous, as all these millionaires/billionaires keep hyping up the tech as something it isn't.
You just described most of reddit, anything Meta, and what most reviews are like.
The other massive flaw it demonstrates in AI today is that it's popular to dunk on it, so people make up lies like this meme and the internet laps them up.
Not saying AI search isn’t rubbish, but I understand this one is faked, and the tweeter who shared it issued an apology. And perhaps the glue one too.
There are cases of AI using NotTheOnion as a source for its answer.
It doesn't understand context. That's not to say it's completely useless; hell, I'm a software developer, our company uses Copilot in Visual Studio Professional, and it's amazing.
People can criticise its flaws without doing so just because it's popular to dunk on it. Don't shill for AI; actually take a critical approach to its pros and cons.
I think people do love to dunk on it. It's the fashion, and it's normal human behaviour to take something popular - especially something popular with people you don't like (in this case, tech companies) - and call it stupid. It makes you feel superior and better.
There are definitely documented cases of LLM stupidity: I enjoyed one linked from a comment, where Meta’s(?) LLM trained specifically off academic papers was happy to report on the largest nuclear reactor made of cheese.
But any ‘news’ dumping on AI is popular at the moment, and fake criticism not only makes it harder to see a true picture of how good/bad the technology is doing now, but also muddies the water for people believing criticism later - maybe even helping the shills.
This image was faked. Check the post update.
Turns out that even for humans, knowing what's true or not on the Internet isn't so simple.
Yes, we know. We aren't talking about the authenticity of the meme; we're talking about the fundamental problem with "AI".
You’re kind of missing the point. The problem doesn’t seem to be fundamental to just AI.
Much like how people were sure that getting theory-of-mind variations with transparent boxes wrong was an 'AI' problem, until researchers finally gave those problems to humans and half of them got them wrong too.
We saw something similar with vision models years ago: once the models got representative enough, they were able to successfully model and predict previously unknown optical illusions in humans too.
One of the issues with AI is regression to the mean from the training data and the limited effectiveness of fine-tuning to bias against it, so whenever you see a behavior in AI that's also present in the training set, it becomes unclear just how much of the problem is inherent to the network architecture and how much is poor isolation from the samples in the training data that exhibit those issues.
There's an entire sub dedicated to "Ate The Onion", for example. A model trained on social media data will have seen plenty of examples of people treating The Onion as an authoritative source and reacting to it. So when Gemini cites The Onion in a search summary, is it the network architecture doing something uniquely 'AI', or is it the model extending behaviors present in the training data?
While there are mechanical reasons confabulations occur, there are also data-driven reasons that arise from human deficiencies.