People who don’t understand or use AI think it’s less capable than it is, claim it’s not AGI (which no one was saying anyways), and try to make it seem like it’s less valuable because it’s “just using datasets to extrapolate, it doesn’t actually think.”
Guess what you’re doing right now when you “think” about something? That’s right, you’re calling up the thousands of experiences that make up your “training data” and using them to extrapolate what action you should take based on that data.
You know how to parallel park because you’ve assimilated road laws, your muscle memory, and the knowledge of your car’s wheelbase into a single action. AI just doesn’t have sapience and therefore can’t act without input, but the process it uses is functionally similar to how we make decisions; the difference is that its training data gets input within seconds instead of being built up over a lifetime.
People who aren’t programmers, haven’t studied computer science, and don’t understand LLMs are much more impressed by LLMs.
That’s true of any technology. As a programmer who has studied computer science and does understand LLMs, I can say this represents a massive leap in capability. Is it AGI? No. Is it a potential paradigm shift? Yes. This isn’t pure hype like Crypto was; there is a core of utility here.
Yeah, I studied CS and work in IT Ops. I’m not claiming this shit is Cortana from Halo, but it’s also not NFTs. If you can’t see the value, you haven’t used it for anything serious, because it’s taking jobs left and right.
Crypto was never pure hype either. Decentralized currency is an important thing to have; it’s just shitty that it turned into a speculative investment asset rather than a way to buy drugs online without the glowies looking.
Crypto solves a few theoretical problems and creates a few real ones.
In my experience it’s the opposite, but the emotional reaction isn’t so much being impressed as being afraid and claiming it’s all just plagiarism.
If you’ve ever actually used any of these algorithms, it becomes painfully obvious that they do not “think.” Give one a task slightly more complex or nuanced than what it was trained on and it will draw conclusions that would be obviously wrong had any actual thought taken place. Generalization is not something they do, and it’s a fundamental part of human problem solving.
Make no mistake: they are text predictors.
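For what it’s worth, here’s a toy sketch of what “text predictor” means in the most literal sense: a bigram model that can only ever pick a next word it has seen follow the current word in its training text. This is purely illustrative (the corpus and function names are made up, and real LLMs are neural networks over tokens, not word-count tables), but the interface is the same idea: given what came before, predict the next piece of text.

```python
import random
from collections import Counter, defaultdict

def train_bigram(corpus: str) -> dict:
    """Count how often each word follows each other word in the training text."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def generate(counts: dict, start: str, length: int = 12) -> str:
    """Repeatedly predict the next word by sampling from what followed the last word in training."""
    out = [start]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break  # never saw this word during "training", so there is nothing to predict
        nxt = random.choices(list(followers), weights=list(followers.values()))[0]
        out.append(nxt)
    return " ".join(out)

# Hypothetical tiny "training set", just enough to make the script runnable.
corpus = "the model predicts the next word and the next word after that"
model = train_bigram(corpus)
print(generate(model, "the"))
```

The output can look superficially fluent, but the model only ever recombines continuations it has already seen, which is the crux of the disagreement in this thread: whether doing that at enormous scale counts as “thinking.”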