As an Artificial Intelligence proponent, I want to see the field succeed and go on to do great things. That is precisely why the current exaggerated publicity…
I use quotation marks there because what is often referred to as AI today is not at all what the term once described.
The field of AI has been around for decades and covers a wide range of technologies, many of them much “simpler” than the current crop of generative AI. What is often referred to as AI today is absolutely what the term once described, and still does describe.
What people seem to be conflating is the general term “AI” and the more specific “AGI”, or Artificial General Intelligence. AGI is the stuff you see on Star Trek. Nobody is claiming that current LLMs are AGI, though they may be a significant step along the way to that.
I may be sounding nitpicky here, but this is the fundamental issue that the article is complaining about. People are not well educated about what AI actually is and what it's good at. It's good at a huge amount of stuff, and it's genuinely revolutionary, but it's not good at everything. It's not the fault of AI when people fail to grasp that, any more than it's the fault of the car when someone gets into it and then is annoyed it won't take them to the Moon.
Correction. AGI describes Data, Moriarty, and Peanut Hamper, but it doesn't describe the Enterprise's computer, which has speech recognition but is less intelligent than an LLM.
People are not well educated about what AI actually is and what it’s good at.
And half the reason they're not educated about it is that AI companies are actively and intentionally misinforming them about it. AI companies sell people these products using words like "thinking", "assessing", "reasoning", and "learning", none of which is accurate for current AI, though they would be for AGI.
The problem is that the average person and politician don't know this difference, and are running around as if Skynet is about to kick off any second.
The LLM CEOs and evangelists are going on like this too, because they need hype to make number go up.
Oop, wish I’d read this comment before mine. 100% right
I didn’t say that everything in Star Trek was AGI, just that you can find examples there.
I shall amend my comment to say clarification instead of correction.