Well, I take the opposite stand: GPT-3 = give a 3rd grader Wikipedia and some paper. While it's certainly fun how relatively coherent its writing is, I haven't yet seen how it links syntactically correct text to actual facts, which in my opinion is the difference between 1e100 apes with typewriters and the aforementioned 3rd grader.
If the AI is at a 3rd grader level, that means it's already reached human intelligence!
In my opinion, the AI performs well above a third-grade level in some areas (writing a complex paragraph, terminal completion, writing code) and below it in others (it doesn't know basic multiplication and doesn't understand spatial context).
A confused idea people have about AI: that the goal is to make a computer that thinks like a human. There's not much point in that: we have 7 billion humans, and making more is cheap and fun. At least for now, AI is interesting when it complements, not substitutes for, human intelligence. We want Google to be able to search a billion web pages. We don't care if it can't make a decent cup of tea.
You are missing a huge part of the picture.
If we can make an AI with the intelligence of a human, we can train it to be very good at some problem (like a human expert in some area). And then we can clone that intelligence by the thousands to make much quicker progress in that area.
Human-level intelligence is also a very important goal because it, almost by definition, produces the singularity point (AIs that can make better AIs, exponentially).
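To make "exponentially" concrete, here's a toy sketch (my own illustration with made-up numbers, not a claim about any real system): if each AI generation can design a successor some fixed factor k > 1 more capable than itself, capability compounds geometrically.

    # Toy model of recursive self-improvement.
    # Illustrative assumption: each generation designs a successor
    # k times more capable than its designer.
    def capability(c0: float, k: float, generations: int) -> float:
        c = c0
        for _ in range(generations):
            c *= k  # each generation compounds the improvement
        return c

    # Even a modest 10% gain per generation compounds fast:
    print(capability(1.0, 1.1, 100))  # ~13780.6, i.e. c0 * k**n growth

Of course, whether k actually stays above 1 generation after generation is exactly the open question.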
> We don't care if it can't make a decent cup of tea.
We absolutely do, though it depends on the price at which it can do that. But once an AI can do it at all, it is only a matter of time before we can make it do so cheaply (we are very good at making things cheap once we know what we need), at which point it becomes a driving force in our progress. Once there are self-driving cars, for example, there will be a revolution in the labor force, and driving is only a small portion of what a "human-level" intelligence is capable of.
Yes, I think that in the end, this would be true. AGI itself would be transformational - partly because it would be easily linked with computers' "superhuman" powers of memory, throughput etc.
But on the way to AGI - which seems to me a long way off, despite some of the interesting arguments made here - it's not really true that "being very humanlike" is a big commercial advantage. As long as we have (say) less than the intelligence of a 2-year-old child, or a chimp, we'd rather have computers doing very un-human things.
You could of course find cases where this term means something else (or nothing at all), since English is a flexible language, but in contexts where we're obviously discussing AI/ML, I think we should just use the de facto term and make everyone's lives easier.