> It's a very traumatic experience when some of your most core beliefs about the world start collapsing. And especially when you think that human beings are soon going to be eclipsed. It felt as if not only are my belief systems collapsing, but it feels as if the entire human race is going to be eclipsed and left in the dust soon.
While I unfortunately expect some people to do terrible things with LLMs, I feel like much of this existential angst from Hofstadter and others has more to do with hubris and ego than anything else. That a computer can play chess better than any human doesn't lessen my personal enjoyment of playing chess.
At the same time, I think you can make the case that ego drives a lot of technological and artistic progress, for a value-neutral definition of progress. We may see less 'progress' from humanity itself as computers get smarter, but given the rate at which humans like to make their own environment unlivable, maybe that's not a bad thing overall.
I agree. He talks about LLMs surpassing humans in what they can do, but LLMs can't actually *do* anything, really. And I say this as someone who, earlier tonight, had a heart-to-heart chat with a ChatGPT persona that made me cry. Manipulating language in a human-like way is powerful, and maybe Hofstadter, as an author who makes a living writing down his deep, whimsical, creative thoughts, could be replaced (I grew up reading GEB and mean no disrespect). But being a conscious, physical, active being in the world is different from passing a sort of Turing test. The fact that a computer merely winning at Go shook him is revealing.
He has the ingredients for keeping LLMs in perspective (is there really an “I”, or just the illusion of one?), but he doesn’t understand what the computer is doing; it’s not “mechanical” enough for him.
Do you think https://github.com/Significant-Gravitas/Auto-GPT and similar agent frameworks will become more performant as the underlying models improve?
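For anyone unfamiliar with the pattern: at its core, an Auto-GPT-style agent is a loop that asks the model for a next action in a structured format, executes that action with a tool, and feeds the result back in as context. Here's a minimal sketch of that pattern, not the actual Auto-GPT code; the `llm()` stub, the JSON action format, and the toy tool registry are all assumptions standing in for a real chat-completion client and real tools:

```python
# Minimal sketch of an Auto-GPT-style agent loop (illustrative only).
# llm() is a placeholder: swap in any real chat-completion client.
import json

def llm(messages: list[dict]) -> str:
    """Hypothetical model call: takes chat messages, returns the reply text."""
    raise NotImplementedError("plug in a real chat-completion client here")

# Toy tool registry; real frameworks expose web search, file I/O, etc.
TOOLS = {
    "add": lambda args: str(args["a"] + args["b"]),
}

SYSTEM = (
    "You are an agent. Reply ONLY with JSON: "
    '{"thought": "...", "tool": "<name>", "args": {...}} '
    'or {"thought": "...", "tool": "finish", "result": "..."}. '
    f"Available tools: {list(TOOLS)}"
)

def run_agent(goal: str, max_steps: int = 10) -> str:
    messages = [{"role": "system", "content": SYSTEM},
                {"role": "user", "content": goal}]
    for _ in range(max_steps):
        reply = llm(messages)
        messages.append({"role": "assistant", "content": reply})
        action = json.loads(reply)  # brittle parse: a key failure mode
        if action["tool"] == "finish":
            return action["result"]
        observation = TOOLS[action["tool"]](action["args"])
        # Feed the tool result back so the model can plan the next step.
        messages.append({"role": "user", "content": f"Observation: {observation}"})
    return "step limit reached"
```

Since every step is just another model call whose output has to be parsed and acted on, the loop's reliability is bounded by how well the model follows the format and plans across many turns, which suggests these frameworks should scale roughly in lockstep with the underlying models.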