I don't think there's been any progress on AGI, actually. Even though there are a lot of powerful AI programs out there, they're really nothing more than complex tools built to solve specific problems. A few other commenters mentioned that there's no good definition of what AGI would actually mean, but a starting point I think we can all agree on is that an AGI would need some level of autonomy. That said, even if you did make an AI that was autonomous, I still think it's a leap to call it an AGI. The way I look at it, if you did all the math by hand, you wouldn't call the result intelligent, so you shouldn't call it intelligent just because a computer did the math.

I don't think that doing the math by hand to simulate a machine intelligence makes it not intelligent, any more than doing a simulation of all the electrochemical signals in a human brain by hand would make a human not intelligent. Aside from the infeasible amount of time it would take, it's the same thing.

As for autonomy, LLMs don't have autonomy by themselves. But they can be combined pretty easily with other systems and connected to the outside world in a way that seems pretty darned autonomous to me (e.g. Auto-GPT: https://github.com/Significant-Gravitas/Auto-GPT). And that's basically a duct-tape-and-string version. Given how new these LLMs are, we're likely barely scratching the surface.
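To make "combined with other systems" concrete, here's a minimal sketch of the kind of observe/think/act loop these agent projects use. This is not Auto-GPT's actual code; the names (`call_llm`, `run_tool`, the canned plan) are placeholders I made up, and the LLM call is stubbed with a fixed plan so the script runs without any API keys. In a real agent you'd swap the stub for a model call and the fake tools for real ones (web search, file I/O, shell, etc.).

```python
# Toy agent loop in the Auto-GPT style: the "model" looks at the goal and the
# history of actions/observations, picks the next action, a tool executes it,
# and the result is fed back in. The LLM is stubbed with a canned plan.

import json

# Hypothetical stand-in for a real LLM call. It just walks a fixed plan, but
# its role is the same: decide the next action given the goal and history.
CANNED_PLAN = [
    {"action": "search", "arg": "current weather in Paris"},
    {"action": "write_note", "arg": "It is 18C and cloudy in Paris."},
    {"action": "finish", "arg": "Saved the Paris weather to a note."},
]

def call_llm(goal: str, history: list[dict]) -> dict:
    return CANNED_PLAN[min(len(history), len(CANNED_PLAN) - 1)]

# Fake "tools" that would normally connect the model to the outside world.
def run_tool(action: str, arg: str) -> str:
    if action == "search":
        return "Top result: Paris weather today is 18C, cloudy."
    if action == "write_note":
        return f"Note saved: {arg}"
    return "Unknown tool."

def run_agent(goal: str, max_steps: int = 10) -> None:
    history: list[dict] = []
    for step in range(max_steps):
        decision = call_llm(goal, history)                            # think
        if decision["action"] == "finish":
            print(f"Done after {step} tool calls: {decision['arg']}")
            return
        observation = run_tool(decision["action"], decision["arg"])   # act
        history.append({"decision": decision, "observation": observation})
        print(json.dumps(history[-1], indent=2))                      # observe
    print("Hit the step limit without finishing.")

if __name__ == "__main__":
    run_agent("Find today's weather in Paris and save it to a note.")
```

The point is just that the autonomy lives in the loop and the tools, not in the LLM itself; the model only ever answers the question "what should I do next?"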