A fairly lengthy article about Autonomous AI, and, as far as I can tell, not a single word about the safety implications of such a system (a short note about reliability of LLMs is all we get, and it's not clear that the author means anything more than "the thing might break").
I get that there are different philosophies on AI risk, but this is like reading an in-depth discussion about potential atomic bombs in 1942, with no mention of the fact that such a bomb could potentially level cities and kill millions.
If a field of research ever needed oversight and regulation, this is it. I'm not convinced that would solve the problem, but allowing this kind of rushing forward to continue is madness.
Oh please. Not everything has to be regulated to hell before a use case has even been found for it. Autonomous agents have existed for decades.
If it can automate agents like huginn[0] with natural language, I'd be very happy. "Autonomous agents" doesn't mean it's going to take over the world autonomously. Let's tone down the fearmongering a bit.