"First of all, it takes a human being to guide these models, to host (or pay for the hosting) and instantiate them"

And will this always be true? You repeat this claim several times in slightly varied phrasing, but as far as I can see you never give any reason to assume it will always hold. Nobody is worried that current models will kill everyone; the worry is about future, more capable models.

Who prompted the future LLM, gave it access to a root shell and an A100 GPU, allowed it to copy over some Python script that runs in a loop, allowed it to download two terabytes of corpus, and let it train a new version of itself for weeks if not months to improve itself, just to carry out some strange Machiavellian task of screwing around with humans?

The human being did.

The argument I'm making is that there are actual, real harms occurring now, not from some theoretical future "AI" whose setup requires no human input. No one wants to focus on that; in fact, it's a better sell to hype up these science-fiction stories than to examine the real systems doing real tasks in the real world that are producing real harms right now.

>Who prompted the future LLM, gave it access to a root shell and an A100 GPU, allowed it to copy over some Python script that runs in a loop, allowed it to download two terabytes of corpus, and let it train a new version of itself for weeks if not months to improve itself

Presumably someone running a misconfigured future version of Auto-GPT?

https://github.com/Significant-Gravitas/Auto-GPT
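For what it's worth, the "script that runs in a loop" isn't exotic. Here's a minimal sketch of the plan-act-observe loop that Auto-GPT-style agents run; the `fake_llm` and `execute` functions are hypothetical stand-ins (a real deployment would call an LLM API and run real tools or shell commands), so this only illustrates the shape of the loop, not any particular project's implementation:

```python
def fake_llm(goal, history):
    """Stand-in for an LLM call: returns the next action for the goal.

    Hypothetical stub; a real agent would send the goal plus the
    action/result history to a model API and parse its reply.
    """
    plan = ["search", "download", "train"]
    step = len(history)
    return plan[step] if step < len(plan) else "done"


def execute(action):
    """Stand-in for the tool-execution step (shell, browser, etc.)."""
    return f"result of {action}"


def agent_loop(goal, max_steps=10):
    """Repeatedly ask the model for an action and execute it until 'done'."""
    history = []
    for _ in range(max_steps):
        action = fake_llm(goal, history)
        if action == "done":
            break
        history.append((action, execute(action)))
    return history
```

The point of contention above maps directly onto this sketch: the loop itself is trivial, and the question is who wires `execute` up to real resources (a shell, a GPU, the network) and what that access enables.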