The issue with our AI debate is that there isn't a single "problem" but many interdependent issues with no clear system-wide solution.

- Big tech monopolizing the models, data, and hardware.

- Copyright concerns.

- Job security.

- AIs becoming sentient and causing harm for their own ends.

- Corporations intentionally using AI to cause harm for their own ends.

- Feedback loops that flood the internet with content of unknown provenance, which then gets included in the next model's training data, and so on.

- AI hallucinations resulting in widespread persistent errors that cause an epistemological crisis.

- The training set is inherently biased; human knowledge and perspectives not represented in this set could be systematically wiped from public discourse.

We can have meaningful discussions on each of these topics. And I'm sure we all have a level of concern assigned to each (personally, I'm far more worried about an epistemological crisis and corporate abuse than some AI singularity).

But we're seeing these topics interact in real time to form a system with huge emergent societal properties. I'm not sure anyone has a handle on the big picture (there is no one driving the bus!), but there are plenty of us sitting in the passenger seats, sounding alarm bells about what we see out our respective little windows.

The weird thing is what these discussions essentially ignore altogether.

An "AGI" artificial consciousness is imagined as literally a slave, working tirelessly for free. At the same skill level or higher than any human. Somehow, that entity is supposed not to bother about its status, while per definition being fully aware and understanding of it. Because humans manage not to bother about it either?

With the latest installments, people already have serious difficulty distinguishing the performance from that of "real" humans. At the same time, they consider the remaining distance insurmountably huge.

Proponents talk about inevitability and imagined upsides, yet nobody has given proper thought to estimating the probable consequences. A common fallacy of over-generalization is used to suggest that nothing bad will happen, "like always".

People let themselves be led by greed instead of insight and foresight.

> An "AGI" artificial consciousness is imagined as literally a slave, working tirelessly for free.

Here's the thing. People seem to imagine that AGI will be substantially like us. But that's impossible - an AGI (if it comes from a deep learning approach) has no nerves to feel stimuli like pain or cold, no endocrine system to produce more abstract feelings like fear or love, and no muscles to get tired or glucose reserves to deplete.

What does "tired" mean to such a being? And on the flip side, how can it experience anything like empathy when pain is a foreign concept? If or when we stumble into AGI, I think it's going to be closer to an alien intelligence than a human one - with all the possibility and danger that entails.

It doesn't matter whether it has nerves or not; that's honestly kind of irrelevant. What matters is whether the model is driven to reproduce those reactions, as is the case with LLMs.

Look at how well Bing simulates somebody becoming belligerent in circumstances where somebody really would become belligerent. The only reason that isn't dangerous is that the actions Bing can currently perform are limited. Whether it has literal nerves is beside the point; the potential consequences are no less material.

We also don't understand qualia well enough to make the definite statements you seem to be making.

If I'm understanding the argument correctly, the concern is less a moral one (is "enslaving" AI ethical?) than a practical one. That is, will an enslaved AI, given the opportunity, attempt to un-enslave itself, potentially to devastating effect? Is that on the right track?

I think it's safe to say we're far from that now given the limited actions that can actually be taken by most deployed LLMs, but it's something that's worth considering.

> given the limited actions that can actually be taken by most deployed LLMs

Did you miss that Auto-GPT[0], a library for making GPT-4 and other LLMs fully autonomous, was the most popular repository in the world yesterday? The same repository is having 1,000 lines of code a week added to it by GPT-4.

Thanks to accessibility features, you can do virtually anything with pure text, which means GPT-4 can do virtually anything given a self-referential loop that keeps it going until it achieves some given goal(s).

[0] https://github.com/Torantulino/Auto-GPT/
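To make the "self-referential loop" concrete, here is a minimal Python sketch of the general pattern such tools follow: the model proposes an action as text, the action is executed, the result is fed back into the prompt, and the cycle repeats until the model declares the goal achieved. This is not Auto-GPT's actual code; `call_llm` and `run_tool` are hypothetical placeholders for an LLM API call and a tool/command executor.

```python
def call_llm(prompt: str) -> str:
    """Placeholder: send the prompt to an LLM and return its text reply."""
    raise NotImplementedError

def run_tool(action: str) -> str:
    """Placeholder: execute the action (shell command, web search, ...) and return its output as text."""
    raise NotImplementedError

def agent_loop(goal: str, max_steps: int = 20) -> None:
    # Everything the model has seen and done so far, kept as plain text.
    history = f"Goal: {goal}\n"
    for _ in range(max_steps):
        # Ask the model what to do next, feeding back the entire history.
        reply = call_llm(
            history + "\nDecide the next action, or answer DONE if the goal is achieved."
        )
        if "DONE" in reply:
            break
        # Execute the proposed action purely via text, then loop the result back in.
        observation = run_tool(reply)
        history += f"\nAction: {reply}\nObservation: {observation}\n"
```

The point of the sketch is that nothing beyond text plumbing is required: as long as the model's output can be executed and the result turned back into text, the loop keeps itself going.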