We software engineers have to be the stupidest "smart" people on the planet. No other profession works so hard to destroy entire businesses, including our own. I get it, I'm a software engineer who loves automation.
"But AI will just be a tool in our tool-set, software engineers will still be the system architects."
Sure, for a while, and then AI will do that too.
"But eventually we will live in a fully automated world in abundance, wouldn't that be great?"
Doing what? When we get there, anything we can consider doing, an AI can do faster and better. Write a poem? Write a book? Write music? Paint a picture? Life will be like a computer game with cheat codes: whenever we struggle with something, instead of fighting on and improving, we will turn to our universal cheat engine, AI.
Anecdotally, I made an analogous mistake in my early twenties when I wrote a cheat program for save files. It worked like a typical cheat engine: search the save file for a specific value, go back to the game and change that value, then search the save again for the new value, but only in the locations that held the original value. This is how I ruined "Heroes of Might and Magic II" :(. I used to love that game. I could spend hours playing it. Writing the cheat program was a lot of fun for a couple of hours, but once it was done there was no longer any reason for me to play the game. You might say that I didn't need to use my cheat program, but once the genie was out of the bottle it was too tempting to resist whenever I hit an obstacle in the game.
This is what I fear about AI making our jobs superfluous; it will also make our hobbies, and anything else we enjoy doing, superfluous.
Sorry for the bleak comment, but this is my fear, and I feel the genie is already out of the bottle.
> Doing what? When we get there, anything we can consider doing, an AI can do faster and better. Write a poem? Write a book? Write music? Paint a picture? Life will be like a computer game with cheat codes: whenever we struggle with something, instead of fighting on and improving, we will turn to our universal cheat engine, AI.
For literally anything in my life that I can do, there is already someone who can do it better and faster than me. I still enjoy doing the things that I do, so why would that change?
Because now that person works for you, for free, instantly, anywhere, anytime. There's at least a temptation.
Who says that person will work for you, or that it will be free?
They're talking about the AI.
I'm not sure if this is what the commenter meant, but I am vanishingly unlikely to own the AI. Even if I write the AI, I'm unlikely to own it. The training costs are too large, and even if I did train a working model, AI can be duplicated, and there's little reason to use anything but the best. The scary thing about AI to me is that it finally turns intellectual labor into a pure process of capital: you put more energy and capital in, you get more out, with no need or room for humans anywhere in that loop. Of course we're a long way from that, and there will be room for humans for a long time, but it's scary how much additional power it will give to capital, and how much less the interests of ordinary people will matter.
This confuses AI training with inference. Training a machine learning model is expensive, complex, and requires data. Once trained, inference is cheap in terms of compute. A trained model is trivial to leak so hopefully nobody is putting much stock in their models staying secret.
Not to mention there are techniques, like transfer learning, that let you skip most of the training entirely by reusing a model someone else already trained.
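As a rough illustration of that point, here is a minimal transfer-learning sketch, assuming PyTorch and torchvision (neither is mentioned above): a pretrained backbone is frozen and only a small new head is trained, so the expensive from-scratch training never happens.

    import torch
    import torch.nn as nn
    from torchvision import models

    # Reuse an ImageNet-pretrained backbone instead of training from scratch.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

    # Freeze the pretrained weights; they will not be updated.
    for param in model.parameters():
        param.requires_grad = False

    # Swap in a new classification head for the downstream task
    # (10 classes here, purely as an example).
    model.fc = nn.Linear(model.fc.in_features, 10)

    # Only the new head's parameters are optimized, so "training"
    # is now small and cheap relative to the original pretraining.
    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

    # ... a standard training loop over the new dataset goes here ...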
Inference is not cheap -- SOTA LLMs require 100s of gigabytes of VRAM for inference.
I'll concede that Stable Diffusion may or may not meet the threshold for SOTA, but I still think on-device Stable Diffusion [1] is indicative of inference eventually being supported on consumer-grade hardware for any compelling LLM. The possibilities for creative tools are just too vast.
[1] https://machinelearning.apple.com/research/stable-diffusion-...
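For what it's worth, here is a hedged sketch of what consumer-grade inference can already look like with the Hugging Face diffusers library (the checkpoint name and options are my own illustration, not taken from the Apple work linked above): half-precision weights plus attention slicing keep Stable Diffusion within a few gigabytes of VRAM.

    import torch
    from diffusers import StableDiffusionPipeline

    # Load a Stable Diffusion checkpoint in half precision to roughly
    # halve the VRAM needed for the weights.
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",  # illustrative checkpoint
        torch_dtype=torch.float16,
    )
    pipe = pipe.to("cuda")

    # Attention slicing trades a little speed for a lower peak memory
    # footprint, which is what makes consumer GPUs viable.
    pipe.enable_attention_slicing()

    image = pipe("a watercolor painting of a castle at sunset").images[0]
    image.save("castle.png")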