At this rate they'll be manufacturing 0nm chips soon, and in a decade they'll be on -1nm.

But despite the weird naming scheme, it's clear from transistor density [1] and GPU prices [2] that foundries are still making progress in transistors per dollar. That progress is just barely beginning to make large neural networks (Stable Diffusion, vision and speech systems, language model AIs) deployable in consumer applications.

It might not matter whether your cell phone renders this page in 1ms or 10ms, but the difference between talking to a 20B parameter language model and a 200B net is night and day [3]. If TSMC/Samsung/Intel can squeeze out just one or two more nodes, then by the middle of the decade we might have limited general-purpose AI in every home and office.
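
For a rough sense of why parameter count maps directly onto hardware, here's a back-of-the-envelope sketch (assuming 16-bit weights and ignoring activations, KV caches, and any quantization tricks):

    # weight memory alone, at fp16 (2 bytes per parameter)
    for params in (20e9, 200e9):
        print(f"{params / 1e9:.0f}B parameters ~ {params * 2 / 1e9:.0f} GB of weights")
    # 20B parameters ~ 40 GB of weights
    # 200B parameters ~ 400 GB of weights

40 GB is within reach of one or two consumer GPUs; 400 GB is still datacenter territory, which is why another node or two of transistors-per-dollar matters.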

[1] https://en.wikipedia.org/wiki/Transistor_count#GPUs

[2] https://pcpartpicker.com/trends/price/video-card/

[3] https://textsynth.com/playground.html

>"That progress is just barely beginning to make large neural networks (Stable Diffusion, vision and speech systems, language model AIs) deployable in consumer applications."

Interesting. Is Stable Diffusion a product of neural network size then? Is the size of the network a function of chip density? Also, is there a Stable Diffusion app that currently works on edge devices?

If edge devices include my gaming PC, then yes. For an app I'd recommend https://github.com/AUTOMATIC1111/stable-diffusion-webui
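
If you'd rather script it than use the webui, a minimal sketch with Hugging Face's diffusers library looks roughly like this (not the webui's API; assumes a CUDA GPU with a few GB of VRAM and that the v1-5 checkpoint name below is still available on the hub):

    import torch
    from diffusers import StableDiffusionPipeline

    # Load the pipeline in half precision to roughly halve VRAM use.
    # "runwayml/stable-diffusion-v1-5" is the commonly used v1.5 checkpoint id;
    # substitute whatever SD checkpoint you actually have locally.
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        torch_dtype=torch.float16,
    ).to("cuda")

    image = pipe("a watercolor painting of a chip foundry at night").images[0]
    image.save("foundry.png")

The webui wraps the same kind of pipeline with a browser front end, model switching, and sampler options, so it's the easier route if you don't want to touch Python.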