This is a big shot across Nvidia's bow - the whole 30-series launch, stock problems and all, looks like a big miscalculation right now. Granted, it's still early since this was just announced and there are no real-world benchmarks yet, but AMD's 3090 competitor coming in $500 under Nvidia makes the 3090 a really tough sell - not that many people have even gotten one at this point. The rumors of Nvidia releasing Ti versions just months after the 30-series launch are probably true.

Addressing the price difference between RTX 3090 and 6900 XT:

The 3090 is priced at $1500 because its 24GB of RAM enables ML and rendering use cases (Nvidia is segmenting the market by RAM capacity).

AMD's 6900 XT has the same 16GB of RAM as the 6800 XT, with less support on the ML side. Its target market is enthusiast gamers who want the absolute fastest GPU.

But is 8GB of GDDR-whatever worth almost $1000? If AMD can put 16GB in at about $500, paying another $1000 for an extra 8GB is IMHO outrageous. Nvidia charges that much because many gamers hold to the "Intel & Nvidia are the best" mindset even when benchmarks say otherwise.

I do machine learning work and research, and when I upgrade I will pay the Nvidia "premium" without hesitation for that extra RAM and CUDA (rough numbers on why the VRAM matters below). I really wish it weren't so one-sided; I'd love to have a choice.

Edit: I should clarify that I'd really love to get a Quadro or one of Nvidia's data center cards, which aren't gimped in certain non-gaming workloads... but I'm not made of money :)
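
For rough context on why the extra VRAM matters, here's a back-of-the-envelope sketch. It assumes plain fp32 training with Adam and ignores activations, batch size, and framework overhead, so treat the numbers as ceilings, not estimates:

```python
# Rough VRAM budget for fp32 training with Adam: weights + gradients + two moment
# buffers, i.e. about 16 bytes per parameter, before any activations or overhead.
BYTES_PER_PARAM = 4 + 4 + 8

def max_params(vram_gb: float) -> float:
    """Very rough upper bound on trainable parameters that fit in the given VRAM."""
    return vram_gb * 1024**3 / BYTES_PER_PARAM

for label, vram_gb in [("16GB card", 16), ("24GB card (3090)", 24)]:
    print(f"{label}: ~{max_params(vram_gb) / 1e9:.1f}B parameters before activations")
```

In practice activations and batch size eat a large share of that budget, which is why the jump from 16GB to 24GB is more than a marginal convenience for training.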

> I do machine learning work and research, and when I upgrade I will pay the Nvidia “premium” without hesitation

Would you still do it without hesitation if the money were coming out of your own pocket?

(Speaking for myself) Yes. It's not $500 extra for 8GB more; it's basically $500 extra for being able to do ML at all.

AMD GPUs have near-zero support in major ML frameworks. There are some things coming out with ROCm and other niche projects, but most people in ML already have enough work dealing with model and framework problems that relying on experimental AMD support is probably a no-go.
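
To make that concrete, here's a minimal sketch of what typical framework code assumes (PyTorch shown; ROCm builds of PyTorch do expose AMD GPUs through the same "cuda" device string, but that's exactly the experimental path most people avoid):

```python
import torch

# Practically all tutorials, repos, and pretrained-model examples are written against
# CUDA and silently fall back to CPU when it's missing. On AMD you only get this path
# via the ROCm build of PyTorch, which is the less-tested route mentioned above.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(512, 10).to(device)
x = torch.randn(32, 512, device=device)
print(model(x).shape, "on", device)
```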

Hell, if AMD had a card with 8GB more RAM than Nvidia's, for $500 less, I would still go with Nvidia. Everyone wishes AMD would step up their game w.r.t. ML workloads, but it's just not happening (yet); Nvidia has a complete monopoly there.

Would PlaidML be a viable option for you? https://github.com/plaidml/plaidml
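
(For reference, a minimal sketch of how PlaidML is typically dropped in as a Keras backend, per its README; it assumes `plaidml-keras` is installed and `plaidml-setup` has been run to select the GPU, and I haven't verified it against a particular release.)

```python
# Minimal PlaidML-as-Keras-backend sketch (per the PlaidML README); install with
# `pip install plaidml-keras` and run `plaidml-setup` first to pick your GPU.
import plaidml.keras
plaidml.keras.install_backend()  # must be called before importing keras

import keras
from keras.layers import Dense

model = keras.Sequential([
    Dense(128, activation="relu", input_shape=(784,)),
    Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy")
model.summary()  # layers run on the OpenCL device PlaidML selected
```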