If anything, this post suggests Nvidia has a long supremacy ahead. In particular, the author lays out what is likely to be a durable network effect in favor of Nvidia:

- best-in-breed software

- industry standard, used and preferred by most practitioners

- better (faster) hardware

Notably, this is a similar combination to the one that made Wintel a durable duopoly for decades, with the only likely end being a mass migration to other modes of compute.

Regarding the "what will change" category, two of the bullet points essentially argue that the personnel he cites as being part of the lock-in will decide to stop favoring Nvidia, primarily for cost reasons. A third point similarly leans on cost.

Nowhere in the analysis does the author account for the historical fact that the market leader is typically best positioned to also become the low-cost leader, should it choose to. It is unlikely that a public company like Intel, AMD, or (soon) Arm would enter the market explicitly to race to zero margins. (See also: the smartphone market.)

Nvidia could also follow the old Intel strategy: sell its high-end tech for training and its older (previously high-end) tech for inference, letting customers use a unified stack across training & inference at different price points. Training customers pay for R&D & profit margin; lower-priced inference customers provide a strategic moat.

Research doesn't really have the sunk costs that industry does. New students are willing to try new things, and supervisors don't necessarily need to rein them in.

I wonder what is holding AMD back in research? Their cards seem much less costly. I would have figured a nifty research student would quickly figure out how to port torch and run twice as many GPUs on their small budget to eke out a bit more performance.

The software support just isn't there. The drivers need work, the whole ecosystem is built on CUDA rather than OpenCL, etc. Not to say someone who tries super hard can't do it, e.g. https://github.com/DTolm/VkFFT .
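To make the CUDA-centric lock-in concrete, here's a minimal sketch (the helper name `describe_backend` is mine, purely illustrative): even PyTorch's AMD builds expose the hardware through the `torch.cuda` namespace, because ROCm/HIP masquerades as CUDA, so ported code keeps the CUDA-spelled API.

```python
import torch

# PyTorch's device API is spelled "cuda" even on AMD hardware: the ROCm
# build routes torch.cuda calls through HIP, so "ported" code keeps the
# CUDA-centric names. The lock-in lives in the API surface, not just the
# drivers. (describe_backend is an illustrative helper, not a torch API.)
def describe_backend() -> str:
    if not torch.cuda.is_available():
        return "cpu only"
    # torch.version.hip is set on ROCm builds and None on CUDA builds
    if getattr(torch.version, "hip", None):
        return f"AMD via ROCm/HIP {torch.version.hip}"
    return f"NVIDIA via CUDA {torch.version.cuda}"

if __name__ == "__main__":
    print(describe_backend())
    if torch.cuda.is_available():
        # The same "cuda" device string works on both vendors' builds
        x = torch.randn(1024, 1024, device="cuda")
        print((x @ x).sum().item())
```

So the porting burden sits in the runtime rather than in user code, which is exactly the layer where AMD's support has historically lagged.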