This is purely an outsider's opinion, so take that caveat into account and please don't get offended if I'm totally wrong.

Sometimes it's really difficult to tell the difference between a crackpot theorist and a good equities analyst. Watching this guy's video, it's hard to tell which he comes off as more.

He's just coherent and convincing enough that as an outsider I could believe his collection of observations. But the presentation is also just disorganized enough, just fixated enough on certain details, and just scattered enough that I can't tell whether these factors actually matter.

Of course I'm sure an expert in the field could tell instantly.

I've seen enough interviews with former engineers (usually retired guys) from whatever company who know about one very specific fault in a product/tool/etc. and can claim convincingly, up and down, that it will cause catastrophic failure, a fundamental strategy misstep, and so on. But will it really be a major factor in the overall outcome?

He proceeds from analysis of the smallest die-area detail to the opposite end of the spectrum, the problem with MBAs + Marketing, within one minute. With typos. Also, the guy doesn't have a long history of posting analysis of this type, and he's honest about his info being 8 years old. I'm not sure what his motivation for making the video is. So it's hard to put too much weight on it, as if he were a Ming-Chi Kuo predicting Apple's next iPhone form factor or something like that.

I would love to know what industry people here think, and then I might study the video / findings with more attention.

Francois is a smart guy, but there's a range of opinions on the matters he raises. We argue a good deal on Twitter ("constructive confrontation" between former Intel Principal Engineers :-) ). He put together the video pretty quickly so don't dismiss it just for lack of polish.

Personally, I argue with him a lot on AVX-512. I think AVX-512 is a Good Thing (or will be shortly - the first instantiation in Skylake Server - SKX - isn't great).

The biggest meta-problem with AVX-512 is that due to process issues, the pause button got hit just as it appeared in a server-only, 'first cut' form. AVX2 had downclocking issues when it first appeared too - but these were rapidly mitigated and forgotten.

I personally feel that SIMD is underutilized in general purpose workloads (partly due to Intel's fecklessness in promoting SIMD - including gratuitous fragmentation - e.g. a new Comet Lake machine with Pentium branding doesn't even support AVX2; it's fused off). Daniel Lemire and I built simdjson (https://github.com/lemire/simdjson) to show the power of SIMD in a non-traditional domain, and a lot of the benefits of Hyperscan (https://github.com/intel/hyperscan) were also due to SIMD. Notably, both use SIMD in a range of ways that aren't necessarily all that amenable to the whole idea of "just do it all on an accelerator/GPU" - choppy mixes of SIMD and GPR-based computing, as in the sketch below. Not everything looks like matrix multiplication, or AI (but I repeat myself :-) ).
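To make the "choppy mix" concrete, here's a minimal sketch - not actual simdjson or Hyperscan code; the JSON snippet, the SSE2 intrinsics, and the GCC/Clang __builtin_ctz are just illustrative assumptions. A SIMD compare finds every '"' in a 16-byte chunk, then ordinary GPR bit tricks walk the resulting mask:

    #include <immintrin.h>
    #include <cstdint>
    #include <cstdio>

    // Sketch of the SIMD/GPR "choppy mix": locate every '"' in a 16-byte
    // chunk the way a JSON scanner might. Requires SSE2; build with
    // GCC/Clang (uses __builtin_ctz).
    int main() {
        const char chunk[17] = "{\"key\": \"value\"}";
        __m128i bytes = _mm_loadu_si128(reinterpret_cast<const __m128i*>(chunk));

        // SIMD side: compare all 16 bytes against '"' at once.
        __m128i quotes = _mm_cmpeq_epi8(bytes, _mm_set1_epi8('"'));

        // Hand off to GPR land: one bit per byte position.
        uint32_t mask = static_cast<uint32_t>(_mm_movemask_epi8(quotes));

        // GPR side: walk the set bits with count-trailing-zeros.
        while (mask != 0) {
            int pos = __builtin_ctz(mask);   // index of lowest set bit
            printf("quote at offset %d\n", pos);
            mask &= mask - 1;                // clear lowest set bit
        }
        return 0;
    }

The point is the handoff: the wide compare happens in vector registers, but the interesting control flow runs on a 16-bit mask in a GPR.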

AVX-512 is a combination of fascinating and frustrating. I enthuse about recent developments on my blog (https://branchfree.org/2019/05/29/why-ice-lake-is-important-...) so I won't repeat why it's so interesting, but the learning curve is steep and there are plenty of frustrations and hidden traps. Intel collectively tends to be more interested in Magic Fixes (autovec) and Special Libraries for Important Benchmarks than in building first-rate material to improve the general level of expertise with SIMD programming (one of my perpetual complaints).

You more or less have to teach yourself and feel your way through the process of getting good at SIMD. That being said, the benefits are often huge - not only because SIMD is fast, but because you can do fundamentally different things that you can't do quickly in GPR ("General Purpose Register") land (look, for example, at everyone's favorite instruction PSHUFB - you can't do the equivalent on a GPR without going to memory).
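For instance, here's a minimal sketch of the classic PSHUFB trick via the _mm_shuffle_epi8 intrinsic (SSSE3 required; the nibble-popcount table is just an illustrative example, not taken from either library): a 16-entry lookup table lives entirely in a register, and all 16 bytes are translated through it in one instruction.

    #include <immintrin.h>
    #include <cstdint>
    #include <cstdio>

    // Per-byte popcount of 16 bytes at once, using PSHUFB as an in-register
    // 16-entry lookup table. Build with -mssse3 (or -march=native).
    int main() {
        // Table: popcount of each 4-bit nibble value 0..15.
        const __m128i nibble_popcount = _mm_setr_epi8(
            0, 1, 1, 2, 1, 2, 2, 3, 1, 2, 2, 3, 2, 3, 3, 4);
        const __m128i low_nibble_mask = _mm_set1_epi8(0x0f);

        uint8_t data[16] = {0x00, 0x01, 0x03, 0x07, 0x0f, 0x1f, 0x3f, 0x7f,
                            0xff, 0x10, 0x22, 0x44, 0x88, 0xa5, 0x5a, 0xf0};
        __m128i v = _mm_loadu_si128(reinterpret_cast<const __m128i*>(data));

        // Split each byte into low and high nibbles, look both up, add.
        __m128i lo = _mm_and_si128(v, low_nibble_mask);
        __m128i hi = _mm_and_si128(_mm_srli_epi16(v, 4), low_nibble_mask);
        __m128i counts = _mm_add_epi8(_mm_shuffle_epi8(nibble_popcount, lo),
                                      _mm_shuffle_epi8(nibble_popcount, hi));

        uint8_t out[16];
        _mm_storeu_si128(reinterpret_cast<__m128i*>(out), counts);
        for (int i = 0; i < 16; i++) printf("%02x -> %d\n", data[i], out[i]);
        return 0;
    }

Doing the same per-byte table lookup with GPRs means either a memory-resident table or a chain of shifts and masks per byte - which is exactly the "fundamentally different things" point above.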