"Implementing the GPU is the hard part."

"It will take Asahi Linux years to get graphical acceleration."

"Maybe they'll have graphics figured out by 2024 if they are lucky."

Doubters gonna doubt. OpenGL ES 2 for Asahi Linux on the M1, 94% conformance, just one year later.

I think that's pretty consistent? If it takes one year for OpenGL ES 2 coverage, not even optimization, it will probably take at least two more years for properly optimized Vulkan/SPIR-V/OpenGL 4 support, along with accelerated video encode and decode.

They aren't implementing OpenGL ES 2 per se; they're figuring out the GPU, the graphics driver interfaces, and so on, and OpenGL ES 2 then works as more features become available.

If they have coverage of the GPU, then Vulkan and so on will work for the same features ES 2 exercises. Besides, you don't need support for every exotic feature to run Linux with hardware acceleration.

I'd say usable desktop hardware acceleration requires accelerated rendering in the browser and video decode acceleration. These tend to require pretty exotic features and still aren't implemented in a few PC graphics drivers.

OpenGL ES 2.0 is pretty simple in comparison, and you won't have to deal with undocumented features and so on.
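For a sense of scale, the whole ES 2.0 programmable pipeline boils down to compiling two small shaders, linking them, and issuing draw calls. A minimal sketch of the shader setup, using plain GLES2 API calls and assuming an EGL context has already been created and made current (nothing Asahi-specific here):

```c
/* Minimal OpenGL ES 2.0 shader setup -- illustrative sketch only.
 * Assumes an EGL context is already current. */
#include <GLES2/gl2.h>
#include <stdio.h>

static GLuint compile_shader(GLenum type, const char *src)
{
    GLuint shader = glCreateShader(type);
    glShaderSource(shader, 1, &src, NULL);
    glCompileShader(shader);

    GLint ok = 0;
    glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
    if (!ok) {
        char log[512];
        glGetShaderInfoLog(shader, sizeof(log), NULL, log);
        fprintf(stderr, "shader compile failed: %s\n", log);
    }
    return shader;
}

GLuint build_program(void)
{
    const char *vs =
        "attribute vec4 a_position;\n"
        "void main() { gl_Position = a_position; }\n";
    const char *fs =
        "precision mediump float;\n"
        "void main() { gl_FragColor = vec4(1.0, 0.5, 0.2, 1.0); }\n";

    GLuint prog = glCreateProgram();
    glAttachShader(prog, compile_shader(GL_VERTEX_SHADER, vs));
    glAttachShader(prog, compile_shader(GL_FRAGMENT_SHADER, fs));
    glLinkProgram(prog);
    return prog;
}
```

That's essentially the whole shading model a driver has to get right for ES 2.0: vertex and fragment stages only, no geometry, tessellation, or compute.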

It's also not as simple as coverage: you also need good performance, which means you need a good optimizing compiler for the GPU architecture, and that's not obvious either. Unless they are using Apple's drivers? I don't know that they are. From reading her website it seems they are not.

> It's also not as simple as coverage: you also need good performance, which means you need a good optimizing compiler for the GPU architecture, and that's not obvious either. Unless they are using Apple's drivers? I don't know that they are.

She's been writing progress reports as she goes. This one is from back in May.

>I’ve begun a Gallium driver for the M1, implementing much of the OpenGL 2.1 and ES 2.0 specifications. With the compiler and driver together, we’re now able to run OpenGL workloads like glxgears and scenes from glmark2 on the M1 with an open source stack. We are passing about 75% of the OpenGL ES 2.0 tests in the drawElements Quality Program used to establish Khronos conformance. To top it off, the compiler and driver are now upstreamed in Mesa!

>Gallium is a driver framework inside Mesa. It splits drivers into frontends, like OpenGL and OpenCL, and backends, like Intel and AMD. In between, Gallium has a common caching system for graphics and compute state, reducing the CPU overhead of every Gallium driver. The code sharing, central to Gallium’s design, allows high-performance drivers to be written at a low cost. For us, that means we can focus on writing a Gallium backend for Apple’s GPU and pick up OpenGL and OpenCL support “for free”.

https://rosenzweig.io/blog/asahi-gpu-part-4.html
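To make the "for free" part concrete: a Gallium backend is essentially a table of hooks that Mesa's shared frontends call into. Here's a rough sketch of that shape; the agx_* names are illustrative rather than the actual Asahi driver symbols, and the real interfaces live under Mesa's src/gallium/include/pipe/ headers (this only builds inside the Mesa tree):

```c
/* Shape of a Gallium pipe driver ("backend") -- illustrative sketch, not the
 * actual Asahi driver code. Builds only within the Mesa source tree. */
#include <stdlib.h>
#include "pipe/p_screen.h"
#include "pipe/p_context.h"

struct agx_screen {
    struct pipe_screen base;  /* table of capability/resource hooks */
    int fd;                   /* handle to the kernel DRM device */
};

static const char *
agx_get_name(struct pipe_screen *screen)
{
    (void)screen;
    return "Apple M1 (AGX)";
}

static struct pipe_context *
agx_context_create(struct pipe_screen *screen, void *priv, unsigned flags)
{
    /* A real driver allocates a pipe_context here and fills in hooks such as
     * draw_vbo, clear and flush; Gallium's OpenGL/OpenCL frontends translate
     * API calls into those hooks. */
    (void)screen; (void)priv; (void)flags;
    return NULL; /* omitted in this sketch */
}

struct pipe_screen *
agx_screen_create(int fd)
{
    struct agx_screen *s = calloc(1, sizeof(*s));
    s->fd = fd;
    s->base.get_name = agx_get_name;
    s->base.context_create = agx_context_create;
    /* ... plus get_param, is_format_supported, resource_create, etc. ... */
    return &s->base;
}
```

Everything above the backend (GLSL parsing, GL state tracking, the common state caching the post mentions) is shared Mesa code, which is a big part of why a small team can get this far this quickly.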

Yes. Simply covering these features will provide basic support for many use cases. That doesn't, however, mean that performance will automatically be sufficient. It also remains to be seen whether the more complex feature set of OpenGL 3.1 is as straightforward to cover efficiently.

I'm not saying it won't happen. I'm just saying that we shouldn't underestimate how much work there is.

Even with the help of Mesa and years of effort, the nouveau backend for NVIDIA cards is still barely satisfactory for day-to-day tasks; its OpenGL performance is very poor even for basic applications. It's really not as simple as just coverage in practice.

Nouveau doesn't count as a good reference, because starting with the GTX 900 series NVIDIA locked reclocking behind NVIDIA-signed firmware.

This means that if you run any unsigned driver (i.e. any open-source driver such as Nouveau) on those cards, the chip stays at its slowest performance tier, and the firmware won't let the clocks be raised. Only the signed NVIDIA driver can change the GPU speed, and being able to reclock is basically mandatory for a driver to be useful.

So don't blame Nouveau for being behind; NVIDIA has made it so that open-source drivers are almost useless on that hardware. And in that case, why bother improving Nouveau when the performance is going to be terrible anyway?
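For what it's worth, the lockout is visible from userspace: nouveau lists the card's performance levels through a debugfs file, but on these cards switching to the higher levels fails because only signed firmware is allowed to reclock. A rough sketch that just dumps that file (assumes nouveau's pstate debugfs entry, root access, and a path/format that can vary between kernel versions):

```c
/* Print the performance levels nouveau reports for the first GPU.
 * Assumes nouveau's debugfs "pstate" file (root required; the exact path and
 * output format can vary between kernel versions). On a card locked to
 * signed firmware, the higher levels are listed but can't be engaged. */
#include <stdio.h>

int main(void)
{
    const char *path = "/sys/kernel/debug/dri/0/pstate";
    FILE *f = fopen(path, "r");
    if (!f) {
        perror(path);
        return 1;
    }

    char line[256];
    while (fgets(line, sizeof(line), f))
        fputs(line, stdout);  /* e.g. "0f: core 1126 MHz memory 3505 MHz" */

    fclose(f);
    return 0;
}
```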

Wow, why would they go out of their way to do that? Even with my most cynical hat on, I can’t think of how this is in their self-interest.

Oh, is it to ensure nerfing of FP64 performance for their consumer cards? Is that done at the driver level?

> Is that done at the driver level?

No, the FP64 units aren't physically present in high numbers on the non-xx100 dies.

However, some limitations are enforced purely by the driver and its firmware:

- GPU virtualisation (see: https://github.com/DualCoder/vgpu_unlock)

- NVENC video encoder limited to 2 simultaneous streams on consumer cards

- Lite Hash Rate (LHR) enforcement to make GPUs less attractive to miners