I suspect this article may even be underestimating the impact of WebGPU. I'll make two observations.

First, for AI and machine learning workloads, the infrastructure situation is a big mess right now unless you buy into the Nvidia / CUDA ecosystem. If you're a researcher, you pretty much have to, but increasingly people will just want to run models that have already been trained. Fairly soon, WebGPU will be an alternative that more or less Just Works, although I do expect things to be rough in the early days. There's also a performance gap, but I can see it closing.

Second, for compute shaders in general (potentially accelerating a large variety of tasks), the barrier to entry falls dramatically. That's especially true for web deployments, where running your own compute shader costs somewhere around 100 lines of code (see the sketch below). It becomes practical on native too, especially in Rust, where you can just pull in a wgpu dependency.
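To make the "~100 lines" figure concrete, here is a minimal sketch of a native compute dispatch with wgpu. It's my own example, not from the article, written against the wgpu 0.19-era API (details shift between releases) and assuming the `bytemuck` crate for byte casting; it doubles every element of a buffer on the GPU.

```rust
use wgpu::util::DeviceExt; // for create_buffer_init

// WGSL compute shader: double every element of a storage buffer.
const SHADER: &str = r#"
@group(0) @binding(0) var<storage, read_write> data: array<f32>;

@compute @workgroup_size(64)
fn main(@builtin(global_invocation_id) id: vec3<u32>) {
    if (id.x < arrayLength(&data)) {
        data[id.x] = data[id.x] * 2.0;
    }
}
"#;

pub async fn double_on_gpu(input: &[f32]) -> Vec<f32> {
    // Boilerplate: instance -> adapter -> device/queue.
    let instance = wgpu::Instance::default();
    let adapter = instance
        .request_adapter(&wgpu::RequestAdapterOptions::default())
        .await
        .expect("no GPU adapter");
    let (device, queue) = adapter
        .request_device(&wgpu::DeviceDescriptor::default(), None)
        .await
        .expect("failed to get device");

    // Upload the input; STORAGE lets the shader write it, COPY_SRC lets us read it back.
    let buffer = device.create_buffer_init(&wgpu::util::BufferInitDescriptor {
        label: Some("data"),
        contents: bytemuck::cast_slice(input),
        usage: wgpu::BufferUsages::STORAGE | wgpu::BufferUsages::COPY_SRC,
    });
    let readback = device.create_buffer(&wgpu::BufferDescriptor {
        label: Some("readback"),
        size: buffer.size(),
        usage: wgpu::BufferUsages::MAP_READ | wgpu::BufferUsages::COPY_DST,
        mapped_at_creation: false,
    });

    let module = device.create_shader_module(wgpu::ShaderModuleDescriptor {
        label: None,
        source: wgpu::ShaderSource::Wgsl(SHADER.into()),
    });
    let pipeline = device.create_compute_pipeline(&wgpu::ComputePipelineDescriptor {
        label: None,
        layout: None, // infer the bind group layout from the shader
        module: &module,
        entry_point: "main",
    });
    let bind_group = device.create_bind_group(&wgpu::BindGroupDescriptor {
        label: None,
        layout: &pipeline.get_bind_group_layout(0),
        entries: &[wgpu::BindGroupEntry {
            binding: 0,
            resource: buffer.as_entire_binding(),
        }],
    });

    // Record one compute pass, then copy the result into the mappable buffer.
    let mut encoder = device.create_command_encoder(&wgpu::CommandEncoderDescriptor::default());
    {
        let mut pass = encoder.begin_compute_pass(&wgpu::ComputePassDescriptor::default());
        pass.set_pipeline(&pipeline);
        pass.set_bind_group(0, &bind_group, &[]);
        pass.dispatch_workgroups((input.len() as u32).div_ceil(64), 1, 1);
    }
    encoder.copy_buffer_to_buffer(&buffer, 0, &readback, 0, readback.size());
    queue.submit([encoder.finish()]);

    // Map the readback buffer and copy the results back to the CPU.
    let slice = readback.slice(..);
    slice.map_async(wgpu::MapMode::Read, |r| r.unwrap());
    device.poll(wgpu::Maintain::Wait);
    bytemuck::cast_slice(&slice.get_mapped_range()).to_vec()
}
```

From a native binary you'd drive this with something like `pollster::block_on(double_on_gpu(&data))`; the same wgpu code also compiles to WebAssembly and runs on the browser's WebGPU implementation.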

As for text being one of the missing pieces, I'm hoping Vello and supporting infrastructure will become one of the things people routinely reach for. That'll get you not just text but nice 2D vector graphics with fills, strokes, gradients, blend modes, and so on. It's not production-ready yet, but I'm excited about the roadmap.

[Note: very lightly adapted from a comment at cohost; one interesting response was by Tom Forsyth, suggesting I look into SYCL]

This is the discussion I hoped to find when clicking on the comments.

> Fairly soon, WebGPU will be an alternative...

So while the blog focused on the graphical utility of WebGPU, the underlying point is that websites/apps can now interface with the GPU in a more direct and advantageous way to render graphics.

But what you're suggesting is that, in the future, new functionality will likely be added to take advantage of the GPU in other ways, such as training ML models and then running them via an inference engine, all powered by your local GPU?

Is the reason you can't accomplish that today because APIs haven't been created or opened up to allow such workloads? Are there not lower-level APIs available/exposed today in WebGPU that would allow developers to begin designing browser-based ML frameworks/libraries?

Was it possible to interact with the GPU before WebGPU via WebAssembly?

Other than ML and graphics/games (and someone is probably going to mention crypto), are there any other potentially novel uses for WebGPU?

> GPU in other ways, such as training ML models and then using them via an inference engine all powered by your local GPU?

Have a look at wonnx: https://github.com/webonnx/wonnx

A WebGPU-accelerated ONNX inference run-time written 100% in Rust, ready for native and the web
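For flavor, here's a rough sketch of what inference with wonnx looks like, adapted from the pattern in its README; the model path and the input name "x" are placeholders, and the exact API may have shifted since, so check the repo.

```rust
use std::collections::HashMap;

async fn infer() -> Result<(), Box<dyn std::error::Error>> {
    // Load an ONNX model; wonnx compiles the graph to WGSL compute shaders.
    let session = wonnx::Session::from_path("model.onnx").await?;

    // "x" is a placeholder; the key must match the model's input name.
    let input: Vec<f32> = vec![0.0; 3 * 224 * 224];
    let mut inputs = HashMap::new();
    inputs.insert("x".to_string(), input.as_slice().into());

    // Runs on wgpu natively, or on WebGPU when compiled to WebAssembly.
    let outputs = session.run(&inputs).await?;
    println!("outputs: {:?}", outputs.keys());
    Ok(())
}
```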