What does HackerNews think of plaidml?

PlaidML is a framework for making deep learning work everywhere.

Language: C++

It understands my Swedish attempts at English really well with the medium.en model. (Although it gives me a funny warning: `UserWarning: medium.en is an English-only model but received 'English'; using English instead.` I guess it doesn't want to be told to use English when that's all it can do.)

However, it runs very slowly. It uses the CPU on my MacBook, presumably because it hasn't got an NVIDIA card.

Googling about that, I found [plaidML](https://github.com/plaidml/plaidml), a project promising to run ML on many different GPU architectures. Does anyone know whether it is possible to plug them together somehow? I am not an ML researcher and don't understand much about the technical details of the domain, but I can read and write Python code in domains I do understand, so I could do some glue work if required.

Deep Learning could in theory run on this using Intel PlaidML's Keras implementation in OpenCL: https://github.com/plaidml/plaidml

Check out PlaidML (from, of all companies, Intel) to get AMD GPUs working with the Keras APIs for TensorFlow:

https://github.com/plaidml/plaidml
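The way PlaidML's Keras integration works, per the project's README, is that you point Keras at PlaidML's backend module before Keras is first imported. A minimal sketch, assuming the `plaidml-keras` package is installed (the model lines are a hypothetical illustration):

```python
import os

# Tell Keras to use PlaidML instead of TensorFlow. This must be set
# before the first `import keras`, because Keras chooses its backend
# at import time.
os.environ["KERAS_BACKEND"] = "plaidml.keras.backend"

# From here on, ordinary Keras code would dispatch through PlaidML's
# OpenCL kernels, e.g.:
# import keras
# model = keras.Sequential([keras.layers.Dense(10, activation="softmax")])
```

PlaidML also ships a `plaidml-setup` command to pick which OpenCL device to use, so the same Keras script can target an AMD or Intel GPU without code changes.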

There is PlaidML[1] - a cross-platform deep learning framework that works, among others, on Raspberry Pi's GPU.

[1] - https://github.com/plaidml/plaidml

Edit: they don't support RPI GPU yet - https://github.com/plaidml/plaidml/issues/141

> Is TF running on AMD Vega's?

Not with the mainline kernel (which supports up to Polaris), but it already got merged into 4.18, which is going to be released in a couple of weeks. Until then it requires a bit of extra effort: https://github.com/RadeonOpenCompute/ROCm#dkms-driver-instal...

> Keras?

Over OpenCL, with https://github.com/plaidml/plaidml

We're working to provide more choice with PlaidML as well. You can use it to run Keras, ONNX, and TF models on AMD, Intel, and mobile/embedded GPUs, on most operating systems, via OpenCL, LLVM, OpenGL, and soon CUDA and Metal, etc. We would welcome contributions for TensorFlow integration, ROCm support, and anything else that widens support to more people.

https://github.com/plaidml/plaidml

As far as I know, Tile/PlaidML (Vertex.AI) is the only DSL+compiler that's usable for real workloads across a variety of hardware. https://github.com/plaidml/plaidml

No it won't. Especially if the price is competitive. Say for the price of one 1080 Ti, if I can buy 1.5 units of a graphics card with comparable performance, I'll surely buy it. There are already resources being spent on OpenCL-based ML/DL platforms (https://github.com/plaidml/plaidml). The architectures keep getting bigger and training time keeps getting longer. I think you underestimate this factor. I need as much GPU computing power as I can buy within the budget.
With regard to Deep Learning, a recently released Keras backend[0] actually supports OpenCL, with direct validation on a few AMD cards. I've been playing around with it for the past week or so and have had no issues so far, though I'm admittedly not doing anything too serious.

0. https://github.com/plaidml/plaidml

I would give the framework developers some credit: they shipped, and the field made a lot of progress on their tools. Getting good performance out of GPUs (CUDA or OpenCL) is quite difficult, cuDNN has historically been the only good low-level deep learning library, and NVIDIA makes great hardware. My company is building a fully open source low-level framework called PlaidML that brings OpenCL support and other benefits to the existing frameworks. We're starting with Keras; the first code was posted last Friday: https://github.com/plaidml/plaidml