What does HackerNews think of HIP?

HIP: C++ Heterogeneous-Compute Interface for Portability

Language: C++

ROCm has HIP (1), which is a compatibility layer for running CUDA code on AMD GPUs. In theory you only have to adjust the #includes and everything should just work, but as usual, reality is different.
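In practice the port is mostly mechanical renaming (the hipify tools automate it): the CUDA runtime calls get `hip` prefixes and the header changes, while kernel code and launch syntax stay the same. A minimal sketch of what HIP-ported code looks like (requires hipcc/ROCm, or an NVIDIA backend, to actually build):

```cpp
// Minimal HIP vector add. Compared to the CUDA original, only the
// header and the hip* prefixes on the runtime API differ.
#include <hip/hip_runtime.h>   // was: #include <cuda_runtime.h>

__global__ void vadd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // identical kernel code
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1024;
    float *a, *b, *c;
    hipMalloc(&a, n * sizeof(float));   // was: cudaMalloc
    hipMalloc(&b, n * sizeof(float));
    hipMalloc(&c, n * sizeof(float));
    vadd<<<(n + 255) / 256, 256>>>(a, b, c, n);  // same launch syntax
    hipDeviceSynchronize();             // was: cudaDeviceSynchronize
    hipFree(a); hipFree(b); hipFree(c);
}
```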

Newer backends for AI frameworks like OpenXLA and OpenAI Triton generate GPU-native code directly via MLIR and LLVM; they do not use CUDA apart from some glue code to actually load the code onto the GPU and get the data there. Both already support ROCm, but from what I've read the support is not yet as mature as it is on NVIDIA.

1: https://github.com/ROCm-Developer-Tools/HIP

AMD's equivalent is HIP [1], for sufficiently flexible definitions of "equivalent". I can't speak to how complete/correct/performant it is (I'm just a guy running tutorial/toy-level ML stuff on an RDNA1 card), but part of AMD's problem is that it might not practically matter how well they do this because the broader ecosystem support specifically for the CUDA stack is so entrenched.

[1] https://github.com/ROCm-Developer-Tools/HIP

> does not run on the GPU

Both CUDA and the Metal Shading Language are C++; so is OpenCL since 2.0 (https://www.khronos.org/opencl/), so is AMD ROCm's HIP (https://github.com/ROCm-Developer-Tools/HIP), and so is SYCL (https://www.khronos.org/sycl/). C++ is pretty much the language that runs the most on GPUs.
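SYCL is perhaps the clearest illustration of the point, since kernels there are plain C++ lambdas in the same source file as the host code. A sketch of the same vector add in SYCL 2020 style (needs a SYCL implementation such as DPC++ or AdaptiveCpp to build):

```cpp
// Single-source SYCL: host and device code are both standard C++.
#include <sycl/sycl.hpp>
#include <vector>

int main() {
    std::vector<float> a(1024, 1.0f), b(1024, 2.0f), c(1024);
    sycl::queue q;  // selects a default device (a GPU if one is available)
    {
        sycl::buffer A(a), B(b), C(c);
        q.submit([&](sycl::handler& h) {
            sycl::accessor ra(A, h, sycl::read_only);
            sycl::accessor rb(B, h, sycl::read_only);
            sycl::accessor wc(C, h, sycl::write_only);
            // The kernel is an ordinary C++ lambda.
            h.parallel_for(1024, [=](sycl::id<1> i) { wc[i] = ra[i] + rb[i]; });
        });
    }  // buffer destructors copy the results back into the host vectors
}
```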

> no vector instructions,

There are a thousand different possibilities for SIMD in C++, from #pragma omp simd to libraries such as std::experimental::simd (https://en.cppreference.com/w/cpp/experimental/simd/simd), Eve (https://github.com/jfalcou/eve), Highway (https://github.com/google/highway), Vc (https://github.com/VcDevel/Vc)...