> This is in part because of the work by Google on the NVPTX LLVM back-end.

I'm one of the maintainers of the LLVM NVPTX backend at Google. Happy to answer questions about it.

As background, Nvidia's CUDA ("CUDA C++"?) compiler, nvcc, uses a fork of LLVM as its backend. Clang can also compile CUDA code, using regular upstream LLVM as its backend. The relevant backend in LLVM was originally contributed by Nvidia, but these days the team I'm on at Google is the main contributor.
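To make the distinction concrete, both toolchains can build the same CUDA source. A hedged sketch of the two invocations (the `saxpy.cu` filename, `sm_70` arch, and CUDA install path are assumptions; clang's CUDA support needs a local CUDA toolkit):

```shell
# NVIDIA's compiler: lowers device code through its private LLVM fork.
nvcc -arch=sm_70 -o saxpy saxpy.cu

# Clang: same source, lowered through upstream LLVM's NVPTX backend.
clang++ --cuda-gpu-arch=sm_70 saxpy.cu -o saxpy \
    -lcudart -L/usr/local/cuda/lib64

# To inspect the PTX clang generates, compile the device side only:
clang++ --cuda-gpu-arch=sm_70 --cuda-device-only -S saxpy.cu -o saxpy.ptx
```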

I don't know much (okay, anything) about Julia except what I read in this blog post, but the dynamic specialization looks a lot like XLA, a JIT backend for TensorFlow that I work on. So that's cool; I'm happy to see this work.

> Full debug information is not supported by the LLVM NVPTX back-end yet, so cuda-gdb will not work yet.

We'd love help with this. :)

> Bounds-checked arrays are not supported yet, due to a bug [1] in the NVIDIA PTX compiler. [0]

We ran into what appears to be the same issue [2] about a year and a half ago. Nvidia is well aware of it, but I don't expect a fix short of upgrading to Volta hardware.

[0] https://julialang.org/blog/2017/03/cudanative
[1] https://github.com/JuliaGPU/CUDAnative.jl/issues/4
[2] https://bugs.llvm.org/show_bug.cgi?id=27738

Does this mean we could hook Cython up to NVPTX as the backend?

I've always thought it weird that I'm writing all my code in this language that compiles to C++, with semantics for every type declaration, etc. And then I write chunks of code in strings, like an animal.

IDK about Cython, but I remember a blog post using Python's AST reflection to JIT through LLVM -> NVPTX -> PTX. It's relatively simple to do; I've done it for LDC/D/DCompute [1][2][3]. It's a little trickier if you want to be able to express shared memory, surfaces & textures, but it should still be doable.

[1] https://github.com/ldc-developers/ldc
[2] https://dlang.org
[3] http://github.com/libmir/dcompute
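The reflection step itself is small in pure Python; a minimal sketch of just that first stage (the lowering to LLVM IR and then PTX is the real work and is omitted here; `saxpy` is a hypothetical kernel, not from the post):

```python
import ast
import inspect
import textwrap

def saxpy(a, x, y):
    # Hypothetical elementwise kernel body to be retargeted at the GPU.
    return a * x + y

# Recover the function's AST from its own source; a JIT would walk this
# tree and emit LLVM IR for the NVPTX backend to turn into PTX.
source = textwrap.dedent(inspect.getsource(saxpy))
tree = ast.parse(source)
func = tree.body[0]

print(func.name)                      # the kernel's name
print(ast.dump(func.body[-1].value))  # the expression a JIT would lower
```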