We were very heavy numba users at my former company. I would even go so far as to say numba was probably the biggest computational enabler for the product. I’ve also made a small contribution to the library.

It’s a phenomenal library for developing novel computationally intensive algorithms on numpy arrays. It’s also more versatile than Jax.

In presentations, I’ve heard Leland McInnes credit numba often when he speaks about his development of UMAP. We built a very computationally intensive portion of our application with it, and it has been running stably in production for several years now.

It’s not suitable for all use cases. But I recommend trying it if you need to do somewhat complex calculations iterating over numpy arrays for which standard numpy or scipy functions don’t exist. Even when such functions did exist, we were often surprised that we could speed up some of those calculations by moving them inside numba.
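To give a sense of the pattern, here is a minimal toy sketch (not from the project above; rolling_mean is just an illustrative name) of the kind of plain loop over a numpy array that numba's @njit handles well:

    import numpy as np
    from numba import njit

    @njit
    def rolling_mean(x, window):
        # plain Python loop over a numpy array; numba compiles it to machine code
        out = np.empty(x.shape[0] - window + 1)
        acc = 0.0
        for i in range(window):
            acc += x[i]
        out[0] = acc / window
        for i in range(window, x.shape[0]):
            acc += x[i] - x[i - window]
            out[i - window + 1] = acc / window
        return out

    x = np.random.rand(1_000_000)
    result = rolling_mean(x, 50)  # first call includes JIT compilation time

The loop body is ordinary Python, which is what makes this approach pleasant for novel algorithms that don't map cleanly onto existing vectorised numpy/scipy calls.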

Edit: an example of a very small function I wrote with numba that speeds up an existing numpy function (note: written years ago, and numba has changed quite a bit since!): https://github.com/grej/pure_numba_alias_sampling

Disclosure - I now work for Anaconda, the company that sponsors the numba project.

> It’s also more versatile than Jax

Does numba do automatic differentiation?

I view JAX as primarily an automatic differentiation tool, with the bonus that it makes great use of XLA and can easily make use of GPUs/TPUs.

I don’t usually see numba and JAX as solving the same problem, but I would be excited to be wrong.

They solve different problems.

Numba compiles functions down to machine code or CUDA kernels; that's it. XLA is "higher level" than what Numba produces.
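As a rough illustration of that distinction (a hedged toy sketch, not from any project mentioned here; add_cpu/add_gpu are made-up names, and the GPU variant assumes a CUDA-capable GPU with the CUDA toolkit installed), the same elementwise addition can target the CPU via @njit or a GPU via @cuda.jit:

    import numpy as np
    from numba import njit, cuda

    @njit
    def add_cpu(x, y, out):
        # compiled to native machine code via LLVM
        for i in range(x.shape[0]):
            out[i] = x[i] + y[i]

    @cuda.jit
    def add_gpu(x, y, out):
        # compiled to a CUDA kernel; one thread per element
        i = cuda.grid(1)
        if i < x.size:
            out[i] = x[i] + y[i]

    x = np.random.rand(1_000_000)
    y = np.random.rand(1_000_000)
    out = np.empty_like(x)

    add_cpu(x, y, out)

    threads = 256
    blocks = (x.size + threads - 1) // threads
    add_gpu[blocks, threads](x, y, out)  # numba copies the arrays to/from the device

In both cases you get a compiled kernel and nothing more; there is no tracing, graph-level optimisation, or transformation layer on top, which is where XLA sits.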

You may be able to get the equivalent of jax via numba+numpy+autograd[1], but I haven't tried it before.

IMHO, jax is best thought of as a numerical computation library that happens to include autograd, vmapping, and pmapping, and provides a high-level interface for XLA.

I have built a numerical optimisation library with it, and although a few things became verbose, it was a rather pleasant experience: the natural vmapping made everything a breeze, and I didn't have to write the gradients for my test functions, except for a few special cases involving exponents and logs that needed a bit of delicate care.
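For a flavour of what that looks like (a toy sketch under my own assumptions, not code from that library; rosenbrock, the step size, and the batch of starting points are purely illustrative), grad gives the gradient of a test function for free and vmap maps a gradient-descent step over a batch of starting points:

    import jax
    import jax.numpy as jnp

    def rosenbrock(p):
        x, y = p
        return (1.0 - x) ** 2 + 100.0 * (y - x ** 2) ** 2

    grad_f = jax.grad(rosenbrock)  # no hand-written gradient needed

    def gd_step(p, lr=1e-3):
        return p - lr * grad_f(p)

    # vmap runs the same step over a whole batch of candidate points at once;
    # jit compiles the batched step via XLA
    batched_step = jax.jit(jax.vmap(gd_step))

    points = jnp.array([[-1.0, 1.0],
                        [ 0.5, 0.5],
                        [ 2.0, 2.0]])
    for _ in range(1000):
        points = batched_step(points)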

[1] https://github.com/HIPS/autograd