What does HackerNews think of Octavian.jl?
Multi-threaded BLAS-like library that provides pure Julia matrix multiplication
That's not true. None of the Julia differential equation solver stack calls into Fortran anymore. We have our own BLAS-like tools that outperform OpenBLAS and MKL in the instances we use them for (mostly LU factorization), and those are all written in pure Julia. See https://github.com/YingboMa/RecursiveFactorization.jl, https://github.com/JuliaSIMD/TriangularSolve.jl, and https://github.com/JuliaLinearAlgebra/Octavian.jl. This is one part of the DiffEq performance story, and all of it is validated on https://github.com/SciML/SciMLBenchmarks.jl
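For a sense of what the pure-Julia LU path looks like in practice, here is a minimal sketch using RecursiveFactorization.jl from the links above (its `lu!` mirrors `LinearAlgebra.lu!` and returns the same `LU` factorization object, so the usual `\` solve works on it):

```julia
using LinearAlgebra, RecursiveFactorization

A = rand(256, 256)
b = rand(256)

# Pure-Julia LU factorization (no OpenBLAS/MKL call); copy(A) because
# lu! factorizes in place.
F = RecursiveFactorization.lu!(copy(A))
x = F \ b

@assert A * x ≈ b
```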
Unless you're using [Octavian.jl](https://github.com/JuliaLinearAlgebra/Octavian.jl) or the like for your linear algebra in place of BLAS. But yes, it is always interesting how many people don't realize how much of modern software is built on Fortran!
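A minimal sketch of what "in place of BLAS" means here, using Octavian's exported `matmul` and `matmul!` (pure Julia, multithreaded):

```julia
using Octavian

A = rand(200, 300)
B = rand(300, 150)

C = Octavian.matmul(A, B)     # allocating matrix multiply, no BLAS involved
@assert C ≈ A * B             # agrees with the BLAS-backed result

C2 = similar(C)
Octavian.matmul!(C2, A, B)    # in-place variant, analogous to BLAS gemm!
```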
You could also do it at the LLVM level: https://github.com/JuliaComputingOSS/llvm-cbe
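A rough sketch of that pipeline, assuming llvm-cbe's documented `llvm-cbe input.ll -o output.c` invocation: dump a Julia function's LLVM IR with `code_llvm`, then hand the `.ll` file to the C backend.

```julia
using InteractiveUtils

f(x) = 2x + 1

# Write the full LLVM module for f(::Int) to a file...
open("f.ll", "w") do io
    code_llvm(io, f, Tuple{Int}; raw = true, dump_module = true)
end

# ...then, outside Julia (invocation assumed from the llvm-cbe README):
#   llvm-cbe f.ll -o f.c
```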
One cool use case is https://github.com/JuliaLinearAlgebra/Octavian.jl, which relies on LoopVectorization.jl to do transforms on the Julia AST beyond what LLVM does. Because of that, Octavian.jl, a pure-Julia linear algebra library, beats OpenBLAS on many benchmarks.
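To make that concrete, here is the kind of loop nest LoopVectorization.jl transforms (essentially the matmul example from its README): `@turbo` analyzes the nest and emits SIMD, unrolling, and register tiling that LLVM's autovectorizer typically misses.

```julia
using LoopVectorization

function mygemm!(C, A, B)
    # @turbo rewrites this nest at the AST/macro level; Octavian layers
    # cache tiling and threading on top of microkernels like this.
    @turbo for m in axes(A, 1), n in axes(B, 2)
        Cmn = zero(eltype(C))
        for k in axes(A, 2)
            Cmn += A[m, k] * B[k, n]
        end
        C[m, n] = Cmn
    end
    return C
end
```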
Those seem to be very Julia-specific comparisons, which I'm sure are oriented towards the use cases Julia was designed for. I'm more interested in "neutral" benchmarks and more general-purpose computing areas.
> Furthermore, Julia has been used on some of the world's fastest supercomputers (in the performance-critical bits), which as far as I know isn't true of Swift/Kotlin/C#.
That's more a reflection of culture than performance, though. Back when I worked with a bunch of data scientists, they would happily run Python or R on our Spark cluster, consuming oodles of resources to do not very much, but they balked at writing Scala, even though their code would have run much faster.