I would highly recommend Jon Harrop's "OCaml for Scientists" if you're interested in learning OCaml. It's a really awesome book. http://www.ffconsultancy.com/products/ocaml_for_scientists/

When I was learning OCaml and bought the book, Jon actually found some code I had posted on my website and sent me advice on how to make it more idiomatic. He even rewrote one of my functions to show me how.

That book is ancient, but probably still pretty relevant. Jon Harrop himself posted on Reddit a while back, when someone asked about the book on the OCaml subreddit, that he hadn't personally used OCaml in a long time. His username starts with jdh, I think. Honestly, though, scientific computing seems to be moving to Julia: really good libraries for numerics, optimization, charting, linear algebra, parallel and distributed computing, etc., all free and open source. Julia Computing was founded to provide enterprise support and services, including auditing and affordable cloud computing. This is the direction I'm moving in. You should check out the case studies on the Julia Computing site.

I work professionally as an applied mathematician and I've struggled to understand where Julia fits into the tools already available.

In terms of prototyping algorithms, MATLAB/Octave still seems to be the best choice. We have access to an enormous number of built-in routines that help diagnose what's going on when an algorithm breaks. That's not to say other languages don't, but the ability to set a single dbstop command and then run some kind of diagnostic, like plotting the distribution of eigenvalues with d=eig(A); plot(real(d),imag(d),'x'), is amazing and saves time. There's also a very straightforward workflow to run the debugger and then drop into gdb in case we need to interact with external libraries.

Now, certainly, MATLAB/Octave is weak, in my opinion, for building larger software projects that need to interact with the rest of the world. This includes things like network connections, GUIs, database access, etc. Alternatively, sometimes a new low-level driver needs to be written, which needs to be very fast. All the same, the ecosystem for that seems to be much better in languages like C++ and Python, though I've been experimenting with Rust as an alternative to C++ for this use case. At that point, if I have trouble with the algorithms, I can run the code in parallel with the MATLAB/Octave version to diagnose how things differ.

Coming back to Julia, where is it supposed to fit in? To me, there's already a better prototyping language, a better production language, and a better bare-metal-fast language.

I will make one last point, and that's licensing. Frankly, the big advantage of MATLAB is that it provides license cover. MathWorks has obtained the appropriate licenses for your expensive factorizations and sparse matrix methods. Julia has not, and those components remain largely under the GPL:

https://github.com/JuliaLang/julia/blob/master/LICENSE.md

Look at things like SuiteSparse. Practically, what that means is that I can deliver MATLAB code to my clients without having to disclose the source externally due to GPL requirements. Now, maybe they choose to run Octave. That's fine, and then they assume the responsibility for the GPL code. For me, though, maintaining a MATLAB license gives me coverage for a whole host of other licenses in the context of MATLAB code, and that makes my life vastly easier than if I were to develop and deliver code in another language.

The beauty of Julia is in its design as a language. This is what gives people hope that the tooling and ecosystem will emerge -- they all seem easier to develop in Julia than in other languages, thanks to multiple dispatch, the type system, and powerful metaprogramming abilities.

As far as fast bare metal languages go, you can write extremely general optimized matrix multiplication libraries in Julia. For matrices that fit in the L2 cache, I achieved performance similar to Intel MKL, but with far more generic code.

https://discourse.julialang.org/t/we-can-write-an-optimized-...

Writing a kernel in C means using SIMD intrinsics, where function names and types differ for every vector size, and then different numbers/sets of calls for each kernel (you'd want several sizes per architecture). Compare this to Julia, where parametric typing and multiple dispatch mean you only need one set of functions, and using an @generated function will generate whatever kernels you happen to need at compile time.

Looking to the future, awesome projects like Cassette speak to the potential of what's possible:

https://github.com/jrevels/Cassette.jl

Cassette promises to let you do anything from getting automatic differentiation for arbitrary Julia code (even if it is strictly typed, buried in a chain of dependencies of the library you're using, and written by someone who never imagined the idea of autodiff) to injecting custom compiler passes.

Also, while I haven't tried it yet, many seem to like: https://github.com/timholy/Rebugger.jl

Julia is more promise than practice right now, but that's largely because of just how much it promises. I (and many others) think it has done a great job delivering so far. That's why there's excitement, and why many are starting to embrace it.

> Writing a kernel in C means using SIMD intrinsics, where function names and types differ for every vector size, and then different numbers/sets of calls for each kernel (you'd want several sizes per architecture)

This is not true for any recent, competent compiler. I can write a template function in C++ with a regular loop over arrays, and it gets vectorized with SIMD instructions automatically according to the target architecture. Sure, it's not JIT, but it still never requires intrinsics.

Sure you can, but when one weighs the language complexities and the opportunities for unsafe code/UB, Julia certainly wins in productivity.

If you can deploy that runtime, maybe. How well does Julia work when you need to ship a static library?

As far as I know, Julia only allows for dynamic libraries.

https://github.com/JuliaLang/PackageCompiler.jl

Which, for me, is good enough; I don't remember the last time I cared about producing a .a/.lib file for delivery.