So I don't doubt that "modern Fortran" is a nice language, but last I'd heard it also effectively didn't exist in the wild; Fortran remains king of numerical computation largely (if not exclusively) by virtue of having a vast number of packages already written, spanning decades of use... but not, generally, extending to new versions of the language. So is the modern language good enough to be worth using in spite of its ecosystem?

No.

I write Fortran professionally, and it is a real pain in the dick to get anything done. It has some nice syntactic sugar for SIMD and matrix operations, but that is about it.

If you want to get anything useful done, you are better off writing in another language.

The Fortran ecosystem is running on fumes, carried by the inertia of massive legacy code bases that no one wants to pay to rewrite. Yet the same people then complain that it is expensive to get anything done in them.

This surprises me. I wrote a lot of Fortran doing research and found it very strong for numerics. Having first-class multidimensional arrays, with debuggers that understand slicing operators, is a nice boost to productivity, because half the time is often spent figuring out why two results don't agree. The main missing feature was generic functions, and that's where I think Julia brings some real advantages to the table. But SIMD, multicore, MPI and GPU support is, afaik, still more mature in Fortran, and that's where it counts when programming cluster applications.

I agree with you that Fortran is running on more than just legacy here. At the same time, I also think Julia has caught up a lot on SIMD, multicore, MPI and GPU support.

For SIMD, Chris Elrod's LoopVectorization.jl [1] is (IMHO) an incredibly impressive piece of work (which incidentally provides the foundation for what I think is the first pure-Julia linear algebra library competitive with BLAS).
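To give a flavor, here's a minimal sketch of its @turbo macro applied to a simple reduction loop (the function name and array sizes here are just illustrative):

    using LoopVectorization

    # Sum of elementwise products; @turbo SIMD-vectorizes the loop.
    function dot_turbo(a::Vector{Float64}, b::Vector{Float64})
        s = 0.0
        @turbo for i in eachindex(a, b)
            s += a[i] * b[i]
        end
        return s
    end

    a = rand(10_000); b = rand(10_000)
    dot_turbo(a, b)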

Multithreading is pretty easy with things like `@spawn`/`@sync` and `@threads for` in the base language, as well as super low-overhead multithreading from the Polyester.jl [2] package (which LoopVectorization also uses to provide a version of its vectorization macro that'll multithread your loops in addition to SIMD-vectorizing them).
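For example (assuming Julia was started with several threads, e.g. `julia -t 4`; if I remember right, Polyester's threaded-loop macro is `@batch`):

    using Base.Threads: @threads, @spawn
    using Polyester: @batch

    xs = rand(1_000_000)
    out = similar(xs)

    # Base-language threaded loop: iterations split across threads.
    @threads for i in eachindex(xs)
        out[i] = sin(xs[i])
    end

    # Task-based parallelism: spawn two halves and combine the results.
    t1 = @spawn sum(@view xs[1:500_000])
    t2 = @spawn sum(@view xs[500_001:end])
    total = fetch(t1) + fetch(t2)

    # Polyester's lower-overhead threaded loop.
    @batch for i in eachindex(xs)
        out[i] = sin(xs[i])
    end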

MPI.jl [3] has been totally problem free for me, though I wouldn't be surprised if the Fortran bindings still have an edge somewhere, and Cuda.jl [4] seems to provide pretty seamless GPU support which should play nicely with MPI.jl's Cuda-aware MPI [5], but I don't work as much with GPUs myself.
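For reference, MPI.jl mirrors the standard MPI calls pretty closely; a minimal sketch (launched with something like `mpiexec -n 4 julia script.jl`) would look roughly like:

    using MPI

    MPI.Init()
    comm = MPI.COMM_WORLD
    rank = MPI.Comm_rank(comm)
    nprocs = MPI.Comm_size(comm)

    println("Hello from rank $rank of $nprocs")

    # Every rank contributes its rank number; Allreduce gives each rank the sum.
    total = MPI.Allreduce(rank, +, comm)
    rank == 0 && println("Sum of ranks: $total")

    # With CUDA.jl and a CUDA-aware MPI build, CuArrays can (as I understand it)
    # be passed to MPI calls directly, e.g. MPI.Allreduce!(CUDA.rand(1024), +, comm).

    MPI.Finalize()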

[1] https://github.com/JuliaSIMD/LoopVectorization.jl

[2] https://github.com/JuliaSIMD/Polyester.jl

[3] https://github.com/JuliaParallel/MPI.jl

[4] https://github.com/JuliaGPU/CUDA.jl

[5] https://juliaparallel.github.io/MPI.jl/latest/usage/#CUDA-aw...