Ignoring the presence of "(*, *)" in the Fortran program -- which is the equivalent of all of the format string and type complexities in the C, Python, and Julia programs -- is a serious deficiency of the argument here. As soon as you start writing something more complicated than Hello World: The Sequel, those complexities become something that language learners need to engage with.

Well, ignoring the presence of "(*, *)" in Fortran is exactly what one should do.
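(For the curious: "(*, *)" is Fortran's list-directed I/O. The first * means "the default unit", the second means "let the compiler pick a sensible format". A minimal sketch of the contrast with an explicit format string, in standard Fortran; the program name is mine:

    program io_demo
      implicit none
      real :: x = 3.14159

      ! List-directed: unit and format both left to the processor.
      write (*, *) 'x =', x

      ! Explicit format: field widths and types must match the I/O list,
      ! which is where the C/Python-style format-string complexity lives.
      write (*, '(a, f8.5)') 'x = ', x
    end program io_demo

You only reach for the second form when you need precise control over the output.)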

Recall that FORTRAN stands for FORmula TRANslator.

FORTRAN is built for math. It makes math expressions, particularly linear algebra, easy to write and very, very fast to run. There are much better tools for text processing (perl, python, pandas, and everything in between). But for raw math, there's Fortran.
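To see what "easy to write" means: whole arrays are first-class in the language, so basic linear algebra reads like the formula. A minimal sketch in standard Fortran 90+ (the names are mine):

    program linalg_demo
      implicit none
      real :: a(3,3), b(3,3), c(3,3), v(3)

      call random_number(a)   ! fill with pseudo-random values
      call random_number(b)

      c = matmul(a, b)            ! matrix product, no loops
      v = 2.0 * a(:,1) + b(:,2)   ! whole-array arithmetic on columns
      print *, sum(v), maxval(c)
    end program linalg_demo

The compiler is free to vectorize and reorder whole-array expressions like these, which is where the "very, very fast" part comes from.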

That's why so much of numerical computing is still powered by Fortran.

Yes, from Matlab to your fancy neural network stack, numpy, scipy, scikit-learn, etc., the actual computational core, BLAS/LAPACK, is Fortran-based.
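To make "Fortran-based" concrete: when numpy or Matlab multiplies two matrices, the work ultimately lands in a BLAS routine with a Fortran calling convention, such as dgemm. A minimal sketch of calling it directly (assumes linking against some BLAS, e.g. -lblas; the matrices are made up):

    program gemm_demo
      implicit none
      integer, parameter :: n = 2
      double precision :: a(n,n), b(n,n), c(n,n)

      a = reshape([1d0, 3d0, 2d0, 4d0], [n, n])   ! column-major fill
      b = reshape([5d0, 7d0, 6d0, 8d0], [n, n])
      c = 0d0

      ! C := alpha*A*B + beta*C -- the classic BLAS level-3 routine.
      call dgemm('N', 'N', n, n, n, 1d0, a, n, b, n, 0d0, c, n)
      print *, c
    end program gemm_demo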

That's why Matlab feels like Fortran. And that's why numpy's semantics differ from Python's: numpy is heavily inspired by Matlab and, like Matlab, wraps Fortran code in a friendly manner.

But in the end, it's FORTRAN all the way down. Even in Julia.

(And that's how I ended up fixing a minor bug in Google's FORTRAN sparse SVD package, PROPACK [1], in 2019. Well, PROPACK was written before Google existed, but they hired the author for a reason. The bug was only in the way an error message was printed, which displeased the internal memory-leak checker; in other words, FORTRAN text I/O was the issue. Stay away from that, and it's the best tool for the job.)

[1] http://sun.stanford.edu/~rmunk/PROPACK/

> But in the end, it's FORTRAN all the way down. Even in Julia.

That's not true. None of the Julia differential equation solver stack calls into Fortran anymore. We have our own BLAS tools that outperform OpenBLAS and MKL in the instances we use them for (mostly LU factorization), and those are all written in pure Julia. See https://github.com/YingboMa/RecursiveFactorization.jl, https://github.com/JuliaSIMD/TriangularSolve.jl, and https://github.com/JuliaLinearAlgebra/Octavian.jl. And this is only one part of the DiffEq performance story. The performance of all of this is, of course, validated at https://github.com/SciML/SciMLBenchmarks.jl.