I thought the same thing for a long time; I found s-expressions very elegant. Then I discovered the lambda calculus, with its scary syntax but built on a process I could understand: text rewriting. So I had the idea of exploring the lambda calculus using s-expressions, and patiently rebuilt booleans, pairs, lists, recursion, and beyond that a real, Turing-complete language. For instance:

- http://lambdaway.free.fr/lambdawalks

- http://lambdaway.free.fr/lambdawalks/?view=lambdacode5

No need for cons, car, cdr, and other historical names. We can rebuild everything from scratch, from abstractions and applications. No need for Lisp, Scheme, Common Lisp, Clojure, or the rest, even if they remain excellent examples. So in my opinion the Maxwell's equations of computation are to be found in the lambda calculus rather than in LISP and its dialects. We just need to make the syntax a little more human.

For the reasons you give, it is perhaps more reasonable to say that Lisp is a Maxwell's Equations of software. But there are several. Forth has another reasonable claim to it, despite unpopularity, and Turing Machines have a claim too, despite their extreme unpopularity as a programming methodology. I don't know array languages well enough to know but I bet they have one too.

There isn't really a unique set of Maxwell's equations in software.

i wasn't able to get a runnable forth to less than a couple of pages written in itself https://github.com/kragen/stoneknifeforth but schönfinkel's ski-combinators are maybe the simplest practical basis

    s f g x → f x (g x)
    k a b   → a
    i = s k k (one of many possible definitions)
maybe wolfram knows of a simpler alternative
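for concreteness, here's a throwaway sketch of those three rewrite rules in python, with applications encoded as nested 2-tuples (the representation and helper names are my own, not anything standard):

```python
# minimal ski-combinator reducer: a term is either one of the strings
# 's', 'k', 'i', a free variable name, or a 2-tuple (f, x) meaning
# "apply f to x".

def reduce_ski(term, fuel=1000):
    """rewrite until normal form (or give up after `fuel` steps)."""
    for _ in range(fuel):
        nxt = step(term)
        if nxt is None:
            return term
        term = nxt
    raise RuntimeError("no normal form within fuel limit")

def step(term):
    """perform one rewrite step, or return None if already normal."""
    # flatten the application spine f x1 x2 ... to find a head redex
    spine, args = term, []
    while isinstance(spine, tuple):
        spine, arg = spine
        args.insert(0, arg)
    if spine == 'i' and len(args) >= 1:       # i x -> x
        return rebuild(args[0], args[1:])
    if spine == 'k' and len(args) >= 2:       # k a b -> a
        return rebuild(args[0], args[2:])
    if spine == 's' and len(args) >= 3:       # s f g x -> f x (g x)
        f, g, x = args[0], args[1], args[2]
        return rebuild(((f, x), (g, x)), args[3:])
    # no head redex: try to reduce inside an argument
    for idx, a in enumerate(args):
        r = step(a)
        if r is not None:
            return rebuild(spine, args[:idx] + [r] + args[idx + 1:])
    return None

def rebuild(head, rest):
    for a in rest:
        head = (head, a)
    return head

# i defined as s k k really does behave like the identity:
skk = (('s', 'k'), 'k')
print(reduce_ski((skk, 'x')))   # -> 'x'
```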

my favorite is abadi and cardelli's object-calculus, on which i based bicicleta. it has two reduction rules rather than the λ-calculus's one. using a hybrid of bicicleta's syntax and abadi and cardelli's notation:

    {f = ς(v)b, g = ς(v)c, ...}{f = ς(v)d} → {f = ς(v)d, g = ς(v)c, ...}
    {f = ς(v)b, g = ς(v)c, ...}.f          → b[{f = ς(v)b, g = ς(v)c, ...}/v]
the first of these derives an inherited object with a new definition for method f. the second one invokes method f on an object, which is evaluated by replacing its self-variable v with the object itself throughout its body, using the standard β-reduction semantics with α-renaming that we're familiar with from the λ-calculus (b[x/y] means b but with x replacing y)
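a dict-based python sketch of those two rules, using python closures for method bodies so that "substitute the object for v" is just a function call (the helper names are mine):

```python
# an object is a dict from method names to one-argument functions of
# "self"; a closure plays the role of ς(v)b, so invoking a method
# performs the substitution b[obj/v] for free.

def override(obj, **methods):
    """rule 1: derive an inherited object with some methods redefined."""
    return {**obj, **methods}

def invoke(obj, name):
    """rule 2: invoke a method, passing the whole object as its self."""
    return obj[name](obj)

point = {'x':   lambda self: 3,
         'dbl': lambda self: 2 * invoke(self, 'x')}
print(invoke(point, 'dbl'))                  # 6
moved = override(point, x=lambda self: 10)
print(invoke(moved, 'dbl'))                  # 20: dbl sees the overridden x
```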

despite the simplicity of the semantics the ς-calculus is enormously more usable for actually writing down functions than the λ-calculus. here's factorial(10) in the notation above (untested)

    {fac = ς(env){
        n = ς(_)3,
        return = ς(o)
            (o.n < 2).if_true{
                then = ς(_)1,
                else = ς(_)o.n * env.fac{n = ς(_)o.n - 1}.return
            }.return
        }
    }.fac{n = ς(_)10}.return
bicicleta has a lot of syntactic sugar which reduces this to (tested)

    {env: fac = {fac:
        arg1 = 3
        '()' = (fac.arg1 < 2).if_true(
            then = 1
            else = fac.arg1 * env.fac(fac.arg1 - 1)
        )
    }}.fac(10)
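as a sanity check, the raw ς-term can be transcribed into plain python, with objects as dicts of one-argument closures, an `override` helper for derivation, and an `invoke` helper that passes the object as its own self (my encoding, not bicicleta's implementation):

```python
def override(obj, **methods):
    return {**obj, **methods}          # derive with redefined methods

def invoke(obj, name):
    return obj[name](obj)              # pass the object as its own self

fac_obj = {
    'fac': lambda env: {
        'n': lambda _self: 3,          # default argument, like arg1 = 3
        'return': lambda o:
            1 if invoke(o, 'n') < 2
            else invoke(o, 'n') * invoke(
                override(invoke(env, 'fac'),
                         n=lambda _self: invoke(o, 'n') - 1),
                'return'),
    }
}

print(invoke(override(invoke(fac_obj, 'fac'),
                      n=lambda _self: 10),
             'return'))                # 3628800
```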
in particular, to be able to define infix operators inside the language, bicicleta rewrites

    x * y
as

    x.'*'{arg1 = y}.'()'
and analogously for -, <, etc.
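in the same dict encoding, the desugared form is literally an override followed by a '()' invocation. a toy number object whose '*' method is itself an object with an arg1 slot (again my own encoding, and with arg1 holding a raw number rather than another object, for brevity):

```python
def override(obj, **methods):
    return {**obj, **methods}

def invoke(obj, name):
    return obj[name](obj)

def num(v):
    return {
        'value': lambda self: v,
        '*': lambda self: {
            'arg1': lambda _self: 1,   # placeholder, overridden at each use
            '()':  lambda op: num(v * invoke(op, 'arg1')),
        },
    }

# x * y  desugars to  x.'*'{arg1 = y}.'()'
x = num(6)
product = invoke(override(invoke(x, '*'),
                          arg1=lambda _self: 7),
                 '()')
print(invoke(product, 'value'))        # 42
```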

they published a bunch of papers and a book on this but they were more interested in static typing than anything else. a paper on one imperative variation of the thing is http://lucacardelli.name/Papers/PrimObjImpSIPL.A4.pdf

you can implement a turing machine in a few lines of c, making it internally simpler than the other alternatives, but as you point out it's unusable except as a compilation target or to prove theorems about
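for a sense of scale, here's roughly that in python: the whole machine is about a dozen lines, but programming it, here with a binary-increment table of my own devising mapping (state, symbol) to (symbol, move, state), is already painful:

```python
# minimal one-tape turing machine: halts when no rule applies.

def run_tm(prog, tape, state='start', pos=0, max_steps=10_000):
    tape = dict(enumerate(tape))       # sparse tape; blank cell = '_'
    for _ in range(max_steps):
        key = (state, tape.get(pos, '_'))
        if key not in prog:            # no rule: halt
            break
        sym, move, state = prog[key]
        tape[pos] = sym
        pos += move
    return ''.join(tape[i] for i in sorted(tape)).strip('_')

# binary increment: scan right to the end, then carry back leftward
inc = {
    ('start', '0'): ('0', +1, 'start'),
    ('start', '1'): ('1', +1, 'start'),
    ('start', '_'): ('_', -1, 'carry'),
    ('carry', '1'): ('0', -1, 'carry'),
    ('carry', '0'): ('1', -1, 'done'),
    ('carry', '_'): ('1', -1, 'done'),
}
print(run_tm(inc, '1011'))   # -> '1100'  (11 + 1 = 12)
```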