I think this is a wonderful idea and one of the best ways to structure software, but I want to point out that the functional core is only one way to achieve the benefits. If you're in an environment where that's not possible, due to performance constraints or whatever, you can still get most of the benefits by focusing on the state-transformation part.

The real key to this approach is that your state is not spread around the code, changing all over the place, but is instead a big State -> State function built out of many smaller referentially transparent functions. (I.e. the complete opposite of OO programming, which IMO is a generally terrible idea that only works well in Smalltalk or in distributed systems.)
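A minimal Python sketch of that "one big State -> State function" shape. The state fields and update functions here are invented for illustration; the point is that each step is pure and the whole-program update is just their composition:

```python
from functools import reduce

# Each small step is a pure function: it reads the old state and
# returns a new one, never mutating anything in place.
def tick_clock(state):
    return {**state, "time": state["time"] + 1}

def move_player(state):
    return {**state, "x": state["x"] + state["vx"]}

def step(state, fns=(tick_clock, move_player)):
    # The big State -> State function is just the composition of the
    # small referentially transparent ones.
    return reduce(lambda s, f: f(s), fns, state)

initial = {"time": 0, "x": 0.0, "vx": 2.0}
next_state = step(initial)
# `initial` is untouched; `next_state` is a fresh value.
```

Because every step has the same `State -> State` shape, adding a subsystem is just appending another function to the pipeline.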

You can achieve a similar outcome in a procedural setting with a "double buffered" state. Your procedural function writes out a new state based on the previous state, and the "imperative shell" only ever touches "completed" states. Then it flips them: the current state becomes the previous state, and the previous state becomes the buffer into which the new state is written.
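A sketch of that double-buffered loop, with Python dicts standing in for the two preallocated state buffers (the field names are made up). The update function only reads `prev` and writes `out`, so it allocates nothing per frame:

```python
def update(prev, out):
    # Procedural core: read the previous state, write the next state
    # into the preallocated buffer. No allocation, no shared mutation.
    out["time"] = prev["time"] + 1
    out["x"] = prev["x"] + prev["vx"]
    out["vx"] = prev["vx"]

current = {"time": 0, "x": 0.0, "vx": 2.0}
buffer = dict(current)  # second buffer, same shape

for _ in range(3):
    update(current, buffer)
    # The shell only ever sees completed states; flip the buffers so
    # the freshly written state becomes current and the old current
    # becomes scratch space for the next frame.
    current, buffer = buffer, current
```

In C or C++ these would be two fixed structs (or arenas) and the flip would be a pointer swap, which is where the memory-control benefit comes from.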

It's less convenient, but if you need tight control over memory allocation or performance, it can be beneficial.

The issue is that while this greatly simplifies the program logic, as the application scales you end up with a lot of redundant computation. So in the end you need to do one of the following:

1. take a performance hit

2. somehow track/update costly intermediaries in your global state, which can balloon out of control really fast

3. pair it with some memoization mechanism/framework that automatically manages "derived state" variables

Yup, and I strongly suggest option 1: just take the perf loss. You can probably detect in each of your state-update functions that nothing changed and just reuse the previous result (a memcpy is pretty fast, or if that data is in a separate allocation, just copy the pointer).
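A sketch of that "detect nothing changed, reuse the previous result" trick inside one update function. The field names (`obstacles`, `paths`) and the helper `expensive_recompute` are invented for illustration:

```python
def expensive_recompute(obstacles):
    # Stand-in for some costly derived computation (e.g. pathfinding).
    return sorted(obstacles)

def update_paths(prev_state, new_state):
    # If the inputs this step depends on didn't change, reuse the old
    # result. In Python that's rebinding a reference; in C it might be
    # a memcpy, or just copying a pointer if the data lives in its own
    # allocation.
    if prev_state["obstacles"] == new_state["obstacles"]:
        new_state["paths"] = prev_state["paths"]
    else:
        new_state["paths"] = expensive_recompute(new_state["obstacles"])

prev = {"obstacles": [3, 1, 2], "paths": [1, 2, 3]}
nxt = {"obstacles": [3, 1, 2]}  # obstacles unchanged this frame
update_paths(prev, nxt)
# nxt["paths"] is the same object as prev["paths"]: no recompute.
```

The equality check itself has a cost, of course, but for most subsystems comparing inputs is far cheaper than recomputing outputs.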

The same is true of the functional core idea, really: you're still going to have a big tree of function calls rebuilding things unless you also add some memoization to it. React and friends do it because DOM manipulation is very slow.

But think of IMGUI-style GUIs: they rebuild the entire user interface every frame and yet are often more efficient than many retained-mode UIs. Redundant computation is not as bad as it sounds at first glance.
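A toy illustration of the immediate-mode idea (the real thing, e.g. Dear ImGui, is C++ and actually draws; here the "UI" is just a list of strings, and all names are invented). The whole interface is rebuilt from current state every frame, with no retained widget tree and no diffing:

```python
def build_ui(state):
    # Rebuild the entire "UI" from scratch, every frame, from state.
    ui = [f"Score: {state['score']}"]
    if state["paused"]:
        ui.append("[Resume]")
    else:
        ui.append("[Pause]")
    return ui

state = {"score": 10, "paused": False}
frame1 = build_ui(state)
state["score"] += 5
frame2 = build_ui(state)  # fully rebuilt, reflects the new state
```

There's nothing to invalidate or patch: the frame is always a pure function of the current state, which is exactly why the redundant work often stays cheap.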

I asked around, and there is another option: something called "incremental computation". Here is a Clojure library; I think the examples kinda demo how it works:

https://github.com/hoplon/javelin

It's sort of a version of option 3.
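This is not Javelin's actual API (that's Clojure/ClojureScript), but here's a tiny Python sketch of the spreadsheet-cell idea behind it: "formula" cells recompute automatically when the input cells they depend on change, so derived state stays consistent without you updating it by hand:

```python
class Cell:
    """A value cell, or a formula cell derived from other cells."""

    def __init__(self, value=None, formula=None, inputs=()):
        self.formula, self.inputs = formula, list(inputs)
        self.dependents = []
        for c in self.inputs:
            c.dependents.append(self)  # register for change propagation
        if formula is None:
            self.value = value
        else:
            self.value = formula(*[c.value for c in self.inputs])

    def set(self, value):
        # Only input cells are set directly; derived cells follow.
        self.value = value
        for d in self.dependents:
            d._recompute()

    def _recompute(self):
        self.value = self.formula(*[c.value for c in self.inputs])
        for d in self.dependents:
            d._recompute()

a = Cell(1)
b = Cell(2)
total = Cell(formula=lambda x, y: x + y, inputs=(a, b))
# total.value == 3
a.set(10)
# total.value == 12, recomputed automatically
```

A real incremental framework adds the hard parts this sketch skips (glitch-free update ordering, dynamic dependencies, not recomputing unchanged branches), which is what makes it a managed version of option 3 rather than hand-tracked intermediaries (option 2).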