I've changed the top URL to the video, in the hope of reducing the tedious complaints about webpage formatting that the transcript was generating.
The transcript is here: https://jackrusher.com/strange-loop-2022/. Please let's talk about the ideas now.
I'm glad I happened across the item before you made the change! It's the best transcript of a talk I've ever seen.
I think the ideas are very interesting. I don't agree with his condemnation of Docker and single-threaded programming, but he's certainly right about the value of being able to kill threads in Erlang, and about the importance of being able to fix things that are broken, and about our computers cosplaying as PDP-11s (and the consequent teletype fetishism).
I hadn't made the connection between Sussman's propagators and VisiCalc before. I mean I don't think Bricklin and Frankston were exposed to Sussman, were they? They were business students? But if not, it's certainly a hell of a coincidence.
My defense of single-threaded code and aborting is that the simplest way we've found so far to write highly concurrent systems is with transactions. A transaction executes and makes some changes to mutable state based on, ideally, a snapshot of mutable state, and if it has any error in the middle, none of those changes happen. So it executes from beginning to end, starting from a blank (internal) state, and runs through to termination, unless halted by a failure, just like the "dead programs" Rusher is complaining about. You put a lot of these transactions together, executing concurrently, and you have a magnificent live system, and one that's much easier to reason about than RCU stuff or even Erlang stuff. This is what the STM he praises in Clojure is doing, and it's also how OLTP systems have been built for half a century. Its biggest problem is that it has a huge impedance mismatch with the rest of the software world.
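(A toy sketch of that all-or-nothing shape, in Forth since Forth comes up below; CATCH and THROW are standard exception words, and everything else here is invented for illustration. The staged changes become visible only if the body runs to completion:)

    variable balance   100 balance !   \ the durable state
    variable staged                    \ scratch copy the transaction mutates

    : begin-tx ( -- )   balance @ staged ! ;
    : withdraw ( n -- ) staged @ over u< if 1 throw then  negate staged +! ;
    : commit   ( -- )   staged @ balance ! ;
    : transact ( xt -- )
      begin-tx  catch if ." aborted" else commit then ;

    :noname 30 withdraw 50 withdraw ; transact  \ commits: balance is now 20
    :noname 999 withdraw ; transact             \ throws: balance still 20

A single-threaded toy, obviously; the real work in an OLTP system or an STM is the concurrency control wrapped around exactly this shape.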
I've said before that to get anything done you need some Dijkstra and some Alan Kay. If you don't have any Dijkstra in you, you'll thrash around making changes that empirically seem to work, your progress will be slow, and your code will be too buggy to use for anything crucial. If you don't have any Alan Kay in you, you'll never actually put any code into a computer, and so you won't get anything done either except to prove theorems. Alan Kay always had a fair bit of Dijkstra in him, and Dijkstra had some Kay in him in his early years before he gave up programming.
Ideologically, Rusher is way over on the Kay end of the spectrum, but he may not be aware of the degree to which the inner Dijkstra he developed in his keypunch days allows him to get away with that. The number of programmers who are ridiculously unproductive with Forth (i.e., almost all of us) is some kind of evidence of that.
Interestingly he doesn't talk about observability at all, and I suspect that observability may be a more useful kind of liveness for today's systems than setting breakpoints and inspecting variables, even with Clouseau.
Data Rabbit, Maria.cloud, Hazel, livelits, and Clerk sound really interesting.
I think it's unfortunate that you switched the URL; even for people without hearing impairment, transcripts are far preferable to videos, and this is a really excellent transcript. With a couple of screenshots, it would be better than the video in almost every way, though a few of the demos would lose something. (The demos start at 14'40".) The sort of people who were making worthless comments because they were confronted with a webpage formatted in an unfamiliar way won't suddenly start making insightful comments because there's a video link; they won't make any comments at all. So it's a mistake to cater to them and damage the experience for people who might have something to contribute. Video links make for shallow conversations.
Interesting thought on Forth. I'm also unproductive in it, but I think that's no fault of the language. I simply haven't had the time to build a true Forth to solve a problem. I usually have some data, some transformations, and maybe some API calls to make to an application and a database. Not really a good use for Forth, at least not time-wise.
I got to being about 25% as productive in Forth as in C once I learned to stop trying to use the stack for local variables. Maybe with enough practice I might get to being as productive as in C, or even more so. I doubt I'd get to being as productive as in Python, which I think is about 3× as productive as C for me.
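For example (a toy of my own, not from Yossi's post), here's a second-degree polynomial in Horner form, first with stack juggling and then with plain variables:

    \ Stack version: (a*x + b)*x + c, juggled by hand.
    : poly ( a b c x -- y )  >r  rot r@ *  rot +  r> *  + ;

    \ Variable version: the same formula, readable at a glance.
    variable a  variable b  variable c  variable x
    : poly' ( a b c x -- y )
      x ! c ! b ! a !
      a @ x @ *  b @ +  x @ *  c @ + ;

The second one reads like the formula. Plain global variables like this are fine in most Forth code; you only pay for real stack-allocated locals when a word has to be reentrant.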
I think that if I were, say, trying to get some piece of buggy hardware working, so that most of the complexity of my system was poking various I/O ports and memory locations to see what happened, Forth would already be more productive than C for me. Similar to what Yossi Kreinin said about Tcl:
https://yosefk.com/blog/i-cant-believe-im-praising-tcl.html
Tcl is also good for that kind of thing, but Tcl is 1.2 megabytes, and Forth is 4 kilobytes. You can run Forth on computers that are two orders of magnitude too small for Tcl.
So I think we shouldn't evaluate Forth as a programming language. We should think of it as an embedded operating system. It has a command prompt, multitasking, virtual memory, an inspector for variables (and arbitrary memory locations), and a sort of debugger: at any time, at the command prompt, you can type the name of any line of code to execute it and see what the effect is, so you can sort of step through a program by typing the names of its lines in order. Like Tcl and bash, you can also program in its command-prompt language, and in fact build quite big systems that way, but the language isn't really its strength.
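Concretely, the kind of session I mean (gforth-flavored and illustrative; ? fetches and prints a cell in most Forths):

    variable counter                \ a cell you can inspect at any time
    : tick    1 counter +! ;        \ one "line" of the program
    : report  counter @ . ;         \ another

    tick tick tick                  \ "step through" by typing names in order
    report                          \ prints 3
    counter ?                       \ inspect the variable's memory directly

Nothing there is a debugger feature; it's just the ordinary command prompt, which is the point.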
But there is an awful lot of software out there that doesn't really need much complicated logic: some data, some transformations, and maybe some API calls to make to some motors or sensors (or an application and a database). So it doesn't really matter if you're using a weak language like Tcl or Forth because the program logic isn't the hard part of what you're doing.
And it's in that spirit that Frank Sergeant's "three instruction Forth" isn't a programming language at all; it's a 66-byte monitor program that gives you PEEK, POKE, and CALL.
https://pages.cs.wisc.edu/~bolo/shipyard/3ins4th.html
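The target runs only those 66 bytes; everything else lives on the host. The host side can be a handful of Forth words along these lines (my sketch of the idea, not Sergeant's code: SER-EMIT and SER-KEY stand in for whatever serial primitives you have, and the opcode values and byte order are invented):

    \ Assumed: ser-emit ( c -- ) sends a byte, ser-key ( -- c ) reads one.
    : addr> ( a -- )   dup 8 rshift ser-emit  255 and ser-emit ;
    : peek  ( a -- c ) 1 ser-emit  addr>  ser-key ;   \ read a byte back
    : poke  ( c a -- ) 2 ser-emit  addr>  ser-emit ;  \ write the byte
    : call  ( a -- )   3 ser-emit  addr> ;            \ run code at address

With just those you can probe I/O registers and fire off test routines interactively, which is most of the "liveness" you need when bringing up hardware.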
On the other hand, if the computer you're programming has megabytes of RAM rather than kilobytes, and megabits of bandwidth to your face rather than kilobits, you can probably do better than Forth. You can get more powerful forms of what Rusher is calling "liveness" than Forth's interactive procedure definition and testing at the command prompt and textual inspection of variables and other memory locations on demand; you can plot metrics over time and record performance traces for later evaluation. You can afford infix syntax, array bounds checking (at least most of the time), and dynamic type checking.
I always find this [1] article to be the most trenchant regarding Forth:
> "Forth is about the freedom to change the language, the compiler, the OS or even the hardware design".
> …And the freedom to change the problem.
In my employed work life it has been fairly rare that I could make more than tiny changes to the problem, which makes Forth not very useful to me.
[1] https://yosefk.com/blog/my-history-with-forth-stack-machines...

Yossi's mean_std fixed-point example from that post gives the flavor of the stack juggling involved:
    : mean_std ( sum2 sum inv_len -- mean std )
      \ precise_mean = sum * inv_len;
      tuck u*                                   \ sum2 inv_len precise_mean
      \ mean = precise_mean >> FRAC;
      dup FRAC rshift -rot3                     \ mean sum2 inv_len precise_mean
      \ var = (((unsigned long long)sum2 * inv_len) >> FRAC)
      \       - (precise_mean * precise_mean >> (FRAC*2));
      dup um* nip FRAC 2 * 32 - rshift -rot     \ mean precise_mean^2 sum2 inv_len
      um* 32 FRAC - lshift swap FRAC rshift or  \ mean precise_mean^2 sum*inv_len
      swap - isqrt                              \ mean std
    ;
I've done all these things (except designing the hardware) and I agree that it can be very painful. I did some of them in 02008, for example: https://github.com/kragen/stoneknifeforth

The thing is, though, you can also not do all those things. You can use variables, and they don't even have to be allocated on a stack (unless you're writing a recursive function, which you usually aren't), and all the NIP TUCK ROT goes away, and with it all the Memory Championship tricks. You can test each definition interactively as you write it, and then the fact that the language is absurdly error-prone hardly matters. You can use metaprogramming so that your code is as DRY as a nun's pochola.

You can use the interactivity of Forth to quickly validate your hypotheses about not just your code but also the hardware, in a way you can't do with C. You can do it with GDB, but Forth is a lot faster than GDB scripting; that's not saying much, though, because even Bash is a lot faster than GDB scripting.
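On the metaprogramming point: the classic CREATE ... DOES> pattern is a lot of what keeps Forth code DRY, because you can write words that define whole families of words. A toy illustration of mine (not from stoneknifeforth):

    \ A defining word: each child records a scale factor at definition
    \ time (CREATE ,) and multiplies by it at run time (DOES> @ *).
    : scaled ( n "name" -- )  create ,  does> ( x -- n*x ) @ * ;

    10 scaled tens
    100 scaled hundreds

    7 tens .        \ prints 70
    3 hundreds .    \ prints 300

Each new unit word costs one line instead of one hand-written definition apiece.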
But Yossi was just using Forth as a programming language, like a C without local variables or type checking, not an embedded operating system. And, as I said, that's really not Forth's strength. Bash and Tcl aren't good programming languages, either. If you try to use Tcl as a substitute for C you will also be very sad. But the way they're used, that isn't that important.
I explained a more limited version of this 12 years ago: https://yosefk.com/blog/my-history-with-forth-stack-machines...
So, I don't think Forth is only useful when you have the freedom to change the problem, though programming in any language does become an awful lot easier when you have that freedom.