I appreciate this kind of discussion. It's important to note that C programmers these days can effectively avoid the inherent risks of memory unsafety by deploying solid memory strategies.
For example (self-plug incoming) we're hosting a workshop this summer [0] teaching people to replace a rat's nest of mallocs and frees with an arena-based memory allocator.
These sorts of techniques eliminate the common memory bugs. We don't do C programming the way we did back in the 80's and 90's.
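To make that concrete, here's a minimal sketch of what I mean by an arena -- the names (Arena, arena_alloc) are just illustrative, not our actual workshop code:

    #include <stddef.h>
    #include <stdlib.h>

    // Illustrative arena: one big block, a bump pointer, one free for everything.
    typedef struct {
        char  *base;
        size_t used;
        size_t cap;
    } Arena;

    Arena arena_new(size_t cap) {
        Arena a = { malloc(cap), 0, cap };
        return a;
    }

    void *arena_alloc(Arena *a, size_t n) {
        n = (n + 15) & ~(size_t)15;                  // keep allocations 16-byte aligned
        if (!a->base || a->cap - a->used < n) return NULL;
        void *p = a->base + a->used;
        a->used += n;
        return p;
    }

    void arena_free_all(Arena *a) {                  // one free replaces N individual frees
        free(a->base);
        a->base = NULL;
        a->used = a->cap = 0;
    }

The point is that you stop tracking N individual object lifetimes and track one.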
Arenas were pervasively used in the 80s, 90s, 2000s, ... and are still widely used.
Using malloc and free in the naive way was the exception, not the rule.
They can be the cause of memory safety problems in some cases, as well as a partial solution in others.
This is at odds with my understanding of how C programming was (typically) done. We might also be defining arena usage differently - that's one way I can reconcile our mismatched outlooks.
My first job was at EA working on console games (PS2, GameCube, XBox, no OS or virtual memory on any of them), and while at the time I was too junior to touch the memory allocators themselves, we were definitely not malloc-ing and freeing all the time.
It was more like you load data for the level in one stage, which creates a ton of data structures in many arrays. Then you enter a loop to draw every frame quickly, and you avoid any allocation in that loop. There were many global variables.
---
Wikipedia calls it a region, zone, arena, area, or memory context, and that seems about right:
https://en.wikipedia.org/wiki/Region-based_memory_management
It describes history from 1967 (before C was invented!) and has some good examples from the 1990's: Apache web server ("pools") and Postgres database ("memory contexts").
I also just looked at these codebases:
https://github.com/mit-pdos/xv6-public (based on code from the 70's)
https://github.com/id-Software/DOOM (1997)
I looked at allocproc() in xv6, and it gives you an object from a fixed global array. This is similar to a lot of C code in the 80's and 90's -- it was essentially "kernel code" in that it didn't have an OS underneath it. Embedded systems didn't run on full-fledged OSes.
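The shape is roughly this (a simplified sketch, not the actual xv6 code -- the real allocproc() also takes a lock and uses an enum of process states):

    #include <stddef.h>

    #define NPROC 64

    struct proc {
        int in_use;                    // xv6 uses an enum of states instead
        // ... per-process fields ...
    };

    static struct proc ptable[NPROC];  // fixed global array, no heap at all

    // Hand out the first unused slot, or NULL if the table is full.
    struct proc *alloc_proc(void) {
        for (int i = 0; i < NPROC; i++) {
            if (!ptable[i].in_use) {
                ptable[i].in_use = 1;
                return &ptable[i];
            }
        }
        return NULL;
    }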
DOOM tends to use a lot of what I would call "pools" -- dynamically allocated arrays of objects of a fixed size, and that's basically what I remember from EA.
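By "pool" I mean roughly this shape -- one allocation up front, fixed-size objects handed out and recycled through a free list (the names here are made up for illustration):

    #include <stdlib.h>

    typedef struct Particle {
        float x, y, dx, dy;
        struct Particle *next_free;   // intrusive free list link
    } Particle;

    typedef struct {
        Particle *slots;              // one malloc for the whole pool
        Particle *free_list;          // chain of available slots
    } Pool;

    int pool_init(Pool *p, size_t count) {
        p->slots = malloc(count * sizeof(Particle));
        if (!p->slots) return -1;
        p->free_list = NULL;
        for (size_t i = 0; i < count; i++) {   // thread every slot onto the free list
            p->slots[i].next_free = p->free_list;
            p->free_list = &p->slots[i];
        }
        return 0;
    }

    Particle *pool_get(Pool *p) {              // O(1), no malloc per object
        Particle *obj = p->free_list;
        if (obj) p->free_list = obj->next_free;
        return obj;
    }

    void pool_put(Pool *p, Particle *obj) {    // recycle a slot
        obj->next_free = p->free_list;
        p->free_list = obj;
    }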
Though in g_game.c, there is definitely an arena of size 0x20000 called "demobuffer". It's used with a bump allocator.
---
So I'd say
- malloc / free of individual objects was NEVER what C code looked like (aside from toy code in college)
- arena allocators were used, but global fixed-size arrays and dynamic pools were maybe more common.
- arenas are more or less a wash for memory safety: they help you in some ways, but hurt you in others.
The reason C programmers don't malloc/free all the time is for speed, not memory safety. Arenas are still unsafe.
When you free an arena, you have no guarantee that nothing still points into it.
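Concretely, nothing stops this (a contrived sketch where the "arena" is just one malloc'd block):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        char *arena = malloc(1024);
        if (!arena) return 1;
        char *name = arena;                  // object allocated "in" the arena
        strcpy(name, "player one");

        free(arena);                         // arena freed wholesale...

        printf("%s\n", name);                // ...but `name` still points into it:
        return 0;                            // classic use-after-free, no compiler warning
    }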
Also, something that shouldn't be underestimated is that arena allocators break tools like ASAN, which hook the malloc()/free() interface. This was underscored to me by writing a garbage collector -- the custom allocator "broke" ASAN, and that was actually a problem:
https://www.oilshell.org/blog/2023/01/garbage-collector.html
If you want memory safety in your C code, you should be using dynamically instrumented allocators (ASAN, Valgrind) and good test coverage. Depending on the app, arenas don't necessarily help; they can hurt.
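There is a partial workaround if you keep a custom allocator: ASAN exposes manual poisoning via <sanitizer/asan_interface.h>, so the allocator can tell ASAN which bytes are live. A sketch of the idea (not necessarily what the collector in the post does), if I remember the interface right:

    #include <sanitizer/asan_interface.h>    // ASAN_POISON_MEMORY_REGION et al.
    #include <stddef.h>

    typedef struct { char *base; size_t used, cap; } Arena;

    // Build with -fsanitize=address; the header makes the macros no-ops otherwise.
    void *arena_alloc(Arena *a, size_t n) {
        if (a->cap - a->used < n) return NULL;
        void *p = a->base + a->used;
        a->used += n;
        ASAN_UNPOISON_MEMORY_REGION(p, n);             // these bytes are now legitimately addressable
        return p;
    }

    void arena_reset(Arena *a) {
        ASAN_POISON_MEMORY_REGION(a->base, a->cap);    // stale pointers into the arena now trip ASAN
        a->used = 0;
    }

Wiring this in correctly (and keeping it correct as the allocator evolves) is real work, which is part of why I say arenas aren't free from a safety standpoint.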
An arena is a simple idea -- the question is more whether that usage pattern actually matches your application, and apps evolve over time.