Great article, but I'm curious why automatic reference counting (ARC) and smart pointers never seemed to really catch on outside of Objective-C and Swift:

https://en.wikipedia.org/wiki/Automatic_Reference_Counting

https://en.wikipedia.org/wiki/Smart_pointer

They almost "just work", except for circular references:

https://en.wikipedia.org/wiki/Reference_count#Dealing_with_r...

I'd like to see some real-world studies on what percentage of unclaimed memory is taken up by orphaned circular references, because my gut feeling is that it's below 10%. So that really makes me question why so much work has gone into various garbage collection schemes, nearly all of which suffer from performance problems (or can't be made realtime due to nondeterministic collection costs).

Also I can't prove it, but I think a major source of pain in garbage collection is mutability, which is exacerbated by our fascination with object-oriented programming. I'd like to see a solid comparison of garbage collection overhead between functional and imperative languages.

I feel like if we put the storage overhead of the reference count aside (which becomes less relevant as time goes on), then there should be some mathematical proof for how small the time cost can get with a tracing garbage collector or the cycle collection algorithm of Bacon and Paz.

The reason they are not used is not the circular reference problem, it's performance. Single-threaded ARC is passable, but thread-safe ARC is really, really slow: every retain and release becomes an atomic read-modify-write on a shared counter, which hits all the worst performance pitfalls of modern CPUs (cache-line contention, pipeline stalls on contended atomics).

Performance is really bad if you use Arc for practically everything, as with Swift. If you can deal with the simplest (tree-like) allocation patterns statically as Rust does, and use refcounting only where it's actually needed, you can outperform tracing GC.

> Performance is really bad if you use Arc for practically everything, as with Swift.

Do you have examples where Swift suffers compared to other languages solely because of ARC?

Also, is this something that Apple might theoretically make up for by optimizing their custom CPUs for it, without changing Swift?

Yes: in this benchmark (user-space network drivers implemented in many languages), Swift consistently comes in last:

https://github.com/ixy-languages/ixy-languages

Since the Xerox PARC workstations and the Genera Lisp Machines, hardware-specific instructions for memory management have repeatedly proven worse than eventually doing the same work in software.

How does that compare to SwiftNIO?

https://github.com/apple/swift-nio