Amazing there is still something to find in a sort algorithm.
Especially when targeting realistic machine models, there are a lot of things that are suboptimal about the classical sorting algorithms like quicksort or mergesort. For example, a quicksort with perfect choice of pivot will incur a branch miss with probability 50% for every element. That's not something classical complexity analysis measures, but on actual CPUs, branch misses have quite an impact (something like 10–15 cycles). Shameless plug for something I once built for a lecture to demonstrate this: https://github.com/lorenzhs/quicksort-pivot-imbalance
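To make the branch-miss point concrete, here's a minimal sketch (my own illustration, not from the linked repo) contrasting a textbook Lomuto partition step, whose `if` is unpredictable when the pivot splits 50/50, with a well-known branch-free variant that uses the comparison result as arithmetic:

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Textbook Lomuto partition: the comparison drives a conditional jump.
// With a perfect (median) pivot, this branch is taken ~50% of the time
// in an unpredictable pattern, so it misses roughly every other element.
std::size_t partition_branchy(std::vector<int>& a, int pivot) {
    std::size_t j = 0;
    for (std::size_t i = 0; i < a.size(); ++i) {
        if (a[i] < pivot) {          // data-dependent, hard-to-predict branch
            std::swap(a[j], a[i]);
            ++j;
        }
    }
    return j;
}

// Branch-free variant of the same partition: swap unconditionally and
// advance the boundary by the comparison result (0 or 1). Compilers
// typically emit setcc/cmov here instead of a conditional jump.
std::size_t partition_branchless(std::vector<int>& a, int pivot) {
    std::size_t j = 0;
    for (std::size_t i = 0; i < a.size(); ++i) {
        int x = a[i];                // capture before the swap
        a[i] = a[j];
        a[j] = x;
        j += (x < pivot);            // comparison used as arithmetic
    }
    return j;
}
```

Both return the boundary index: elements before it are `< pivot`, the rest are `>= pivot`. The branchless loop does strictly more data movement, yet often wins on random input precisely because it trades mispredicted jumps for cheap unconditional work.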
For sorting algorithms that take the behaviour of modern CPUs into account, check out ips4o (https://arxiv.org/abs/1705.02257, code: https://github.com/SaschaWitt/ips4o) or for a simpler algorithm that's still much faster than quicksort in most cases, blockquicksort (https://arxiv.org/abs/1604.06697, code: https://github.com/weissan/BlockQuicksort). Note that both papers were published in the last two years :)
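For a taste of blockquicksort's central trick, here's a rough paraphrase (my sketch, not the paper's actual code): rather than branching on each comparison inside the partition loop, it scans a fixed-size block and records the offsets of misplaced elements branchlessly, deferring the actual swaps to a separate pass:

```cpp
#include <array>
#include <cstddef>

constexpr std::size_t kBlock = 64;   // block size; the paper tunes this

// Collect the offsets (within one block) of elements >= pivot.
// Each offset is written unconditionally and only "kept" by advancing
// num when the comparison is true, so the hot loop has no
// data-dependent branch at all.
std::size_t collect_offsets(const int* block, std::size_t n, int pivot,
                            std::array<unsigned char, kBlock>& offsets) {
    std::size_t num = 0;
    for (std::size_t i = 0; i < n; ++i) {
        offsets[num] = static_cast<unsigned char>(i);  // always written
        num += (block[i] >= pivot);                    // kept only if true
    }
    return num;
}
```

The real algorithm runs this from both ends of the array and then pairs up the buffered offsets to do the swaps, but the loop above is where the branch misses disappear.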
Of course these algorithms are much more complex and error-prone to implement and use some additional memory, which may explain why they're not used in standard library implementations of popular languages.
Out of curiosity, what do you read/follow that makes stuff like this discoverable?
I'm a PhD student in algorithmics / algorithm engineering, so I work with a lot of people who do stuff like this, even though my research isn't related to sorting. Super Scalar Sample Sort (which ips4o is based on) was co-authored by my advisor, Peter Sanders, and I re-implemented it a few years ago in modern C++ just for the fun of it. Turns out that was a lot nicer to read than the original code (which isn't public) and a bit faster, too. I put that on GitHub where the blockquicksort authors found it and contacted me and we traded some emails and made some improvements. Sometime later my advisor and two colleagues came up with an idea for in-place super-scalar sample sort, out of which eventually ips4o was born.
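The key idea in Super Scalar Sample Sort's classification step can be sketched in a few lines (my paraphrase under simplifying assumptions, not the original code): the sorted splitters sit in an implicit binary search tree in level order, and each element descends the tree with arithmetic instead of branches, so the CPU can classify several independent elements in flight at once:

```cpp
#include <cstddef>

constexpr std::size_t kBuckets = 4;          // must be a power of two
// tree[1..3] holds the sorted splitters {10, 20, 30} in level order:
// root = median splitter, children = lower/upper splitters.
constexpr int tree[kBuckets] = {0, 20, 10, 30};

// Branch-free descent: at node j, go left (2*j) or right (2*j + 1)
// depending on the comparison, used as a 0/1 value. After log2(kBuckets)
// steps, j lands in [kBuckets, 2*kBuckets), giving the bucket index.
std::size_t classify(int x) {
    std::size_t j = 1;
    for (std::size_t level = 0; level < 2; ++level)  // log2(kBuckets) steps
        j = 2 * j + (x > tree[j]);
    return j - kBuckets;
}
```

So values up to 10 land in bucket 0, values in (10, 20] in bucket 1, and so on. The lack of branches is what makes the "super scalar" part work: successive elements have no control dependency between them.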
So, uh, I guess I just work with the right people :) Sorry that I can't give you anything concrete. All of these papers were presented at ESA (European Symposium on Algorithms), though, so that's a good venue to follow. But beware, ESA has a theory track that's a lot bigger than the experimental track, and papers published there can be somewhat unapproachable ;)
Ahh, that makes sense, thanks!
I'd recently asked elsewhere, 'I've always wondered: is there a good mathematical representation of an algorithm that is useful for algebraic manipulation? Like producing a quicksort from bubble sort.' and got linked this paper [0]. That's the only time I've heard the word 'algorithmics' since then.
Any interesting 'entry level' reads you could send my way?
[0]: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.45....
I don't think the term 'algorithmics' appears very often in publications, it's more of an umbrella term for many things. The stuff we work on in our group is sort of the practical side of theoretical computer science, in that our focus is on algorithms that can be efficiently implemented and don't just look good on paper. The methodology is called Algorithm Engineering, and it's described quite well in https://en.wikipedia.org/wiki/Algorithm_engineering. Apart from ESA's track B, the major conferences in the area are ALENEX and SEA (Symposium on Experimental Algorithms). All three are open access.
It's difficult to recommend anything in particular, not only because the scope is very broad, but also because most papers aren't written for a wide audience. Good writing is not something that academics optimize for (perversely, good writing can be seen as a negative point, in that if a paper is easy to understand, it may be rejected for being too simple), nor is it taught. Maybe something like https://github.com/papers-we-love/papers-we-love could serve as a starting point?