Given the constraints, the proposed implementation is not efficient at all. Here’s a quick benchmark to compare two implementations: https://gist.github.com/Const-me/265b8a3e89f17f4268d34d64771...

In theory, the hash map approach is O(n), while the std::sort approach is O(n*log(n)).

And yet, when measured on my computer, std::sort is more than twice as fast. Specifically, for the hash map my program prints 3574736 (best of 10 tries), for std::sort only 1695712. My CPU's base frequency is 3.8 GHz, so these RDTSC counts translate to 941 and 446 microseconds, respectively.

I’m using a CPU with AMD Zen 3 cores, and compiled that code for AVX2 ISA, with Visual Studio 2022.

std::unordered_map is notoriously slow, several times slower than a fast hash map implementation like Google's Abseil or Martin Ankerl's robin-hood-hashing [1]. In addition, your implementation lets the hash map grow from a small size. Since you know the total length, you could preallocate the hash table and avoid rehashing. Because typical hash table implementations double their capacity when rehashing, each element gets re-inserted roughly once on average over all the rehashes, so preallocating the buckets should be about twice as fast, if we ignore cache effects and the complex machinery behind the STL allocator.

That said, std::sort is not the fastest sort implementation either, and it is hard to say which approach would win for a small N of 10000 once both sides are well optimized. Your general point still stands: an O(n*log(n)) algorithm is not necessarily slower than an O(n) algorithm in practice.

[1]: https://github.com/martinus/robin-hood-hashing