A vectorized implementation of find_close_polygons wouldn't be very complex or hard to maintain at all, but the authors would also have to ditch their OOP class-based design, and that's the real issue here. The object model doesn't lend itself to performant, vectorized numpy code.
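
For what it's worth, a vectorized version of that one function is only a few lines; the catch is that it wants the polygon centers pre-stacked into a single array rather than spread across objects, which is exactly the break with the OOP design. A minimal sketch (the function name find_close_polygons_vec and the (N, 2) centers layout are assumptions, not the library's actual API):

    import numpy as np

    def find_close_polygons_vec(centers: np.ndarray, point: np.ndarray, max_dist: float) -> np.ndarray:
        # One distance computation for every polygon at once, no Python loop.
        dists = np.linalg.norm(centers - point, axis=1)
        # Indices of the polygons whose center lies within max_dist of the point.
        return np.flatnonzero(dists < max_dist)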

Exactly, that is the real issue; vectorization might be good enough in terms of performance. It doesn't seem to be mentioned in the article at all.

It might have been added later, but the author mentions vectorization in the beginning:

> It’s worth noting that converting parts of / everything to vectorized numpy might be possible for this toy library, but will be nearly impossible for the real library while making the code much less readable and modifiable, and the gains are going to be limited (here’s a partially vertorized version, which is faster but far from the results we are going to achieve).

Semi Vectorized code:

https://github.com/ohadravid/poly-match/blob/main/poly_match...

Expecting Python engineers to be unable to read de facto standard numpy code, while expecting everyone to be able to read Rust...

Not to mention that the semi-vectorized code is still suboptimal: too many for loops, even though the author clearly knows they can all be vectorized.

For example, the author could instead just write something like:

    np.argmin(distances[distances <= threshold])
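
A toy illustration of that pattern (the sample arrays are made up; note that argmin over the masked array indexes into the filtered subset, so a flatnonzero lookup is needed to map back to the original polygon index):

    import numpy as np

    # Made-up sample data: distances from the point to each polygon.
    distances = np.array([4.2, 1.3, 7.8, 0.9, 3.1])
    threshold = 3.5

    within = np.flatnonzero(distances <= threshold)  # original indices under the threshold
    best = within[np.argmin(distances[within])]      # original index of the closest one
    print(best)  # -> 3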
Also, in one place there is:

    np.xxx(np.xxx, np.xxx + np.xxx)
You can just slap numexpr on top of it to compile this line on the fly.

https://github.com/pydata/numexpr
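
To make the numexpr suggestion concrete, here is a minimal sketch (the actual expression in the library will differ; a, b and c are placeholder arrays):

    import numpy as np
    import numexpr as ne

    # Placeholder data, not the library's arrays.
    a = np.random.rand(1_000_000)
    b = np.random.rand(1_000_000)
    c = np.random.rand(1_000_000)

    plain = np.sqrt(a * b + c)             # numpy: several temporaries, multiple passes
    fast = ne.evaluate("sqrt(a * b + c)")  # numexpr: one fused, compiled pass

    assert np.allclose(plain, fast)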