I've built multiple systems using vector search, one of which is demoed in a search engine for non-commercial content at http://teclis.com
Running vector search (also sometimes referred to as semantic search, or as part of a semantic search stack) is a trivial matter with open-source libraries like Faiss https://github.com/facebookresearch/faiss
It takes 5 minutes to set up. You can search a billion vectors on common hardware. For low-latency use cases (up to a couple of hundred milliseconds), it is highly unlikely that any cloud solution like this would be a better choice than something deployed on premises, because of the network overhead.
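To make the "5 minutes to set up" point concrete, here is a minimal sketch using Faiss's exact flat index; the dimensionality and random data are placeholders for illustration, not taken from any system mentioned here:

    import numpy as np
    import faiss

    d = 768                                                   # embedding dimensionality (e.g. BERT)
    xb = np.random.random((1_000_000, d)).astype('float32')   # vectors to index
    xq = np.random.random((5, d)).astype('float32')           # query vectors

    index = faiss.IndexFlatIP(d)    # exact inner-product search
    index.add(xb)                   # add all vectors to the index
    D, I = index.search(xq, 10)     # scores and ids of the 10 nearest vectors per query
    print(I[0])                     # ids of the nearest vectors for the first query

For billion-scale collections you would swap the flat index for one of Faiss's approximate indexes (e.g. IVF with product quantization), trading a little recall for memory and speed.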
(worth noting: there are about two dozen vector search libraries, all benchmarked at http://ann-benchmarks.com/, and most of them are open-source)
A much more interesting (and harder) problem is creating good vectors to begin with. This refers to the process of converting a text or an image into a multidimensional vector, usually done by a machine learning model such as BERT (for text) or a convolutional network trained on ImageNet (for images).
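As a rough sketch of that vector-generation step, here is how one might embed text with the sentence-transformers library; the model name is just an example and is not the one used in any of the demos linked here:

    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer('all-MiniLM-L6-v2')             # a small general-purpose text encoder
    texts = ["gpt3", "best science news of 2019"]
    vectors = model.encode(texts, normalize_embeddings=True)    # one vector per text
    print(vectors.shape)                                        # (2, 384) for this model

    # The quality of downstream search depends almost entirely on how well
    # these vectors capture meaning for your domain and query style.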
Try entering a query like 'gpt3' or '2019' into the news search demo linked in Google's PR:
https://matchit.magellanic-clouds.com/
The results are nonsensical. Not because the vector search didn't do its job well, but because the generated vectors were suboptimal to begin with. Having good vectors is 99% of the semantic search problem.
A nice demo of what semantic search can do is Google's Talk to Books https://books.google.com/talktobooks/
This area of research is fascinating. For those who want to play with this more, an interesting end-to-end (covering both vector generation and search) open-source solution is Haystack https://github.com/deepset-ai/haystack
Disclosure: I have built a vector search engine to prove this idea.[2]