I remember when "semantic search" was the Next Big Thing (back when all we had were simple keyword searches).
I don't know enough about the internals of Google's search engine to say whether it qualifies as a "semantic search engine", but nowadays it gets close enough to fool me.
But I feel like I'm still stuck with keyword search for a lot of other things: email (Outlook and mutt), grepping IRC logs, searching for products in small online stores, and sometimes even searching for text in a long webpage.
I'm sure people have thought about these things: what technical challenges exist in improving search in these areas? Is it just a matter of integrating engines like the one linked here? Or maybe keyword searches are often Good Enough, so no one is really clamoring for something better.
Concretely, semantic similarity means using a neural net to embed each text as a vector, then using cosine similarity or a dot product to score how similar two pieces of text are:
import numpy as np

embed1 = neural_net(txt1)  # embed each text as a vector
embed2 = neural_net(txt2)
sim_score = np.dot(embed1, embed2)  # equals cosine similarity if the embeddings are unit-normalized
If you're making a search engine you precompute the embeddings for all the items in your database. When a user performs a search you only need to embed the query and do the dot products, which is fast for small indexes.
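A minimal sketch of that precompute-then-dot-product flow, using a toy stand-in for the neural net (a fixed random projection of letter counts, purely hypothetical; a real system would use a sentence-embedding model):

```python
import numpy as np

# Hypothetical stand-in for neural_net(): project letter counts through a
# fixed random matrix and normalize, so dot products are cosine similarities.
rng = np.random.default_rng(0)
W = rng.normal(size=(26, 8))

def embed(text):
    counts = np.zeros(26)
    for c in text.lower():
        if c.isalpha():
            counts[ord(c) - ord('a')] += 1
    v = counts @ W
    return v / np.linalg.norm(v)

docs = ["cheap red shoes", "red running shoes", "blue kettle"]
doc_embeds = np.stack([embed(d) for d in docs])  # precomputed once, offline

query = embed("red shoe")       # embed the query at search time
scores = doc_embeds @ query     # one dot product per indexed item
best = int(np.argmax(scores))   # index of the highest-scoring document
```

The only per-query work is one embedding call plus a single matrix-vector product over the index.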
If you want to index millions or billions of entities, doing dot products is inefficient because it scales linearly in the size of the index. There is a family of tricks (conceptually similar to binary search) that find the approximate top-k most similar results in roughly O(log(N)) time, called approximate nearest neighbour (ANN) search. There are a few good libraries for that.
Are there any semantic search implementations focused on small, local deploys?
E.g. I'm interested in local serverless setups (on desktop, mobile, etc.) that yield quality search results in the ~instant~ time frame, but that are also complete and accurate. I.e. I ruled out ANN because I wanted complete results, which the smaller datasets make feasible.
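For small local indexes, exact brute-force top-k is often fast enough and, unlike ANN, guaranteed complete; a sketch with plain numpy (the index here is random data standing in for real embeddings):

```python
import numpy as np

# Small local index: a few thousand unit-normalized embedding vectors.
rng = np.random.default_rng(0)
index_vecs = rng.normal(size=(5000, 128)).astype(np.float32)
index_vecs /= np.linalg.norm(index_vecs, axis=1, keepdims=True)

def exact_search(query_vec, k=10):
    """Exact top-k by cosine similarity: scans the whole index, misses nothing."""
    q = query_vec / np.linalg.norm(query_vec)
    scores = index_vecs @ q                   # one matmul over the full index
    top = np.argpartition(-scores, k)[:k]     # O(N) selection of the k best
    return top[np.argsort(-scores[top])]      # sort only the k winners

# Querying with a vector from the index should return that vector first.
hits = exact_search(index_vecs[42])
```

At this scale a single matmul per query is well inside the ~instant budget on a desktop, so the ANN machinery isn't needed.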