If I recall correctly, Postgres search doesn't scale well. Not sure exactly where it falls apart, but it isn't optimized in the same way something like Solr is.

I have a table with over a billion rows and most full-text searches still respond in a few milliseconds. This depends on a lot of factors, such as proper indexing and filtering the dataset down as much as possible before performing the full-text ops. I've spent a considerable amount of time optimizing these queries, thanks to tools like PgMustard [0]. Granted, I do still have a couple of slow queries (1-10s query time), but that's likely due to very infrequent access, i.e. a cold cache.
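
To give a rough idea of the kind of setup I mean (table and column names here are made up, not my actual schema): keep a stored tsvector, index it with GIN, and let cheap selective predicates narrow the rows before the FTS match.

    -- store the document vector once instead of computing it per query
    -- (note: adding a stored generated column rewrites the table, which is
    -- expensive on very large tables)
    ALTER TABLE articles
      ADD COLUMN search_vec tsvector
      GENERATED ALWAYS AS (
        to_tsvector('english', coalesce(title, '') || ' ' || coalesce(body, ''))
      ) STORED;

    CREATE INDEX articles_search_vec_idx ON articles USING gin (search_vec);

    -- filter on cheap, selective columns first, then apply the FTS predicate
    SELECT id, title
    FROM articles
    WHERE tenant_id = 42
      AND created_at > now() - interval '30 days'
      AND search_vec @@ websearch_to_tsquery('english', 'postgres full text search')
    LIMIT 50;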

I will say, if you use open source libraries like pg_search, you are unlikely to ever have performant full-text search. Most full-text queries need to be written by hand to actually utilize indexes, instead of the query-soup that these types of libraries output. (No offense to the maintainers -- it's just how it be when you create a "general" solution.)
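
The way I check whether a hand-written query actually uses the index is just EXPLAIN and a plan visualizer. Using the illustrative schema above, something like:

    -- you want to see a Bitmap Index Scan on the GIN index, not a Seq Scan
    EXPLAIN (ANALYZE, BUFFERS)
    SELECT id
    FROM articles
    WHERE search_vec @@ websearch_to_tsquery('english', 'query planner');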

[0]: https://pgmustard.com

Silly question: I'm using pg right now, and most of my queries are something like this (in English):

Find results in my area that match these categoryIds and are slotted to start between now and the next 10 days.

Since it's already quite a filtered set of data, would that mean I should have few issues adding pg text search? With correct indexing and all, it would usually be applied to a small set of data.

Thanks

You might be just fine adding an unindexed tsvector column, since you've already filtered down the results.

The GIN indexes for FTS don't really work in conjunction with other indexes, which is why https://github.com/postgrespro/rum exists. Luckily, it sounds like you can use your existing indexes to filter and let Postgres scan the surviving rows for matches on the tsvector. The GIN tsvector indexes are quite expensive to build, so don't add one if Postgres can't make use of it!
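
Roughly what that looks like, with made-up table and column names based on your description (your real schema and the "in my area" filter will differ):

    -- no GIN index on the tsvector; your existing indexes narrow the rows,
    -- and Postgres only evaluates the @@ match on those survivors
    ALTER TABLE events
      ADD COLUMN search_vec tsvector
      GENERATED ALWAYS AS (
        to_tsvector('english', coalesce(title, '') || ' ' || coalesce(description, ''))
      ) STORED;

    SELECT id, title, starts_at
    FROM events
    WHERE category_id = ANY (ARRAY[3, 7, 12])                  -- uses your existing index
      AND starts_at BETWEEN now() AND now() + interval '10 days'
      -- plus whatever geo filter you already use for "in my area"
      AND search_vec @@ websearch_to_tsquery('english', 'live jazz');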