Was thinking of building something very similar: upload your technical docs and get a chatbot help system on top of them.
One of the tricky things to overcome is the prompt length limit of models like GPT-3. There are some suggestions on the OpenAI website for working around this; the main one seems to be to filter your content using embeddings and only feed the "relevant" sections into the prompt.
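Roughly, that filtering step looks something like the sketch below. It assumes the OpenAI Python client; the model name, chunking, and top_k are illustrative choices on my part, not anything prescribed by the docs.

```python
# Sketch of embedding-based filtering: embed the doc sections, embed the
# question, keep only the most similar sections for the prompt.
import numpy as np
import openai

def embed(texts):
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=texts)
    return np.array([d["embedding"] for d in resp["data"]])

def top_sections(question, sections, top_k=3):
    section_vecs = embed(sections)       # in practice, precompute and cache these
    question_vec = embed([question])[0]
    # OpenAI embeddings are unit length, so a dot product is cosine similarity
    scores = section_vecs @ question_vec
    best = np.argsort(scores)[::-1][:top_k]
    return [sections[i] for i in best]

# The selected sections then get pasted into the prompt ahead of the question,
# keeping the total under the model's context limit.
```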
Would be interested to know what your approach is.
I've been working on a hobby project to make all the sites, videos and conversations I've seen searchable (aka an external memory). While building it I used this to prepare the data and get around the prompt limits: https://github.com/jerryjliu/gpt_index
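The basic usage is only a few lines; this is a sketch based on the gpt_index README, and the exact class names may differ between versions, so check the repo's docs:

```python
# Minimal sketch per the gpt_index README; class names vary across versions.
from gpt_index import GPTSimpleVectorIndex, SimpleDirectoryReader

# Load whatever you've collected (pages, transcripts, notes) from a folder.
documents = SimpleDirectoryReader("data").load_data()

# Build an embedding-backed index; this step calls the embeddings API,
# so it's where most of the indexing cost comes from.
index = GPTSimpleVectorIndex(documents)

# At query time it retrieves the relevant chunks and has GPT-3 synthesize an answer.
print(index.query("What did that talk say about context windows?"))
```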
Could you share anything about what it cost you to use GPT Index (e.g. how many rows of data and how many tokens per row)? It looks interesting, but it seems like it'd be expensive.