We published an internal policy for AI tools last week. The basic theme is: "We see the value too, but please don't copypasta our intellectual property until we get a chance to stand up something internal."

We've granted some exceptions to the team responsible for figuring out how to stand up something internal. There's a lot of shooting in the dark going on here, so I figured we'd have to expose some of our IP to the public tools to gain traction.

Let us know when you've figured out a way to host something with the quality of ChatGPT internally :-)

Even if we had a 100% private ChatGPT instance, it wouldn't fully cover our internal use case.

There is way more context to our business than can fit in 4/8/32k tokens. Even if we could squeeze into the 32k token budget, it would be very expensive to run that way 24/7. Fine-tuning a base model is the only practical/affordable path for us.
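For a rough sense of "very expensive", assuming something like GPT-4-32k list pricing at the time (around $0.06 per 1k prompt tokens, which may not match your actual deal):

```python
# Back-of-envelope: stuffing the full 32k context window on every request.
# Price is an assumption (~GPT-4-32k list price at the time), not a quote.
prompt_tokens = 32_000
price_per_1k_prompt = 0.06  # USD, assumed
cost_per_request = prompt_tokens / 1000 * price_per_1k_prompt
print(f"~${cost_per_request:.2f} per request, before any completion tokens")
# ~$1.92 per request, before any completion tokens
```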

You can retrieve information on demand based on what the user is asking, like this: https://github.com/openai/chatgpt-retrieval-plugin
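The idea is retrieval-augmented generation: embed your documents once, embed the user's question at query time, pull the closest chunks, and put only those into the prompt instead of your whole corpus. Here's a minimal sketch of that loop. The hash-based "embedding" is just a toy stand-in for a real embedding model (the retrieval plugin uses proper embeddings plus a vector store), and the doc snippets and function names are made up:

```python
import hashlib
import math

def embed(text, dims=256):
    # Toy stand-in for a real embedding model: hash each word into a
    # fixed-size vector, then normalize. A real setup would call an
    # embedding API or local model here instead.
    vec = [0.0] * dims
    for word in text.lower().split():
        word = word.strip(".,:;?!")
        idx = int(hashlib.md5(word.encode()).hexdigest(), 16) % dims
        vec[idx] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a, b):
    # Vectors are already normalized, so the dot product is the cosine similarity.
    return sum(x * y for x, y in zip(a, b))

# Index internal docs once (chunked however makes sense for your content).
docs = [
    "Refund policy: customers may return hardware within 30 days.",
    "On-call rotation: the platform team rotates weekly, handoff on Mondays.",
    "Expense policy: travel over $500 needs director approval.",
]
index = [(doc, embed(doc)) for doc in docs]

def retrieve(question, k=2):
    # Embed the question and return the k most similar chunks.
    q = embed(question)
    ranked = sorted(index, key=lambda pair: cosine(q, pair[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

question = "Who approves travel expenses?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)  # send `prompt` to whatever chat model you're using
```

So the model only ever sees the handful of chunks relevant to the current question, which is how you get around the 4/8/32k limit without paying to ship your whole knowledge base on every call.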