Explicitly calling out that they will not train on enterprise data, along with SOC 2 compliance, is going to put a lot of enterprises at ease and make them more willing to embrace ChatGPT in their business processes.
From our discussions with enterprises (while trying to sell our LLM apps platform), we quickly learned how sensitive they are when it comes to sharing their data. In many of these organizations, employees are already pasting a lot of sensitive data into ChatGPT, unless access to ChatGPT itself is restricted. We know a few companies that ended up deploying chatbot-ui against Azure's OpenAI offering, since Azure claims not to use customers' data (https://learn.microsoft.com/en-us/legal/cognitive-services/o...).
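For reference, roughly what that setup looks like with the openai Python SDK's Azure client; the resource endpoint, key, deployment name, and API version below are placeholders, not anyone's real config:

    from openai import AzureOpenAI

    # Placeholder endpoint/key/deployment. Requests go to your own Azure
    # resource, which Azure says is not used to train models.
    client = AzureOpenAI(
        azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",
        api_key="YOUR-AZURE-KEY",
        api_version="2023-05-15",
    )

    resp = client.chat.completions.create(
        model="gpt-35-turbo",  # name of your Azure deployment
        messages=[{"role": "user", "content": "Hello"}],
    )
    print(resp.choices[0].message.content)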
To deal with these privacy concerns, we ended up adding support for Azure's OpenAI offering to our platform and open-sourcing our engine to support on-prem deployments (LLMStack - https://github.com/trypromptly/LLMStack).
So, how do you plan to commercialize your product? I have noticed tons of cloud-based chatbot app providers built on top of the ChatGPT and Azure APIs (asking users to provide their own API keys). Enterprises will still be very wary of putting their data on these multi-tenant platforms. I feel that even encryption is not going to be enough. This screams for virtual private LLM stacks for enterprises (the only way to fully isolate).
We have a cloud offering at https://trypromptly.com. We do offer enterprises the ability to host their own vector database to maintain control of their data. We also support interacting with open-source LLMs from the platform. Enterprises can bring up https://github.com/go-skynet/LocalAI, run Llama or other models, and connect to them from their Promptly LLM apps.
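Since LocalAI exposes an OpenAI-compatible API, connecting to it is basically just a base-URL change in any OpenAI client. A minimal sketch, assuming LocalAI's default port and a hypothetical model name standing in for whatever you've loaded:

    from openai import OpenAI

    # LocalAI serves an OpenAI-compatible API; 8080 is its default port.
    # No real key is needed for a local deployment, but the client requires one.
    client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

    resp = client.chat.completions.create(
        model="llama-2-7b-chat",  # hypothetical; use the model loaded in LocalAI
        messages=[{"role": "user", "content": "Summarize our Q3 report."}],
    )
    print(resp.choices[0].message.content)

The data never leaves your network, which is the whole point for these deployments.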
We also provide support and some premium processors for enterprise on-prem deployments.