What does HackerNews think of openai-cookbook?
Examples and guides for using the OpenAI API
It still feels like a bit of a Wild West for patterns in this area, with a lot of people trying lots of things, and it might be too soon to define terms. A useful resource is still the OpenAI Cookbook, which is a decent collection of many of the things in this article, but with a more implementation-focused bent.[1]
The area that seems to get a lot of duplicated effort currently is providing either a 'session' or longer-term context for GPT, whether with embeddings or rolling prompts for these apps. Vector search over embedded chunks is something that still seems to be missing from vendors like OpenAI, and you can't help but wonder whether they'll eventually move it behind their API with a 'session id'. I think that was mentioned as being on their roadmap for this year too. The lack of GPT-4 fine-tuning options just pushes people more toward stores like Pinecone and Weaviate, and toward chaining up their own sequences to achieve some sort of memory.
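The "embedded chunks as memory" pattern being duplicated everywhere boils down to: embed each stored chunk once, embed the query, rank by cosine similarity, and prepend the winners to the prompt. A minimal offline sketch of that loop, using a toy bag-of-words vector as a stand-in for a real embeddings API call (in practice you'd call OpenAI's embeddings endpoint and a store like Pinecone or Weaviate instead):

```python
import math

# Toy stand-in for an embeddings API: a bag-of-words count vector.
# Real code would call a vendor embeddings endpoint here instead.
def embed(text: str) -> dict:
    vec = {}
    for tok in text.lower().split():
        vec[tok] = vec.get(tok, 0) + 1
    return vec

def cosine(a: dict, b: dict) -> float:
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# "Memory": embed each past chunk once and store it alongside the text.
chunks = [
    "The user prefers weekly status reports on Mondays.",
    "Deployment runs on Kubernetes in the eu-west-1 region.",
    "The billing system uses Stripe for subscriptions.",
]
store = [(c, embed(c)) for c in chunks]

def retrieve(query: str, k: int = 1) -> list:
    # Rank stored chunks by similarity to the query embedding.
    qv = embed(query)
    ranked = sorted(store, key=lambda cv: cosine(qv, cv[1]), reverse=True)
    return [c for c, _ in ranked[:k]]

# The top-k chunks would be prepended to the prompt as context.
print(retrieve("where is the deployment hosted?"))
```

The chunking, ranking, and prompt-stuffing steps are exactly what each app ends up rebuilding, which is why a vendor-side 'session id' would make so much of this boilerplate disappear.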
I've implemented features with GPT-4 and functions, and so far it feels useful for 'data model'-style use (where you bring JSON about a domain noun, e.g. 'Tasks', into the prompt), but it is pretty hairy when it comes to pure functions: the tuning they've done to get it to pick the right function and parameters is still hard to get working reliably, which means there isn't much trust that it will be usable. It feels like there needs to be a set of patterns or categories for 'business apps' that are heavily siloed into just a subset of available functions, making them task-specific rather than the general chat agents we see a lot of. The difference in approach between LangChain's chain-of-thought pattern and just using OpenAI functions is up in the air as well. Like I said, it still all feels like wild-west times, at least as an app developer.
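The "siloed subset of functions" idea above can be sketched as a dispatch table: declare a small set of functions in the JSON-schema shape the function-calling API expects, and only ever execute model-proposed calls whose name is in that silo. The function names and fields here are illustrative, and the model response is simulated rather than fetched from the API:

```python
import json

# Hypothetical local "business app" functions, deliberately siloed to a
# small task-specific subset.
def create_task(title: str, due: str) -> dict:
    return {"created": title, "due": due}

def close_task(task_id: int) -> dict:
    return {"closed": task_id}

# Declarations in the JSON-schema shape OpenAI's function-calling API
# expects; these would be sent along with the chat request.
FUNCTIONS = [
    {
        "name": "create_task",
        "description": "Create a task with a title and an ISO due date.",
        "parameters": {
            "type": "object",
            "properties": {
                "title": {"type": "string"},
                "due": {"type": "string"},
            },
            "required": ["title", "due"],
        },
    },
    {
        "name": "close_task",
        "description": "Close a task by numeric id.",
        "parameters": {
            "type": "object",
            "properties": {"task_id": {"type": "integer"}},
            "required": ["task_id"],
        },
    },
]

DISPATCH = {"create_task": create_task, "close_task": close_task}

def run_function_call(call: dict) -> dict:
    # The model returns a name plus JSON-encoded arguments; validate the
    # name against the allowed silo before executing anything locally.
    fn = DISPATCH.get(call["name"])
    if fn is None:
        raise ValueError("model asked for an unknown function: " + call["name"])
    return fn(**json.loads(call["arguments"]))

# Simulated model output, shaped like the API's function_call field.
model_call = {
    "name": "create_task",
    "arguments": '{"title": "Write report", "due": "2023-07-01"}',
}
print(run_function_call(model_call))
```

Keeping the silo small is doing double duty here: it narrows the model's choice (which is where the reliability problems live) and it caps the blast radius of a wrong pick.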
I see using ChatGPT directly as prompting, not prompt engineering.
[1] - https://platform.openai.com/docs/tutorials
[2] - https://github.com/openai/openai-cookbook
[3] - https://github.com/hwchase17/langchain
- Generative use cases, where you give the model the kernel of an idea and then you curate its output (e.g., blog writing, code completion, etc.)
- Extractive use cases, where you give the model some big piece of text, and then process it in some way (e.g., extract names and addresses, classify it, ask a question about the text)
- Transformational use cases, where you need to fix/adjust a piece of text, or translate from one domain to another (e.g., sometimes I'll use GPT-3 for little tasks like copying a table from a presentation and asking the model to translate it to Markdown; saves me a trip to Google to find some table-generator website)
- Comparisons, where you use embeddings to do search/clustering/recommendations over any set of strings (e.g., can combo nicely with the Q&A use case above, where you search over a knowledge base)
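For the transformational use case above (pasted table to Markdown), the whole trick is just wrapping the raw text in a chat request with a narrow system instruction. A sketch of the request payload, mirroring the shape of OpenAI's chat API; the model name and exact fields are assumptions, and no request is actually sent here:

```python
# Tab-separated text as it might come out of a presentation.
pasted_table = (
    "Region\tQ1\tQ2\n"
    "EMEA\t120\t135\n"
    "APAC\t90\t110"
)

def markdown_table_request(raw: str) -> dict:
    # Payload in the shape of a chat-completions request (illustrative).
    return {
        "model": "gpt-3.5-turbo",  # assumed model name
        "messages": [
            {
                "role": "system",
                "content": (
                    "You convert pasted tabular text into a GitHub-flavored "
                    "Markdown table. Reply with the table only."
                ),
            },
            {"role": "user", "content": raw},
        ],
        # Mechanical transforms want deterministic output.
        "temperature": 0,
    }

payload = markdown_table_request(pasted_table)
print(payload["model"])
```

A tight system message plus temperature 0 is what makes these little transforms feel more like a tool than a chat.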
I started a repo here with some barebones examples of each: https://github.com/openai/openai-cookbook/
If you're looking for examples of commercial applications, OpenAI published two blog posts highlighting a few:
- GPT-3 use cases (2021): https://openai.com/blog/gpt-3-apps/
- Codex use cases (2022): https://openai.com/blog/codex-apps/