I follow some indie hackers online who are in the scraping space, such as BrowserBear and Scrapingbee, and I wonder how they will fare with something like this. The only solace is that this is nondeterministic, but perhaps you could simply ask the API to generate deterministic Python or JS code instead.
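To make that concrete, here is a minimal sketch of the "generate deterministic code" idea: instead of calling the model on every page, you ask it once to write a plain parser that runs forever with no further API calls. This assumes the pre-1.0 `openai` Python client and an illustrative HTML snippet; the prompt, model choice, and sample markup are all placeholders.

```python
# Hedged sketch: ask the model once for a deterministic BeautifulSoup parser,
# then run that generated code locally instead of paying per extraction.
import openai

# Hypothetical example markup standing in for whatever site you scrape.
sample_html = "<div class='product'><h2>Widget</h2><span class='price'>$9.99</span></div>"

prompt = (
    "Write a deterministic Python function parse(html) using BeautifulSoup "
    "that extracts the product name and price from HTML shaped like this:\n"
    f"{sample_html}\n"
    "Return only the code."
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)

generated_parser = response["choices"][0]["message"]["content"]
# Review and test the generated parser before committing it; after that,
# scraping is deterministic and doesn't touch the API at all.
print(generated_parser)
```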
More generally, I wonder how a lot of smaller startups will fare once OpenAI subsumes their products. Anyone running a product that's a thin wrapper on top of ChatGPT or the GPT API will find themselves at a loss once OpenAI opens up the same capability to everyone. Perhaps SaaS products offering only minor changes over the competition really were a zero-interest-rate phenomenon.
This is why it's important to have a moat. For example, I'm building a product that has some AI features (an open-source email (IMAP and OAuth2) / calendar API), but it would work just fine without any of the AI parts, because the fundamental benefit is still useful to the end user. It's similar to Notion: people will still use Notion to organize their thoughts and documents even without the Notion AI feature.
Build products, not features. If you think you're the one selling pickaxes during the AI gold rush, you're mistaken: OpenAI is selling the pickaxes (their API), and you're the one panning for gold (trying to find AI products to sell).
I've been using Duckling [0] for extracting fuzzy dates and times from text. It does a good job, but I needed a custom build with extra rules to turn that into a great job. And that's just for dates, one of the 13 dimensions it supports. Being able to use an AI that handles them with better accuracy will be fantastic.
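For anyone who hasn't used it, Duckling ships as an HTTP service, so the stock (non-custom) build is easy to try. A hedged sketch below, assuming a default local setup (port 8000, the `/parse` endpoint, form-encoded `text`, `locale`, and `dims`); the example sentence is just an illustration.

```python
# Hedged sketch: query a locally running Duckling server for time entities.
import json
import requests

resp = requests.post(
    "http://localhost:8000/parse",  # assumed default host/port for a local Duckling
    data={
        "locale": "en_US",
        "text": "let's meet the Tuesday after next at 3pm",
        "dims": json.dumps(["time"]),  # restrict parsing to the time dimension
    },
)
resp.raise_for_status()

# Duckling returns a list of matches; each has the raw matched span ("body")
# and a resolved value ("value").
for entity in resp.json():
    print(entity["body"], "->", entity["value"])
```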
Does a specialised model trained to extract times and dates already exist? It's entity tagging, but a specialised form (especially when dealing with historical documents, where you may need both the Gregorian and Julian calendars).