I'm skeptical. Like Microsoft, Apple has a lot of money to throw at super large models. But the question is whether they are committed enough to AI to do it, and whether they have the required ML talent. The latter was actually a problem for Microsoft, otherwise they wouldn't have partnered with OpenAI. Regarding commitment: this also seems iffy to me. Generative AI doesn't fit very well into Apple's hardware-centric philosophy.
> Generative AI doesn't fit very well into Apple's hardware centric philosophy
They have a huge unique niche to exploit with local private models, and they're probably the only company on the planet with the hardware in place to take advantage of this end-to-end today. But unfortunately I don't think generative AI fits with Apple's culture at all. I think they'd be terrified of it saying something wrong, so they'd probably clip its wings to the point that it's mundane and kinda useless.
I don't think it's a particularly large or exploitable niche, for two reasons:
- Android, Windows, Linux and macOS can already run local and private models just fine. Getting something product-ready for iPhone is a game of catch-up, and probably a losing battle if Apple insists on making you use their AI assistant over competing options.
- The software side needs more development. The current SOTA inference techniques for Apple Silicon in llama.cpp are cobbled together from Metal shaders and NEON instructions, neither of which is ideal for or specific to Apple hardware. If Apple wants to differentiate their silicon from 2016 MacBooks running LLaMA with AVX, they have to develop CoreML's API further.
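For context, this is roughly what the "cobbled together" path looks like today: you build llama.cpp with its Metal backend and push layers to the GPU yourself. A hedged sketch (exact flags and binary names have changed between llama.cpp releases, and the model path is a placeholder):

```shell
# Clone and build llama.cpp with the Metal backend enabled
# (on recent versions Metal is on by default for macOS builds).
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build -DGGML_METAL=ON
cmake --build build --config Release

# Run inference, offloading layers to the GPU with -ngl.
# model.gguf is a placeholder for whatever quantized model you downloaded.
./build/bin/llama-cli -m model.gguf -ngl 99 -p "Hello"
```

Note that nothing here goes through CoreML or the Neural Engine; it's generic GPU compute via Metal, which is the point being made above.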