I wish they would compare it to a 1080 Ti or something along those lines. Nevertheless, this is pretty neat, minus the RAM implications, because realistically I'm guessing you can use 5GB of RAM at most for training. I've always wanted a computer (that doesn't cost $1200+) where I can test a model for a couple of batches and then push to the cloud to train at scale.
Seems I'm not the only one who wants to know whether the M1's neural engine is performant enough for prototyping :)
It will be interesting to see how long the other deep learning frameworks take to support the M1. PyTorch still hasn't achieved performance on TPUs comparable to TensorFlow's.
You know, they don't mention the neural engine. I know very little about ML, but maybe the neural engine isn't helpful on the training side?
The API of the neural engine is closed for some reason.
Really? So who CAN use it then, if not normal developers? Only Apple?
The neural engine on the Apple A11 wasn't exposed to apps, at least at launch, but that's no longer the case from the A12 onwards.
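To be clear, even on A12+ you don't program the neural engine directly; you hand a model to Core ML and let it decide whether to dispatch to the ANE, GPU, or CPU. A minimal sketch of that hint, assuming a compiled Core ML model bundled with the app (the model name here is just a placeholder):

    import CoreML

    // There is no direct ANE API; the closest you get is telling Core ML
    // it may use any compute unit, including the Neural Engine.
    let config = MLModelConfiguration()
    config.computeUnits = .all  // CPU, GPU, and Neural Engine allowed

    // "MyClassifier.mlmodelc" is a hypothetical compiled model in the app bundle.
    let modelURL = Bundle.main.url(forResource: "MyClassifier", withExtension: "mlmodelc")!
    let model = try MLModel(contentsOf: modelURL, configuration: config)

Whether a given layer actually runs on the neural engine is still up to Core ML's internal scheduling, which is presumably what people mean when they say the API is "closed".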