What does HackerNews think of neural-engine?

Everything we actually know about the Apple Neural Engine (ANE)

Mobile TPUs/NPUs:

Pixel 6 and later phones have an on-device TPU (in addition to the CPU cores and an integrated GPU).

Tensor Processing Unit > Products > Google Tensor https://en.wikipedia.org/wiki/Tensor_Processing_Unit

TensorFlow Lite (tflite): https://www.tensorflow.org/lite

From https://github.com/hollance/neural-engine :

> The Apple Neural Engine (or ANE) is a type of NPU, which stands for Neural Processing Unit.

From https://github.com/basicmi/AI-Chip :

> A list of ICs and IPs for AI, Machine Learning and Deep Learning

If Apple were to wake up to what's happening with llama.cpp etc., I don't see such a market in paying for remote access to big models via API, though that's currently the only game in town.

Currently a MacBook has a Neural Engine that sits idle 99% of the time and is only suitable for running limited models (poorly documented, opaque rules about which ops can be accelerated, a black-box compiler [1], and an apparent 3 GB model size limit [2]).

OTOH you can buy a MacBook with 64 GB of 'unified' memory and a Neural Engine today.

If you squint a bit and look into the near future, it's not hard to imagine a future Mx chip with a more capable Neural Engine and yet more RAM, able to run the largest GPT-3-class models locally. (Ideally with better developer tools, so other compilers can target the NE.)
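As a back-of-envelope check on that claim (my arithmetic, not the commenter's): weight memory is roughly parameter count times bytes per parameter, which shows why 64 GB of unified memory is far too small for a GPT-3-scale model at full precision but workable for somewhat smaller models once quantized.

```python
# Rough weight-memory estimate: params * bits_per_weight / 8, in decimal GB.
# Ignores activations, KV cache, and runtime overhead, so real needs are higher.

def weight_gb(params_billions: float, bits_per_weight: int) -> float:
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

print(weight_gb(175, 16))  # GPT-3 175B at fp16: 350.0 GB -> far over 64 GB
print(weight_gb(175, 4))   # 175B at 4-bit:      87.5 GB  -> still over
print(weight_gb(70, 4))    # 70B at 4-bit:       35.0 GB  -> fits in 64 GB
```

So "largest GPT-3 class models" really does hinge on the "yet more RAM" part of the prediction, not just a better Neural Engine.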

And then imagine it does that while leaving the CPU+GPU mostly free to run apps/games ... the whole experience of using a computer could change radically in that case.

I find it hard not to think this is coming within 5 years (although, equally, I can imagine it is not on Apple's roadmap at all currently).

[1] https://github.com/hollance/neural-engine

[2] https://github.com/smpanaro/more-ane-transformers/blob/main/...

This is an insidious argument because there's a grain of truth in it. Photos and videos are heavily post-processed by Apple's Neural Engine (https://github.com/hollance/neural-engine) on the Bionic chips, but this is done at the time of recording.

Zooming the video to play it to the jury would just be simple linear scaling, which could be done on any device capable of playing and scaling a video, for example via the zoom or crop functions in VLC.
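To make the "simple linear scaling" point concrete, here is a minimal sketch (mine, not the commenter's) of bilinear upscaling on a grayscale frame in pure Python: every zoomed pixel is a fixed weighted average of its four nearest source pixels, so nothing new is inferred or "hallucinated" into the image.

```python
# Bilinear upscale of a grayscale frame (a list of rows of pixel values).
# Each output pixel is a weighted average of its 4 nearest input pixels;
# this is the kind of fixed interpolation a video player's zoom performs,
# with no neural network involved.

def bilinear_zoom(frame, factor):
    h, w = len(frame), len(frame[0])
    out = []
    for oy in range(h * factor):
        # Map the output row back into input coordinates.
        y = min(oy / factor, h - 1)
        y0, fy = int(y), y - int(y)
        y1 = min(y0 + 1, h - 1)
        row = []
        for ox in range(w * factor):
            x = min(ox / factor, w - 1)
            x0, fx = int(x), x - int(x)
            x1 = min(x0 + 1, w - 1)
            # Blend horizontally on the two bracketing rows, then vertically.
            top = frame[y0][x0] * (1 - fx) + frame[y0][x1] * fx
            bot = frame[y1][x0] * (1 - fx) + frame[y1][x1] * fx
            row.append(top * (1 - fy) + bot * fy)
        out.append(row)
    return out

zoomed = bilinear_zoom([[0, 100], [100, 200]], 2)
print(zoomed[0])  # first row: values interpolated between 0 and 100
```

The output is fully determined by the input pixels and the interpolation weights, which is exactly why playback-time zoom is a different thing from the capture-time neural post-processing described above.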

> have artificial intelligence in them that allow things to be viewed through three-dimensions and logarithms

A dangerous mix of incoherent buzzwords that's nearly correct.

It’s accessible through the Core ML framework. I know nothing about this and very little about ML in general, but I found an interesting GitHub page written by a consultant who seems to know a fair amount about it:

https://github.com/hollance/neural-engine