What does HackerNews think of onnx?

Open standard for machine learning interoperability

Language: Python

#6 in TensorFlow
ONNX[0], models-as-protobufs, will hopefully solve this issue as it continues to gain adoption.

[0] https://github.com/onnx/onnx

How does this compare to ONNX (https://github.com/onnx/onnx) in terms of feature completeness/performance, and what made you develop your own runtime?
We are framework-agnostic for model development: models get converted to ONNX[1] and served with the ONNX runtime[2]. They are deployed as microservices with Docker.

We are currently looking at MLflow[3] for the tracking server, though it has some major pain points. We use Tune[4] for hyperparameter search, and MLflow provides no way to delete artifacts from the parallel runs, which leads to massive amounts of wasted storage or dangerous external cleanup scripts. They have also been resisting requests for the feature in numerous issues. Not a good open-source solution in the space.

Note that this is for an embedded deployment environment.

[1] https://github.com/onnx/onnx

[2] https://github.com/Microsoft/onnxruntime

[3] https://mlflow.org/

[4] https://ray.readthedocs.io/en/latest/tune.html
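The "one model per microservice" pattern described above can be sketched with only the standard library. This is a minimal illustration, not the commenter's actual setup: the endpoint name is made up, and `predict` is a hypothetical stand-in for an onnxruntime `InferenceSession.run` call.

```python
# Sketch: serve a model behind a tiny HTTP microservice (stdlib only).
# `predict` is a hypothetical placeholder for an ONNX Runtime call.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

def predict(features):
    # Placeholder for something like: session.run(None, {"input": features})
    return [sum(features)]

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON request body and run the model on "features".
        body = self.rfile.read(int(self.headers["Content-Length"]))
        features = json.loads(body)["features"]
        out = json.dumps({"prediction": predict(features)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(out)

    def log_message(self, *args):
        pass  # keep the demo quiet

# Bind to an ephemeral port and serve in a background thread.
server = HTTPServer(("127.0.0.1", 0), PredictHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# One round-trip request against the service.
req = Request(
    f"http://127.0.0.1:{server.server_port}/predict",
    data=json.dumps({"features": [1.0, 2.0, 3.0]}).encode(),
    headers={"Content-Type": "application/json"},
)
response = json.loads(urlopen(req).read())
server.shutdown()
print(response)  # {'prediction': [6.0]}
```

In a real deployment each such service would load one exported .onnx file at startup and be packaged in its own Docker image.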

We've been looking at ONNX a lot at work. Now that we have the open-source ONNX runtime[1] available, it looks like a really good option. The ONNX runtime from MS supports, or claims to eventually support, all of Windows, Mac, and Linux, with APIs in Python, C#, C, and C++ (available or "coming soon").

The biggest "hole" in the ONNX ecosystem, that I see, is that TensorFlow doesn't natively support the format. But there are converters to convert TF models to ONNX, so that shouldn't be a huge issue.

Also, despite the name "Open Neural Network eXchange", ONNX is capable of representing non-neural-network models as well. Just something to keep in mind.

[1]: https://github.com/Microsoft/onnxruntime

Recently there have been more and more initiatives to establish standard formats in the deep learning environment: dlpack for tensor formats (https://github.com/dmlc/dlpack), ONNX for saved NNs (https://github.com/onnx/onnx), and TVM for execution (http://tvmlang.org/2017/10/06/nnvm-compiler-announcement.htm...).
The reference specification and validator are hosted at https://github.com/onnx/onnx
Interesting: it is also MIT-licensed, instead of the usual BSD + Patents license Facebook uses.[1]

It is also under the 'onnx' namespace, presumably because it was a team effort.

It looks like Microsoft often uses MIT (vscode, dotnet, CNTK, ChakraCore), does this mean Microsoft was the driving force (even though it's a Facebook blog)?

* just noticed the link to Microsoft's blog at the bottom of the page [2].

[1] https://github.com/onnx/onnx [2] https://www.microsoft.com/en-us/cognitive-toolkit/blog/2017/...