On M1/M2, it offers convenient single-binary deployment, thanks to Rust. You can find the latest release at https://github.com/TabbyML/tabby/releases/tag/latest
(Disclaimer: I am the author)
Supported models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, LLaMA, T5, Whisper
(Found this library while researching alternatives to Triton/FasterTransformer for Tabby: https://github.com/TabbyML/tabby)
Self-hosted AI coding assistant: an open-source, on-prem alternative to GitHub Copilot.
Project: https://github.com/TabbyML/tabby
Show HN post: https://news.ycombinator.com/item?id=35470915
We have an ambitious roadmap ahead, and we need your help! If you're interested in being an early member of a fast-growing startup team (spoiler: 100% remote, open source, and transparent compensation), please reach out and apply at https://tabbyml.notion.site/Careers-35b1a77f3d1743d9bae06b7d...
Local context is definitely the key factor in small models achieving better quality than Copilot (related: [1], [2]).
One thing I'd really want in Sourcegraph: a Search API that supports custom retrieval/ranking. Research (e.g. [2]) shows that simple BoW-retrieved context is more effective for code completion tasks.
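To illustrate the idea, here's a minimal sketch (not Tabby's or Sourcegraph's actual implementation) of BoW-style retrieval: candidate code snippets are ranked by token overlap with the code near the cursor, using only the standard library.

```python
# Hypothetical sketch of bag-of-words context retrieval for code completion.
# Candidate snippets are scored by identifier-token overlap with the query.
import re
from collections import Counter

def bow(text):
    """Tokenize code into a bag of identifier-like tokens."""
    return Counter(re.findall(r"[A-Za-z_]\w+", text))

def score(query_bow, snippet_bow):
    """Jaccard-style overlap between two bags of tokens."""
    overlap = sum((query_bow & snippet_bow).values())
    total = sum((query_bow | snippet_bow).values())
    return overlap / total if total else 0.0

def retrieve(query, snippets, k=2):
    """Return the top-k snippets ranked by BoW similarity to the query."""
    q = bow(query)
    return sorted(snippets, key=lambda s: score(q, bow(s)), reverse=True)[:k]
```

The appeal is that this kind of retrieval is cheap enough to run on every keystroke, which is why a custom-ranking Search API would be so useful for completion backends.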
Disclaimer: I'm building https://github.com/TabbyML/tabby, an open-source alternative to Copilot.
There’re recent works show a potential of beating Copilot’s performance (e.g https://arxiv.org/abs/2303.12570) with much smaller models (500M vs 10B+).
Inspired by this work, I'm building Tabby (https://github.com/TabbyML/tabby), an OSS GitHub Copilot alternative. Hopefully it can make low-cost AI coding accessible to everyone.