What does HackerNews think of llm?
Run inference for Large Language Models on CPU, with Rust 🦀🚀🦙
Language: Rust
Hi! I'm a maintainer of https://github.com/rustformers/llama-rs. We're planning to expand our model support soon.
I'm curious whether someone will have to port these enhancements elsewhere, i.e. to https://github.com/rustformers/llama-rs
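The comments above refer to the rustformers llm / llama-rs crate. As a rough illustration of what "run inference for LLMs on CPU, with Rust" looks like in practice, here is a minimal sketch modeled on the shape of an early README example from that project; the exact module paths, struct fields, and callback signatures have changed across releases, so treat every identifier below as an assumption rather than the current API.

```rust
use std::io::Write;

fn main() {
    // Load a GGML-format model from disk (path and model type are placeholders).
    let llama = llm::load::<llm::models::Llama>(
        std::path::Path::new("/path/to/model.bin"),
        Default::default(),                  // model/load parameters
        llm::load_progress_callback_stdout,  // progress reporting during load
    )
    .unwrap_or_else(|err| panic!("Failed to load model: {err}"));

    // Start an inference session and stream generated tokens to stdout.
    let mut session = llama.start_session(Default::default());
    let res = session.infer::<std::convert::Infallible>(
        &llama,
        &mut rand::thread_rng(),
        &llm::InferenceRequest {
            prompt: "Rust is a cool programming language because",
            ..Default::default()
        },
        &mut Default::default(), // output request
        |token| {
            print!("{token}");
            std::io::stdout().flush().unwrap();
            Ok(())
        },
    );

    match res {
        Ok(stats) => println!("\n\nInference stats:\n{stats}"),
        Err(err) => println!("\n{err}"),
    }
}
```

This sketch assumes `llm` and `rand` as Cargo dependencies and a GGML-format model file on disk; it is meant only to convey the load-then-infer flow the project description alludes to, not to document any specific release.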