Not a big fan of how server-centric the LLM landscape is. I want something that can run locally and doesn't require any special setup: one install + one model import, maximum. Currently, unless I want to clone git repos, install Python dependencies, and buy an Nvidia GPU, I'm stuck waiting for it to become part of https://webllm.mlc.ai/. That's a website, come to think of it, but at least the computation happens locally with minimal fuss.
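For context, the WebLLM route really is just "load a page, pick a model": the weights are fetched once and inference runs in-browser on WebGPU. A minimal sketch of what that looks like, assuming the @mlc-ai/web-llm npm package and its CreateMLCEngine API (the model ID below is illustrative, not prescriptive):

```ts
// Sketch only: assumes @mlc-ai/web-llm and its OpenAI-style chat API.
import { CreateMLCEngine } from "@mlc-ai/web-llm";

async function main() {
  // Downloads/caches the model weights, then runs inference locally via WebGPU.
  // The model ID is an example; use one from the library's prebuilt model list.
  const engine = await CreateMLCEngine("Llama-3-8B-Instruct-q4f32_1-MLC", {
    initProgressCallback: (p) => console.log(p.text),
  });

  const reply = await engine.chat.completions.create({
    messages: [{ role: "user", content: "Say hello in one sentence." }],
  });
  console.log(reply.choices[0].message.content);
}

main();
```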
> Not a big fan of how server-centric the LLM landscape is.
That's just not true. You can get ooba[1] running in no time, and it's 100% made for desktop usage. There are also koboldcpp and other solutions built for desktop users. In fact, most LLM communities are dominated by end users who run these LLMs on their desktops to roleplay.
AMD being awful is orthogonal here.
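For what it's worth, once ooba or koboldcpp is running, both expose an OpenAI-compatible HTTP API on localhost, so scripting against your local model is a few lines. A sketch, assuming koboldcpp's default port 5001 and the /v1/chat/completions path (adjust for your setup; ooba typically listens on a different port):

```ts
// Sketch only: talks to a local koboldcpp/ooba instance via its
// OpenAI-compatible endpoint. Port and path are assumptions; check your config.
async function chatLocally(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:5001/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      messages: [{ role: "user", content: prompt }],
      max_tokens: 200,
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}

chatLocally("Hello there").then(console.log);
```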