Link should probably be to the announcement post (10 Feb 2023): https://www.kickstarter.com/projects/aiforeveryone/mycroft-m...

TL;DR: Mycroft is now out of money (reportedly due to ongoing patent litigation) and can no longer fulfill pledges, though it's not clear whether you can still buy a full-price unit.

> Which brings us to the present day. We will still be shipping all orders that are made through the Mycroft website, because these sales directly cover the costs of producing and shipping the products. However we do not have the funds to continue fulfilling rewards from this crowdfunding campaign, or to even continue meaningful operations.

The patent troll is what killed the business plan in the end, because there was zero margin. The real problem, it seems to me, is that it's hard to compete with a nicely integrated $30 Echo Dot when people would rather pay with their personal data (myself included). Mycroft was also falling short technically, even for its potential target audience. They had a hard time even without the troll.

If I were doing this, I would repurpose (and potentially resell) off-the-shelf mobile phones with a custom stand, and focus on getting the software right first.

I think this is a sad way to go. Hardware is super expensive to develop, and the key IP here is in the software. There's probably some fancy microphone tech, but ultimately you could (should?) run Mycroft on a Raspberry Pi with an edge accelerator.

Technically, I think we're getting close to a good offline system. Home Assistant solves the practical side of controlling devices and triggering routines - it really works well, at least on par with vendor software. Speech recognition models are usable and getting better. The interesting "glue" is parsing commands into something reasonable, and I think open-source LLMs are likely to be the answer. See Open Assistant [0] for example. That doesn't solve the knowledge-base issue (e.g. people seem to use Alexa either as a very fancy egg timer or for querying facts), but it would probably allow for very natural interaction with the device, and the model could just translate a query into a command that triggers an action.
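To make that concrete, here is a minimal sketch of such a glue layer: a transcribed command goes to a local LLM, which returns a structured intent that is then forwarded to Home Assistant's REST service API. The LLM endpoint, model name, and entity IDs are assumptions for illustration; the /api/services call and bearer-token auth are Home Assistant's actual REST API.

    # Sketch: free-form voice command -> structured intent -> Home Assistant action.
    # Assumptions: a local LLM behind an OpenAI-compatible /v1/chat/completions
    # endpoint, and placeholder URLs/tokens/entity names.
    import json
    import urllib.request

    HA_URL = "http://homeassistant.local:8123"          # assumed Home Assistant address
    HA_TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"            # created in the HA user profile
    LLM_URL = "http://localhost:8080/v1/chat/completions"  # assumed local LLM server

    SYSTEM_PROMPT = (
        "Translate the user's request into JSON with keys "
        "'domain', 'service' and 'entity_id'. Reply with JSON only."
    )

    def command_to_intent(transcript: str) -> dict:
        """Ask the local LLM to map free-form speech to a structured intent."""
        payload = json.dumps({
            "model": "local-model",
            "messages": [
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": transcript},
            ],
        }).encode()
        req = urllib.request.Request(
            LLM_URL, data=payload, headers={"Content-Type": "application/json"}
        )
        with urllib.request.urlopen(req) as resp:
            reply = json.load(resp)
        return json.loads(reply["choices"][0]["message"]["content"])

    def call_home_assistant(intent: dict) -> None:
        """Trigger the matching Home Assistant service, e.g. light.turn_on."""
        url = f"{HA_URL}/api/services/{intent['domain']}/{intent['service']}"
        payload = json.dumps({"entity_id": intent["entity_id"]}).encode()
        req = urllib.request.Request(
            url,
            data=payload,
            headers={
                "Authorization": f"Bearer {HA_TOKEN}",
                "Content-Type": "application/json",
            },
        )
        urllib.request.urlopen(req).read()

    if __name__ == "__main__":
        # transcript as produced by an offline speech recognition model
        intent = command_to_intent("turn on the kitchen lights")
        # expected something like:
        # {"domain": "light", "service": "turn_on", "entity_id": "light.kitchen"}
        call_home_assistant(intent)

The point is that the LLM only has to emit a tiny, constrained JSON vocabulary; Home Assistant already knows how to do the rest.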

My impression is that Amazon's approach (I have no knowledge of what goes on inside) has been to build fairly complicated subsystems to handle very specific queries. Each "Skill" has to be separately developed and if you ask something outside a fairly rigid grammar, the device has no idea.
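For contrast, here is a purely illustrative pattern-based matcher (not Amazon's actual implementation) showing why that approach breaks down: anything phrased outside the hard-coded grammar simply falls through.

    # Illustration only: a rigid, regex-based intent matcher.
    import re

    PATTERNS = {
        r"^set a timer for (\d+) minutes$": "SetTimerIntent",
        r"^what('s| is) the weather( like)?( today)?$": "WeatherIntent",
    }

    def match_intent(utterance: str):
        for pattern, intent in PATTERNS.items():
            if re.match(pattern, utterance.strip().lower()):
                return intent
        return None  # out-of-grammar query: the device "has no idea"

    print(match_intent("set a timer for 5 minutes"))     # SetTimerIntent
    print(match_intent("remind me in five-ish minutes")) # None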

[0] https://github.com/LAION-AI/Open-Assistant