What does HackerNews think of RWKV-LM?
RWKV is an RNN with transformer-level LLM performance. It can be directly trained like a GPT (parallelizable). So it's combining the best of RNN and transformer - great performance, fast inference, saves VRAM, fast training, "infinite" ctx_len, and free sentence embedding.
Wow. Pretty darn cool! <3 :'))))
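For a sense of why inference stays cheap: the attention-like "WKV" term has an equivalent recurrent form that carries only a fixed-size state from token to token. Here is a simplified per-channel sketch (variable names are mine, and it ignores the numerical-stability tricks the real code uses):

    import numpy as np

    def wkv_step(k_t, v_t, state, w, u):
        """One recurrent step of a simplified RWKV time-mixing term (per channel).

        k_t, v_t : key/value for the current token
        state    : (a, b) running numerator/denominator carried across tokens
        w, u     : learned decay and current-token bonus
        """
        a, b = state
        # Output blends the decayed history with the current token.
        wkv = (a + np.exp(u + k_t) * v_t) / (b + np.exp(u + k_t))
        # Update the running sums; memory stays O(channels), not O(context length).
        a = np.exp(-w) * a + np.exp(k_t) * v_t
        b = np.exp(-w) * b + np.exp(k_t)
        return wkv, (a, b)

The same quantity can also be written as a softmax-like weighted sum over all previous positions, which is the form that allows GPT-style parallel training.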
Moreover, the inability to cache a transformer's processed context between calls makes large context windows quite costly, as all previous messages must be re-sent and re-processed with each call. In this context, the RWKV-LM project on GitHub (https://github.com/BlinkDL/RWKV-LM) might offer a solution. They claim to achieve performance comparable to transformers using an RNN, which could potentially handle a 100-page document once, cache the resulting state, and thereby eliminate the need to process the entire document with each subsequent query. However, I suspect RWKV might fall short on complex tasks that require keeping multiple variables in memory, such as mathematical computations, but it should suffice for many scenarios.
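To make the caching idea concrete, here is a rough sketch of that usage pattern with a toy numpy RNN standing in for RWKV; the class and its forward() method are mine for illustration, not the project's actual API:

    import numpy as np

    class ToyRNN:
        """Stand-in for an RWKV-like model: fixed-size state, tokens fed one at a time."""
        def __init__(self, vocab=256, dim=64, seed=0):
            rng = np.random.default_rng(seed)
            self.emb = rng.normal(size=(vocab, dim))
            self.w_h = rng.normal(size=(dim, dim)) * 0.01
            self.w_out = rng.normal(size=(dim, vocab)) * 0.01

        def forward(self, tokens, state=None):
            h = np.zeros(self.emb.shape[1]) if state is None else state
            for t in tokens:
                h = np.tanh(self.emb[t] + self.w_h @ h)  # O(dim) memory regardless of context length
            return h @ self.w_out, h                     # (logits, new state)

    model = ToyRNN()

    # Pay the cost of reading the long document exactly once and keep the state.
    doc_tokens = list(b"...imagine 100 pages of text here...")
    _, doc_state = model.forward(doc_tokens)

    # Each follow-up question starts from the cached state instead of
    # re-sending the whole document, as a stateless transformer API would require.
    for question in (b"What is section 3 about?", b"Summarize the conclusion."):
        logits, _ = model.forward(list(question), state=doc_state.copy())

The point is only the shape of the interaction: the per-token state is a fixed-size vector, so a processed document can be stored once and reused across queries.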
On a related note, I believe Anthropic's Claude is somewhat underappreciated. In some instances it outperforms GPT4, and I'd rank it somewhere between GPT4 and Bard overall.
Not necessarily.
RWKV, for example, is a different architecture that wasn't based on Facebook's weights whatsoever. I don't know where BlinkDL (the author) got the training data, but they seem to have done everything mostly independently otherwise.
https://github.com/BlinkDL/RWKV-LM
disclaimer: I've been doing a lot of work lately on an implementation of CPU inference for this model, so I'm obviously somewhat biased, since this is the model I have the most experience with.
The dataset is interesting. It's not dissimilar to The Pile, which RWKV is already trained on, but it does seem to involve quite a lot more preprocessing to increase dataset quality.
But they reference https://arxiv.org/abs/2105.14103 (An Attention Free Transformer), and the bottom section of https://github.com/BlinkDL/RWKV-LM has an explainer.
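For reference, the time-mixing term described in that explainer is, roughly (this is my own transcription of the RWKV-4 form; it applies per channel, with learned decay w and current-token bonus u):

    \mathrm{wkv}_t = \frac{\sum_{i=1}^{t-1} e^{-(t-1-i)w + k_i}\, v_i + e^{u + k_t}\, v_t}
                          {\sum_{i=1}^{t-1} e^{-(t-1-i)w + k_i} + e^{u + k_t}}

That is essentially the attention-free-transformer-style weighting specialized to a distance-based decay, and it is what collapses into a constant-size recurrence at inference time.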