RWKV-X: A Linear Complexity Hybrid Language Model
By: Haowen Hou, Zhiyi Huang, Kaifeng Tan, and more
Potential Business Impact:
Lets computers understand very long stories.
In this paper, we introduce RWKV-X, a novel hybrid architecture that combines the efficiency of RWKV for short-range modeling with a sparse attention mechanism designed to capture long-range context. Unlike previous hybrid approaches that rely on full attention layers and retain quadratic complexity, RWKV-X achieves linear-time complexity in training and constant-time complexity in inference decoding. We demonstrate that RWKV-X, when continually pretrained on 64K-token sequences, achieves near-perfect accuracy on the 64K passkey retrieval benchmark. It consistently outperforms prior RWKV-7 models on long-context benchmarks, while maintaining strong performance on short-context tasks. These results highlight RWKV-X as a scalable and efficient backbone for general-purpose language modeling, capable of decoding sequences up to 1 million tokens with stable speed and memory usage. To facilitate further research and analysis, we have made the checkpoints and the associated code publicly accessible at: https://github.com/howard-hou/RWKV-X.
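To make the hybrid idea concrete, below is a minimal sketch (not the paper's implementation) of a stack that interleaves linear-time recurrent blocks, standing in for RWKV layers, with an occasional sparse attention block for long-range retrieval. The block names, the interleave ratio, the gated-EMA recurrence, and the top-k key selection are all assumptions made for illustration; the actual RWKV-X architecture and kernels are in the repository linked above.

```python
# Hypothetical sketch of an RWKV-X-style hybrid: recurrent (linear-time) blocks
# plus sparse attention. This is NOT the authors' code; all specifics are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F


class RecurrentBlock(nn.Module):
    """Stand-in for an RWKV-style block: constant-size state, O(T) over the sequence."""

    def __init__(self, dim: int):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.decay = nn.Parameter(torch.zeros(dim))   # learned per-channel decay
        self.proj_in = nn.Linear(dim, dim)
        self.proj_out = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, dim). A gated EMA recurrence: the state is a fixed-size
        # vector, so per-token decoding cost does not grow with context length.
        h = self.proj_in(self.norm(x))
        decay = torch.sigmoid(self.decay)             # keep decay in (0, 1)
        state = torch.zeros_like(h[:, 0])
        outputs = []
        for t in range(h.size(1)):
            state = decay * state + (1.0 - decay) * h[:, t]
            outputs.append(state)
        return x + self.proj_out(torch.stack(outputs, dim=1))


class SparseAttentionBlock(nn.Module):
    """Causal attention where each query keeps only its top-k highest-scoring keys."""

    def __init__(self, dim: int, top_k: int = 64):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.qkv = nn.Linear(dim, 3 * dim)
        self.out = nn.Linear(dim, dim)
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        B, T, D = x.shape
        q, k, v = self.qkv(self.norm(x)).chunk(3, dim=-1)
        scores = q @ k.transpose(-2, -1) / (D ** 0.5)          # (B, T, T)
        causal = torch.triu(torch.ones(T, T, dtype=torch.bool, device=x.device), 1)
        scores = scores.masked_fill(causal, float("-inf"))
        # Keep only the top-k keys per query; everything else is masked out.
        # For clarity this sketch materializes the full score matrix; an efficient
        # kernel would compute scores only for the selected blocks.
        k_eff = min(self.top_k, T)
        kth = scores.topk(k_eff, dim=-1).values[..., -1:]      # k-th largest per row
        scores = scores.masked_fill(scores < kth, float("-inf"))
        attn = F.softmax(scores, dim=-1)
        return x + self.out(attn @ v)


class HybridStack(nn.Module):
    """Interleave recurrent blocks with an occasional sparse-attention block."""

    def __init__(self, dim: int = 256, depth: int = 8, attn_every: int = 4):
        super().__init__()
        self.layers = nn.ModuleList(
            [SparseAttentionBlock(dim) if (i + 1) % attn_every == 0 else RecurrentBlock(dim)
             for i in range(depth)]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for layer in self.layers:
            x = layer(x)
        return x


if __name__ == "__main__":
    model = HybridStack()
    tokens = torch.randn(2, 128, 256)        # (batch, seq_len, dim) dummy embeddings
    print(model(tokens).shape)                # torch.Size([2, 128, 256])
```

The design intent this sketch illustrates: most layers carry a constant-size recurrent state (cheap short-range modeling and O(1) per-token decoding), while the occasional sparse attention layer gives queries a path to distant tokens without the quadratic cost of full attention at every layer.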
Similar Papers
Enhancing RWKV-based Language Models for Long-Sequence Text Generation
Computation and Language
Makes computers write longer, smarter stories.
Cross-attention for State-based model RWKV-7
CV and Pattern Recognition
Makes pictures match words better.
RocketKV: Accelerating Long-Context LLM Inference via Two-Stage KV Cache Compression
Computation and Language
Makes AI understand long texts faster, using less memory.