Block Sparse Flash Attention
By: Daniel Ohayon, Itay Lamprecht, Itay Hubara, and more
Potential Business Impact:
Makes AI understand long texts much faster.
Modern large language models increasingly require long contexts for reasoning and multi-document tasks, but attention's quadratic complexity creates a severe computational bottleneck. We present Block-Sparse FlashAttention (BSFA), a drop-in replacement that accelerates long-context inference while preserving model quality. Unlike methods that predict importance before computing scores, BSFA computes exact query-key similarities and uses them to select the top-k most important value blocks for each query. By comparing per-block maximum scores against calibrated thresholds, it skips approximately 50% of the computation and memory transfers for pruned blocks. The approach is training-free, requiring only a one-time threshold calibration on a small dataset to learn the per-layer, per-head attention score distributions. We provide a CUDA kernel implementation that can be used as a drop-in replacement for FlashAttention. On Llama-3.1-8B, BSFA achieves up to 1.10x speedup on real-world reasoning benchmarks and up to 1.24x on needle-in-a-haystack retrieval tasks while maintaining over 99% of baseline accuracy; certain configurations even improve accuracy by focusing on the most relevant content. BSFA substantially outperforms existing sparse attention methods. The implementation is available at https://github.com/Danielohayon/Block-Sparse-Flash-Attention.
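To make the pruning rule concrete, below is a minimal, non-causal PyTorch sketch of threshold-based block skipping for a single attention head. It is not the paper's CUDA kernel: the function name `block_sparse_attention_sketch`, the block size, and the scalar `threshold` (standing in for the calibrated per-layer, per-head thresholds) are illustrative assumptions. Exact query-key scores are computed for every block, and the softmax/value work plus the value-block load are skipped when a block's maximum score falls below the threshold.

```python
# Minimal, non-causal sketch of per-block threshold pruning for one attention
# head. Illustration only, not the paper's CUDA kernel; block_size and
# threshold are assumed values (the paper calibrates per-layer, per-head
# thresholds on a small dataset).
import torch
import torch.nn.functional as F

def block_sparse_attention_sketch(q, k, v, block_size=128, threshold=0.0):
    # q, k, v: (seq_len, head_dim) for a single head.
    seq_len, head_dim = q.shape
    scale = head_dim ** -0.5
    out = torch.zeros_like(q)

    for q_start in range(0, seq_len, block_size):
        q_blk = q[q_start:q_start + block_size]
        kept_scores, kept_values = [], []

        for k_start in range(0, seq_len, block_size):
            k_blk = k[k_start:k_start + block_size]
            s = (q_blk @ k_blk.T) * scale  # exact query-key scores for this block

            # Pruning rule: if the block's maximum score is below the
            # calibrated threshold, skip its softmax/value work and the
            # corresponding V memory transfer.
            if s.max() < threshold:
                continue
            kept_scores.append(s)
            kept_values.append(v[k_start:k_start + block_size])

        if kept_scores:
            s_all = torch.cat(kept_scores, dim=-1)  # scores over kept blocks only
            p = F.softmax(s_all, dim=-1)
            out[q_start:q_start + block_size] = p @ torch.cat(kept_values, dim=0)
    return out

# Example usage: 4k-token sequence, 128-dim head.
q = torch.randn(4096, 128)
k = torch.randn(4096, 128)
v = torch.randn(4096, 128)
o = block_sparse_attention_sketch(q, k, v)
```

In the paper's kernel this check would presumably be fused into FlashAttention's tiled online-softmax loop, so that pruning a block saves the probability-times-value matmul and the value-block load while the query-key scores are still computed exactly, which is consistent with the roughly 50% savings per pruned block described in the abstract.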
Similar Papers
Flash Sparse Attention: An Alternative Efficient Implementation of Native Sparse Attention Kernel
Distributed, Parallel, and Cluster Computing
Makes AI understand more words faster.
Optimizing Mixture of Block Attention
Machine Learning (CS)
Makes AI understand long texts much faster.
SSA: Sparse Sparse Attention by Aligning Full and Sparse Attention Outputs in Feature Space
Computation and Language
Makes AI understand long stories better, faster.