Elastic Attention: Test-time Adaptive Sparsity Ratios for Efficient Transformers
arXiv:2601.17367v1 Announce Type: new
Abstract: The quadratic complexity of standard attention mechanisms creates a significant scalability bottleneck for large language models (LLMs) in long-context scenarios....
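The abstract is cut off before any method details, so what follows is only a minimal Python (PyTorch) sketch of the general idea the title names: sparse attention where the fraction of attended key positions is chosen per input at inference time. The top-k selection rule, the length-based ratio policy, and the function sparse_attention are all illustrative assumptions, not the paper's algorithm.

import torch
import torch.nn.functional as F

def sparse_attention(q, k, v, sparsity_ratio):
    # q, k, v: (batch, heads, seq_len, head_dim); sparsity_ratio in [0, 1).
    n, d = q.shape[-2], q.shape[-1]
    keep = max(1, int(n * (1.0 - sparsity_ratio)))   # keys kept per query
    scores = q @ k.transpose(-2, -1) / d ** 0.5      # (b, h, n, n)
    # Keep the top-`keep` keys per query; mask out the rest before softmax.
    # Computing the full score matrix first means this version only
    # illustrates the selection rule, not the compute/memory savings a
    # real sparse kernel would provide.
    idx = scores.topk(keep, dim=-1).indices
    mask = torch.full_like(scores, float("-inf"))
    mask.scatter_(-1, idx, 0.0)
    attn = F.softmax(scores + mask, dim=-1)
    return attn @ v

# Hypothetical test-time policy: use a sparser ratio for longer inputs.
# The paper's actual adaptation rule is not given by the truncated abstract.
q = k = v = torch.randn(1, 4, 1024, 64)
ratio = 0.9 if q.shape[-2] > 512 else 0.5
out = sparse_attention(q, k, v, ratio)   # (1, 4, 1024, 64)

A practical implementation would avoid materializing the dense n-by-n score matrix (e.g., via a block-sparse or top-k attention kernel); the sketch above only shows how an input-dependent sparsity ratio could plug into the attention computation.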