Score: 1

Efficient Inter-Task Attention for Multitask Transformer Models

Published: August 6, 2025 | arXiv ID: 2508.04422v1

By: Christian Bohn, Thomas Kurbiel, Klaus Friedrichs, and more

Potential Business Impact:

Enables a single AI model to handle many vision tasks at once with an order-of-magnitude less computation and faster inference, while also improving prediction quality.

In both Computer Vision and the wider Deep Learning field, the Transformer architecture is well established as state-of-the-art for many applications. For Multitask Learning, however, where far more queries may be needed than in single-task models, its Multi-Head Attention often approaches the limits of what is computationally feasible on practical hardware. This is because the size of the attention matrix scales quadratically with the number of tasks (assuming roughly equal numbers of queries per task). As a solution, we propose a novel Deformable Inter-Task Self-Attention for Multitask models that enables much more efficient aggregation of information across the feature maps of different tasks. In our experiments on the NYUD-v2 and PASCAL-Context datasets, we demonstrate an order-of-magnitude reduction in both FLOP count and inference latency. At the same time, we achieve substantial improvements of up to 7.4% in the individual tasks' prediction-quality metrics.
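The abstract does not spell out the mechanism, but the general idea can be illustrated with a minimal PyTorch sketch in the spirit of deformable attention (as in Deformable DETR): instead of every query attending to all H×W positions of every task's feature map, each query bilinearly samples a handful of learned locations per task and combines them with predicted weights. The module name `DeformableInterTaskAttention`, the offset and weight projections, and the number of sampling points below are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of deformable inter-task attention (assumed design,
# not the paper's exact formulation).
import torch
import torch.nn as nn
import torch.nn.functional as F


class DeformableInterTaskAttention(nn.Module):
    """Each query samples a few learned points from every task's feature map
    instead of attending to all H*W positions of every task."""

    def __init__(self, dim: int, num_tasks: int, num_points: int = 4):
        super().__init__()
        self.num_tasks = num_tasks
        self.num_points = num_points
        # Per query: 2D sampling offsets and a scalar weight for every
        # (task, point) pair, both predicted from the query feature.
        self.offset_proj = nn.Linear(dim, num_tasks * num_points * 2)
        self.weight_proj = nn.Linear(dim, num_tasks * num_points)
        self.out_proj = nn.Linear(dim, dim)

    def forward(self, task_feats: torch.Tensor) -> torch.Tensor:
        # task_feats: (B, T, C, H, W) -- one feature map per task.
        B, T, C, H, W = task_feats.shape
        P = self.num_points
        # Flatten all task feature maps into one query sequence (task-major).
        queries = task_feats.permute(0, 1, 3, 4, 2).reshape(B, T * H * W, C)

        offsets = self.offset_proj(queries).view(B, T * H * W, T, P, 2)
        weights = self.weight_proj(queries).view(B, T * H * W, T * P)
        weights = weights.softmax(dim=-1).view(B, T * H * W, T, P)

        # Reference point of each query: its own spatial location in [-1, 1].
        ys, xs = torch.meshgrid(
            torch.linspace(-1.0, 1.0, H, device=task_feats.device),
            torch.linspace(-1.0, 1.0, W, device=task_feats.device),
            indexing="ij",
        )
        ref = torch.stack([xs, ys], dim=-1).reshape(H * W, 2)  # (HW, 2) as (x, y)
        ref = ref.repeat(T, 1).view(1, T * H * W, 1, 2).expand(B, -1, -1, -1)

        out = queries.new_zeros(B, T * H * W, C)
        for t in range(T):
            # Bilinearly sample P points per query from task t's feature map.
            loc = (ref + offsets[:, :, t]).clamp(-1.0, 1.0)       # (B, Q, P, 2)
            sampled = F.grid_sample(
                task_feats[:, t], loc, mode="bilinear", align_corners=False
            )                                                      # (B, C, Q, P)
            w = weights[:, :, t].unsqueeze(1)                      # (B, 1, Q, P)
            out = out + (sampled * w).sum(dim=-1).permute(0, 2, 1)

        out = self.out_proj(out)                                   # (B, Q, C)
        return out.view(B, T, H, W, C).permute(0, 1, 4, 2, 3)      # (B, T, C, H, W)


# Example: 4 tasks with 64-channel feature maps of size 32x32.
attn = DeformableInterTaskAttention(dim=64, num_tasks=4)
x = torch.randn(2, 4, 64, 32, 32)
y = attn(x)  # -> (2, 4, 64, 32, 32)
```

With T tasks, N = H·W positions per task, and P sampled points per query, each query compares against T·P sampled values instead of T·N keys, which is where the bulk of the FLOPs reduction comes from in this style of attention; the paper's actual savings may be realized differently.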

Country of Origin
🇩🇪 Germany

Page Count
15 pages

Category
Computer Science:
CV and Pattern Recognition