Fundamental Limits of Distributed Computing for Linearly Separable Functions
By: K. K. Krishnan Namboodiri, Elizabath Peter, Derya Malak, and more
Potential Business Impact:
Makes computers share data faster and cheaper.
This work addresses the problem of distributed computation of linearly separable functions, where a master node with access to $K$ datasets employs $N$ servers to compute $L$ user-requested functions, each defined over the datasets. Servers are instructed to compute subfunctions of the datasets and must communicate the computed outputs to the user, who reconstructs the requested function values. The central challenge is to reduce the per-server computational load and the communication cost from the servers to the user while ensuring recovery of any possible set of $L$ demanded functions. We establish the fundamental communication-computation tradeoffs for arbitrary $K$ and $L$ through novel task-assignment and communication strategies that, under the linear-encoding and no-subpacketization assumptions, are proven to be either exactly optimal or within a factor of three of the optimum. In contrast to prior approaches that relied on fixed task assignments, either disjoint or cyclic, our key innovation is a nullspace-based design that jointly governs task assignment and server transmissions, ensuring exact decodability for all demands and attaining optimality over all assignment and delivery methods. To prove this optimality, we uncover a duality between nullspaces and sparse matrix factorizations, which lets us recast the distributed computing problem as an equivalent factorization task and derive a sharp information-theoretic converse bound. Building on this, we establish an additional converse that, for the first time, links the communication cost to the covering number from the theory of general covering designs.
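To make the setup concrete, here is a minimal numerical sketch of the linear-encoding model, not the authors' construction: the parameters ($K=6$, $N=3$, $L=2$), the cyclic assignment, and the random encoding coefficients are all illustrative assumptions. It shows the two equivalent decodability views mentioned in the abstract: the demand matrix $F$ factoring as $F = GE$ through the servers' sparse encoding matrix $E$ (the sparse-factorization view), and $F$ annihilating the nullspace of $E$ (the nullspace view).

```python
import numpy as np

rng = np.random.default_rng(0)

K, N, L = 6, 3, 2      # datasets, servers, demanded functions
gamma = 4              # per-server computational load (subfunctions computed)

# Illustrative cyclic assignment: server n computes the subfunction
# outputs W_k of gamma consecutive datasets (indices mod K).
assign = [[(2 * n + j) % K for j in range(gamma)] for n in range(N)]

# Each server transmits one linear combination of its own subfunction
# outputs: row n of E is zero outside assign[n] (linear encoding, no
# subpacketization), so the communication cost here is N symbols.
E = np.zeros((N, K))
for n in range(N):
    E[n, assign[n]] = rng.standard_normal(gamma)

# Demands chosen to lie in the row space of E so this example decodes;
# the actual scheme must guarantee this for *every* demand matrix F.
F = rng.standard_normal((L, N)) @ E      # F is L x K

# Factorization test: F = G E for some decoding matrix G (the sparse
# matrix factorization underlying the converse bound).
G = np.linalg.lstsq(E.T, F.T, rcond=None)[0].T
print("F = G E holds:", np.allclose(G @ E, F))

# Equivalent nullspace test: F decodes iff F annihilates null(E),
# since rowspace(E) is the orthogonal complement of null(E).
_, s, Vt = np.linalg.svd(E)
null_basis = Vt[np.sum(s > 1e-10):]      # basis rows of null(E)
print("nullspace test:", np.allclose(F @ null_basis.T, 0, atol=1e-8))
```

In this toy instance each server computes $\gamma = 4$ of the $K = 6$ subfunctions and sends one symbol, so $L = 2$ demands are recovered from $N = 3$ transmissions; the paper's contribution is characterizing the optimal such load-communication pairs over all assignments and linear delivery schemes.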
Similar Papers
Typical Solutions of Multi-User Linearly-Decomposable Distributed Computing
Information Theory
Helps planes and satellites share information better.
Byzantine-Resilient Distributed Computation via Task Replication and Local Computations
Information Theory
Makes computers work together even with bad helpers.
Distributed Source Coding for Compressing Vector-Linear Functions
Information Theory
Makes computers share data more efficiently for computing tasks.