Tradeoffs on the volume of fault-tolerant circuits
By: Anirudh Krishna, Gilles Zémor
Potential Business Impact:
Shows the hidden costs of making computers work even with broken parts.
Since the seminal work of von Neumann [von Neumann, Automata Studies, 1956], it has been known that error-correcting codes can overcome faulty circuit components and enable robust computation. Choosing an appropriate code is non-trivial, as it must balance several requirements. Increasing the rate of the code reduces the relative number of redundant bits used in the fault-tolerant circuit, while increasing the distance of the code ensures robustness against faults. If the rate and distance were the only concerns, we could use asymptotically optimal codes, as is done in communication settings. However, choosing a code for computation imposes an additional requirement: the code must keep the encoded information accessible, so that one can compute on encoded data. This requirement seems to conflict with having large rate and distance. We prove that this is indeed the case: a code family cannot simultaneously have constant rate, growing distance, and short-depth gadgets for performing encoded CNOT gates. As a consequence, achieving good rate and distance may force us to accept very deep circuits, an undesirable trade-off for certain architectures and applications.
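As a rough illustration (not a verbatim statement of the paper's theorem), the no-go claim above can be phrased using the standard notation [[n_i, k_i, d_i]] for the length, dimension, and distance of the i-th code in a family, with D_i denoting the depth of its encoded-CNOT gadget; this notation is introduced here for exposition, and the precise quantifiers and constants in the paper may differ:

\[
  \Bigl(\exists\, \rho > 0 :\ \tfrac{k_i}{n_i} \ge \rho \ \text{for all } i\Bigr)
  \;\wedge\;
  \bigl(d_i \to \infty\bigr)
  \;\Longrightarrow\;
  D_i \neq O(1).
\]

In words: if the rate stays bounded away from zero and the distance keeps growing, then the depth of the logical CNOT gadgets must also grow with the code size, which is the "very deep circuits" cost mentioned above.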
Similar Papers
Machine learning discovers new champion codes
Information Theory
Finds better ways to fix digital mistakes.
Function-Correcting Codes for Insertion-Deletion Channel
Information Theory
Saves space when storing computer information.
Capacity-Achieving Codes with Inverse-Ackermann-Depth Encoders
Information Theory
Makes computers fix errors in messages faster.