Can Transformers Break Encryption Schemes via In-Context Learning?
By: Jathin Korrapati, Patrick Mendoza, Aditya Tomar, and more
Potential Business Impact:
Teaches computers to break secret codes.
In-context learning (ICL) has emerged as a powerful capability of transformer-based language models, enabling them to perform tasks by conditioning on a small number of examples presented at inference time, without any parameter updates. Prior work has shown that transformers can generalize over simple function classes such as linear functions, decision trees, and even neural networks purely from context, with a focus on numerical or symbolic reasoning over well-structured underlying functions. Instead, we propose a novel application of ICL to the domain of cryptographic function learning, focusing specifically on mono-alphabetic substitution and Vigenère ciphers, two classes of private-key encryption schemes. These ciphers involve a fixed but hidden bijective mapping between plaintext and ciphertext characters. Given a small set of (ciphertext, plaintext) pairs, the goal is for the model to infer the underlying substitution and decode a new ciphertext word. This setting poses a structured inference challenge that is well suited for evaluating the inductive biases and generalization capabilities of transformers under the ICL paradigm. Code is available at https://github.com/adistomar/CS182-project.
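The task setup in the abstract can be made concrete with a small illustration. The sketch below is not the authors' released code; it assumes lowercase single-word examples, and names such as `build_icl_prompt` are hypothetical. It shows how a random mono-alphabetic substitution key or a Vigenère keyword could be used to generate (ciphertext, plaintext) demonstrations and format them into an in-context prompt ending with a query ciphertext for the model to decode.

```python
import random
import string


def random_substitution_key(seed=None):
    """Sample a random bijection over lowercase letters (a mono-alphabetic key)."""
    rng = random.Random(seed)
    letters = list(string.ascii_lowercase)
    shuffled = letters[:]
    rng.shuffle(shuffled)
    return dict(zip(letters, shuffled))


def substitution_encrypt(plaintext, key):
    """Apply the fixed character-level mapping to every letter."""
    return "".join(key.get(c, c) for c in plaintext)


def vigenere_encrypt(plaintext, keyword):
    """Shift each letter by the corresponding keyword letter, repeating the keyword."""
    out = []
    for i, c in enumerate(plaintext):
        shift = ord(keyword[i % len(keyword)]) - ord("a")
        out.append(chr((ord(c) - ord("a") + shift) % 26 + ord("a")))
    return "".join(out)


def build_icl_prompt(demo_words, encrypt, query_word):
    """Format (ciphertext, plaintext) demonstrations followed by a query ciphertext."""
    lines = [f"{encrypt(w)} -> {w}" for w in demo_words]
    lines.append(f"{encrypt(query_word)} ->")  # the model must fill in the plaintext
    return "\n".join(lines)


if __name__ == "__main__":
    key = random_substitution_key(seed=0)
    demos = ["apple", "banana", "cherry"]
    print(build_icl_prompt(demos, lambda w: substitution_encrypt(w, key), "grape"))
```

Under this formatting, each prompt fixes one hidden key across all demonstrations, so the model can only succeed by inferring the character-level mapping from the context rather than memorizing any single pair.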
Similar Papers
ICL CIPHERS: Quantifying "Learning" in In-Context Learning via Substitution Ciphers
Computation and Language
Helps computers learn by hiding and revealing patterns.
A Simple Generalisation of the Implicit Dynamics of In-Context Learning
Machine Learning (CS)
Teaches computers to learn from examples without changing them.
Understanding the Generalization of In-Context Learning in Transformers: An Empirical Study
Machine Learning (CS)
Teaches computers to learn better from examples.