Geometric Reasoning in the Embedding Space
By: Jan Hůla, David Mojžíšek, Jiří Janeček and more
Potential Business Impact:
Computers learn to draw shapes from clues.
In this contribution, we demonstrate that Graph Neural Networks and Transformers can learn to reason about geometric constraints. We train them to predict the spatial positions of points on a discrete 2D grid from a set of constraints that uniquely describes hidden figures containing these points. Both models learn to predict the positions of the points and, interestingly, form the hidden figures described by the input constraints in the embedding space during the reasoning process. Our analysis shows that both models recover the grid structure during training: the embeddings corresponding to the grid points organize themselves in a 2D subspace and reflect the neighborhood structure of the grid. We also show that the Graph Neural Network we design for the task performs significantly better than the Transformer and is easier to scale.
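To make the embedding analysis concrete, here is a minimal, hypothetical sketch of the kind of check described above: project learned point embeddings onto their top two principal components and test whether the projection mirrors the grid's neighborhood structure. The grid size, embedding dimension, the synthetic `embeddings` array, and the PCA-plus-distance-correlation procedure are illustrative assumptions, not the authors' exact analysis.

```python
import numpy as np

GRID, DIM = 8, 64                      # assumed 8x8 grid and 64-dim embeddings
rng = np.random.default_rng(0)

# Stand-in for embeddings taken from a trained model: here we fake "grid-aware"
# vectors by planting the (x, y) coordinates along two random directions plus noise.
coords = np.array([(x, y) for x in range(GRID) for y in range(GRID)], dtype=float)
basis = rng.normal(size=(2, DIM))
embeddings = coords @ basis + 0.1 * rng.normal(size=(GRID * GRID, DIM))

# PCA via SVD: the top-2 principal components span the recovered 2D subspace.
centered = embeddings - embeddings.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
projected = centered @ vt[:2].T        # (GRID*GRID, 2) layout of the points

# Compare pairwise distances in the projection with true grid distances.
def pdist(points):
    diff = points[:, None, :] - points[None, :, :]
    return np.sqrt((diff ** 2).sum(-1))

d_grid, d_emb = pdist(coords), pdist(projected)
mask = ~np.eye(len(coords), dtype=bool)
corr = np.corrcoef(d_grid[mask], d_emb[mask])[0, 1]
print(f"distance correlation between grid and projected embeddings: {corr:.3f}")
```

A correlation close to 1 would indicate that the 2D subspace spanned by the leading principal components preserves the grid's geometry, which is the flavor of evidence the abstract refers to.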
Similar Papers
Hierarchical Geometry of Cognitive States in Transformer Embedding Spaces
Computation and Language
Computers learn how people think and organize ideas.
Fully Geometric Multi-Hop Reasoning on Knowledge Graphs with Transitive Relations
Artificial Intelligence
Makes computers understand complex questions better.
Training Neural Networks by Optimizing Neuron Positions
Machine Learning (CS)
Makes smart computer brains smaller and faster.