Text Simplification with Sentence Embeddings
By: Matthew Shardlow
Potential Business Impact:
Automatically rewrites hard-to-read text so it is easier to understand.
Sentence embeddings can be decoded to give approximations of the original texts used to create them. We explore this effect in the context of text simplification, demonstrating that reconstructed text embeddings preserve complexity levels. We experiment with a small feed-forward neural network to effectively learn a transformation between sentence embeddings representing high-complexity and low-complexity texts. We provide a comparison to Seq2Seq and LLM-based approaches, showing encouraging results in our much smaller learning setting. Finally, we demonstrate the applicability of our transformation to an unseen simplification dataset (MedEASI), as well as to datasets in languages outside the training data (ES, DE). We conclude that learning transformations in sentence embedding space is a promising direction for future research, with the potential to unlock the development of small but powerful models for text simplification and other natural language generation tasks.
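The core idea is compact enough to sketch. Below is a minimal, hypothetical PyTorch illustration of the setup the abstract describes: a small feed-forward network trained to map embeddings of complex sentences onto embeddings of their simplified counterparts. The embedding dimension, network sizes, hyperparameters, and random stand-in data are all illustrative assumptions, not the paper's reported settings; in the full pipeline, the predicted embedding would then be decoded back to text with an embedding-inversion model.

```python
import torch
import torch.nn as nn

# Sketch only: learn a mapping from embeddings of high-complexity
# sentences to embeddings of their low-complexity counterparts.
# EMB_DIM and the hidden size are assumed values, not the paper's.
EMB_DIM = 768


class SimplificationMLP(nn.Module):
    """Small feed-forward network operating purely in embedding space."""

    def __init__(self, dim: int = EMB_DIM, hidden: int = 1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


# Stand-in data: in practice these would be sentence embeddings of
# aligned complex/simple pairs from a simplification corpus.
complex_emb = torch.randn(256, EMB_DIM)
simple_emb = torch.randn(256, EMB_DIM)

model = SimplificationMLP()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Regress the transformed complex embedding onto the simple embedding.
for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(complex_emb), simple_emb)
    loss.backward()
    optimizer.step()

# At inference time, model(complex_emb) yields an embedding in the
# low-complexity region of the space, which an embedding-to-text
# decoder (e.g., a vec2text-style inverter) would turn back into text.
```

Training in embedding space like this is what keeps the learned model small: the network never touches tokens, so all of the language knowledge lives in the frozen encoder and decoder.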
Similar Papers
Mechanistic Decomposition of Sentence Representations
Computation and Language
Explains how computers build their understanding of sentences.
Sentence Embeddings as an intermediate target in end-to-end summarisation
Computation and Language
Summarizes long reviews better by picking key sentences.
Enhancing Recommender Systems Using Textual Embeddings from Pre-trained Language Models
Information Retrieval
Helps movie suggestions better match what you like.