From Text to Graph: Leveraging Graph Neural Networks for Enhanced Explainability in NLP
By: Fabio Yáñez-Romero, Andrés Montoyo, Armando Suárez, and more
Potential Business Impact:
Turns sentences into graphs to explain AI decisions.
Researchers have largely delegated natural language processing tasks to Transformer-based models, particularly generative ones, because of their versatility across generation and classification tasks, and these models achieve outstanding results as their size grows. Given their widespread use, many explainability techniques have been developed for them. However, explainability becomes computationally expensive at such model sizes. Moreover, Transformers read input through tokenization, which fragments words into subword sequences lacking inherent semantic meaning, complicating any explanation of the model from the outset. This study proposes a novel methodology for explainability in natural language processing tasks: sentences are automatically converted into graphs whose nodes and relations express fundamental linguistic concepts, preserving semantics. The resulting graphs can then be exploited in downstream tasks, making it possible to extract trends and understand how the model associates the different elements of a text with the explained task. Experiments delivered promising results in identifying the components of the text structure most critical to a given classification.
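As a loose illustration of the text-to-graph step, the sketch below maps each token of a sentence to a node carrying basic linguistic attributes and each dependency arc to a typed edge. It uses spaCy's dependency parser and networkx as stand-ins; the paper's actual inventory of nodes and relations for "fundamental linguistic concepts" is an assumption here and may differ.

```python
# Minimal sketch of converting a sentence into a linguistic graph.
# Assumption: spaCy dependencies approximate the paper's node/relation scheme.
import spacy
import networkx as nx

nlp = spacy.load("en_core_web_sm")  # assumes the small English model is installed

def sentence_to_graph(sentence: str) -> nx.DiGraph:
    """Convert a sentence into a directed graph of tokens and dependencies."""
    doc = nlp(sentence)
    g = nx.DiGraph()
    for token in doc:
        # Node features: surface form, lemma, and part of speech.
        g.add_node(token.i, text=token.text, lemma=token.lemma_, pos=token.pos_)
    for token in doc:
        if token.head.i != token.i:  # skip the root's self-loop
            # Edge label: the dependency relation (e.g. nsubj, dobj).
            g.add_edge(token.head.i, token.i, dep=token.dep_)
    return g

graph = sentence_to_graph("Graph neural networks can explain text classifiers.")
for u, v, data in graph.edges(data=True):
    print(graph.nodes[u]["text"], f"--{data['dep']}-->", graph.nodes[v]["text"])
```

From such a graph, node and edge attributes can be encoded as feature tensors and fed to a graph neural network for classification, after which standard GNN attribution methods can score which nodes and relations drive the prediction.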
Similar Papers
Advancements in Natural Language Processing: Exploring Transformer-Based Architectures for Text Understanding
Computation and Language
Computers now understand and write like people.
Next Word Suggestion using Graph Neural Network
Computation and Language
Helps computers guess the next word better.
Towards Zero-Shot & Explainable Video Description by Reasoning over Graphs of Events in Space and Time
CV and Pattern Recognition
Lets computers describe videos using words.