Score: 1

Semantic Tree Inference on Text Corpora using a Nested Density Approach together with Large Language Model Embeddings

Published: December 29, 2025 | arXiv ID: 2512.23471v1

By: Thomas Haschka, Joseph Bakarji

Potential Business Impact:

Organizes texts into a family tree of ideas.

Business Areas:
Semantic Search Internet Services

Semantic text classification has undergone significant advances in recent years due to the rise of large language models (LLMs) and their high-dimensional embeddings. While LLM embeddings are frequently used to store and retrieve text by semantic similarity in vector databases, the global structure of semantic relationships in text corpora often remains opaque. Herein we propose a nested density clustering approach to infer hierarchical trees of semantically related texts. The method starts by identifying texts of strong semantic similarity as it searches for dense clusters in LLM embedding space. As the density criterion is gradually relaxed, these dense clusters merge into more diffuse clusters, until the whole dataset is represented by a single cluster -- the root of the tree. By nesting dense clusters within increasingly diffuse ones, we construct a tree structure that captures hierarchical semantic relationships among texts. We outline how this approach can be used to classify textual data, using scientific abstracts as a case study. This enables the data-driven discovery of research areas and their subfields without predefined categories. To evaluate the general applicability of the method, we further apply it to established benchmark datasets such as the 20 Newsgroups and IMDB 50k Movie Reviews, demonstrating its robustness across domains. Finally, we discuss possible applications in scientometrics and topic evolution, highlighting how nested density trees can reveal semantic structure and evolution in textual datasets.
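A minimal sketch of the idea described in the abstract is given below: cluster LLM embeddings at a gradually relaxed density criterion and nest each dense cluster inside the coarser cluster that absorbs it at the next level. This is only an illustration under assumptions, not the authors' implementation; the use of scikit-learn's DBSCAN, the eps schedule, and the function names are all placeholders chosen here for clarity.

```python
# Sketch: nested density clustering over embedding vectors, assuming a
# DBSCAN-style density criterion that is relaxed by increasing eps.
import numpy as np
from sklearn.cluster import DBSCAN

def nested_density_levels(embeddings, eps_schedule, min_samples=5):
    """Return one clustering per eps value; each level maps a cluster
    label to the indices of its member texts. `eps_schedule` is an
    increasing sequence of radii (an assumed parameterization)."""
    levels = []
    for eps in eps_schedule:
        labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(embeddings)
        clusters = {lab: np.where(labels == lab)[0]
                    for lab in set(labels) if lab != -1}  # -1 = noise
        levels.append(clusters)
    return levels

def parent_of(child_members, coarser_level):
    """Attach a fine cluster to the coarser cluster it overlaps most,
    yielding the parent link of the hierarchical tree."""
    best, best_overlap = None, 0
    for lab, members in coarser_level.items():
        overlap = len(np.intersect1d(child_members, members))
        if overlap > best_overlap:
            best, best_overlap = lab, overlap
    return best

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 32))  # placeholder for real LLM embeddings
    levels = nested_density_levels(X, eps_schedule=[2.0, 4.0, 8.0])
    for i in range(len(levels) - 1):
        for lab, members in levels[i].items():
            print(f"level {i} cluster {lab} -> "
                  f"level {i + 1} cluster {parent_of(members, levels[i + 1])}")
```

Running the script prints parent links between consecutive density levels; chaining them from the densest level up to the single all-encompassing cluster yields the tree of semantically related texts the paper describes.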

Country of Origin
🇦🇹 Austria, 🇱🇧 Lebanon

Page Count
20 pages

Category
Computer Science:
Computation and Language