How to predict creativity ratings from written narratives: A comparison of co-occurrence and textual forma mentis networks
By: Roberto Passaro, Edith Haim, Massimo Stella
This tutorial paper provides a step-by-step workflow for building and analysing semantic networks from short creative texts. We introduce and compare two widely used text-to-network approaches, word co-occurrence networks and textual forma mentis networks (TFMNs), and demonstrate how they can be used in machine learning to predict human creativity ratings. Using a corpus of 1029 short stories, we guide readers through text preprocessing, network construction, feature extraction (structural measures, spreading-activation indices, and emotion scores), and the application of regression models. We evaluate how network-construction choices influence both network topology and predictive performance. Across all modelling settings, TFMNs consistently outperformed co-occurrence networks, achieving lower prediction errors (best MAE = 0.581 for TFMNs vs. 0.592 for co-occurrence networks with window size 3). Network-structural features dominated predictive performance (MAE = 0.591 for TFMNs), whereas emotion features performed worse (MAE = 0.711 for TFMNs) and spreading-activation measures contributed little (MAE = 0.788 for TFMNs). This paper offers practical guidance for researchers interested in applying network-based methods in cognitive fields such as creativity research. We show when syntactic networks are preferable to surface co-occurrence models and provide an open, reproducible workflow that is accessible to newcomers to the field while also offering deeper methodological insight for experienced researchers.
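As a quick illustration of the first of the two approaches described above, the sketch below builds a word co-occurrence network from tokenised text (assuming window size 3 means each word is linked to the next two words), extracts a few structural features with networkx, and fits a regressor to toy creativity ratings. The function names, feature choices, and the use of ridge regression are illustrative assumptions, not the authors' released pipeline.

```python
# Minimal sketch (assumed names and parameters, not the paper's exact workflow):
# co-occurrence network construction, structural feature extraction, and a
# simple regression on hypothetical creativity ratings.
import networkx as nx
from sklearn.linear_model import Ridge


def cooccurrence_network(tokens, window_size=3):
    """Link every word to the words appearing within `window_size` tokens after it."""
    g = nx.Graph()
    g.add_nodes_from(tokens)
    for i, word in enumerate(tokens):
        for other in tokens[i + 1:i + window_size]:
            if other != word:  # skip self-loops from repeated words
                g.add_edge(word, other)
    return g


def structural_features(g):
    """A few topology measures often used as predictors (illustrative choice)."""
    giant = g.subgraph(max(nx.connected_components(g), key=len))
    return [
        g.number_of_nodes(),
        g.number_of_edges(),
        nx.average_clustering(g),
        nx.average_shortest_path_length(giant),
    ]


# Toy usage: two "stories" with made-up creativity ratings.
stories = [
    "the silver fox painted dreams across a sleeping city".split(),
    "the cat sat on the mat and the cat slept".split(),
]
ratings = [4.2, 1.8]  # hypothetical human creativity ratings

X = [structural_features(cooccurrence_network(s, window_size=3)) for s in stories]
model = Ridge().fit(X, ratings)
print(model.predict(X))
```

A TFMN would instead link words through syntactic dependencies obtained from a parser and annotate nodes with emotional valence, so it requires a dependency parser and an emotion lexicon on top of this skeleton.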
Similar Papers
Textual forma mentis networks bridge language structure, emotional content and psychopathology levels in adolescents
Computers and Society
Helps clinicians spot mental health issues from how adolescents talk.
Cognitive networks highlight differences and similarities in the STEM mindsets of human and LLM-simulated trainees, experts and academics
Computation and Language
Shows how human and LLM-simulated minds connect science ideas.
Evaluating LLM Story Generation through Large-scale Network Analysis of Social Structures
Computation and Language
Checks stories for good or bad friendships between characters.