Soft Inductive Bias Approach via Explicit Reasoning Perspectives in Inappropriate Utterance Detection Using Large Language Models
By: Ju-Young Kim, Ji-Hong Park, Se-Yeon Lee, and more
Recent incidents in online games and communities where anonymity is guaranteed show that unchecked inappropriate remarks frequently escalate into verbal abuse and even criminal behavior, raising significant social concerns. Consequently, there is a growing need for techniques that can detect inappropriate utterances in conversational text and help build a safer communication environment. Although large language models trained on Korean corpora and chain-of-thought reasoning have recently gained attention, research applying these approaches to inappropriate utterance detection remains limited. In this study, we propose a soft inductive bias approach that explicitly defines reasoning perspectives to guide the inference process, thereby promoting rational decision-making and preventing errors that may arise during reasoning. We fine-tune a Korean large language model using the proposed method and conduct both quantitative performance comparisons and qualitative evaluations across different training strategies. Experimental results show that the Kanana-1.5 model achieves an average accuracy of 87.0046%, improving by approximately 3.89 percentage points over standard supervised learning. These findings indicate that the proposed method goes beyond simple knowledge imitation by large language models and enables more precise and consistent judgments through constrained reasoning perspectives, demonstrating its effectiveness for inappropriate utterance detection.
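The core idea of the soft inductive bias can be sketched as a prompt template that constrains the model's chain of thought to a fixed set of reasoning perspectives. The perspective names and template below are illustrative assumptions, not the authors' exact wording or training setup.

```python
# Hypothetical sketch: instead of hard-coding detection rules, the prompt
# enumerates explicit reasoning perspectives the model must walk through
# before emitting a label. All names here are illustrative assumptions.

REASONING_PERSPECTIVES = [
    "Target: is the remark directed at a specific person or group?",
    "Intent: does the speaker aim to insult, threaten, or demean?",
    "Context: could the phrasing be acceptable banter in this setting?",
    "Severity: does the content include profanity, hate, or incitement?",
]

def build_detection_prompt(utterance: str) -> str:
    """Compose a chain-of-thought prompt constrained to fixed perspectives."""
    steps = "\n".join(f"{i}. {p}" for i, p in enumerate(REASONING_PERSPECTIVES, 1))
    return (
        "Judge whether the following utterance is inappropriate.\n"
        f"Utterance: {utterance}\n"
        "Reason step by step using ONLY these perspectives:\n"
        f"{steps}\n"
        "Finally, answer with 'inappropriate' or 'appropriate'."
    )

prompt = build_detection_prompt("You are worthless, just quit the game.")
print(prompt)
```

In fine-tuning, prompts like this would pair each utterance with a reference rationale organized by the same perspectives, so the model learns to reason within the constrained structure rather than imitate free-form explanations.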
Similar Papers
Investigating Thinking Behaviours of Reasoning-Based Language Models for Social Bias Mitigation
Computation and Language
Fixes AI's thinking to stop unfair stereotypes.
Language Models Do Not Follow Occam's Razor: A Benchmark for Inductive and Abductive Reasoning
Artificial Intelligence
Helps computers guess better with less information.
Stands to Reason: Investigating the Effect of Reasoning on Idiomaticity Detection
Computation and Language
Helps computers understand tricky sayings better.