Causal Autoencoder-like Generation of Feedback Fuzzy Cognitive Maps with an LLM Agent
By: Akash Kumar Panda, Olaoluwa Adigun, Bart Kosko
Potential Business Impact:
Explains complex ideas by turning them into simple stories.
A large language model (LLM) can map a feedback causal fuzzy cognitive map (FCM) into text and then reconstruct the FCM from that text. This explainable AI system approximates an identity map from the FCM to itself and resembles the operation of an autoencoder (AE). Unlike a black-box AE, both the encoder and the decoder explain their decisions, and humans can read and interpret the encoded text rather than confront the hidden variables and synaptic webs of an AE. The LLM agent approximates the identity map through a sequence of system instructions and never compares the output to the input. The reconstruction is lossy: it removes weak causal edges or rules while it preserves strong causal edges. The encoder preserves the strong causal edges even when it trades away some FCM detail to make the text read more naturally.
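A minimal sketch may make the lossy round trip concrete. Here a simple rule-based encoder and decoder stand in for the LLM agent, and the concept names, edge weights, and 0.3 pruning threshold are illustrative assumptions, not values from the paper:

```python
# Sketch of the FCM -> text -> FCM round trip described above.
# A rule-based encoder/decoder stands in for the LLM agent; the
# concept names, weights, and threshold are illustrative assumptions.
import numpy as np

concepts = ["Stress", "Insomnia", "Fatigue"]
# FCM as a signed adjacency matrix: W[i, j] is the causal edge i -> j.
W = np.array([
    [0.0,  0.9,  0.2],   # Stress strongly increases Insomnia, weakly Fatigue
    [0.0,  0.0,  0.8],   # Insomnia strongly increases Fatigue
    [-0.1, 0.0,  0.0],   # Fatigue weakly decreases Stress
])

def encode(W, names, threshold=0.3):
    """Map the FCM to causal sentences. Weak edges (|w| < threshold)
    are dropped, which is what makes the reconstruction lossy."""
    sentences = []
    for i, j in zip(*np.nonzero(np.abs(W) >= threshold)):
        verb = "increases" if W[i, j] > 0 else "decreases"
        sentences.append(
            f"{names[i]} {verb} {names[j]} with strength {abs(W[i, j]):.1f}"
        )
    return ". ".join(sentences) + "."

def decode(text, names):
    """Reconstruct an FCM adjacency matrix from the causal sentences."""
    W_hat = np.zeros((len(names), len(names)))
    index = {name: k for k, name in enumerate(names)}
    for sentence in text.split(". "):
        words = sentence.strip().rstrip(".").split()
        if len(words) < 6:
            continue
        cause, verb, effect = words[0], words[1], words[2]
        sign = 1.0 if verb == "increases" else -1.0
        W_hat[index[cause], index[effect]] = sign * float(words[-1])
    return W_hat

text = encode(W, concepts)
W_hat = decode(text, concepts)
print(text)
print(W_hat)  # strong edges survive; the weak 0.2 and -0.1 edges are gone
```

Running the sketch recovers the two strong edges exactly while the weak ones vanish, mirroring the paper's point that the encoded text is human-readable and the identity map is only approximate.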
Similar Papers
LLMs can hide text in other text of the same length
Artificial Intelligence
Hides secret messages inside normal text.
Beyond Hallucinations: The Illusion of Understanding in Large Language Models
Artificial Intelligence
Helps AI think more like humans, not just guess.
Exploring LLM-based Frameworks for Fault Diagnosis
Artificial Intelligence
AI watches machines, finds problems, explains why.