Investigating Counterclaims in Causality Extraction from Text
By: Tim Hagen, Niklas Deckers, Felix Wolter, and more
Potential Business Impact:
Helps computers tell whether text says one thing causes another, or denies that it does.
Research on causality extraction from text has so far almost entirely neglected counterclaims. Existing causality extraction datasets focus solely on "procausal" claims, i.e., statements that support a relationship. "Concausal" claims, i.e., statements that refute a relationship, are entirely ignored or even accidentally annotated as procausal. We address this shortcoming by developing a new dataset that integrates concausality. Based on an extensive literature review, we first show that concausality is an integral part of causal reasoning on incomplete knowledge. We operationalize this theory in the form of a rigorous guideline for annotation and then augment the Causal News Corpus with concausal statements, obtaining a substantial inter-annotator agreement of Cohen's $\kappa=0.74$. To demonstrate the importance of integrating concausal statements, we show that models trained without concausal relationships tend to misclassify these as procausal instead. Based on our new dataset, this mistake can be mitigated, enabling transformers to effectively distinguish pro- and concausality.
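The reported agreement of Cohen's $\kappa=0.74$ follows the standard formula $\kappa = (p_o - p_e)/(1 - p_e)$, where $p_o$ is the observed agreement between annotators and $p_e$ is the agreement expected by chance. The modeling setup described in the abstract can be illustrated with a minimal sketch of three-way sequence classification; the label scheme, the bert-base-uncased checkpoint, and the toy sentences below are assumptions for illustration, not the paper's actual configuration.

```python
# Minimal sketch: fine-tuning a transformer to separate pro- and concausal claims.
# Hypothetical label scheme (not necessarily the paper's):
#   0 = no causal relation, 1 = procausal claim, 2 = concausal claim
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Toy examples standing in for the augmented Causal News Corpus annotations.
examples = {
    "text": [
        "The drought caused widespread crop failure.",                            # procausal
        "The study found no evidence that the drug causes this side effect.",     # concausal
        "The meeting took place on Tuesday.",                                     # no relation
    ],
    "label": [1, 2, 0],
}

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=3
)

def tokenize(batch):
    # Tokenize each claim; padding/truncation keeps sequence lengths uniform.
    return tokenizer(batch["text"], truncation=True,
                     padding="max_length", max_length=128)

train_dataset = Dataset.from_dict(examples).map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="causality-clf",
                           num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=train_dataset,
)
trainer.train()
```

A run on the real data would replace the toy examples with the augmented corpus and report per-class scores, so one can check that concausal claims are no longer collapsed into the procausal class.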
Similar Papers
Integrating Causal Reasoning into Automated Fact-Checking
Computation and Language
Finds fake news by checking the causes of events.
Causal Tree Extraction from Medical Case Reports: A Novel Task for Experts-like Text Comprehension
Computation and Language
Helps doctors understand how diseases are diagnosed.
Causal Inference on Outcomes Learned from Text
Econometrics
Helps show which words cause changes in outcomes.