How people detect incomplete explanations
Description: In theory, there is no bound to a causal explanation – every explanation can be elaborated further. Yet reasoners rate some explanations as more complete than others. To account for this behavior, we developed a novel theory of the detection of explanatory incompleteness. The theory is based on the idea that reasoners construct mental models – discrete, iconic representations of possibilities – of causal descriptions. By default, each causal relation refers to a single mental model. The theory posits that reasoners consider an explanation complete when they can construct a single causal mental model, and that they judge an explanation incomplete when its causal description requires them to consider multiple models. A major consequence of the theory is that reasoners should rate causal chains (e.g., A causes B and B causes C) as more complete than “common cause” descriptions (e.g., A causes B and A causes C) or “common effect” descriptions (e.g., A causes C and B causes C). Two experiments validated this prediction: Experiment 1 tested participants' evaluations of the three types of causal description, and Experiment 2 was a preregistered replication and extension of Experiment 1. The data suggest that reasoners construct mental models when generating explanations.
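The contrast among the three description types can be sketched as a toy heuristic. This is not the authors' model – it is an illustrative assumption that a two-link description integrates into one mental model only when the effect of one link is the cause of the other (a chain); the function name and chain-detection rule are hypothetical.

```python
# Toy sketch of the theory's prediction: count how many mental models a
# two-link causal description requires. A chain (A causes B, B causes C)
# integrates into one model; common-cause and common-effect descriptions
# do not, so each link keeps its own model.

def models_required(links):
    """Return 1 if two links integrate into a single chain, else the link count."""
    (c1, e1), (c2, e2) = links
    # Chain check in either order: one link's effect is the other's cause.
    if e1 == c2 or e2 == c1:
        return 1
    return len(links)  # links merely share a cause or an effect

chain         = [("A", "B"), ("B", "C")]  # A causes B and B causes C
common_cause  = [("A", "B"), ("A", "C")]  # A causes B and A causes C
common_effect = [("A", "C"), ("B", "C")]  # A causes C and B causes C

print(models_required(chain))          # 1 model: judged more complete
print(models_required(common_cause))   # 2 models
print(models_required(common_effect))  # 2 models
```

Under this sketch, only the chain yields a single model, matching the predicted ordering of completeness ratings.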