The purported goal of open science is to make scientific processes transparent to readers and users of science. The open science movement has invested substantial resources in conceptual (e.g., the FAIR principles) and physical infrastructure to make this possible. Despite these collective efforts, studies still fail to replicate reliably, even when data and code are readily available. We propose that the problem is that papers and their supporting materials do not document, in sufficient detail, what exactly researchers did during their research protocols. As Reddish et al. recently put it, "In many of these cases, what have been called 'failures to replicate' are actually failures to generalize across what researchers hoped were inconsequential changes in background assumptions or experimental conditions." This discussion/workshop will draw on our work on cognitive narratology and journal articles to develop a framework for identifying which specific information needed for replication is missing from the documentation, and from that to produce guidelines for more effective (usable) scientific documentation. Although we will use one or two sample papers for this session, we intend to expand the project to culminate in a formal analysis of approximately 50 papers.