De Winter and Dodou (2015) analyzed the distribution (and its change over time) of a large number of p-values automatically extracted from abstracts in the scientific literature. They concluded there is a ‘surge of p-values between 0.041-0.049 in recent decades’ which ‘suggests (but does not prove) questionable research practices have increased over the past 25 years’. I show that their data on the increase in p-values below 0.05 are better explained by a model of p-value distributions that assumes the average power of studies has decreased over time. Furthermore, their observation that p-values just below 0.05 have increased more strongly than p-values just above 0.05 can be explained by an increase in publication bias over the years (cf. Fanelli, 2012), which has led to a relatively smaller increase of ‘marginally significant’ p-values in the literature (rather than an excess of p-values just below 0.05). I explain (see also Lakens, 2014) why researchers analyzing large numbers of p-values in the scientific literature need to develop better models of p-value distributions before drawing conclusions about questionable research practices. I thank De Winter and Dodou for sharing their data, assisting in the re-analysis, and reading an earlier version of this draft (to which they replied that they were happy to see other researchers using their data to test alternative explanations, and that they did not see any technical mistakes in this analysis).
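To make the power-based explanation concrete, the sketch below models the p-value distribution for a simple one-sided z-test, a standard simplification that is an assumption of this illustration rather than the exact model used in the re-analysis. Under the alternative hypothesis with noncentrality delta, P(P ≤ p) equals the power of the test at significance level p, so lower average power shifts relatively more significant p-values into the bin just below 0.05:

```python
from statistics import NormalDist

nd = NormalDist()  # standard normal, from the Python standard library

def p_curve_cdf(p, delta):
    """P(P <= p) under H1 for a one-sided z-test with noncentrality delta.

    This equals the power of the test when p is used as the alpha level.
    """
    return 1 - nd.cdf(nd.inv_cdf(1 - p) - delta)

def bin_ratio(power, lo=0.04, hi=0.05):
    """Share of significant p-values in [lo, hi), relative to p <= .01.

    The noncentrality delta is chosen so the test has the requested
    power at alpha = .05 (one-sided): delta = z_{.95} + z_{power}.
    """
    delta = nd.inv_cdf(0.95) + nd.inv_cdf(power)
    in_bin = p_curve_cdf(hi, delta) - p_curve_cdf(lo, delta)
    return in_bin / p_curve_cdf(0.01, delta)

# Lower power produces relatively more p-values just below 0.05:
print(f"80% power: {bin_ratio(0.80):.3f}")
print(f"30% power: {bin_ratio(0.30):.3f}")
```

Under these assumptions, a drop in average power from 80% to 30% roughly quintuples the share of significant results landing between 0.04 and 0.05 relative to those below 0.01, without any questionable research practices.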