This paper describes results from a pair of incentivized experiments on biases in probabilistic judgments about random samples. Consistent with the Law of Small Numbers (LSN), participants exaggerated the likelihood that short sequences and random subsets of coin flips would be balanced between heads and tails. Consistent with Non-Belief in the Law of Large Numbers (NBLLN), participants underestimated the likelihood that large samples would be close to 50% heads. However, we identify some shortcomings of existing models of LSN, and we find that NBLLN may not be as stable as previous studies suggest. Our within-subject design of asking many different questions about the same data lets us disentangle the biases from possible rational alternative interpretations and control for “bin effects,” whereby the probability assigned to an outcome systematically depends, in a way predicted by support theory, on the categories used to elicit beliefs. The bin effects are large and systematic and affect some results, but we find LSN and NBLLN even after removing bin effects as a confound.