Mean scores on English-based graduate admissions exams vary substantially across countries. Here we test four hypotheses for this variance. The first is that samples of would-be graduate students are not representative of the countries from which they come. Second, English-language familiarity might account for cross-national variance on these exams. Third, national differences in well-being might explain national differences in exam scores. Finally, variance in national IQ (NIQ) may capture most of the variance in admissions exam scores. We thus coded two extant measures of NIQ, along with national scores on three admissions exams: the Graduate Management Admission Test (GMAT; n = 200 countries), the Graduate Record Exam (GRE; n = 149), and the Test of English as a Foreign Language (TOEFL; n = 175). Partial support was found for both the unrepresentative-samples and language-familiarity hypotheses. The well-being hypothesis was not supported: despite large correlations between well-being and exam scores, these effects were eliminated once NIQ also appeared in the regression equations. The NIQ hypothesis received the most support, in that NIQ strongly predicted admissions exam total scores (GMAT, r = 0.68, N = 161; GRE, r = 0.71, N = 141; TOEFL, r = 0.61, N = 153). Moreover, these relationships (excluding those with the TOEFL) were robust to controls for English familiarity and well-being. We end by discussing how our data also validate the existence of Rindermann's (2007) "Big G" nexus.