In 2009, John Hattie published Visible Learning, a meta-meta-review that synthesized 800 meta-analyses covering 138 possible influences on student achievement. All influences were re-coded to a common metric (Cohen’s d) and ranked by effect size, ranging from negative effects (e.g., retention) through negligible effects (e.g., student personality) to strong positive influences on student achievement (e.g., Response to Intervention). To date, criticism has largely focused on uncovering individual examples of flaws in Hattie’s approach, a practice that proponents of Visible Learning have dismissed as cherry-picking. The purpose of this project is instead to conduct a rigorous, systematic assessment of the presented material. This talk walks through the syntheses made in Visible Learning and explains how the quality of the material is assessed. For example, previous research indicates that several influences combine meta-analyses that do not share comparable populations, interventions, comparison groups, outcomes, and study designs (PICOS). The talk also includes a practical demonstration of the code sheet and the coding of the influences. The approach taken provides resources for conducting or assessing any type of meta-review.
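To illustrate the kind of re-coding described above, the sketch below converts a correlation-based effect size r into Cohen's d using the standard conversion d = 2r / sqrt(1 - r²), and bands the result roughly along the negative/small/strong grouping used in Visible Learning (Hattie's d = 0.40 "hinge point"). The function names and banding thresholds are illustrative assumptions, not Hattie's actual coding procedure.

```python
import math

def r_to_cohens_d(r: float) -> float:
    """Convert a correlation effect size r to Cohen's d.

    Standard conversion often used when re-coding
    meta-analytic results onto a common d metric.
    """
    return 2 * r / math.sqrt(1 - r ** 2)

def interpret(d: float) -> str:
    """Rough banding of an influence by effect size
    (thresholds here are illustrative; Visible Learning
    uses d = 0.40 as its 'hinge point')."""
    if d < 0:
        return "negative"
    if d < 0.40:
        return "small"
    return "strong"

# Example: a meta-analysis reporting r = 0.50 re-codes to d ≈ 1.15
d = r_to_cohens_d(0.50)
print(round(d, 2), interpret(d))
```

A real coding sheet would of course record far more than the point estimate, e.g., the PICOS characteristics of each combined meta-analysis, which is exactly what the quality assessment above targets.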