Why Hattie's Research is a Starting-Point, but NOT the End-Game for Effective Schools

This three-part series focuses on how states, districts, schools, and educational leaders make decisions regarding what services, supports, programs, curricula, instruction, strategies, and interventions to implement in their classrooms. Recognizing that we need to use programs that have documented efficacy and the highest probability of implementation success, it has nonetheless been my experience that many programs are chosen "for all the wrong reasons," to the detriment of students, staff, and schools.

Part I of this series (posted on August 26th) was titled: The Top Ten Ways that Educators Make Bad, Large-Scale Programmatic Decisions: The Hazards of ESEA/ESSA's Freedom and Flexibility at the State and Local Levels. It discussed the fact that, under the new Elementary and Secondary Education Act (ESEA/ESSA), districts and schools will have more freedom, but also greater responsibility, to evaluate, select, and implement their own ways of functionally addressing all students' academic and social, emotional, and behavioral learning and instructional needs across a multi-tiered continuum that extends from core instruction to strategic response and intensive intervention. That Blog then described the "Top Ten" reasons why educational leaders make flawed large-scale, programmatic decisions: decisions that waste time, money, and resources, and that frustrate staff and students and cause resistance and disengagement. By self-reflecting on these flawed approaches, the hope is that educational leaders will avoid these hazards and make their district- or school-wide programmatic decisions in more effective ways.

Part II of this series (posted on September 9th) was titled: "Scientifically based" versus "Evidence-based" versus "Research-based" - Oh my!!! Making Effective Programmatic Decisions: Why You Need to Know the History and Questions Behind these Terms. It provided the history and definitions (where present) of the terms "scientifically based" versus "evidence-based" versus "research-based." Based on ESEA/ESSA, it concluded that "evidence-based" is now the federal "go-to" term when districts and schools need to evaluate the empirical efficacy of programs, curricula, strategies, and interventions. That Blog then went on to recommend a series of questions that educational leaders should ask when told that a program, strategy, or intervention is scientifically based, evidence-based, or research-based.

Today's Discussion: John Hattie and Meta-Analyses

Professor John Hattie has been the Director of the Melbourne Educational Research Institute at the University of Melbourne, Australia, since March 2011. His research interests include performance indicators, models of measurement, and the evaluation of teaching and learning. He is best known for his books Visible Learning (2009) and Visible Learning for Teachers (2012).

Anchoring these books is Hattie's critical review of thousands of published research studies in six areas that contribute to student learning: student factors, home factors, school factors, curricular factors, teacher factors, and teaching and learning factors. Using those studies that met his criteria for inclusion, Hattie pooled the effect sizes from these individual studies, conducted a series of meta-analyses, and rank-ordered the positive to negative effects of over a hundred approaches, again relative to student learning outcomes.
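To make the idea of pooling effect sizes concrete, here is a minimal sketch in Python. It is not Hattie's actual procedure, and the study numbers are hypothetical: it computes a Cohen's d (standardized mean difference) for each study and then takes a simple average, a simplified stand-in for the weighted pooling used in real meta-analyses.

```python
# Minimal sketch (not Hattie's actual procedure): compute a Cohen's d
# effect size for each study, then average the d values across studies.
from statistics import mean

def cohens_d(treatment_mean: float, control_mean: float, pooled_sd: float) -> float:
    """Standardized mean difference between a treatment and a control group."""
    return (treatment_mean - control_mean) / pooled_sd

# Hypothetical results from three studies of the same teaching approach.
study_effects = [
    cohens_d(78.0, 72.0, 10.0),  # d = 0.60
    cohens_d(65.0, 62.0, 12.0),  # d = 0.25
    cohens_d(81.0, 74.0, 14.0),  # d = 0.50
]

# A simple (unweighted) mean effect size; real meta-analyses typically
# weight each study, for example by sample size or inverse variance.
print(round(mean(study_effects), 2))  # prints 0.45
```

Ranking many such pooled effect sizes from most positive to most negative is, in broad strokes, how the approaches in Visible Learning end up ordered relative to student learning outcomes.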