
“There is a science to learning and we are finding out more and more about what works best to support the learning processes that make a difference for your learners.” Advertising for a Visible Learning symposium at the Australian Council for Educational Leadership (ACEL) website

“Assisting practising teachers to maximise their impact on student learning relies on implementing practices that have been shown to benefit students the most – with constructive feedback on educational practices, collaboration and effective professional learning.”  From the Australian Institute for Teaching and School Leadership (AITSL) Chair John Hattie’s Statement of Intent

“Hattie’s work is everywhere in contemporary Australian school leadership. This is not to say that educators have no opportunity for resistance, but the presence and influence of brand Hattie cannot be ignored. The multiple partnerships and roles held by Hattie the man and the uptake of his work by systems and professional associations have canonised the work in contemporary dialogue and debate to the extent that it is now put forth as the solution to many of the woes of education.” Scott Eacott

“Unfortunately, in reading Visible Learning and subsequent work by Hattie and his team, anybody who is knowledgeable in statistical analysis is quickly disillusioned. Why? Because data cannot be collected in any which way nor analysed or interpreted in any which way either. Yet, this summarises the New Zealander’s actual methodology. To believe Hattie is to have a blind spot in one’s critical thinking when assessing scientific rigour. To promote his work is to unfortunately fall into the promotion of pseudoscience. Finally, to persist in defending Hattie after becoming aware of the serious critique of his methodology constitutes wilful blindness.”  Pierre-Jérôme Bergeron

Since the original publication of John Hattie’s book, Visible Learning, questions have been raised about the statistical methodology underpinning his research and his representation of ‘what works best for learning’. By 2014, the year Professor Hattie became Chair of AITSL, it was clear, even to tertiary statistics students, that serious mathematical errors had been made. There continues to be a steady flow of journal articles contesting Hattie’s ideas. By 2017, concerns about the flawed use of statistics, and the way the politics of education works in Australia, meant that many practitioners did not really need to read a journal article to know all about “the cult of Hattie” in our schools.

Hattie continues to rank the “195 Influences And Effect Sizes Related To Student Achievement” without acknowledging the concerns raised by statisticians. After reading the latest paper which derides the methodology, I decided to see what some influential educators thought. A simple tweeted question:

Your thoughts about this new analysis of Hattie’s statistics?

resulted in Stephen Dinham, Scott Eacott and Dylan Wiliam responding, with the latter agreeing the stats are flawed:

In my view, yes. Issues: age dependence of ES; sensitivity to instruction, study selection; publication bias; atheoretical categorization…

[Screenshot: Dylan Wiliam’s tweet, 26 August 2017]

Scott Eacott‘s recent paper, School leadership and the cult of the guru: the neo-Taylorism of Hattie, places Hattie’s work in an Australian context. It really is essential reading for educational leaders. I urge you to read it and engage with him, perhaps on Twitter. Hattie’s reply to Eacott’s paper does not even remotely grapple with the issues raised, and I note that “no potential conflict of interest was reported by the author”. Eacott tweeted that the journal will not publish his response.

Corwin Australia (see screenshot from their website below) is on a good thing. Google “Visible Learning” + your town and see how many primary and secondary school website links you find back to this business/service. There is a growing coterie of trainers around the world delivering this trademarked professional learning, based on Hattie’s meta-analyses.

[Screenshot: Corwin Australia website, 26 August 2017]

Just to be clear: Professor Hattie has had a stellar career, and much of his work makes complete sense without an iota of research. Who would argue with Hattie’s point that teachers with impact are:

  • passionate about helping their students learn
  • able to build strong relationships with their students
  • clear about what they want their students to learn
  • using evidence-based teaching strategies
  • monitoring their impact on students’ learning, and adjusting their approaches accordingly
  • actively seeking to improve their own teaching
  • viewed by the students as being credible 

However, when flawed statistical analysis results in advice that high-impact, evidence-based teaching strategies include:

  • Direct Instruction
  • Note Taking & Other Study Skills
  • Spaced Practice
  • Feedback
  • Teaching Metacognitive Skills
  • Teaching Problem Solving Skills
  • Reciprocal Teaching
  • Mastery Learning
  • Concept Mapping
  • Worked Examples

but there’s little or no impact with:

  • giving students control over their learning
  • problem-based learning
  • teaching test-taking
  • catering to learning styles
  • inquiry-based teaching

one feels a little less comfortable with the advice, considering that the statistical analysis of effect size is worse than merely dubious. Research can only tell us what may have happened, not what is needed next as we all grapple with the future.
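The statistic underneath all of these rankings is the effect size, usually a standardised mean difference (Cohen’s d). The sketch below uses entirely invented numbers (nothing here is drawn from Hattie’s data) to show both how an effect size is computed and why averaging effect sizes from very different studies can hide more than it reveals:

```python
import math
from statistics import mean

def cohens_d(treatment, control):
    """Standardised mean difference: (mean_t - mean_c) / pooled SD."""
    n_t, n_c = len(treatment), len(control)
    var_t = sum((x - mean(treatment)) ** 2 for x in treatment) / (n_t - 1)
    var_c = sum((x - mean(control)) ** 2 for x in control) / (n_c - 1)
    pooled_sd = math.sqrt(((n_t - 1) * var_t + (n_c - 1) * var_c) / (n_t + n_c - 2))
    return (mean(treatment) - mean(control)) / pooled_sd

# One hypothetical study: test scores for a class taught with some strategy
# versus a comparison class.
print(round(cohens_d([72, 75, 78, 81], [70, 72, 74, 76]), 2))  # 1.06

# Five hypothetical studies of the same "influence", run with different age
# groups, outcome measures and designs. A meta-meta-analysis reports only
# the average, which hides the fact that the strategy helped in some
# contexts and did nothing (or worse) in others.
study_effect_sizes = [0.9, 0.8, -0.1, 0.0, 0.4]
print(round(mean(study_effect_sizes), 2))  # 0.4
```

This is the substance of the issues Wiliam lists above: when each number being averaged comes from a different population, a different test and a different era, the single averaged figure is not an estimate of anything in particular.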

Context is everything. That includes the context, thoroughly discussed by Dr Eacott, that has led to Australian schools looking for scientific, evidence-based solutions to the educational challenges highlighted by PISA and NAPLAN. Dylan Wiliam, since at least 2009, has questioned the use of meta-analysis in education. It seems pretty obvious that Hattie’s number-crunching has appealed to politicians and administrators looking to solve what often feels like a manufactured series of education crises.

It is worth quoting the conclusions from a 2009 paper that you really should read:

…we want to repeat our belief that John Hattie’s book makes a significant contribution to understanding the variables surrounding successful teaching and think that it is a very useful resource for teacher education. We are concerned, however, that:

(i) Despite his own frequent warnings, politicians may use his work to justify policies which he does not endorse and his research does not sanction;

(ii) Teachers and teacher educators might try to use the findings in a simplistic way and not, as Hattie wants, as a source for “hypotheses for intelligent problem solving”;

(iii) The quantitative research on ‘school effects’ might be presented in isolation from their historical, cultural and social contexts, and their interaction with home and community backgrounds; and

(iv) In concentrating on measurable school effects there may be insufficient discussion about the aims of education and the purposes of schooling without which the studies have little point.

It seems appropriate to close with one of the quotes that opened this brief post and to ask what you think. Your commentary is, as always, highly appreciated.

“To believe Hattie is to have a blind spot in one’s critical thinking when assessing scientific rigour. To promote his work is to unfortunately fall into the promotion of pseudoscience. Finally, to persist in defending Hattie after becoming aware of the serious critique of his methodology constitutes wilful blindness.” Pierre-Jérôme Bergeron

References

Bergeron, Pierre-Jérôme; Rivard, Lysanne, How to Engage in Pseudoscience With Real Data: A Criticism of John Hattie’s Arguments in Visible Learning From the Perspective of a Statistician, McGill Journal of Education / Revue des sciences de l’éducation de McGill, v. 52, n. 1, July 2017. ISSN 1916-0666. Available at: <http://mje.mcgill.ca/article/view/9475/7229>. Date accessed: 22 Aug. 2017.

Eacott, Scott, School leadership and the cult of the guru: the neo-Taylorism of Hattie, School Leadership & Management, DOI: 10.1080/13632434.2017.1327428, 2017.

Hattie, J., Visible learning for teachers: maximizing impact on learning, London: Routledge, 2012

Myburgh, Siebert J., Critique of Peer-reviewed Articles on John Hattie’s Use of Meta-Analysis in Education, Working Papers Series International and Global Issues for Research, No. 2016/3, December 2016. Available at: <http://www.bath.ac.uk/education/documents/working-papers/critique-of-peer-reviewed-articles.pdf>. Date accessed: 26 Aug. 2017.

Snook, Ivan; O’Neill, John; Clark, John; O’Neill, Anne-Maree; Openshaw, Roger, Invisible Learnings?: A Commentary on John Hattie’s Book ‘Visible Learning: A Synthesis of Over 800 Meta-analyses Relating to Achievement’, New Zealand Journal of Educational Studies, Vol. 44, No. 1, 2009: 93-106. ISSN 0028-8276. Available at: <http://search.informit.com.au/documentSummary;dn=467818990993648;res=IELNZC>. Date accessed: 26 Aug. 2017.


Comments (58)

    • Michelle Renshaw

    • 7 years ago

    The cultish nature & the use of stats to draw policy conclusions has always rested uncomfortably with me. Teaching is a reflective practice that is both contextual & relational. I’m always happy to read, reflect, trial & apply. I’ve found the professional development regarding Hattie to be helpful because it makes me think. My biggest bugbear is that his name is used to badger or thwart by zealots. As an HSC Studies of Religion teacher, the Hattie chant echoes like a new age religion in the Australian educational landscape. In essence, good teaching & learning is too complex & nuanced for cults, while fads are distilled in the daily practice of teachers & mined for what works today, with these young people, in this part of the course.

      • Russ

      • 6 years ago

      Yes. Yes. I have often heard principals quote Hattie in a Confucian way. They even have data rooms (shrines by any other name) in some Western Qld Schools.

    • wayne

    • 7 years ago

    I think you have been too kind. The research is fatally flawed, therefore any of its conclusions are tainted and also flawed. The only sensible approach is to reject the whole lot of it.

    There are promising insights BUT the actual academic work on those is yet to be done, and as such they are anecdotal and of no substance. If Hattie were the academic he presents as, then he would redo the research properly and validate or disprove his erroneous findings. The very fact he has not suggests academic misconduct of a high order.

    1. Totally agree with you, Wayne

    • Natasha Watt

    • 7 years ago

    There is purpose to the powerful wanting to uphold Hattie as a guru. His work makes the complex nature of learning simple, i.e. politicians think that progressing students is as straightforward as pre- and post-testing. The other reason is that it throws validation behind the idea that there is a crisis in teaching, and reasons to take teachers in hand, take them on, de-professionalise teaching, and attempt to make teachers untrustworthy to the public, so that the solution is for Pearson and crew to weigh in.

    • Merrideth

    • 7 years ago

    Thanks Darcy, that was an interesting read and I love the phrase ‘wilful blindness’.
    I agree that much of Hattie’s work makes sense, but the presentation of his findings as scientific fact is problematic. I think that Hattie is appealing to ‘leaders in education’ because he takes the complex nature of teaching and learning and breaks it down into tick boxes, providing solutions that will surely lead to ‘improvement’. Of course, this improvement can be measured by external assessment that offers but a narrow report of student progress, but data can be collected! Politicians can report that ‘standards have been raised’ and schools can be congratulated for ‘bumping up’ their NAPLAN results. Where to from here?

      • Duane E Swacker

      • 7 years ago

      “of course this improvement can be measured by external assessment”

      While “measuring” is what is purported to be being done with those external assessments, nothing of the sort is actually being done with those standardized tests, i.e., external assessments. Noel Wilson delves into the nature of assessment in his seminal 1997 dissertation “Educational Standards and the Problem of Error”, arguably the most important piece of educational writing in the last half century. All should read his work: http://epaa.asu.edu/ojs/article/view/577/700

      Now as far as that measuring goes:

      The TESTS MEASURE NOTHING, quite literally when you realize what is actually happening with them. Richard Phelps, a staunch standardized test proponent (he has written at least two books defending the standardized testing malpractices) in the introduction to “Correcting Fallacies About Educational and Psychological Testing” unwittingly lets the cat out of the bag with this statement:

      “Physical tests, such as those conducted by engineers, can be standardized, of course [why of course of course], but in this volume, we focus on the measurement of latent (i.e., nonobservable) mental, and not physical, traits.” [my addition] (Notice how he is trying to assert by proximity that educational standardized testing and the testing done by engineers are basically the same, in other words a “truly scientific endeavor”.)

      Now since there is no agreement on a standard unit of learning, there is no exemplar of that standard unit and there is no measuring device calibrated against said non-existent standard unit, how is it possible to “measure the nonobservable”?

      THE TESTS MEASURE NOTHING for how is it possible to “measure” the nonobservable with a non-existing measuring device that is not calibrated against a non-existing standard unit of learning?????

      PURE LOGICAL INSANITY!

    • Askinggoodquestions

    • 7 years ago

    Thank you for this excellent post Darcy. A couple of thoughts. I see more principals and teachers starting to engage with the serious flaws in VL. However there is still too much ‘buying into’ what is held up as education research because it flies the ‘quantitative’ or ‘meta analysis’ banner. Recently I met a young teacher at an education conference who had just completed his Masters (Hons) thesis – it was all based on the JH work ie ‘effect size’ in particular. We had an excellent but robust conversation – he wanted to see the critique/read the critical blog posts/peer-reviewed papers that tackle the ‘JH juggernaut’ head on. He expressed frustration that none of this had been drawn to his attention while undertaking the study and he questioned the whole premise on which his study (now completed) was based – so there are ‘silences’ and some HE institutions are complicit too. The ‘guru’ is never a good thing and singing songs/constructing lyrics/wearing cult tee shirts at expensive VL seminars is not what our learning profession requires. Education research is messy and complex – the quantitative paradigm provides a partial view – deep studies of practice are necessary to really understand classrooms, learning, teachers, students and schools. Education researchers must ‘get their hands dirty’ in the process of engagement and change. The current and ongoing commercialisation of education and charging systems/schools and the community enormous fees for ‘service’ when you can position yourself as having all of the answers is not healthy and needs to be called out – it drains funds/resources and ‘dumbs down’ the extraordinary and relentless nature of the work that principals and teachers do in our schools every day.

    • Deanna

    • 7 years ago

    The issue I had with the Cult of Visible Learning from the very first information session I went to was the “hard sell” to pay huge amounts of money for “training” school teams in Hattie’s research findings. It is a business that is raking in the money. Questioning ANYTHING that was presented during the foundation course I attended was actively muted by the facilitators. I have always said that I walked out of the “training” feeling like I had escaped being sucked into the vortex of some cult, as a person who questioned aspects of the research and how the findings applied to my experiences in low socio-economic schools. I actually had a facilitator take me aside and attempt to “turn” me via some sort of interventionist approach. From that day I have been exceptionally cynical of the Visible Learning megalith.
    As a layman (within the academic research context) I have found it has been hard to argue against a “research based” framework, where I have just KNOWN, as an experienced educator, that effect size and ranked strategies DO NOT suit flexible approaches to education – where the student and local context means educational planning and implementation MUST specifically address the needs of this local context.
    Then, having Hattie installed as Chairperson of AITSL gives him a national vehicle for shoe-horning his PRODUCT into.
    Fascinating reading, Darcy, thanks for the thought provoking post.

    • Lizzie Chase

    • 7 years ago

    I love Hattie’s stuff – to use it for trends – but because he uses meta-analyses, he’s presumably comparing apples with oranges sometimes (because different questions were posed or methodologies were used).

  1. Thanks for this post Darcy and the willingness to be a critical consumer of research. I have always been amazed when working with school leaders and they identify as either ‘working in a Hattie school’ or ‘we do this because Hattie says …’. Much, if not all, of what Hattie reports we already knew. That is the nature of meta-analysis: they call on previous work. The packaging has made it attractive (not to mention the many commercial partners, as you rightly identify) and created a perception of it as the solution to the problems of under-achievement or mis-investment. To question Hattie is not to critique evidence, but to question any single solution. I truly hope the questioning continues and educators continue to embrace their professionalism as critical consumers of research.

    • Alex Brown

    • 7 years ago

    The fatal flaw of Hattie’s work is its reliance on effect size. However, his broad strokes are useful as an entry point for the development of teachers as reflective practitioners. It would be great if the money being spent on AITSL, Hattie, etc. was instead being used to buy release time for teachers to professionalise during work hours, instead of in their free time. I’m not sure if you linked this, Darcy (your references appear comprehensive!), but this is the salient evaluation of Hattie for mine:

    http://evidencebasededucationalleadership.blogspot.com.au/2016/04/is-it-time-to-call-time-on-hattie-and.html

    • Daren

    • 7 years ago

    Hattie’s work cannot be ignored. We have to take a long hard look at education from a global perspective and see that what we do in Australia might not necessarily be the best way to teach kids. He is looking to challenge the notion of what a great classroom looks like. Truth is, we need to re-examine how and why we give homework. We need to critically examine the effects meaningful goal setting and constructive feedback have on learning. I won’t touch on class size as that is outside my area of expertise. You can pull statistics apart and make them say whatever you want. Governments do this all the time. I think Hattie’s work is a call to re-examine and critically evaluate what we hold as true in teaching. Honestly, look at the homework the average child brings home, or the work that is done across the world in mathematics classrooms alone, and you will see irrelevant, text-book-based, one-size-fits-all wasted opportunities everywhere you look. If we are doing it right, then why are so many failing and hating maths or even homework? Putting the statistics aside, change needs to happen in many of our pedagogical beliefs.

      • cc

      • 6 years ago

      So well said. Call it what you like but at the core is our capacity to question whether ALL students are actually learning, making progress and how do we know? Too many are just going through the motions…

    • Data troll

    • 7 years ago

    “Lies, damn lies and Education based data”

    The blind and simplistic adherence to Hattie’s research reflects a much wider issue in Australian education, which is that the teaching profession is not allowed to, and has forgotten how to, think for itself. There is also a vacuum in good educational discourse that Hattie has filled in terms of reflecting on what “works”. People need something to think about and follow, and Hattie fills that void. (I’m thinking of the Monty Python “Life of Brian” scene: “He’s the Messiah!”)
    Authors and bloggers who denote the political element of this issue are exactly right. The research can be manipulated to suit political agendas so politicians can be seen to be “doing something”. Politics is at the root of all educational evil in Australia today: short-term, paternalistic and simplistic solutions that can be sold to a public at a level the public can grasp. Hattie has been misused to suit this.
    At a local level my school uses Hattie, and as a leader I quote him, but with the qualification that he isn’t the guru in our setting and his research is a guide that should be questioned in our context. Interestingly, when effect sizes are questioned in our school the position on Hattie is immediately softened to recognise its flaws. There is some worth in the research to get us thinking, but “wilful blindness” and adherence means we’ve stopped thinking as a profession.

  2. This is an excellent response to the growing concerns that many people have about the influence of Hattie’s work on national and international educators and educational institutions. I’ve also loved reading the very measured and insightful responses left in the comments. Thank you for sharing your thoughts and I hope that educators (and politicians!) read this and that rational dialogue and debate can occur about the manufacturing of the cult of Hattie.

  3. I was first introduced to the work of Hattie when I started my Masters of Education at USyd and my lecturer told me that I had better read his ‘Visible Learning’ because he had proven that PBL (what I planned to do my research on) was not effective. As someone who had been using PBL successfully with all students (from the incredibly disengaged to the very talented learners) in years 7-12, I found it confronting that someone had scientifically proven it was an ineffective methodology. Upon reading his book, and then reading more research into PBL and the strategies which underpin it (and, admittedly as I became more research literate) I decided that meta-analysis was the best way to approach edu research. So, I ignored Hattie and his work for many years until it was thrown in my face – I was made to attend a VL ‘Symposium’ earlier this year… and what I experienced was appalling. It was the cult of Hattie – THAT was the most visible thing. I recorded this on my Twitter feed, and don’t need to rewrite it here. What the event did force me to do was to think critically (and somewhat objectively) about the general tenets of VL, and relate them to PBL… essentially to show how ludicrous it is to denounce it when those things they champion as having significant effect sizes are evident in well-planned and run projects… you can read my take on it here: https://biancahewes.wordpress.com/2017/03/25/the-skill-will-and-thrill-of-project-based-learning/ Looking forward to your thoughts on VL and PBL, Darcy.

    1. *meta-analysis was NOT the best way to approach edu research (sorry for typo!)

      • Darcy Moore

      • 7 years ago

      Thanks for your thoughtful comment, Bianca. Happy to share my thoughts about effect size, assessment, raising student achievement and PBL (although you are much more knowledgeable about this than I).

      Why does inquiry-based learning only have an effect size of 0.31? Hattie would say that inquiry-based learning is not introduced at the right time for the students. Watch his 2-minute spiel: https://www.youtube.com/watch?v=YUooOYbgSUg&ab_channel=Corwin
      I do not disagree with him, but once again, it is all about context (including how good/bad the research he number-crunched actually was, etc.).

      PBL can be done well or poorly, like any other strategy or approach. Teachers working together collaboratively has to be key. Assessment is important and backward mapping what skills/knowledge is needed for the student to produce the product requires a great deal of planning/thought/trial/error. However, the potential for developing students who are “futures-focused” and engaged in challenging, relevant, contextualised learning is just excellent.

      You have constantly striven to learn, share, model, share, learn, fail, share, improve, have fun, strive, write, present, blog, share, tweet and attend conferences in an effort to improve your skills, and then share, learn, model, fail, grow, etc.

      With this in mind, how much of what Dylan Wiliam has to say makes a great deal of sense compared to the VL model?

      “…if you’re serious about raising student achievement you have to improve teachers’ use of assessment for learning; if you’re serious about helping teachers implement assessment for learning in their own practice, you have to help them do that for themselves as you cannot tell teachers what to do; and the only way to do that at scale is through school-based teacher learning communities. The good news is that you do not need experts to come in and tell you what to do. What you need is for you, as groups of teachers, to hold yourselves accountable for making changes in your practice.”

      Sounds sage to me. Sounds like what you endeavour to do.

        • Duane E Swacker

        • 7 years ago

        “if you’re serious about raising student achievement”

        As a teacher I never gave a damn about “raising student achievement”. I gave a damn about each individual student learning as much Spanish as they could in the 45 minutes a day I had with them in a class of 25-30.

        “Raising student achievement” is an edudeformer term used to disparage the actual teaching and learning that occurs on a minute-to-minute, hour-by-hour, day-by-day and year-to-year basis in classrooms around the world.

        • Sarah

        • 6 years ago

        Hi Darcy,
        In 2017, inquiry-based teaching was changed to an effect size of 0.40. I want to analyse the individual studies that resulted in this change. Do you know where I can locate them? I can’t seem to find any reference to them.

          • Darcy Moore

          • 6 years ago

          Hi Sarah,

          Not sure.

          Are you on Twitter? If so maybe tweet @dylanWiliam

          @Darcy1968

          • George Lilley

          • 6 years ago

          Dr Mandy Lupton has analysed many of the inquiry-based studies here: https://visablelearning.blogspot.com/p/inquiry-based-teaching.html. To get details of the studies you have to email Corwin, the commercial arm of Hattie: https://au.corwin.com/en-gb/oce/visible-learning

            • Sarah

            • 6 years ago

            Thanks George, I am very fortunate that Mandy Lupton is in fact my university professor! I will email Corwin. Thanks for the tip!

          • Michael Barry

          • 5 years ago

          This identifies one of the problems with Hattie’s Visible Learning: the age of many of the original meta-meta-analysed studies, and therefore their relevance to the present day.

          In my research into games in education, I found that of a sample of the original research papers, more than 40% pre-dated 1990 — noting that most of our modern experience in games, especially electronic, post-dates that.

          I did a brief deep drill into the topic of games in education — which Hattie considers worthless, ES=0.15 or so. I was lucky that the number of meta-analyses in that topic was low.

          The original studies said something quite different to Hattie’s conclusion, and in accordance with professional practice: that games are useful in introducing a topic, not good in fleshing out intermediate material, and great at extending to advanced levels: a very valuable conclusion which the Hattie cult denigrates through ignorance.

          This U-shaped effectiveness merely said that games should be used appropriately, and were on average not highly useful.

          But all this aside, until the statistical deficiencies of Hattie’s method are addressed, the whole work can be considered suspect.

  4. Not a bad career trajectory: a politically highly regarded yet apparently statistically flawed book used as a CV to work for the Commonwealth-anointed teaching standards support act. Happily though, I would contend that no-one outside of teaching knows what AITSL actually does (actually, let me correct that: “few inside or many outside teaching”). I wonder if he’s proud to be used as part of the first concerted political push to mandate teacher accountability. That his effect sizes don’t stack up should have been discovered well before now. Admittedly, some of his work has presented useful leads for teachers seeking quality practice, but it has also (now erroneously, it seems) been used as a club to whack teachers into infighting about teaching effectiveness. The sad thing is, if you can’t trust the CEO of your peak teaching excellence organisation for guidance, who can you trust?

      • George Lilley

      • 6 years ago

      Good points Brendan. Serious questions were raised a while ago about his calculations and his conflict of interest:

      Professor Ewald Terhardt (2011), “A part of the criticism on Hattie condemns his close links to the New Zealand Government and is suspicious of his own economic interests in the spread of his assessment and training programme (asTTle). Similarly, he is accused of advertising elements of performance-related pay of teachers and he is being criticised for the use of asTTle as the administrative tool for scaling teacher performance. His neglect of social backgrounds, inequality, racism, etc., and issues of school structure is also held against him. This criticism is part of a negative attitude towards standards-based school reform in general. However, there is also criticism concerning Hattie’s conception of teaching and teachers. Hattie is accused of propagating a teacher-centred, highly directive form of classroom teaching, which is characterised essentially by constant performance assessments directed to the students and to teachers” (p434).

      And Prof John O’Neill (2012), who publicly called Hattie a ‘policy entrepreneur’:

      “public policy discourse becomes problematic when the terms used are ambiguous, unclear or vague” (p1). The “discourse seeks to portray the public sector as ‘ineffective, unresponsive, sloppy, risk-averse and innovation-resistant’ yet at the same time it promotes celebration of public sector ‘heroes’ of reform and new kinds of public sector ‘excellence’. Relatedly, Mintrom (2000) has written persuasively in the American context, of the way in which ‘policy entrepreneurs’ position themselves politically to champion, shape and benefit from school reform discourses” (p2).

        • Michael Barry

        • 5 years ago

        Interesting that Hattie was considered too close to the NZ government. I got the impression last year when meeting in NZ with academics, government officials and teachers that Hattie is somewhat *persona non grata* in the land of his birth. NZ Indigenous educators were especially scathing, justifiably given their experience with “one size fits all” education.

        1. I think the reason Hattie’s reputation has diminished in NZ over the years is largely because of the teacher unions publicly critiquing Hattie. A great example is the report they commissioned on conflicts of interest, in which Hattie prominently features. Hattie constantly says he only makes money from book sales and nothing else; this report shows Hattie for what he truly is. I wish the Australian unions would have the guts to publish this sort of stuff. Report here: https://www.ppta.org.nz/dmsdocument/569

    • Victor Davidson

    • 7 years ago

    In correspondence he refused to acknowledge the impact of trained and qualified teacher librarians on student outcomes. End story.

    • Craig Petersen

    • 7 years ago

    Thanks for posting this, Darcy. It highlights the perennial problem of education systems wanting to cling to those that they see as being able to provide the ‘silver bullet’ – think Finland, Fullan and functional grammar! Teaching is more than a science – it is also a craft, and it is the skilful application of knowledge, along with the careful consideration of context and relationships, which (may) make the successful educator. I have always advocated the importance of teachers and leaders being critical in their adoption of new thinking. This is not to say that we should reject new research or ways of thinking, but the wise educator will consider research carefully in the context of their experience and their classroom. Critical, too, is the first teaching standard – “Know students and how they learn.”

  5. With all the talk about Hattie, I am always left wondering how Marzano’s work is any different. I was of the impression that his model(s) were built on meta-analyses. Could be wrong.

      • Duane E Swacker

      • 7 years ago

      Marzano? Built on meta-analysis?

      Ay ay ay!

      I had an asst supe adminimal try to get me to have an interactive smart board in my room. (I had been clamoring for another foreign language teacher to be hired for quite a while at that point and didn’t want that supposed smart board, didn’t need it.) She stated “Don’t you know that according to Marzano using a smartboard in class raises student achievement 17%. Why wouldn’t you want one?”

      I asked her if she had read the study. No, she hadn’t. I asked her if she had read any rebuttal to the study. No, she hadn’t. So when I got back to my room I emailed her the following links, which destroy Marzano’s ‘meta-analysis’ of the supposed effects of using an interactive smart board. Did you know that Marzano was hired and paid by Promethean, a company that makes interactive boards, specifically to come up with research they could use in selling that technology?

      See: http://edinsanity.com/2009/06/02/marzano_part1/ And from there you can access the “rest of the story”.

      The edupreneur Marzano has product to sell. Don’t let a little analysis get in the way of buying his “products”.
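      For readers wondering where a figure like “raises student achievement 17%” comes from: claims of this kind are typically a percentile-gain translation of a meta-analytic effect size, not a measured 17% improvement. A minimal sketch of that translation (assuming normally distributed scores, which is itself a strong assumption):

```python
from statistics import NormalDist

def percentile_gain(d: float) -> float:
    """Translate a Cohen's d effect size into a 'percentile gain':
    the number of percentile points by which the average treated
    student would move within the untreated group's distribution."""
    return (NormalDist().cdf(d) - 0.5) * 100

# An effect size of d = 0.44 becomes the familiar-sounding "17%":
print(round(percentile_gain(0.44)))  # 17
```

      The arithmetic is trivial; the contested part is whether the underlying effect size was soundly derived in the first place, which is exactly what the linked critiques dispute.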

  6. […] ‘The Cult of Hattie’: ‘wilful blindness’? – Darcy Moore’s Blog […]

    • Wayne

    • 7 years ago

    Make Duane E Swacker the Minister for Education!

      • Duane E Swacker

      • 7 years ago

      Sorry, Wayne, I don’t think I’d qualify, being a Gringo and all of that. But thanks for the kind thought!

      And that “academic misconduct” to which you refer is routine here in Gringolandia what with all the bought and paid for “research” (like Marzano’s to which I referred) put out by stink tanks, oops, that’s supposed to be think tanks, and the bought and paid for by the same culprits university departments. Lends a fine patina to the crap that comes out of those “institutions”. Think of it as gold plated bovine excrement.

    • Katy Lumkin

    • 7 years ago

    It is never one size fits all. Hattie provides one perspective. There is a place for explicit and systematic teaching, and there is a place for autonomy, mastery and purpose. I totally agree with you Darcy – it is about context. Watch these three different perspectives from the ILETC project, Melbourne: http://www.iletc.com.au/events/project-launch/455-2/

  7. […] for competitive benchmark heaven, which has driven educational policymakers to fall in behind the Cult of Hattie et al. who are fast turning teachers into lab rats as they seek to somehow duplicate the perfect […]

  8. […] in Australia sees many practitioners not really needing to read a journal article to know all about “the cult of Hattie” in our schools. . Hattie continues to rank the “195 Influences And Effect Sizes Related To […]

  9. Thanks for an insightful post, Darcy. It gets worse than flawed statistics!

    Hattie also consistently misrepresents studies. I’m slowly going through the controversial influences and reading the studies Hattie used. I’m absolutely staggered at the misrepresentation. Details here – https://visablelearning.blogspot.com.au/

    Many scholars also point this out; e.g., Emeritus Professor Peter Blatchford observes of Hattie’s VL:

    “it is odd that so much weight is attached to studies that don’t directly address the topic on which the conclusions are made” (p13).

  10. A happy diversion from my learning how to calculate various statistical things (a thorough understanding of the principles of psychometric theory – gah), T scores etc. But now I have a new motivation to master them! There is no one-size-fits-all approach to education; anyone who pretends there is, is peddling bunkum. I like the phrase ‘a manufactured crisis in education’; it is interesting to note that the drop in Australia’s international ranking correlates with increasing levels of government interference and standardised testing such as Napalm (I mean NAPLAN, or do I…?). However, correlation does not prove causation…

      • Duane E Swacker

      • 6 years ago

      Suzy,

      May I suggest that you read Noel Wilson’s work (if you haven’t already; and if you have, your thoughts please) on those “principles” of psychometric theory. See: “Educational Standards and the Problem of Error”, found at: http://epaa.asu.edu/ojs/article/view/577/700

      As it is, if you could answer the following in regard to the supposed “measuring” that psychometricians claim is happening, I’d appreciate a response:

      The most misleading concept/term in education is “measuring student achievement” or “measuring student learning”. The concept has been misleading educators into deluding themselves that the teaching and learning process can be analyzed/assessed using “scientific” methods which are actually pseudo-scientific at best and at worst a complete bastardization of rationo-logical thinking and language usage.

      There never has been and never will be any “measuring” of the teaching and learning process and what each individual student learns in their schooling. There is and always has been assessing, evaluating, judging of what students learn but never a true “measuring” of it.

      But, but, but, you’re trying to tell me that the supposedly august and venerable APA, AERA and/or the NCME have been wrong for more than the last 50 years, disseminating falsehoods and chimeras??

      Who are you to question the authorities in testing???

      Yes, they have been wrong and I (and many others, Wilson, Hoffman etc. . . ) question those authorities and challenge them (or any of you other advocates of the malpractices that are standards and testing) to answer to the following onto-epistemological analysis:

      The TESTS MEASURE NOTHING, quite literally when you realize what is actually happening with them. Richard Phelps, a staunch standardized test proponent (he has written at least two books defending the standardized testing malpractices) in the introduction to “Correcting Fallacies About Educational and Psychological Testing” unwittingly lets the cat out of the bag with this statement:

      “Physical tests, such as those conducted by engineers, can be standardized, of course [why of course of course], but in this volume, we focus on the measurement of latent (i.e., nonobservable) mental, and not physical, traits.” [my addition]

      Notice how he is trying to assert by proximity that educational standardized testing and the testing done by engineers are basically the same, in other words a “truly scientific endeavor”. The same by proximity is not a good rhetorical/debating technique.

      Since there is no agreement on a standard unit of learning, there is no exemplar of that standard unit and there is no measuring device calibrated against said non-existent standard unit, how is it possible to “measure the nonobservable”?

      THE TESTS MEASURE NOTHING for how is it possible to “measure” the nonobservable with a non-existing measuring device that is not calibrated against a non-existing standard unit of learning?

      PURE LOGICAL INSANITY!

      The basic fallacy here is the confusing and conflating of metrological measuring (metrology is the scientific study of measurement) with measuring that connotes assessing, evaluating and judging. The two meanings are not the same, and confusing and conflating them is a very easy way to make it appear that standards and standardized testing are “scientific endeavors” – objective, and not subjective like assessing, evaluating and judging.

      1. Thanks Duane, a very useful paper but a lot to read. Also, thank you for some of your analysis of Marzano in other blogs; we need more people to do that sort of thing and get it widely read by teachers.

          • Duane E Swacker

          • 6 years ago

          No doubt, George, that it is “a lot to read”, and most will not read such a lengthy paper. But how is 250 pages of Wilson’s work any different in length from a 350-page novel? Be that as it may, I have read it at least a dozen times and get something more out of it each time. I contacted Noel back in ’12-’13 (what a wonderful, amazingly intelligent gentleman) to look over and suggest any corrections to a summary of the dissertation. Here is that summary (which certainly doesn’t cover everything that Wilson has put into the writing):

          “Educational Standards and the Problem of Error” found at: http://epaa.asu.edu/ojs/article/view/577/700

          Brief outline of Wilson’s “Educational Standards and the Problem of Error” and some comments of mine. (updated 6/24/13 per Wilson email)

          1. A description of a quality can only be partially quantified. Quantity is almost always a very small aspect of quality. It is illogical to judge/assess a whole category only by a part of the whole. The assessment is, by definition, lacking in the sense that “assessments are always of multidimensional qualities. To quantify them as unidimensional quantities (numbers or grades) is to perpetuate a fundamental logical error” (per Wilson). The teaching and learning process falls in the logical realm of aesthetics/qualities of human interactions. In attempting to quantify educational standards and standardized testing the descriptive information about said interactions is inadequate, insufficient and inferior to the point of invalidity and unacceptability.

          2. A major epistemological mistake is that we attach, with great importance, the “score” of the student, not only onto the student but also, by extension, the teacher, school and district. Any description of a testing event is only a description of an interaction, that of the student and the testing device at a given time and place. The only correct logical thing that we can attempt to do is to describe that interaction (how accurately or not is a whole other story). That description cannot, by logical thought, be “assigned/attached” to the student as it cannot be a description of the student but the interaction. And this error is probably one of the most egregious “errors” that occur with standardized testing (and even the “grading” of students by a teacher).

          3. Wilson identifies four “frames of reference” each with distinct assumptions (epistemological basis) about the assessment process from which the “assessor” views the interactions of the teaching and learning process: the Judge (think college professor who “knows” the students capabilities and grades them accordingly), the General Frame-think standardized testing that claims to have a “scientific” basis, the Specific Frame-think of learning by objective like computer based learning, getting a correct answer before moving on to the next screen, and the Responsive Frame-think of an apprenticeship in a trade or a medical residency program where the learner interacts with the “teacher” with constant feedback. Each category has its own sources of error and more error in the process is caused when the assessor confuses and conflates the categories.

          4. Wilson elucidates the notion of “error”: “Error is predicated on a notion of perfection; to allocate error is to imply what is without error; to know error it is necessary to determine what is true. And what is true is determined by what we define as true, theoretically by the assumptions of our epistemology, practically by the events and non-events, the discourses and silences, the world of surfaces and their interactions and interpretations; in short, the practices that permeate the field. . . Error is the uncertainty dimension of the statement; error is the band within which chaos reigns, in which anything can happen. Error comprises all of those eventful circumstances which make the assessment statement less than perfectly precise, the measure less than perfectly accurate, the rank order less than perfectly stable, the standard and its measurement less than absolute, and the communication of its truth less than impeccable.”

          In other words all the logical errors involved in the process render any conclusions invalid.

          5. The test makers/psychometricians, through all sorts of mathematical machinations, attempt to “prove” that these tests (based on standards) are valid – errorless, or supposedly at least with minimal error [they aren’t]. Wilson turns the concept of validity on its head and focuses on just how invalid the machinations, the test and the results are. He is an advocate for the test taker, not the test maker. In doing so he identifies thirteen sources of “error”, any one of which renders the test making/giving/disseminating of results invalid. And a basic logical premise is that once something is shown to be invalid it is just that, invalid, and no amount of “fudging” by the psychometricians/test makers can alleviate that invalidity.

          6. Having shown the invalidity, and therefore the unreliability, of the whole process, Wilson concludes, rightly so, that any result/information gleaned from the process is “vain and illusory”. In other words, start with an invalidity, end with an invalidity (except by sheer chance every once in a while, like a blind and anosmic squirrel who finds the occasional acorn, a result may be “true”) – or, to put it in more mundane terms, crap in, crap out.

          7. And so what does this all mean? I’ll let Wilson have the second to last word: “So what does a test measure in our world? It measures what the person with the power to pay for the test says it measures. And the person who sets the test will name the test what the person who pays for the test wants the test to be named.”

          In other words it attempts to measure “’something’ and we can specify some of the ‘errors’ in that ‘something’ but still don’t know [precisely] what the ‘something’ is.” The whole process harms many students, as the social rewards for some are not available to others who “don’t make the grade (sic)”. Should American public education have the function of sorting and separating students so that some may receive greater benefits than others, especially considering that the sorting and separating devices, educational standards and standardized testing, are so flawed not only in concept but in execution?

          My answer is NO!!

          One final note with Wilson channeling Foucault and his concept of subjectivization:

          “So the mark [grade/test score] becomes part of the story about yourself and with sufficient repetitions becomes true: true because those who know, those in authority, say it is true; true because the society in which you live legitimates this authority; true because your cultural habitus makes it difficult for you to perceive, conceive and integrate those aspects of your experience that contradict the story; true because in acting out your story, which now includes the mark and its meaning, the social truth that created it is confirmed; true because if your mark is high you are consistently rewarded, so that your voice becomes a voice of authority in the power-knowledge discourses that reproduce the structure that helped to produce you; true because if your mark is low your voice becomes muted and confirms your lower position in the social hierarchy; true finally because that success or failure confirms that mark that implicitly predicted the now self-evident consequences. And so the circle is complete.”

          In other words students “internalize” what those “marks” (grades/test scores) mean, and since the vast majority of the students have not developed the mental skills to counteract what the “authorities” say, they accept as “natural and normal” that “story/description” of them. Although paradoxical in a sense, the “I’m an “A” student” is almost as harmful as “I’m an ‘F’ student” in hindering students becoming independent, critical and free thinkers. And having independent, critical and free thinkers is a threat to the current socio-economic structure of society.

          1. Thanks Duane, I will read it. Thanks for the summary; that helps. I’m just wading through all the background research Hattie used; that’s bogged me down, and I still have classes to teach.

      • Michael Barry

      • 5 years ago

      You’re right, these Australia-wide interventions have been (at best) useless.

      I’ve always thought of standardised testing like fertiliser: the right fertiliser, at the right place, at the right time, and it can be effective. Too much, the wrong type and wrong place and time, and you get blue-green algae, which in the Murray-Darling river system, last month gave my daughter a shocking rash.

      Or perhaps, just a steaming pile of sh1t.

  11. […] ‘The Cult of Hattie’: ‘willful blindness’? […]

    • Sanjee Balasha

    • 6 years ago

    You might be interested in the latest podcast from the Education Research Reading Room. It discusses the issues with effect size being misinterpreted as the effectiveness of an intervention by Hattie, the Education Endowment Foundation and others. See https://bit.ly/2rIById
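    The podcast’s core point can be shown with toy numbers: Cohen’s d divides a raw gain by the sample’s spread, so the very same intervention gain yields wildly different effect sizes depending on how homogeneous the sample is. A minimal sketch with invented data:

```python
from statistics import mean, stdev

def cohens_d(treatment, control):
    """Cohen's d: difference in means over the pooled standard deviation."""
    n1, n2 = len(treatment), len(control)
    s1, s2 = stdev(treatment), stdev(control)
    pooled = (((n1 - 1) * s1 ** 2 + (n2 - 1) * s2 ** 2) / (n1 + n2 - 2)) ** 0.5
    return (mean(treatment) - mean(control)) / pooled

# Identical +5-point gain, different spreads of ability (invented data):
broad_control = [40, 50, 60, 70, 80]    # mixed-ability sample
narrow_control = [58, 59, 60, 61, 62]   # narrow-ability sample
broad_treat = [x + 5 for x in broad_control]
narrow_treat = [x + 5 for x in narrow_control]

print(cohens_d(broad_treat, broad_control))    # ~0.32
print(cohens_d(narrow_treat, narrow_control))  # ~3.16
```

    This is one reason pooling and ranking effect sizes drawn from very different study populations, as league tables of influences do, can mislead.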

      • Ted Lynch

      • 6 years ago

      wow, this throws all those rankings into doubt!

    • Lucinda McKnight

    • 6 years ago

    Hi Darcy
    We wrote our article critiquing Visible Learning before your post was published, or we would have cited you. It is great that you are getting this discussion out there, in a way that teachers can access. Our argument is that Visible Learning de-professionalises teachers, acting as a means of surveillance and control, and is a metaphor that performs exclusion. Have you ever tried insisting to a student with a visual impairment that visible learning is all that matters? We would love others to read our article and discuss its ideas: https://www.tandfonline.com/doi/abs/10.1080/01596306.2018.1480474 It will also soon be available to everyone via our uni repository: http://dro.deakin.edu.au/list/?search_keys%5B0%5D=lucinda+mcknight&submit=Search&fields=search_keys%5B0%5D&cat=quick_filter

    1. Thanks Lucinda, I’m collecting all the peer reviews of Visible Learning and putting summaries in a blog that’s easily available to teachers here – https://visablelearning.blogspot.com/

    • Russ

    • 6 years ago

    Loving this!!!! It is time that snake oil salesman is put to the sword. Sadly, the fact that so many principals subscribe to this nonsense says a great deal about the types of people that are promoted to positions of authority in public schools.

    The Australian Curriculum is akin to the tablet delivered to Joseph Smith. Luckily in Qld we have EQ employees (HOC’s) who are apparently issued with crystalline glasses.

    Systems look after systems and the people that manage systems… Complicating what is simple (teaching requires content knowledge, empathy and engaging explanation skills AND THAT IS IT!!!!) for personal gain has been observable in the human condition since the beginning of recorded history.

    Classroom education is not an unassailable pillar of western civilization. A third of kids benefit from it, a third are traumatized by it and the remaining third will teach themselves what they need to know in order to get along in life with zero outside intervention.

    Unfortunately, human beings, when practising the aforementioned ‘complication procedure’, replace simple common sense with rot (e.g. ‘playing with the hose’ becomes ‘developing kinaesthetic awareness through the manipulation of liquid media’, etc.). This sort of disingenuous rot is what opens the door for the likes of Hattie and his double-speaking disciples.

      • Duane E Swacker

      • 6 years ago

      Russ,

      “(teaching requires content knowledge, empathy and engaging explanation skills AND THAT IS IT!!!!)”

      Nailed it!

      Y’all have Hattie to contend with, we gringos have Marzano, Duckworth and many others all with product to sell. And the teachers end up bearing the costs!

      • Michael Barry

      • 5 years ago

      “content knowledge, empathy and engaging explanation skills”

      I believe you just described Socrates, widely described as the greatest teacher in the Western tradition.

      I wonder how history will describe John Hattie?

  12. […] education risk-free through evidence-based interventions’. He names Robert Marzano, John Hattie and Michael Fullan as influential figures offering to ameliorate ‘risk’ if governments (and […]

  13. […] most read and commented on blog post points out the crisis of critical thinking and ethical leadership we have in education where power […]

  14. […] …one feels a little less comfortable with the advice considering the statistical analysis of effect size is worse than merely dubious. Research can only tell us what may have happened not what is needed next as we all grapple with the future” (Source: Darcy Moore, The Cult of Hattie: Wilfull Blindness) […]

    • Lori

    • 1 year ago

    Just one comment: There is no educational research to support teaching to “learning styles,” so I’m fine with Hattie’s flawed statistics on that effect size. 🙂
