Yesterday Stephen Dinham‘s keynote at the Australian College of Educators conference got a lot of attention. It featured in articles in The Australian, The Guardian, and spread through the usual twitter networks of educators. To his credit, following some twitter interest about the newspaper articles, Dinham shared the actual paper (see here). In this post, I want to pick up on some of the issues raised by Dinham as a means of engaging in public intellectualism.
Dinham states that ‘Australian primary students are out-performed by their secondary peers in relative terms on international measures of student achievement’. He sets out to explore some explanations for this, naming explicitly: i) a general lack of an evidence base for teaching and learning in primary education; ii) a propensity to adopt fads and fashions; and iii) increasingly unrealistic and untenable expectations placed on primary teachers and schools.
The initial premise is based on trends in large scale international testing regimes, in particular TIMSS (Trends in International Mathematics and Science Study) and PIRLS (Progress in International Reading Literacy Study) with fleeting reference to PISA. He does note:
Caution needs to be exercised when inferring from such rankings – differences between nations are sometimes small and the metrics are different – but the overall trend should be of concern.
So caution should be exercised, yet it is OK to build your entire argument on the trends in the data? I mention this because the title of the paper is Primary Schooling in Australia: Pseudo-Science Plus Extras Time Growing Inequality Equals Decline. My argument is that the reader needs to pay careful attention to how Dinham builds his case. The concern that content knowledge is being seen as problematic is evidenced through an example of an upper primary class in which some students incorrectly linked Captain Cook with the First Fleet.
In a paper intent on critiquing pseudo-science informing education, Dinham commits a similar error through an under-developed argument that hangs on a short chronological history of curriculum and anecdotal evidence from a single classroom. From this he makes the considerable leap to a binary between knowledge/content and activity/process. This is not to say that such binaries are not common in education; rather, that Dinham has not made the case.
This is important as the next explanation is The Lack of an Evidence Base for Teaching and Learning. Apart from reducing the argument to a comparison between medical and education research, Dinham takes a narrow view of ‘science’ (reducing it to a strict form of logical empiricism). I too have a problem with the nature of much education research, but the solution is not the privileging of one form (arguably one of the most heavily critiqued) over all others.
As would be expected, and arguably reflecting the dominant position in mainstream education circles in Australia currently, Dinham introduces the work of John Hattie (his now colleague at the University of Melbourne). Hattie’s work is very popular, but meta-analyses are only convincing if you believe in the original measures. My point is that meta-analysis is only convincing if you believe that the quantitative studies from which it is built were measuring the right thing, and in an appropriate way, in the first place. As I have noted elsewhere, all research is a political activity. Also, as mentioned on Twitter by Greg Thompson (@EffectsofNAPLAN), drawing on a Dylan Wiliam (@dylanwiliam) keynote, there is no evidence that effect sizes actually improve teaching (see here).
The critiquing of fads is much appreciated; it is not done enough in education. But isn’t the popularity of Hattie and his effect sizes a fad also? As Klaus Weber argues, as researchers we should study fads and fashions, not chase them.
Time pressure takes Dinham’s argument into an interesting, but somewhat predictable, space. The pressure on teacher time is a common discussion point in staffrooms, at conferences, and in just about any place where schooling is discussed.
Education continues to be built up as the place in society to solve all woes. In doing so, more and more expectations are added to schools, and to the teachers and leaders who constitute them. However, in making the division between ‘academic’ and ‘social welfare’, isn’t Dinham committing the same binary / false dichotomy – or ‘entity’ thinking – that he cites earlier as a problem? While I understand it serves his purpose to begin to make an argument for the innovative programmes they have at the University of Melbourne (a common move in recent work, I might add), is this not a loose coupling, or at least a leap in the argument? While there are some references, where is the evidence for his claims?
In Self-esteem Boosting and a Lack of Constructive, Development Feedback, Dinham again draws on anecdotal evidence of classes where ‘no one receives a ‘bad’ or failing mark, red pens are not used to correct work because ‘red is an angry colour’ and ‘merit’ certificates are thrown around like confetti for meeting normal expectations’.
Is the argument wrong? Maybe, but maybe not. What I am arguing is that we do not know: there is not enough evidence to make the claims. Hanging the argument for the most part on the work of Hattie and on self-citation (which, if we are honest, many of us do) is not enough. I agree with Dinham that:
There is a need to reject the pseudo-science and the shiny products people want to sell educators.
While it is problematic to read too much into the text of a keynote, my concern with the argument that Dinham builds is that it suffers from many of the same critical flaws that he speaks against. The way to refute and/or defend claims is through rigorous and robust scholarship, and it is important to remember that there are many ways to do scholarship. Anything that is popular is not necessarily rigorous or robust – and this I believe to be Dinham’s point. Why it is popular in the here and now is an important question. I applaud the goal of critiquing fads and fashions, but we cannot limit the support for the critique to some selective references and anecdotal evidence. In doing so, rigour and robustness are sacrificed for mere opinion.