I have written before about the need to keep education simple. The core pillars of curriculum, pedagogy and assessment are set firmly in place, and it will always be thus. If you are having conversations in School not directly linked to these three, my advice is not to waste your time. Educational silver bullets abound, with seductive twin promises of modernisation and revolution, terms that involve moving away from the mythical ‘factory model’ and embracing the ‘fourth industrial revolution’. The fact that neither of these soundbites is true doesn’t seem to bother education consultants or teachers keen to leave the classroom and forge a career telling colleagues about jobs that don’t exist.
No system is perfect, but an effective system should:
- Enable excellent teachers to be recruited, retained and developed.
- Provide a stimulating curriculum for pupils to gain expertise in a range of subjects.
- Eschew ideology in favour of evidence (but also allow for teacher autonomy).
- Offer a valid and non-invasive system of assessment.
Some of these are easier said than done (particularly the first), but by concentrating on ‘core business’ we should be able to retain the best of what we offer whilst seeking gradual improvement by harnessing modern research, evidence and technology.
Let’s concentrate on point 4 – assessment. Learning is invisible; we can only tell what pupils have learned and understood if we assess them. Assessment takes many forms – written or oral, timed or untimed, standardised or not. It can be as simple as a short quiz, or be the product of a year’s work (or more). All assessment has a core purpose: to find out what children know and can do. We usually test a sample to give us information about a domain, and effective assessment should therefore allow us to make wider inferences than the mark achieved alone. This is why the task itself is of less interest than what it tells us about the child’s expertise in the domain.
Here follow some general considerations when thinking about assessment:
High stakes v low stakes assessment
Low-stakes assessment is often more useful. A teacher will assess pupils in whatever way they feel is necessary to glean information about understanding. Low-stakes assessment provides data the teacher uses to guide learning; it does not become the central focus, as can happen with high-stakes assessment, where the mark on the task is all-important. All assessment should be formative, but high-stakes assessment is less likely to embrace the formative; in extreme cases, the high-stakes task can become the sum goal. An example of high-stakes assessment is a terminal examination. Such examinations are necessary to ensure valid grading of the pupils taking them, but good results should be the pleasant by-product of excellent learning, guided by genuine formative assessment.
One valid criticism of examinations is that they can be practised ad infinitum, so pupils end up becoming proficient in exams rather than in the subject. There are many examples of this; taken to extremes, pupils can spend more time working through past papers attempting to question-spot than learning the subject. The answer here is to produce better examinations – ones that test genuine understanding and the ability to synthesise ideas to solve unfamiliar problems, rather than regurgitating answers on automatic pilot. A bad version of anything is open to criticism; exams need to stand alone, not resemble an IQ test, where the score rises simply by taking more versions of a similar test.
Should teachers assess pupils during the course?
Yes, of course. Teachers should be taking readings to gauge pupil understanding on a lesson-by-lesson basis. But this need not be invasive; it does not need to be assessment ‘that counts’; it need not be part of a formal assessment schedule. Trust teachers to deliver the material, check understanding and allow it to build through logical sequencing of lessons. If a course lasts for two years, what does it matter how much a child has mastered after one year, unless it’s for formative purposes? Including assessment that counts towards a final mark before the whole course has been taught makes little sense. As mentioned above, learning is invisible, but it’s also messy. We can rarely plot a linear path of pupil learning – some pupils reach a plateau of understanding, whilst others improve exponentially. All assessment that counts should be delivered at (or near to) the end of the course; we don’t call the winner of the match at half time!
Who should write the assessment?
My simplistic view is that teachers should be trusted to assess their pupils in any way they feel necessary during the course, and that assessment should then be taken out of teachers’ hands for the final summative reckoning. It’s not a case of not trusting teachers, but of easing their workload and ensuring the pressure of summative assessment is taken away from the educator. Having the educator prepare the test, especially when it samples the domain, is unfair given the high-stakes nature of this assessment and the external pressure from parents. In this situation, teachers will always default to ‘teaching to the test’ – a common complaint when preparing for terminal examinations, but one exacerbated when the teacher has written the test!
Let’s see what we have in the South Australian Certificate of Education (SACE):
Who writes the assessment?
Teachers, in the main. 70% of the assessment that counts towards the ATAR (Australian Tertiary Admission Rank) is written by teachers, who then deliver that assessment to their own students. So pupils in different Schools are doing different assessments, written by their own teachers and graded by their own teachers. The teachers then choose the moderation sample themselves, and the sample is a small fraction of the whole. Surely anyone can spot the flaw in this system?
Is there much assessment?
Yes, lots of it. On average, a pupil will produce around 75 assessment pieces that count during their final two years at School. This invasive assessment is akin to coaching a team that only plays matches and never trains. Assessment, far from retaining its key purpose (to find out what pupils know, remember?), has become a sum-goal grind: less about testing knowledge and understanding and more about producing the task by whatever means possible. Many teachers will write the assessment task before considering how to teach the course – they then plot the most linear route to that task. If the task were different, so would be the teaching, but that makes no sense educationally, right?
Is the system gameable?
Yes, very. As is usually the case when assessment is not only high stakes but also internally controllable, it makes sense both to widen the hoops and to bring them so close to the students’ heads that they can’t help but fall through. It is in everyone’s interest for assessment to be unchallenging and controllable, whether through multiple similar ‘practice assessments’ implemented prior to the real thing or through a lengthy drafting process to polish pupil work.
High stakes v low stakes?
It’s mostly all high stakes. Boys and girls build their ATAR from 75 assessment bricks. They don’t need to worry about expertise in a subject, because that isn’t really the goal. Satisfying assessment criteria, jumping through assessment hoops, submitting drafts and outsourcing work to tutors have all become the norm, and what has been squeezed out is the joy of learning and the satisfaction of developing expertise.
I suppose I would mind less if there were at least some honesty and transparency about the process. But there isn’t. A system of assessment that is ripe for gaming and actively promotes a tactical approach to education is advertised as a standard that is ‘shifting at the pace of change’. Whatever that means.