What are exams good for? Primary and secondary school exam reform in PNG

Anthony Swan

Not many people have fond memories of school exams. It’s probably a fair generalisation to say that most students don’t like sitting them, teachers don’t like marking them, and education administrators don’t like coordinating them. On top of this, they are really expensive to run. The question that Motown legend Edwin Starr might then ask is: like war, what are exams good for?

This question cuts to the heart of a debate occurring in Papua New Guinea about how students should be assessed as they progress through primary and secondary school. Currently, all students at the end of primary school (grade 8) and lower secondary school (grade 10) are required to sit an externally administered nationwide exam. However, as PNG prepares to release a new five-year National Education Plan, these national exams are set to be phased out over six years from 2016 and replaced by “internal school assessment systems”. Unless new national exams, along the lines of NAPLAN in Australia, are introduced, the only national examination will be the one taken at the end of high school (grade 12).

The main reason being communicated to the public for the change is to remove constraints on students progressing from primary to secondary school and into year 12. As reported [pdf] by the PNG Department of Education, “Each year around 100,000 students are pushed out of the education system as a direct result of these examinations”. The problem is that the grade 8 and 10 results are used by provincial administrations to select students into secondary schools. The “pushed out” students are those who fail to meet the minimum cut-off grade in these exams. According to the acting Education Secretary Dr. Kombre, examinations are “a colonial legacy to ensure that every student is not given an equal chance of completing their education”.

Improving access to secondary school is certainly commendable. However, it is worth asking two questions that push a little deeper on this issue. First, will abolishing the national exams actually remove constraints on students progressing into secondary and high school? Second, do the national exams help strengthen the education system and raise educational outcomes, perhaps in ways that are not obvious to casual observers?

The national cut-off grade for entry into secondary school is set by the Department of Education at a total of 80 marks out of 150 (across three exam subjects). Presumably the cut-off is designed to ensure that students achieve a minimum standard at the end of primary school. However, the national cut-off grade is really just a guide for provincial administrations, since responsibility for determining student intake sits at the provincial level.

In practice, nearly all provincial administrations set their own cut-off grades substantially below the national cut-off, typically at a total mark in the mid-60s, which happens to be close to the national average total grade. The lower provincial cut-offs indicate that students with low academic achievement are already being accepted into secondary school. It seems, then, that the cut-off grades for selection into secondary school are really being used to manage overcrowding at schools caused by a lack of infrastructure and teachers. If this is the case, the grade 8 examinations are not really pushing students out of secondary school; in fact, they would seem to be totally irrelevant to the number of students progressing into secondary school.

So what might be given up by abolishing the national exams, and can internal assessment at schools fill the vacuum?

The results from standardised national exams can help stakeholders identify how teachers, schools and provincial education systems are performing, as measured by student learning outcomes, both over time and across locations. This is particularly important given the difficulty of monitoring the activity and effort of students, teachers and education administrators. For example, around 20 per cent of PNG primary schools in our 2012 survey reported not having had any type of inspection visit that year, and those that were inspected typically received only one visit.

Standardised test results help with performance monitoring because they can be used to diagnose where problems in the education system may lie. For example, suppose a student performs poorly in an exam. If the result is an exception relative to the rest of the class, it suggests that student-specific factors are to blame. If, however, the student’s class performed poorly on average while other classes in the school did not, then class- or teacher-specific factors are the more likely culprit. Similarly, a comparison of results across schools or provinces can indicate potential problems at those levels, although appropriate controls need to be applied to ensure that comparisons are being made between reasonably similar types of students. The My School website, for example, facilitates comparisons of NAPLAN results that control for differences in the socio-economic background of students across schools.
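
To make this kind of diagnostic concrete, here is a minimal sketch in Python using entirely hypothetical class names, scores and thresholds (nothing here is drawn from the PNG exam system): a score far below an otherwise typical class points to student-specific factors, while a whole class sitting well below the school average points to class- or teacher-level factors.

```python
from statistics import mean

# Hypothetical standardised scores out of 150, grouped by class.
school_scores = {
    "8A": [92, 88, 75, 81, 54, 90],   # one low outlier in an otherwise solid class
    "8B": [58, 61, 55, 63, 60, 57],   # whole class sits below the rest of the school
}

school_avg = mean(s for scores in school_scores.values() for s in scores)

for class_name, scores in school_scores.items():
    class_avg = mean(scores)
    if class_avg < school_avg - 10:
        # Whole class well below the school average: points to class- or
        # teacher-specific factors (the thresholds here are arbitrary).
        print(f"{class_name}: class average {class_avg:.0f} suggests class/teacher factors")
        continue
    for s in scores:
        if s < class_avg - 15:
            # One student well below an otherwise typical class: points to
            # student-specific factors.
            print(f"{class_name}: score {s} suggests student-specific factors")
```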

Students can also be tracked over time to see which secondary schools are better able to “value-add” to student learning, by comparing the gain in performance between grade 8 and grade 10 or 12 exam results for individual students. Unfortunately, the education system in PNG does not yet have the capacity to track student exam results over time.
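
If results could be linked at the individual level, the value-add calculation itself would be straightforward. The sketch below is purely illustrative, with made-up student IDs, school names and scores; a real value-add model would also need to adjust for differences in student intake rather than relying on raw gains.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical linked records: (student_id, secondary_school, grade 8 score, grade 10 score).
linked_results = [
    ("S001", "Highlands Secondary", 72, 84),
    ("S002", "Highlands Secondary", 65, 70),
    ("S003", "Coastal Secondary", 80, 82),
    ("S004", "Coastal Secondary", 68, 66),
]

gains_by_school = defaultdict(list)
for _student_id, school, grade8, grade10 in linked_results:
    gains_by_school[school].append(grade10 - grade8)

for school, gains in sorted(gains_by_school.items()):
    # Average raw gain per school; intake adjustments are omitted for brevity.
    print(f"{school}: average gain {mean(gains):+.1f} marks")
```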

The enhanced transparency of student learning performance that standardised exam results can provide is particularly important under PNG’s Tuition Fee Free (TFF) policy, whereby schools control large sums of funding but sometimes face little oversight of how those funds are spent. As explained by Ludger Woessmann here, the more flexibility a school has over its management, the more important it is to have external standards and assessments.

In contrast, internal assessments cannot function in this way because they tend to be subjective, are unlikely to be comparable across teachers, schools or provinces, and can be influenced by local factors in ways that distort the true assessment of learning outcomes. For example, if TFF funds at a particular school are being wasted to the detriment of student learning, the school could encourage a lenient approach to internal assessment in order to hide the poor performance.

Furthermore, the national exam system is designed to centrally collate results and to facilitate the analysis and dissemination of information based on them. Students can then use this information to benchmark their performance against the entire national cohort as they progress through school. This is helpful for decisions about subject selection, how much effort to devote to learning, and which education pathways best suit their ability and ranking. In contrast, an inherent problem of internally assessed systems is the difficulty of collating the data at a national level for analysis and dissemination. The subjectivity of internal assessment results also makes benchmarking across students and over time difficult or unreliable.
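
As a rough illustration of what cohort benchmarking could look like once results are collated centrally, the following sketch computes an approximate percentile rank from a handful of hypothetical scores; it reflects nothing about actual PNG exam data.

```python
from bisect import bisect_left

# Hypothetical national results for one exam subject, collated centrally.
national_scores = sorted([54, 58, 61, 63, 66, 68, 70, 72, 75, 80, 84, 91])

my_score = 72
# Share of the national cohort scoring strictly below this student's mark.
share_below = 100 * bisect_left(national_scores, my_score) / len(national_scores)
print(f"A mark of {my_score} is above roughly {share_below:.0f}% of the cohort")
```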

Abolishing the national grade 8 and 10 exams does not mean that all of these benefits will be lost. There are two reasons for this. First, the full potential benefits of the grade 8 and 10 examinations are not currently being realised, due to difficulties in standardising and implementing the exams, a poor system for collecting and storing the data, limited analysis of test results, and little information being provided to students, parents and teachers for benchmarking purposes. Second, external standardised assessments, such as the Pacific Islands Literacy and Numeracy Assessment (pdf), may be introduced or scaled up to deal with the shortcomings of internal assessment systems. However, these sorts of assessments are survey-based and tend to cover only a small proportion of the student population. For this reason they are limited in their ability to signal how the majority of students, teachers and schools are actually performing.

The challenge for education reformers is to invest in both school infrastructure and teachers, so as to improve access to school and to ensure that time spent at school is not just leading to “empty learning”. National, standardised and externally administered examinations allow for independent and transparent measures of student learning outcomes, which are crucial for raising the quality of education. That’s what exams are good for.

Anthony Swan is a Research Fellow at the Development Policy Centre.


Anthony Swan

Anthony Swan is a Research Fellow at the Development Policy Centre and a lecturer in the Master of International and Development Economics program. He managed the PEPE PNG project at the National Research Institute of PNG, and was also a lecturer at the University of Papua New Guinea. Now at the ANU, he continues his work on the PEPE project. He has a PhD in economics from the ANU and a background in economic policy formulation and consulting.

2 Comments

  • My earlier comments appear to have been lost in a system failure. I will therefore attempt to reconstruct the main points I made.

    The first point is that a great deal is expected of examinations and tests. They generally serve a number of different decision-making purposes such as gathering data for curriculum planning, guidance, monitoring learning, selection, and certification. Increasingly, a key expectation seems to be managerial, as this Blog demonstrates. Terms like accountability, transparency, benchmarking and performance monitoring often dominate discussions at the expense of a key purpose which is to support student learning through constructive feedback. The managerial approach also tends to import a punitive attitude of fault-finding and apportioning “blame” rather than exploring reasons for success. When one examination is used for multiple purposes it is often the case that it achieves none as well as a single-purpose approach.

    Second, it is wrong to equate national exams with assessment tools such as NAPLAN. National examinations will normally focus on assessing student learning in all areas of the curriculum whereas literacy and numeracy tests have a limited focus on these two dimensions only, as important as they may be. Where high stakes tests like NAPLAN are introduced it is not unusual to find a narrowing of emphasis on literacy and numeracy at the expense of other curriculum areas.

    Third, there is enormous risk in assuming that ideas about teaching, learning, and assessment can be imported successfully from western nations into different cultures and contexts such as those in PNG. Gerard Guthrie’s book, The Progressive Education Fallacy in Developing Countries, which is largely based on his work in PNG, should be mandatory reading for anyone undertaking development work in education.

    In particular, NAPLAN has raised many questions about its overall education benefit in Australia. It may well prove to be a disastrous model in PNG. It is a form of high stakes testing. I have already written about this matter in two Devpolicy Blogs and made this observation in the first:

    “The more we use high-stakes tests to assess students, teachers, schools and systems, the corruptions and distortions that inevitably appear compromise the construct validity of the test and make scores uninterpretable. It is not difficult to imagine the flow-on effects of high stakes testing in developing countries already fighting the scourge of corruption in their education systems. The Special Issue of the journal Assessment in Education: Principles, Policy and Practice, Volume 19, Number 1, 2012 contains a review of the consequences of high stakes testing in developing countries as well as in Australia.”
