
Are high pass rates masking educational failure?

If a student passes WASSCE or BECE, does that make them competent? Or just competent by Ghana’s standards, not the world’s?

Working with Lead For Ghana Associates, educators, and Head Teachers in the Ahafo region over the past year has been eye-opening.

To be honest, I thought I had already figured out the major challenges facing education in the region. I was convinced that poor proficiency in English was one of the biggest problems. LFG Associates teaching in Senior High Schools consistently told me about the inability of most of their students to read, speak, or write English, even though all subjects were taught and assessed in English.

They added that even teachers often preferred to speak Twi with their students and among themselves. I believed them completely, not least because I myself, somewhat uneasily, always had to switch from English to Twi whenever I spoke to one particular Head Teacher.

You can imagine, then, my surprise when I read the Ghana Statistical Service report on Access to and Quality of Basic and Secondary Education in Ghana (2000–2023). The report showed that in the 2019/2020 WASSCE, the Ahafo region recorded the second-highest pass rate in English—70.6%.

That was higher than Accra’s 61.5%! How could this be possible? How could the lived experiences I had encountered differ so sharply from the official data? Clearly, there was more to this story.

Urban areas like Accra and Kumasi are often assumed to have the best schools, the most resources, and the most qualified teachers. On the surface, this appears true: many urban students seem to speak fluent English and have access to facilities that schools in regions like Ahafo can only dream of.

But is this the reality for most urban students? Behind the appearance of privilege may lie deep urban poverty. Many city children attend overcrowded classrooms and underfunded schools, and live in households where poverty makes learning nearly impossible. Could it be that children in rural communities, despite their disadvantages, sometimes perform better because their environments shield them from the harshest forms of urban deprivation? Or perhaps students in cities are so distracted by social media and other influences that they spend less time studying?

Another explanation could lie in how we measure learning. National assessments often test for placement rather than genuine comprehension or application. With pass marks set low and exams designed more for sorting students than assessing mastery, results may paint an overly optimistic picture. But is passing the BECE or WASSCE as it is now really proof of competence?

If the pass mark is 50% and a student scores exactly 50, they have lost half of the available marks. Can we confidently say that the student is proficient in the subject?

When these issues are raised, however, people often rush to cite examples of the few students who excel in competitions like the NSMQ or shine internationally, while overlooking the average learner. What about the gap between these celebrated stars and the regular student at Twereku Ampem SHS who barely scraped through the WASSCE?

This raises a crucial question: are outdated assessments failing our students? Finland, for instance, emphasises narrowing the performance gap between its highest- and lowest-achieving students, ensuring fairness and consistency in learning outcomes. Ghana, in contrast, seems to celebrate the exceptional minority while neglecting the struggling majority.

Re-engaging with international benchmarks such as PISA could provide a more honest picture. Are Ghanaian students truly performing at global standards, or are we simply hiding behind self-defined metrics?

And then there is the most uncomfortable possibility: examination malpractice. Could it be that supervision in urban centres is stricter, with tighter networks of monitors, police, and invigilators, while rural areas are more loosely supervised? Could inflated results be the by-product of lax oversight? While there is no conclusive evidence that malpractice explains the “rural advantage”, we risk overlooking a critical factor if we dismiss it. If exam conditions are compromised, we create a false sense of achievement while leaving students unprepared for future challenges.

Perhaps the truth lies in a combination of all three explanations: urban poverty dragging down performance, flawed assessments inflating national results, and possible malpractice skewing outcomes in certain regions.

What is clear, however, is that Ghana’s education system is far less healthy than the statistics suggest. On the ground, the reality tells a sobering story. The paradox between lived experience and reported results should compel us to ask difficult questions:

Are our assessments truly reflecting student learning? How can we design systems that support the average student, not just the exceptional few? Is it time to re-engage with international benchmarks for a more honest evaluation?

Ghana must face these questions urgently. Otherwise, we risk celebrating numbers while neglecting the very students the education system is meant to serve.

By: Akua & Antipem

 
