Updated: Sep 29, 2019
I remember my first experience in an FE college well. It involved presiding over enrolment season in late August/ early September, with too many emails flying back and forth referencing ‘IA’ or ‘Initial Assessment’. At the time, I had little to no idea what this meant, and I remember being nearly killed by our furious exams officer, who was attempting to make sure it was completed properly. For those who are unaware, students coming into Further Education sit computer-based assessments upon enrolment to determine their current ability level in English and maths. This information is used to help place them on the correct course and to let vocational tutors know their strengths and areas for development.
For many reasons, it’s never sat well with me that a student’s first meaningful action during college enrolment is to sit lengthy initial assessments. Part of me thinks it is necessary and aids in making sure students are placed on the correct English and maths level/ course, but what of the negative aspects? Below, I discuss some of the potential negative outcomes and propose a possible solution.
Becoming a little testy
Most students complete IA still heavily fatigued from exam season. With some sitting upwards of 30 (THIRTY!) GCSE exams, the thought of further assessments is enough to sicken even the keenest of students. A side effect of this exam fatigue is that students often don’t give the assessment the attention it needs and can complete a 1-hour assessment in under 5 minutes. Now, this can be mitigated and they can be ‘asked’ to do it again, but how seriously will they take this second attempt? When deciding which English and/ or maths qualification to enter students for, this assessment can sometimes be the deciding factor, and, given the above, we cannot be certain this decision is being made effectively.
‘You don’t fatten the pig by weighing it’
A major selling point of most initial assessment software is that it gives rich insights into student strengths and developmental areas in E&M. These insights can also be exported to vocational tutors, who use the information to embed literacy and numeracy into lessons, plugging gaps and contextualising English and maths in their area. But is this always happening? With further cuts to course hours up and down the country, increased workload and an evolving FE curriculum, can practitioners be expected to do this effectively? And can we rely on IA accuracy before we even reach this stage?
It is usually redundant as of results day
A key argument for IA is that it helps colleges to place students onto the right level of English and/ or maths or the right qualification. The counter-argument is that this placement is usually redundant as of results day anyway. All students gaining a grade 3 at GCSE will re-sit, a majority of grade 2 students will either join GCSE or L1, and most below grade 2 will study L1 or Entry Level qualifications. On top of this, given the issues with IA above, the best source of assessment is generally an English and maths teacher, who will flag any issues with student placement anyway. Do we need an extra layer of assessment between GCSE, results day and timetable commencement?
A potential solution
The main need for completing IA is that GCSE results only tell part of the picture. If colleges had the raw marks to go with grades, course placement could be done much more efficiently and effectively. Gaining this data from schools is sometimes near impossible, as students transfer to different colleges/ enter the world of work, or there simply isn’t a strong enough relationship between some schools and colleges.
Thousands of pounds are spent annually on maintaining and using IA software to give potentially inaccurate results – could this money be diverted elsewhere? Some exam boards have already started to offer services which allow colleges to track data across the school-college transition. If it is possible for exam boards, why can’t the DfE create a national database of results which any college can access? Supporters of austerity will point to the cost of this and the impact it would have on current IA providers (job losses, potential closures). A potential solution for these companies would be to create and maintain databases holding all English and maths GCSE results: task them with liaising with exam boards to compile this information, and ask colleges to pay a subscription/ maintenance fee for the service – potentially the same amount currently being paid for IA software.
It would allow instant access to results, enabling colleges to place students at the right level on the right course, and it would be a universally consistent system used by all colleges. Surely this would hold some attraction for the armies of exams officers and administrators currently chasing paperwork?
An argument against this is the lack of information given to vocational tutors on student English and/ or maths strengths and weaknesses. With highly trained specialists in all colleges, surely this information could come from E&M practitioners? Low-stakes quizzes on spelling, punctuation and grammar (supplemented by information compiled through tasks completed in class and other building-block topics) would provide the same level of detail and could be exported and shared with ease.
With the additional time and potential money saved, FE practitioners would have resources available to develop CPD around English and maths, and the DfE could use any residual funding to commission KS1 – KS3 teachers to train FE practitioners on how to identify gaps in metacognitive building blocks and begin to fill them.
There are many holes in the above, and whether anything like this will be actioned any time soon is highly debatable, but it at least points to an alternative to the dispiriting and potentially inaccurate practices currently being used up and down the land.