
Two mistakes schools are making when
designing their assessment systems

by Gareth Davies, Managing Director, Frog Education
 

 

  • 1. Making the assessment too subjective

    One thing to bear in mind when you’re setting your KPIs (Key Performance Indicators) is that you need to be able to assess the product of a child’s work clearly against the KPI description (and ideally against an exemplar standard that you’ve selected as the “bar” for every child to reach). The exemplar might be a piece of work with arrows scribbled on it, highlighting why it is what we’re aiming at for this KPI. Equally, it could be a video of children reading or answering questions; whatever is needed to get a clear standard across for your teachers to aim at.

    These exemplars should exist for the end of every year, not just years 2, 6 and 11. A free set of exemplar standards for Reading, Writing and Maths, aligned with the NAHT Framework, has been provided here.
     
  • 2. Not letting go of levels

    The second reason things may be getting a bit hairy is the way you are making judgements against each KPI. There is evidence that many schools are layering the principle of levels over every KPI, and the practice is so widespread that the DfE recently highlighted it in one of its workload review reports.

 

The recent removal of “levels” should be a positive step in terms of data management; schools should not feel any pressure to create elaborate tracking system or “working at grade” approaches. Yet there is anecdotal evidence that schools are introducing complicated systems which mimic levels. This is an attempt to overlay the old world onto the new, and it creates unnecessary data burden that should be avoided.

Eliminating unnecessary workload associated with data management
Report of the Independent Teacher Workload Review Group, March 2016

 

The most common form of this mistake is the use of the early years E, D, S scheme (emerging, developing, secure). We have seen schools add an M for mastery on the end; we’ve seen E, E+, D, D+ and so on; and last week we discovered a school using E1, E2, E3, E4, D1, D2, D3, D4. This isn’t to be ridiculed: it is the approach most schools are taking. It’s familiar, and no one has shown them any other way of approaching the challenge, so what do we expect?

This is an attempt to measure a child’s progress through an abstract numbering system; a desire to measure it in points. It simply isn’t necessary, and it fails for one of two reasons:
 

Failure 1: E, D, S, etc. aren’t clearly defined for each KPI, so each teacher may have a different interpretation of what each “level” means. The judgement is therefore no more valid than National Attainment Levels were, and any graphs or reports generated from it are equally invalid. It doesn’t inform teaching and learning; it is an abstract reporting mechanism at best, and completely misleading at worst.

Failure 2: E, D, S, etc. are clearly defined, with a performance description against each level for each KPI. This is the direction many schools are starting to head in, encouraged by their software suppliers. While technically “valid”, the workload required of teachers to assess against this model is nothing short of crippling. In our experience, this approach ultimately causes such resentment among teachers that you’ll struggle to get them to engage with anything that follows; you’ll break their will to engage with best practice.

In short, the granularity you need to demonstrate progress is in the curriculum objectives themselves. You do not need a further set of numbers to sub-divide them; that just creates either ambiguity or huge amounts of work, and it doesn’t give you any better-quality data. In some cases it renders the data completely invalid, although admittedly you can get a cracking-looking graph out of it!

To make your workload manageable, you need to make sure that your KPIs are simple and consistently assessed. Layering the principle of levels over the top of each KPI moves you away from this goal, not towards it. All we really need to know is whether each child is on track to achieve the KPI by the end of the year and, if they aren’t, what intervention we are going to put in place to sort it out.


Simple.
 

And it works.
