Research and analysis

Responding to Criticisms of the CASLO Approach (Report B): Discussion

Published 18 November 2024

Applies to England

In the preceding sections, we presented AO views about the benefits and potential problems of the CASLO approach in the specific context of their ‘exemplar’ qualifications. We reported on what they saw as key mitigations or protective factors that helped reduce the risks of problems arising in these qualifications. We also drew out, where relevant, more detailed insights concerning the nature of certain problems and perceived tensions in the CASLO approach.

In this section, we draw together some of the findings about AO perceptions of the benefits of the CASLO approach, the extent to which they recognised different potential problems and the types of mitigations they proposed. We consider whether it is possible to distinguish between problems that are easier or harder to prevent or mitigate. And we consider how tensions within the CASLO approach, or contextual qualification factors (such as purposes or cohort size), might affect the likelihood of problems arising, or the feasibility of mitigating relevant risks. Finally, we discuss patterns of mitigation type prevalence and applicability across different problems, and tentatively consider the plausibility of certain mitigations. We conclude by considering the implications of our findings for our understanding of the optimal functioning of the CASLO approach.

Perceived benefits of the CASLO approach

The AOs were largely positive about the use of the CASLO approach across their various contexts. Firstly, the approach was perceived to incorporate key mechanisms that enable AOs to design qualifications that help to promote student engagement and mastery learning, and satisfy the highly varied needs of their students. Secondly, the approach was simultaneously deemed to satisfy the requirements of employers and other stakeholders for relevant and dependable qualification results and for competent workers. The key mechanisms that were perceived as fundamental to meeting these needs, and which are embodied in the CASLO approach, are:

  • flexibility (in delivery or mode of learning; in qualification or assessment design; to enable domain personalisation or contextualisation in learning and assessment)
  • transparency (of the learning domain and of the alignment between the learning domain and assessment)
  • the mastery model (in learning and assessment)

A high degree of flexibility and transparency in qualification delivery, design and assessment was deemed by most AOs to be particularly useful for students. These features were thought to:

  • create opportunities for learning which might not be facilitated by other qualification approaches
  • allow achievement of qualifications from different starting points
  • help promote student agency and engagement through a clear sense of their learning journey
  • ensure a sense of relevance for students, helping to promote engagement and motivation

The mastery approach was additionally seen as motivating for students, instilling them with confidence in their abilities to do the job that they are preparing for.

Flexibility was also believed to help satisfy the needs of employers and other users for qualifications that are relevant in their specific contexts, with transparency of specifications helping to ensure a higher degree of clarity and trust in what these qualifications certify. Transparency of the content domain and its alignment with assessment requirements was also valued by the AOs themselves as a mechanism that helped to promote and maintain comparability across the different contexts in which their qualifications were delivered. These aspects, together with adopting the mastery approach to learning and assessment, were deemed to contribute to the overall validity and dependability of qualification results. However, in addition to the abovementioned reasons, some AOs in our sample noted other reasons for adopting the CASLO approach, such as the expectations of employers, their sector or regulatory bodies, or historical reasons.

Problem recognition patterns as an indicator of potential problem significance in CASLO qualifications

While all AOs highlighted various benefits of the CASLO approach, the views expressed in our interviews were sometimes qualified by a recognition of some of the challenges that the approach also brings. Some of these challenges are related to potential problems that have been identified in the literature for CASLO qualifications. But they also reflect tensions that were often referenced in our interviews, which we return to later. Some of the challenges and tensions are essentially linked to the key CASLO approach mechanisms and involve difficulties in ensuring:

  • sufficient flexibility without compromising standards in teaching, learning and assessment
  • sufficient transparency without excessive predictability of assessment and negative backwash into teaching and learning
  • domain mastery (that is, exhaustive teaching and assessment) without excessive burden

While there was some recognition of all of the abovementioned challenges and tensions in the CASLO approach, the recognition patterns for the specific potential problems that were discussed in the interviews differed depending on problem type[footnote 1]. The potential assessment problems tended to be more commonly recognised by the AOs than potential teaching, learning and delivery problems. This pattern might suggest that teaching and learning problems are deemed to be less of a challenge in CASLO qualifications. However, it might also reveal something about the perceived boundary between AO responsibility and centre responsibility, with AOs feeling a stronger sense of ownership of assessment issues. We return to the broader theme of AO responsibility and impact in the next section.

The most frequently recognised assessment problem was that of inaccurate judgements, with 12 of 14 AOs recognising its potential relevance for their exemplar qualifications. The least recognised assessment problem was that of atomistic assessor judgements, with less than half of the AOs recognising it outright as a potential problem, although another 6 AOs saw some relevance in it. The other potential assessment problems were recognised by the majority of the AOs.

It should be noted that several assessment problems, including poorly conceived assessment tasks or events, lenience, malpractice and inappropriate support, can be related to the potential imprecision of the AC. Imprecise AC may allow assessors some leeway both to design tasks and to interpret standards in ways that could, inadvertently or deliberately, reduce the level of demand or the consistency of standards to which students are assessed. It is, therefore, unsurprising that most of these problems were recognised to a similar extent as the potential problem of inaccurate judgements based on imprecise AC.

Among the teaching, learning and delivery problems, the most frequently recognised one was that of incoherent teaching programmes, which was recognised by half of the AOs in our sample. The least recognised problems were superficial learning, lack of currency and downward pressure on standards, with only one or two AOs recognising them outright as potentially relevant to their exemplar qualifications. The rest of the problems were also recognised by only a minority of the AOs.

These recognition patterns tentatively suggest that some potential problems might have been perceived as more challenging than others. Inaccurate judgements and inappropriate support topped the list of assessment problems in this respect, while incoherent teaching programmes and undue assessment burden topped the list of teaching, learning and delivery problems.

Furthermore, patterns in whether a problem was not recognised at all or was only partially recognised might further capture something about different AO attitudes towards different types of problems, or about the relevance of different problems to different qualifications. For instance, superficial learning, despite being explicitly recognised by only 2 AOs, was seen as somewhat more problematic than some of the other teaching, learning and delivery problems, most of which tended to relate to the specification of the content domain or standards, as noted in the previous paragraph. The latter problems were, perhaps, more in the domain of the AOs than in the domain of teachers, and maybe, for that reason, perceived to be more easily mitigated and, thus, to pose fewer risks.

The AOs discussed a wide range of mitigations and several protective factors irrespective of whether or not they explicitly recognised potential problems. Among those AOs that did recognise the problems, there appeared to be some relationship between AO perceptions about the relevance of the problems to their exemplar qualifications and the number of mitigations that they referenced in relation to them, although this was not a completely clear-cut pattern.

Figure 3 (assessment problems) and figure 4 (teaching, learning and delivery problems) below show the number of references to mitigations or protective factors mentioned in relation to individual problems across the AOs that recognised them. The “all mitigations” bars (in blue) depict the total number of references to mitigations or protective factors per problem, including repetitions of the same mitigations or protective factors across AOs. The “distinct mitigations” bars (in orange) reflect the number of mitigations or protective factors counted only once per problem even if mentioned by multiple AOs.
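For illustration, the short sketch below shows how the 2 counting schemes could be derived from coded interview data. The AO labels, problems and mitigation codes are hypothetical and do not represent our actual dataset or analysis procedures.

```python
from collections import defaultdict

# Hypothetical coded references: (AO, problem, mitigation type) triples.
references = [
    ("AO1", "inaccurate judgements", "QA"),
    ("AO2", "inaccurate judgements", "QA"),
    ("AO2", "inaccurate judgements", "standardisation"),
    ("AO3", "lenience", "QA"),
]

all_mitigations = defaultdict(int)       # counts every reference, repeats included
distinct_mitigations = defaultdict(set)  # each mitigation type counted once per problem

for ao, problem, mitigation in references:
    all_mitigations[problem] += 1
    distinct_mitigations[problem].add(mitigation)

for problem in all_mitigations:
    print(problem, all_mitigations[problem], len(distinct_mitigations[problem]))
# inaccurate judgements: 3 total references, 2 distinct mitigation types
# lenience: 1 total reference, 1 distinct mitigation type
```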

Figure 3 Total number of mitigations and number of distinct mitigations mentioned per assessment problem across the AOs that recognised them

Figure 4 Total number of mitigations and number of distinct mitigations mentioned per teaching, learning and delivery problem across the AOs that recognised them

It can be seen that the problems that were most frequently recognised (for instance, inaccurate judgements and incoherent teaching programmes) were associated with the highest number of total references to mitigations and the largest variety of distinct mitigations. The reverse was true for the problems that were least frequently recognised (atomistic assessor judgements, superficial learning, downward pressure on standards, lack of currency and local or personal irrelevance), as these were associated with the lowest number of references to mitigations and a somewhat smaller variety of distinct mitigations.

The overall smaller variety and number of mitigations proposed for the least frequently recognised potential problems cohere with the AO comments which suggested that some of these problems might have been, in some respects, outside of direct AO control or could be more easily mitigated through a smaller number of mechanisms. Superficial learning might fall into the former group, while lack of currency might fall in the latter.

On the other hand, the most frequently recognised problems were dealt with through many different mitigating mechanisms or protective factors, with AOs proposing on average 6 (and at least 4) mitigation types for inaccurate judgements, and on average 5 (and at least 3) for incoherent teaching programmes. This might be indicative of AO perceptions of the complexity of these problems, and perhaps also a reflection of a high degree of AO agency in mitigating associated risks. The AO comments described in earlier sections suggested a great deal of complexity in how far the multiple mitigations needed to work in concert to address the problems.

Interestingly, even though they were not recognised by as many AOs as some other problems, lack of holistic learning and lenience were associated with a relatively large number of mitigations, both in terms of overall number and variety of mitigations proposed. In contrast, malpractice and inappropriate support, despite being recognised by most AOs, were associated with comparatively fewer and less varied mitigations than other more widely recognised problems. We speculate that this finding, in conjunction with the profile of mitigations described earlier, may indicate that the AOs saw problems such as the latter 2 as relevant, but had fewer mechanisms at their disposal to address the associated risks. Alternatively, there may be less need for elaborate mitigations for these 2 problems as the threat of certain punitive measures may be sufficient to deter centres from engaging in such practices.

In our qualitative analysis presented in earlier sections, we largely did not separate the mitigations according to whether AOs recognised the problems or not. Nevertheless, we occasionally highlighted certain areas where there appeared to be some tendency for the profile of mitigations to differ in this respect.

In order to investigate potential patterns in mitigation profiles related to whether or not the problems were recognised, across all potential problems, we separated and summed the references to mitigation types when the AOs recognised the problems and when they did not. We then calculated the proportion of references to each mitigation type in relation to the total number of references to mitigation types mentioned within each group of references (that is, when recognised and when not recognised). This is depicted in Figure 5 for assessment problems and in Figure 6 for teaching, learning and delivery problems. In these figures, the blue bars represent the proportion of references to each mitigation type when AOs recognised the problems, and the orange bars represent the proportion of references to each mitigation type when AOs did not (entirely) recognise the problems.[footnote 2]
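For illustration, the sketch below shows how these proportions could be computed within each group of references. The mitigation labels and counts are hypothetical rather than taken from our coded data.

```python
from collections import Counter

# Hypothetical lists of mitigation-type references, split by whether the
# AO making them had recognised the problem in question.
recognised = ["QA", "QA", "support and guidance", "holistic aspects"]
not_recognised = ["holistic aspects", "attitudes", "small scale"]

def proportions(refs):
    """Share of each mitigation type within one group of references."""
    counts = Counter(refs)
    total = sum(counts.values())
    return {mitigation: count / total for mitigation, count in counts.items()}

print(proportions(recognised))      # QA: 0.5, support and guidance: 0.25, holistic aspects: 0.25
print(proportions(not_recognised))  # holistic aspects, attitudes, small scale: 0.33 each
```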

As can be seen, for most mitigation types, there were no substantive differences in the proportion of references that were made by the AOs that did recognise the problems and those AOs that did not recognise the problems. However, for some mitigation types, more tangible differences can be observed. In the case of assessment problems, AOs that did recognise them spoke proportionally more frequently about support and guidance and QA than the AOs which did not recognise these problems. Those AOs that did not recognise assessment problems, on the other hand, tended to speak more frequently about holistic aspects, attitudes, mitigations supporting learning, as well as contextualisation and relevance and the protective factor of operating on a smaller scale (that is, with smaller cohorts or within smaller sectors). For teaching, learning and delivery problems, the AOs that recognised them also mentioned support and guidance and QA more frequently, alongside occupational or professional expertise and inputs. The AOs that did not recognise these problems spoke more frequently about holistic aspects, contextualisation and relevance, attitudes, qualification or assessment design features and mitigations supporting learning through various flexibilities.

Figure 5 Mitigation types as proportions of all the mitigations mentioned when problems were recognised and when they were not recognised – assessment problems

Figure 6 Mitigation types as proportions of all the mitigations mentioned when problems were recognised and when they were not recognised – teaching, learning and delivery problems

Although these are very tentative patterns, and the conclusions speculative given the nature of our data and relatively small differences, they illustrate what we might expect to see. That is, the AOs that recognised the problems perhaps tended to provide somewhat more active mitigations. On the other hand, where certain problems were not recognised as potentially relevant for certain exemplar qualifications, it is unsurprising that certain contextual features (such as cohort size) or other design features of these qualifications (flexible delivery or mastery model) were referenced rather than active mitigations to explain why the problems were not seen as relevant.

CASLO-specific or universal qualification problems?

The AOs in our interviews sometimes responded to certain problems by suggesting that they were universal, irrespective of the qualification approach. A few comments suggested that assessment problems related to difficulties in interpreting AC or content specifications were not unique to CASLO qualifications and were caused by the inherent imprecision of language. However, most AOs acknowledged that the dependence on language transparency in CASLO qualifications was greater than in the classical approach. Overall, most AOs saw the teaching, learning and delivery problems as more universally relevant rather than CASLO-specific.

For instance, discussions of the potential problems of local or personal irrelevance and lack of currency often referenced the need to balance specificity and breadth of content in teaching and learning. Yet, the AOs did not think this was specific to CASLO qualifications and argued that content specification for any type of qualification may face similar issues. In fact, they thought that some of these issues were more easily addressed in CASLO qualifications because of their flexibility and contextualisation, as well as the potential to review and update them incrementally in a more agile way.

Some AOs noted that the acquisition of certain esoteric skills or attributes such as communication, autonomy, resilience, collaboration, teamwork or problem‑solving, as well as application of knowledge, might be happening incidentally due to the contextualised, holistic nature of delivery of their CASLO qualifications. Some AOs saw these as “value‑added” aspects of the teaching and learning process rather than part of the construct that was being assessed in their qualifications. Where AOs spoke about challenges of specifying and assessing such content, they again tended to agree that this was not a CASLO-specific issue and that it was easier to teach and assess such constructs in CASLO qualifications.

Several AOs interpreted the potential problem of superficial learning as analogous to the problem of “teaching to the test”, which they saw as a universal assessment washback problem, irrespective of the specific qualification approach. AOs also thought that having a mix of highly motivated as well as less than motivated or engaged students, with the latter more likely to be prone to superficial learning, was inevitable in most qualifications, whether CASLO or not. AOs highlighted that, despite some elements of the CASLO approach increasing student motivation and engagement, a level of intrinsic student agency and engagement with learning and assessment was necessary for them to succeed and that no amount of tutor support or qualification properties can entirely compensate for that. Several AOs also saw a certain amount of assessment burden as unavoidable irrespective of the qualification approach, and as the price one has to pay for achieving a qualification. Relatedly, the AOs did not think that “poor teaching” was a CASLO-specific problem. Indeed, some AOs believed that the CASLO qualifications make it easier for AOs to detect and mitigate problems related to poor teaching through continuous support and multiple touchpoints with centres.

Contextual factors affecting potential problem relevance in CASLO qualifications

There were several contextual factors related to the qualifications in our sample that appeared to affect some of the problem recognition patterns, and the potential effectiveness of mitigations that the AOs proposed. These contextual factors included qualification purpose, cohort or sector size, qualification level and delivery context.

One tentative pattern involved differences in the extent of recognition of certain potential problems between ‘dual purpose’ and ‘confirm competence’ qualifications. For instance, the potential problems of inappropriate support, lenience and malpractice were somewhat less likely to be recognised amongst AOs offering ‘confirm competence’ qualifications. This might be due to the tighter alignment between the standards of those qualifications and occupational role requirements, often captured via NOS, which were, therefore, more likely to be well-understood and adhered to. The qualification purpose that enabled direct progression to employment perhaps also affected the likelihood of these potential problems arising, with practitioners less likely to be willing to exercise lenience or engage in malpractice given safety and other high-stakes concerns in the workplace context. However, one AO suggested that there could be an increased risk of malpractice in licence to practise occupational qualifications because of the necessity of achieving these qualifications for progression to employment, which may not be as strong a requirement for other qualifications.

The AOs with ‘confirm competence’ qualifications also took the view that, because assessment in their context typically happens in real-life situations and is thus not “designed”, this in itself helped to overcome some of the potential issues with poorly conceived assessment tasks or events, and inherently ensured a high degree of validity as well as more holistic assessment. However, the potential problems of local or personal irrelevance and lack of currency were more frequently recognised by the AOs with ‘confirm competence’ qualifications. This might suggest potentially greater challenges in ensuring agreement on qualification content in ‘confirm competence’ qualifications and a more dynamic interaction with workplace practices or specific job roles.

The potential problem of incoherent teaching programmes seemed to be more frequently recognised in ‘dual purpose’ qualifications (as was that of lack of holistic learning). The apparent tendency of the AOs delivering ‘confirm competence’ qualifications to be less concerned with teaching and learning and to recognise this potential problem less frequently might be to some extent unsurprising, given the largely workplace-based delivery of these qualifications. These AOs seemed to adopt the view that, in their qualifications, traditional teaching is less fundamental than situated learning, which builds competence through observation and following of expert practitioners, and repeating work-relevant tasks in a community of practice. In general, there seemed to be more of an implicit reliance in these qualifications on the positive interaction between teacher or assessor occupational expertise and the holistic or contextualised nature of the construct and assessment, and less of an explicit attempt by the AOs to influence teaching approaches actively. These AOs saw their role mostly in providing guidance and enabling assessors to carry out appropriately holistic assessment to avoid creating negative washback into workplace learning, which could affect its implicit coherence. Conversely, the AOs with ‘dual purpose’ qualifications seemed to be more engaged with and more explicitly supportive of the teaching process and more focused on its QA.

For AOs with ‘confirm competence’ qualifications, the occupational standards related to the core content essential to the qualification had to be achieved and were not negotiable, particularly in safety‑critical domains. This, in the AOs’ views, made the potential problem of downward pressure on standards in their context largely irrelevant. In contrast, one AO offering a ‘dual purpose’ qualification suggested that, because such qualifications prepare students for either progression to education or entry‑level jobs, a more inclusive approach to specifying standards – and greater negotiation between stakeholders about the appropriate standard for different purposes and intended progression – were required. This AO seemed to prioritise ensuring that the qualification was pitched at a standard which was accessible to a wide range of students.

The size of a qualification cohort is another contextual aspect worth mentioning. Several AOs suggested that the relatively small scale of their exemplar qualifications, in terms of having a small cohort or catering for a small sector, helped them to mitigate several potential problems. For instance, qualifications with smaller cohorts potentially benefitted from more extensive QA, thus mitigating the potential problems of inaccurate judgements, lenience and malpractice. Communities of practice were deemed to be more reliable in smaller or long‑standing sectors, helping to mitigate the abovementioned problems further. Moreover, AOs with smaller networks of centres appeared more confident in their ability to gain intelligence from centres, where most practitioners knew each other, and operated in a tight, “self-policing”, community of practice.

Lower-level qualifications were considered to be more resilient to risks related to local or personal irrelevance, lack of currency and downward pressure on standards. This was mostly because their content often represented the fundamentals of the domain that were largely non-negotiable and, thus, not subject to personal preferences, and less likely to date quickly. Some AOs with lower-level qualifications believed that the potential problem of hard-to-pin-down content getting missed is less relevant in their context as they thought that content at lower levels was easier to capture in qualification specifications. However, others did not think that qualification level was related to increased difficulty in communicating LOs and thought that this was a subject-specific challenge.

Finally, some AOs noted various limitations that are more likely to arise when CASLO qualifications are delivered in school or college settings rather than in the workplace. They mentioned limitations related to:

  • teacher expertise to impart practical skills
  • inability to replicate commercial environments
  • use of assessment methods mirroring those of academic subjects
  • tendency towards unit-based delivery to support timetabling, which might atomise the content and teaching

On the other hand, one AO noted that the more restricted range of evidence typically used for assessment in college-based delivery was helpful in ensuring a higher degree of standardisation.

Tensions in the CASLO approach as indicators of potential problem relevance

Various AO comments provided insights into certain assumptions and tensions within the CASLO approach that might exacerbate certain potential problems. This provides further insight into which of the criticisms from the literature may have the most force and should receive the most attention to ensure the optimal functioning of CASLO qualifications.

Assumed versus actual transparency

The transparency of CASLO specifications, standards and assessment requirements was highly valued by the AOs. It was considered to be a helpful mechanism in promoting student engagement and agency, and in enhancing clarity in what needed to be taught. It also helped with interpreting the meaning of qualification grades, and establishing a clear link with relevant professional standards. However, there was evidence in our data that actual transparency is not easy to achieve and that it might be, to some extent, assumed rather than ensured in some cases.

This is, perhaps, most clearly illustrated by AO views in relation to the potential problem of inaccurate judgements due to challenges in interpreting the AC. The fact that this was the most widely recognised problem, combined with the extent of resources required by the AOs to ensure consistent interpretation of AC, suggests that transparency of standards is not necessarily a given in the CASLO approach. Consistent interpretation may often require the kind of heavy investment frequently described by the AOs in our interviews.

In relation to the potential problems of lenience and malpractice, some AOs thought that these were relatively easy to detect in assessment evidence during EQA. However, other AOs suggested that, in addition to student work, there is a need for triangulation of evidence from various sources, including scrutinising assessment processes, speaking to staff and students, observing assessment taking place or gaining intelligence from centres. In a similar vein, some AOs suggested that making judgements on the borderline between 2 grades, and being able to argue a position on that, was challenging for both assessors and QA staff, requiring discussion and sometimes negotiation. Holistic approaches to assessment, frequently mentioned as mitigations of various problems, also presented challenges for ensuring sufficient transparency of alignment between the construct and AC. All these challenges highlight the potential limits of qualification specification transparency as the sole vehicle for ensuring consistency.

In relation to content specification, based on the extent of recognition of the potential problem of incoherent teaching programmes, it seemed that the AOs recognised the need to provide a degree of support for teachers and/or assessors, rather than assuming that transparency of specifications would in itself ensure coherence in teaching. It was also suggested in relation to several potential problems, including incoherent teaching programmes, that there was a need to rely on implicit or tacit understanding of content links and other aspects, further indicating limitations in the extent of specification transparency.

On the other hand, several AOs noted potential negative washback impacts from excessive transparency of assessment requirements and assessment alignment with the syllabus, and some actually attempted to disrupt this alignment to some extent. Simultaneously, too little transparency in holistic assessment was deemed likely to threaten consistency of judgements. This was one of several difficult balances that needed to be achieved to ensure the appropriate functioning of CASLO qualifications.

Flexibility versus prescriptiveness

Another tension that was apparent in AO comments was the balance between flexibility and prescriptiveness. This tension was relevant to several of the potential problems discussed. The challenge of finding the balance between ensuring sufficient specificity of the AC to support consistent judgement and allowing sufficient breadth to enable flexibility, personalisation and contextualisation of assessment underpinned most of the discussions of the potential problem of inaccurate judgements. This tension was partly reflected in varied views about the extent of detail that should be provided in guidance or through exemplar materials, or whether the latter should be provided at all. In relation to the potential problem of poorly conceived assessment tasks, although AOs generally argued strongly in favour of flexible, contextualised assessments, they also recognised the potential challenge this brings to ensuring an appropriate degree of consistency and comparability between different centres or students within centres. Extensive QA processes were generally seen as necessary to ensure that flexibility and contextualisation do not tip into unreliability of judgements and standards. Within this, certain AOs argued that a degree of inconsistency is inevitable as well as acceptable, given the advantages of contextualisation.

This tension was apparent in relation to certain teaching and learning problems too, for instance that of incoherent teaching programmes. Although a certain degree of flexibility was seen as one of the key benefits of the CASLO approach, AOs also saw value in a degree of prescriptiveness in what needs to be taught and how, to ensure comparable quality of learning experience across centres. The views regarding achieving this balance in their qualifications appeared to partly influence AO positions on how much and what type of content specification and delivery guidance they thought it appropriate to provide to centres, including how to approach sequencing learning and progression through the content.

Discussions about the potential problems of local or personal irrelevance and lack of currency of qualification content highlighted the need to achieve another balance – that between the need to prescribe content and the flexibility to adapt it. This was often influenced by broader qualification purposes and attitudes of qualification users regarding how far that balance should tilt towards narrower occupational roles vs. broader educational goals. Increased personalisation of content was deemed by some AOs likely to lead to excessive narrowing of the content domain and lower transferability of qualifications even though it might be approved of by certain stakeholders. However, a relatively narrow focus on core content was deemed to mitigate certain other potential problems, through the sense of relevance this created in students, or through a reduction in assessment burden. Some AOs thought that their qualifications did not present a barrier to either personalisation or broadening of content, as required, and that centres had the flexibility to adapt content appropriately. They also believed a qualification awarded at one point in time could not be expected to “futureproof” someone’s career and suggested that this was mitigated by accepting the need to invest resources in life-long learning and CPD.

Cost-effectiveness

Another tension that was prominent in AO comments concerned how to establish cost-effectiveness, or value for money, given the resources needed to ensure the optimal functioning of their qualifications, partly because of their scale, but also because of other challenges, especially the degree of flexibility that they allowed. For instance, resource challenges appeared to permeate and to some extent shape the way that the mitigations were put in place for the potential problem of inaccurate judgements, whether in relation to QA, standardisation, qualification design and review, or other processes. Resource issues were also mentioned in relation to investing in and supporting communities of practice, particularly where other bodies, such as sector skills councils, no longer provided support of this nature. With reference to lenience and malpractice, most AOs implied that a certain amount of unreliability would inevitably remain in the system despite best efforts to eradicate it. This was due to limited resources to moderate every single student result, as well as the complex nature of the judgements being made by everyone involved, including assessors, IQAs and EQAs.

The challenge of establishing cost-effectiveness also pertained to other actors in the qualification ecosystem, such as centres, according to some AO comments. For instance, resource limitations within centres might affect the extent of flexibility that their students experience, including the number of available resits, how tailored the assignments might be to specific student contexts or interests, or how many optional units they might be able to deliver to allow personalisation.

Overall, there appeared to be an implicit recognition that the CASLO approach inevitably required significant investment and resource to operate effectively, and to ensure reliable assessment alongside sufficient teaching and learning flexibility, but also that the benefits of the approach justified this investment. What was less clear from the views expressed in our interviews was where the optimal balance between investment and resource, as well as prescriptiveness and flexibility, should lie and how far a defensible balance could be achieved across all contexts and qualification types.

Lack of clarity over roles and responsibilities between AOs and centres

How AOs positioned themselves in relation to other actors in the broader educational ecosystem also involved striking a balance.

Some of the decisions about the amount and nature of support that the AOs provided to centres in relation to assessment appeared to depend not just on the amount of resources or investment available but also on the appetite of centres for receiving support and guidance (that might be perceived by centres as restrictive, given their individual delivery contexts). Some AOs also saw value in centre ownership of assessment and encouraged centre and assessor development in this respect. And some AOs thought that creativity in assessment approaches might be stifled if centres relied too much on exemplars and detailed AO guidance.

Most AOs thought that providing a degree of guidance was ultimately beneficial to reduce pressure on centre staff when making potentially difficult assessment decisions. The integrated approach to QA and support appeared to aim to get the centres to the point where they could operate with little support from the AO. Nevertheless, some AOs suggested that centres sometimes had a preference for off‑the-shelf assessment materials to support them, potentially because they did not have the time, resource or expertise to develop appropriate materials themselves.

Some AOs explained that nowadays, due to changes in policy and funding arrangements in centres, AOs had less direct influence on centres with respect to tutor or assessor CPD requirements or the length of industry experience. AOs could monitor these but could not enforce specific requirements that centres had to adhere to. Although tutor or assessor occupational expertise might be seen as squarely in the domain of centre responsibility, it was noted that a lack of expertise might threaten their ability to interpret and apply the AC appropriately, for which AOs are ultimately responsible.

Interestingly, there were some inconsistent views about certain areas of AO responsibility which seemed to be clearly in the domain of assessment. While some AOs described their EQA process as involving checks of assessor judgement accuracy and consistency, others questioned how far the EQA checks should focus on judgement accuracy, rather than focusing upon broader assessment and IQA processes. One AO questioned whether it was EQA’s role to “second-assess”, suggesting that, partly due to the resource-intensive nature of this process, EQA’s role was more to check that all processes in the centre were in place to support correct assessment decisions rather than checking the decisions as such. Ofqual regulations indicate clearly that EQA processes must include checks of judgement accuracy, yet the balance between focusing on judgemental accuracy versus broader assessment and IQA processes is challenging to operationalise, as we considered in some detail in Newton & Lockyer (2022).

The domain of responsibility of the AOs in relation to potential teaching and learning problems was even less clear‑cut. Overall, most AOs seemed to recognise the need to provide a degree of support for teachers, even though the AOs appeared to have a great deal of confidence in and reliance on their occupational or professional expertise. This might suggest inherent tensions in the relationships between AOs and centres depending on centre attitude towards receiving explicit teaching guidance and their perception of the AO as a “credible authority” in this domain or not.

There was a suggestion by some AOs that the extent to which teachers seemed to want explicit guidance on schemes of work and/or pedagogy from AOs fluctuates over time. And some AOs argued strongly that only those who are occupationally competent, and who do not need additional resources such as schemes of work and textbooks, should be allowed to teach vocational qualifications. For the most part, the provision of support and guidance related to schemes of work or pedagogy was tentative, with these aspects deemed ultimately to be the prerogative of centres. Despite the perception that their role in providing pedagogy-related support was limited, AOs that provided it seemed to believe that such support did not present barriers to flexible delivery (unlike specific schemes of work).  

Some AO comments suggested a clear belief that there was a best approach to delivering their qualifications. For some AOs, this seemed to include an expectation that centres would teach content that was broader than the specified learning outcomes, although none of the AOs appeared to have strong requirements from centres in that respect. Similarly, in relation to the potential problem of lack of holistic learning, despite providing support for centres, as well as conducting some monitoring of holistic approaches, AOs again pointed out that their impact on how qualifications were delivered in centres was limited. There seemed to be a broad agreement that attempts to raise the level of prescriptiveness might jeopardise the flexibility that is highly valued in CASLO qualifications. Overall, there did not appear to be much in the way of agreement concerning the optimal amount of responsibility in relation to providing support for teaching and learning.

Perverse incentives

There were several potential problems which were said to be exacerbated by the influence of certain perverse incentives on centre or student behaviour. These mostly involved funding and accountability pressures, achievement rates, and time pressure while striving to conform to the rules of specifications. It was also suggested that potential biases could arise from familiarity with students, thereby affecting tutor or assessor decisions or actions, as could EQA overfamiliarity with centres. Some AOs accounted for these risk factors within their risk-based sampling models, helping to ensure that centres deemed susceptible to these issues received additional monitoring.

Potential problems of lenience and malpractice were often discussed against the backdrop of potential perverse incentives, which were deemed likely to influence centre behaviour and complicate the task of quality assuring qualification results. Private training providers as well as schools and colleges faced pressures from performance-related pay and achievement rates too. It was also mentioned that certain roles that are normally fundamental for QA in the CASLO approach, such as IQAs, are potentially under a lot of pressure from their institutions to ensure appropriate achievement rates. Typically, the absence of time constraints on learning was discussed in terms of its potential to remove incentives for centres to pass students before they reached the required standard, thereby mitigating the risks of lenience and malpractice. However, some AOs noted that employer requirements or funding arrangements can still impose time constraints even if the qualification could (in theory) be delivered to less constrained time scales.

Interestingly, atomistic judgements and lack of holistic learning were also thought to be potentially exacerbated by some of the abovementioned pressures. Some comments suggested that the pressure to ensure that students pass, under achievement rate or funding pressures and pressure of the mastery model, might incentivise teachers to deliver or assess the qualification more atomistically (for fear of missing certain aspects in a more complex, holistic, approach).

Key mitigations and protective factors

In the following sections, we briefly summarise and discuss the nature and applicability of the various mitigation types we identified in the data across individual problems.[footnote 3] These mitigations and factors help to foster the conditions under which the quality and value of CASLO qualifications may be ensured. We also point out certain tensions where particular mechanisms might represent mitigations for the risks associated with one problem while simultaneously creating challenges in the context of another problem. These tensions also reflect the complex interactions of different mechanisms that might ensure the quality and value of CASLO qualifications, with several mechanisms usually required to work in concert to achieve this.

Profile of mitigations across problem groups

Our earlier description of the mitigations that AOs discussed in response to various problems showed that there was a great deal of overlap, both within and across the different problem groups. This is further depicted in Figure 7 below, which shows the mitigation types referenced for assessment problems, and for teaching, learning and delivery problems, as a proportion of the total number of mitigations mentioned across all these problems.[footnote 4] The larger proportion of assessment problem mitigations also reflects the overall larger number of mitigations mentioned for these problems.

Figure 7 Mitigation types referenced for assessment and teaching, learning and delivery problems (TLD) as a proportion of the total number of mitigations across all problems

It can be seen from this chart that certain mitigation types – including QA, support and guidance, occupational or professional expertise, qualification or assessment design features, holistic aspects, attitudes, contextualisation and relevance and qualification or assessment design processes – were seen as helpful across both groups of problems, though they were not used to the same extent in both groups. For instance, holistic aspects and attitudes seemed proportionally more relevant in the context of teaching, learning and delivery problems, while QA was more frequently referenced in the context of assessment problems.

The differences in mitigation frequency across different groups of problems are depicted more clearly in Figure 8 below. It shows the proportion of references to each of the mitigation types across assessment problems (blue bars), teaching and learning problems (orange bars) and delivery problems (green bars), as a percentage of the total number of references to a specific mitigation type. For instance, out of all the references to holistic aspects (across all the problems), 29% were mentioned in relation to assessment problems, 66% in relation to teaching and learning problems, and 4% in relation to delivery problems.

Figure 8 Differences in mitigation relevance to different groups of problems
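To clarify how the normalisation used for Figure 8 differs from that used for Figure 7, the sketch below contrasts the 2 calculations. The counts are hypothetical and the grouping is simplified, so the numbers are purely illustrative.

```python
# Hypothetical reference counts for two mitigation types across problem groups.
counts = {
    "QA":               {"assessment": 20, "teaching and learning": 4, "delivery": 1},
    "holistic aspects": {"assessment": 4,  "teaching and learning": 9, "delivery": 1},
}

grand_total = sum(n for groups in counts.values() for n in groups.values())

# Figure 7 style: each cell as a share of ALL references across all problems.
fig7 = {m: {g: n / grand_total for g, n in groups.items()}
        for m, groups in counts.items()}

# Figure 8 style: each cell as a share of the references to THAT mitigation type.
fig8 = {m: {g: n / sum(groups.values()) for g, n in groups.items()}
        for m, groups in counts.items()}

print(fig7["QA"]["assessment"])               # 20 / 39, about 0.51 of all references
print(fig8["holistic aspects"]["assessment"]) # 4 / 14, about 0.29 of holistic-aspects references
```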

Perhaps unsurprisingly, the figure also shows almost exclusive use of implicit content links and inputs as mitigations for teaching and learning problems, and almost exclusive use of standardisation, QA and references to operating on a small scale (in terms of small cohort size or sector size) in relation to assessment problems. Beyond these instances, there is a significant overlap in the use of most mitigation types with different types of problems. This probably testifies to the complex interaction that exists within CASLO qualifications between teaching, learning, delivery and assessment. Furthermore, all groups of problems also appear to require some mitigations that are relatively exact, such as certain design features, as well as those more esoteric, such as attitudes, expertise of practitioners and appropriate prioritisation of resources.

In the following sections, we discuss the key mitigation types and protective factors mentioned by the AOs across different potential problems that they pertain to.

Support, guidance and QA

AOs often described their QA as a dual process involving both monitoring and support in relation to interpreting standards and other aspects of qualification delivery. In our analysis, we coded as QA those aspects that related more explicitly to monitoring rather than to support and guidance, although it was not always straightforward to distinguish between them.

Support and guidance for centres and, to some extent, for AO staff, including qualification writers and EQAs, was one of the most frequently mentioned mitigation types, featuring across all problems, in various guises and with different foci. Support and guidance featured more prominently than QA in relation to atomistic assessor judgements, as well as for the majority of the teaching, learning and delivery problems.

As part of support and guidance, AOs provided advice from EQAs as “critical friends”, guidance documents, exemplar materials, training sessions, an assessment checking service, glossaries and video tutorials. Depending on the problem, support and guidance focused on a wide range of aspects, including:

  • clarification of standards
  • approaches to task design
  • holistic assessment and how to effectively map this to the AC and LOs
  • holistic teaching and learning
  • IQA and standardisation processes
  • planning of assessment
  • appropriate scaffolding of assessment and appropriate feedback
  • nature and sufficiency of assessment evidence
  • aspects of pedagogy and exemplar schemes of work

Support and guidance were largely seen as a continuous process, involving multiple touchpoints with centres throughout the delivery cycle. In relation to certain potential assessment problems, notably task design, and several teaching, learning and delivery problems, pre-emptive support, early in delivery, was seen as key. This was because of the restricted options for QA (and for rectifying identified gaps in learning) towards the end of the delivery process.

The AOs occasionally expressed some uncertainty about the optimal amount of support and guidance to provide. There were differing views about the amount and nature of exemplar materials that should be provided as well as about the extent of detailed guidance related to assessor judgements or teaching programmes. This was discussed earlier as an instance of lack of clarity and tension in relation to the domains and nature of AO responsibility and impact. Some AOs noted that some assessors or centres do not request or require detailed guidance, training or exemplars, or felt that they did not have time or sufficient resource to engage with them. This somewhat contrasts with the picture painted by some of our respondents of positive practitioner attitudes towards receiving guidance and feedback and acting on it to improve their practices.

In addition to support and guidance, all AOs in our sample described complex and multi-faceted QA processes and strategies that, in their view, significantly mitigated the potential risks in relation to most assessment problems, but also to certain teaching, learning and delivery ones. Most AOs emphasised the importance of establishing from the start, through the centre approval process, that centres have appropriate occupational and assessment expertise to deliver the qualification, as well as appropriate processes in place to play their part in quality assuring their delivery and assessment through IQA. EQA monitoring was discussed as an important check and deterrent against inaccurate judgements, poor IQA and assessment practices, ineffective standardisation, as well as lenience and malpractice.

In relation to poorly conceived assessment tasks, AOs emphasised the importance of IQA to ensure that these are developed appropriately ahead of administration. Some AOs mentioned more explicit attempts to sample and monitor assessments at different stages of the development and delivery cycle. They also scrutinised centre assessment development processes, including related IQA activities, to help prevent assessment being based on poorly conceived tasks or events.

Various punitive measures could be implemented on the back of EQA, alongside monitoring of results patterns, to guard against lenience and malpractice. Triangulation of evidence from different sources in addition to student work was also mentioned in relation to these problems, including looking at the assessment management process, speaking to staff and students, observing assessment taking place or gaining intelligence from centres.

AOs explained that EQA could only be implemented through risk‑based sampling, with the chance of residual incorrect assessment decisions or inappropriate assessment tasks slipping through the net. One possibility in such cases was to require further assessment opportunities for students, or implement other interventions, if issues were detected in final moderation. However, the AO actions in such cases typically involved assigning those centres a higher risk rating and, therefore, providing additional support and additional monitoring in the next academic year.

In relation to certain potential problems, for instance, atomistic judgement, it seemed that some EQA processes, such as observation of assessors in action, risked influencing assessor performance, inducing assessors to approach assessment more atomistically than they normally would. This also raised questions about the potential effectiveness of real-time monitoring as a diagnostic tool for atomistic judgement. Perhaps for that reason, explicit EQA monitoring seemed to be less referenced in relation to that potential problem, as well as in relation to inappropriate support, with AOs mostly discussing various aspects of support and guidance as mitigations.

Some AOs also recognised the potential for EQAs to become biased in favour of the centres that they might have worked with for a long time, mitigating this by implementing a hierarchy of EQAs, with more senior and more junior ones monitoring each other. Some rotated EQAs across centres, so that no EQA would monitor a centre for more than 3 to 4 years.

Holistic aspects

While the introduction of holistic aspects was discussed as a mitigation for a few assessment problems, it tended to be mentioned as particularly helpful in relation to teaching, learning and delivery problems. Some of the key mitigations that we coded under holistic aspects involved a holistic or project-based approach to assessment and/or delivery, the use of synoptic units, and the use of sufficiently broad AC or LOs to enable contextualisation.

Broad AC or LOs were deemed essential for designing sufficiently flexible qualifications. This was also thought to promote the use of professional judgement by assessors, which was likely to be holistic rather than atomistic against broad criteria. In addition, holistic consideration of a wider pool of evidence potentially required by broader AC or LOs would be more likely to lead to more confident and accurate decisions and thus mitigate the risks of both arbitrary and inaccurate judgements.

AOs also gave examples of different ways in which synopticity might be ensured in their qualifications, helping to mitigate the potential risks of inauthentic assessment based on deficient atomistic judgements and lack of holistic learning. Most AOs characterised holistic approaches to delivery and assessment in the context of contextualised or real-life tasks as ensuring implicit synopticity of assessment and teaching and learning situations (across LOs or even units). This was often mentioned in qualifications delivered in the workplace, where such holistic assessment was deemed to require integration of learning as a matter of course to complete workplace tasks. Use of integrated workplace tasks and making the best use of naturally occurring events to accumulate assessment evidence was believed by some AOs to prevent negative washback into the natural coherence of teaching and learning in workplace settings. Additionally, it was deemed to optimise the assessment process, therefore, minimising potential undue assessment burden. Other AOs also explicitly included synoptic units in their qualifications to promote holistic assessment and enable application of integrated performance across the entire content domain of their qualifications, additionally helping to mitigate the potential problem of incoherent teaching.

Finally, AOs thought that holistic delivery in realistic situations ensured a more organic learning of the more esoteric skills and attributes in CASLO qualifications than in some other qualification types. Thus, holistic delivery in the CASLO approach was viewed as mitigating the risks of the hard-to-pin-down content getting missed by students even if such aspects were not explicitly outlined in qualification specifications nor directly assessed or assessable. Some AOs believed holistic delivery to provide natural opportunities for students to revisit and consolidate learning in the context of holistic practical tasks or situations, mitigating some of the risks associated with superficial learning. A few AOs saw holistic delivery as a hallmark of good teaching practice although some pointed out that highly integrated teaching across units might be too challenging for students in lower-level qualifications.

Despite various benefits, holistic aspects were recognised to raise challenges for ensuring sufficient judgement consistency, transparency and effective assessment design. It also appeared that, despite the intention of qualification designers for judgements against individual AC to “add up” to an overall judgement about the coherence and effectiveness of an integrated performance, this was not easy to ensure in practice, nor to reflect in the specifications of the AC or LOs.

Contextualisation, relevance and context‑independence

Contextualisation and relevance of delivery and assessment were seen as highly protective against the risks posed by many of the potential problems discussed. For instance, contextualisation of AC was seen as one of the key mitigations of the risks associated with inaccurate assessor judgements, while real-life or otherwise contextualised task setting was commonly seen as inherently protective against atomistic judgements. Judgements situated in context should be more holistic, as assessors have to take into account how the activities that happen during those tasks fit together, how students meet different requirements of the tasks, how they justify their decisions, and so on. Contextualised delivery was also considered to provide implicit coherence to the teaching and learning process, guarding against incoherent teaching.

In relation to task design, real-life, contextualised task setting was considered to inherently ensure a high degree of validity, providing authenticity through the holistic assessment process. Many AOs discussed both the need for and the advantages of contextualisation to make assessment more effective in eliciting appropriate evidence, and to make it more supportive of students through the sense of relevance that such tasks are likely to engender. Contextualised assessment and delivery that were perceived as personally relevant to students were also deemed to guard against a superficial approach to learning with the sole aim of meeting the AC, especially where skills needed to be applied to (and frequently revisited during) day-to-day work practice. A sense of relevance from contextualised delivery and assessment was also deemed protective against perceiving assessment as burdensome, as well as against demotivation or disengagement.

Some AOs noted that one of the sources of inappropriate support can be overly scaffolded assessment tasks. They suggested that task contextualisation helped to mitigate this risk because tasks anchored in a specific context lend themselves less to scaffolding and to formulaic responses. This was especially the case where assessment happens in real time and is part of a larger contextualised process (for instance, a theatre production rehearsal). These tasks or events were also less likely to lend themselves to repeated assessment with the sole aim of allowing students to eventually achieve a pass or a higher grade. In some qualifications, where assessment is carried out in a real-life setting, such as delivering a service for a client, the assessors would not provide feedback or guidance as this would be antithetical to normal workplace practices. This, then, also mitigated the risk of inappropriate feedback being given in the context of summative assessment situations.

Most AOs thought that their qualifications achieved appropriate specificity and personalisation through contextualisation, thus mitigating the potential problem of local or personal irrelevance. Where there was a need or attempt to assess some of the more esoteric aspects, it was thought that they could be more easily elicited and evidenced through assessment in contextualised situations.

While contextualisation was seen as helpful in relation to many teaching, learning and delivery problems, certain context-independent aspects were flagged as supportive of consistency in AC interpretation and of task design in terms of comparability. These were said to include certain types of skills, such as core technical or process skills, which involve following industry best practice protocols (often captured via NOS), or aspects where standards were based on principles rather than specifics. These context‑independent aspects were also seen as essential and non‑negotiable in some qualifications, irrespective of personal or local preferences, and were largely deemed to retain longer-term currency. This implied that potential problems such as local and personal irrelevance, lack of currency and downward pressure on standards did not apply to those, often fundamental, aspects of qualification content.

Occupational or professional expertise, assessment expertise and communities of practice

Alongside other mitigations, most AOs also emphasised the need for the practitioners involved in development, delivery and assessment of CASLO qualifications to have relevant occupational/professional expertise (including assessor or QA expertise and regular CPD) and to be members of a community of practice. These were suggested as key mitigations in relation to inaccurate judgements, as well as many other potential problems such as those related to content specification. Relevant occupational expertise should allow assessors to see the bigger picture and the significance of certain aspects of performance in meeting the AC, rather than assessing in a mechanistic, tick-box fashion, mitigating the risks of atomistic assessor judgements.

Some AOs also specifically emphasised assessment expertise as a potential mitigation of the risks associated with inaccurate and atomistic judgements, as well as poorly conceived assessment tasks or events, but they also suggested that this type of expertise was difficult to develop. This is unsurprising, given the complexity of expertise required of assessors, who need to be flexible, able to tailor assessment to individual student needs, and able to plan and adapt assessment in sometimes challenging and dynamic workplace contexts. Less experienced assessors were deemed more likely to judge inaccurately or atomistically, being more dependent on AC specifications. Some concerns were also raised about the effectiveness of assessor or QA qualifications, and how far these capture or reflect what assessors and IQAs or EQAs are meant to be doing in practice.

It was also suggested that communities of practice in this domain might be helpful, as well as longer-term familiarity with a single qualification or its previous incarnations. Communities of practice were deemed to be more reliable in smaller or long‑standing sectors, and some AOs emphasised that it takes time and engagement with centres to engender these in relation to specific qualifications. While most AOs saw their normal support and guidance practices, alongside EQA, as helping to establish communities of practice, not all AOs actively promoted these through other means. It was also flagged that reliance on implicit understanding of standards within communities of practice needed to be balanced with sufficient adherence to the AC, as a strong sense of “sector expertise” might lead assessors to believe that they had internalised the standard, resulting in impressionistic judging.

Occupational expertise, and a presumed ability to see implicit links in CASLO specifications, were deemed non-negotiable for enabling effective and holistic teaching, mitigating the risks of incoherent teaching programmes and lack of holistic learning. Communities of practice in relation to pedagogy around long‑standing qualifications also had the potential to bridge the gap between what was laid out in the qualification specification and how best to deliver it to students.

Qualification and assessment design processes

Qualification and assessment design processes were referenced relatively frequently across a subset of both assessment problems and teaching, learning and delivery problems. These typically involved multiple rounds of development and review by expert and stakeholder panels before a qualification was launched, sometimes including centres or students, too.

The need for precision and clarity in writing the AC – to ensure that there was a good chance of their being interpreted accurately – was often emphasised, mitigating the potential problems of inaccurate judgement, poorly conceived tasks or events and, to some extent, lenience and malpractice. This helped to make common interpretation of the AC more likely, supporting effective QA, too.

Effective qualification design processes, which made heavy use of stakeholder input by including employers, teachers or students in development or review panels or consultations, as well as periodic qualification reviews, mitigated risks around local or personal irrelevance, lack of currency and hard-to-pin-down content getting missed in specifications. AOs spoke about ensuring sufficiently robust qualification design processes and involving stakeholder feedback so that appropriate standards could be set in their qualifications. This helped to guard against downward pressure on standards. There was little detail in our data regarding how specifically the AOs went about setting appropriate standards, but some AOs made references to notions of the “scope” or the “range” of a level and their understanding of typical students undertaking qualifications at a particular level.

Qualification and assessment design features

A number of specific qualification and assessment design features were mentioned as mitigations across most problems. These included command verbs, the use of grading criteria or descriptors, different aggregation models (including the mastery model and multiple hurdles), the nature of constructs, and hybrid aspects such as the use of external, mark-based assessment.

Many AOs relied on command verbs, and, sometimes, grade descriptors, to help disambiguate the AC and thus mitigate the potential problem of inaccurate judgements. However, because most AOs resorted to somewhat underspecified, broader AC to allow for contextualisation, this meant that occupational expertise and a degree of assessor professional judgement were often deemed necessary in order to make contextualised, holistic judgements, as well as to design appropriate assessment tasks.

Command verbs were also seen as helpful in capturing different aspects of the content, mitigating the potential problem of hard-to-pin-down content getting missed. They also helped to denote the appropriate qualification level as a mitigation in relation to downward pressure on standards, and to provide some pointers about the nature of the assessment tasks. However, AOs were also aware that there were challenges in using command verbs consistently to differentiate between levels, and that the same command verbs could be used in a range of different tasks, targeting different AC. Nevertheless, some AOs mentioned the relative transparency of the AC as going a long way towards ensuring that the tasks targeting them were appropriate.

One AO suggested that the use of grade descriptors that apply across the relevant AC, rather than at individual AC level, helped to mitigate the potential problem of atomistic judgements. To mitigate the potential problem of incoherent teaching programmes, as well as a lack of holistic learning, another AO used grading criteria as descriptors to capture the alignment between assessment requirements and the content of teaching. The grading criteria were said to imply and “pull together” the range of content and skills that needed to be taught. This blurred the mapping between individual AC and the syllabus, which was provided separately from the qualification specification. Blurring this alignment between the AC and the syllabus was also thought to mitigate the potential problem of superficial learning.

There were several references to the mitigating effect of the mastery model, and of multiple hurdles in assessment, across different problems. For instance, the mastery model was mentioned as helpful in motivating centres to design appropriate tasks, and to take their role in this seriously, given the stakes imposed by the mastery requirement if students failed to meet certain AC due to task inadequacy. The requirements of the mastery approach and its multiple hurdles were deemed to provide greater assurance overall about whether the appropriate standard was reached. A strong mastery model, rather than some form of compensation, and continuous rather than terminal internal assessment, were thought to be more helpful in this respect. These aspects were deemed to mitigate the risks of lenience and malpractice, particularly in relation to the pass or fail threshold, but also the risks of downward pressure on standards and superficial learning. In some qualifications, where skills had to be evidenced on multiple occasions over time to cover the range, this was said to further mitigate the potential problem of superficial learning.

The mastery model was largely believed to contribute to student engagement rather than to cause demotivation with learning. Nevertheless, one AO implemented “summative grading”, which derived the overall qualification grade only from the final unit, taken towards the end of the course. This mitigation was intended to address possible disengagement among students who received lower grades in units taken early in their programme, which could otherwise limit their access to higher overall qualification grades. Some AOs introduced certain elements of other aggregation models, such as “charity” aggregation, to prevent the overall qualification standard from being too harsh. Finally, some AOs suggested that a degree of contextual compensation, or compensation across AC, was in some instances legitimate, even though there has been a tendency for AOs to operate mastery at the AC level as well as the LO level. This AC-level compensation was thought to be beneficial in mitigating the risks of atomistic judgements.

Several AOs implied that assessment should focus as much on the overall performance, and on how the discrete activities that correspond to individual AC are integrated within it, as on the discrete activities in isolation. In some qualifications, a mechanism employed to ensure that the integrated character of the task is captured in assessment involves a strong task-level mastery requirement across all the relevant AC. That is, while the AC might correspond to individual activities, they are to be jointly met each time in the context of a broader procedure, arguably amounting to overall successful and integrated performance. This appeared to mitigate the risk of AC being individually met without reference to the broader procedure, as well as the risk of atomistic judgement being used in such cases. This way of operationalising the assessment of integrated performance may not be applicable in all contexts.

Several AOs also discussed the mitigating effects of the nature of the constructs in their exemplar qualifications. For instance, where a qualification involved constructs such as the creative process, AOs flagged that assessors could only reach a judgement about a student's grade having seen the whole process. The nature of the construct therefore seemed to mitigate the potential problems arising from atomistic judgements made solely against individual AC.

There were suggestions that skills-related constructs, particularly basic technical skills, are more straightforward to assess reliably and validly than constructs such as knowledge. This implied that the potential problem of poorly conceived assessment tasks or events might be less of an issue for CASLO qualifications that largely deal with skills-related constructs. A relatively narrow focus on core content was deemed to mitigate potential problems such as superficial learning, demotivation or disengagement and undue assessment burden, through the sense of relevance this created in students, or through a reduction in the amount of assessment required.

Some AOs delivering certain ‘confirm competence’ qualifications (for instance, fenestration or construction) seemed to suggest that the nature of the construct of these qualifications meant that there were “no borderline performances” at the pass grade boundary. In these qualifications, the threshold between knowing and not knowing how to do something, or whether someone addressed or did not address pass criteria, was thought to be clear, and thus unlikely to serve as a smokescreen for lenience or malpractice. This implied that it would be relatively straightforward to detect these potential problems, at least at the pass grade boundary.

Several qualification or assessment design features were mentioned as mitigating the risk of superficial learning, including pass standards that were in themselves not minimal, alongside demanding content requiring a high level of engagement and perseverance to achieve the qualification. Progression to a higher level in a qualification where grading was not available was deemed to be motivating to students, further mitigating some of the risks of demotivation or disengagement.

Some AOs also implemented certain design features in their qualifications which promoted holistic approaches to teaching, learning and assessment. These included aspects such as a consistent (“plan-do-review”) unit structure repeated across different units despite their differing contexts and focus. This aimed to instil in students a transferable reflective approach towards their creative practice.

Hybrid aspects

There were several qualification design features that we classed as ‘hybrid’ in our analysis. One of these was the use of externally set and/or marked assessment as a mitigation for the risks related to poorly conceived assessment tasks or events. The 2 AOs in our sample that did have some externally set and marked assessments or components alongside the CASLO ones justified this by reference to stakeholder or accountability demands. One of them argued that external assessments were perceived by stakeholders to be more reliable in providing assurance about some essential aspects of competence, such as health and safety.

Use of external assessment was also seen by one AO as improving the perception of the status and parity of their CASLO qualifications with academic qualifications, which typically use external assessment. This was thought to contribute to student engagement through a sense of completing a valued qualification. There were also perceived benefits for students who would gain experience and confidence from taking part in external assessment, deemed essential if they were to progress to higher education. However, this AO advocated thinking more widely about different ways in which external assessment could be designed, and going beyond solely paper and pen tests, to retain a sufficient degree of construct validity.

Several potential challenges of creating hybrid CASLO qualifications with externally assessed components were discussed. These involved the challenge of balancing the 2 approaches within the qualification and ensuring that the grade profile across internal and external assessments is not skewed by potentially poorer performance of students on external assessments. It was also suggested that different students might interact in different ways with different assessment methods, with some finding the task-based internal assessments more challenging.

Some comments also highlighted potential issues that might arise from the unclear interaction between internal and external assessment in qualifications that do not allow for qualification-level compensation, and where each individual component has to be passed to achieve the overall qualification. This could lead to unwelcome washback into teaching, with undue focus on some (typically externally assessed) components where there might be a perception that they would be more difficult to pass, even though the content in those components was not intended to be seen as more important than the content in internally assessed units. Some manageability issues for both centres and AOs in combining internal and external assessment in one qualification were also mentioned. One interviewee saw the challenges of hybridisation as a higher-level balancing act of considering what assessment methodology might be appropriate and meaningful for each sector and for qualification users.

Overall, despite an awareness of the potential risks of centres devising poorly conceived assessment tasks or events, few AOs in our sample used external assessment in their qualifications as a potential mitigation of those risks. Internal assessment and direct grading were seen to facilitate transparency and student engagement as well as contextualisation, all highly valued features of the CASLO approach.

While not using external assessment by examination, some AOs explained that in their qualifications, typically delivered in the workplace, assessment was often carried out by external (visiting) assessors, rather than by the students’ own supervisors or other colleagues. In such cases, summative assessment was not continuous, helping to mitigate the potential problem of inappropriate support.

External assessors were also mentioned as mitigating the risks of lenience and malpractice. On the other hand, continuous internal assessment was seen by some AOs as more protective against malpractice than terminal internal assessment.

Terminal assessment, more generally, was more often than not seen as inferior in the context of the CASLO approach, and some AOs found it hard to see how it could be integrated in qualifications where the accumulation of large amounts of evidence over time, due to the mastery model, was seen as essential. Nevertheless, terminal assessment was indeed used in some qualifications in our sample, though it tended to involve large-scale projects or tasks rather than relatively brief written examinations.

Several other features that might be seen as hybridisations were proposed as mitigations of certain potential problems. For instance, one AO in our sample effectively tried to “quantify” some of the complexity involved in judging practical performances in their qualification. They described a hybrid approach to rewarding contextually justifiable partial performance by assigning a mark tariff to AC in some of their assessments and allowing for partial credit. They also used a mark tariff without partial credit to, in effect, assign higher weighting to the AC that required full demonstration and where contextual factors should not play a role.

Several AOs discussed mitigations such as limited opportunities for resit or resubmission of evidence as ways to potentially sharpen the distinction between formative and summative assessment and reduce the opportunity for students to pass based on inappropriate support. In some cases, the tighter resubmission rules were put in place to increase parity with academic qualifications, because of perceptions that constant resubmission brings into question the level of demand of CASLO qualifications. However, to the extent that additional constraints on learning time might create incentives for lenience and malpractice, even while mitigating the potential problem of inappropriate support, additional mitigations for these risks may be needed in qualifications with such restrictions.

Supporting learning

There were several CASLO qualification design features that we grouped under the mitigation type called supporting learning. This is because all these features promoted flexibility in qualification delivery and assessment, which was deemed by many AOs to be fundamental to enhancing student engagement with learning, creating opportunities for learning and improving qualification achievement.

These features included no time constraints on learning, multiple assessment and resit opportunities, unit-level achievement or credit, possibilities to extend the course time, special consideration policies, and use of calculated grades for missed units. They were often discussed in terms of their potential to remove incentives for centres to pass students before they reached the required standard. By doing so, these features mitigated the risks arising in relation to the potential problems of lenience and malpractice, downward pressure on standards and demotivation or disengagement. In relation to demotivation in particular, mitigations such as unit-level certification were deemed to provide a “safety net” for students and to reduce the likelihood of disengagement due to a sense of failure of an entire qualification.

Several other mitigations, for instance teaching broader content and employer investment in life‑long learning, were deemed to reduce the risks of local or personal irrelevance and lack of currency. Expectations that centres would teach broader skills were also deemed by AOs to help bridge potential gaps in qualification specifications in relation to more esoteric content that was considered valuable though not essential, and which might otherwise be missed. Undue assessment burden and demotivation or disengagement were also considered to be mitigated by engaged tutors, continuous tutor support, and sufficient guidance and feedback for students. These enabled students to take some ownership of the assessment process, which was additionally helped by the transparency of qualification specifications. Some AOs also saw teachers' ability to track individual student progress, and to motivate students with tailored approaches and individual support, as further mitigating the risks of demotivation or disengagement. In relation to undue assessment burden, flexible delivery that was student‑focused and bespoke helped centres to make potential time savings by focusing less on areas where students might have prior expertise. For some, this included flexibility in the type of evidence that might be collected, which reduced the burden on centres and students.

As already mentioned, some supportive aspects, such as unlimited resits, were restricted in some qualifications to combat other problems such as inappropriate support. In such qualifications, the AOs believed that there was an onus on centres to evaluate the effectiveness or impact of their teaching and to consider whether students might, in some cases, be summatively assessed too early.

Inputs

Different aspects of inputs to teaching and learning were largely discussed as mitigations for teaching, learning and delivery problems, although a small number were discussed in relation to certain assessment problems. AOs mostly discussed inputs in relation to the potential problem of incoherent teaching programmes. These included 3 broad types, namely, lists of content or syllabi, schemes of work and pedagogy.

The AOs did not always explicitly differentiate between the types of inputs in terms of their potential advantages and disadvantages. Indeed, some AOs only discussed one of these types, expressing sometimes negative views towards it and implying that the same might hold across any type of input. However, collectively, the AOs appeared to have distinct views about the different input types in terms of how useful or feasible they perceived them to be in the context of their qualifications, and how frequently they used them to mitigate the potential problem of incoherent teaching. Overall, based on what the AOs said about their practices in relation to the potential problem of incoherent teaching programmes, the LO-based specifications did not appear to be inherently incompatible with inputs such as syllabi or pedagogy guidance, contrary to some suggestions in the literature.

Most AOs provided some form of mandatory or indicative content lists, and some support in relation to pedagogy, mostly revolving around holistic approaches to delivery and, sometimes, advice in relation to approaches to revision or progression through the curriculum. The AOs seemed broadly in favour of supporting teaching and learning in this way, as long as the content was specified at a sufficiently high level to allow contextualisation. Some recognised the need for a certain level of prescriptiveness in content to support comparability of student experience and the use of their qualification results by stakeholders, especially where this was for HE selection purposes.

None of the AOs in our sample provided prescriptive schemes of work, though some did offer exemplars of these, or EQA support in developing such schemes. The need for contextualisation and tailoring of delivery was perhaps the main reason why the AOs were less in favour of providing centres with prescriptive schemes of work, even where they recognised the potential problem of incoherent teaching. They argued that centres' autonomy and ability to deliver qualifications in the way that worked for their context and students were paramount in these qualifications. This was perhaps more strongly expressed by the AOs that delivered the ‘confirm competence’ qualifications, who thought that the professional expertise of assessors was what was required, rather than prescribed schemes of work. The AOs that delivered primarily college-based qualifications, especially those that are more explicitly time-bound, seemed to be more conscious of potential resource or expertise limitations in centres. They seemed more overtly supportive in terms of providing guidance to centres about delivery approaches, though this guidance was never compulsory. Whether some ‘confirm competence’ qualifications could reap some benefits from a greater emphasis on more explicit pedagogy may be worth exploring further.

Inputs such as screening assessments to ensure the appropriateness of students for their chosen courses – alongside (perhaps obviously) sufficient teaching – were deemed important mitigations of downward pressure on standards. Assessment and delivery planning, as well as EQA support with planning, were seen by several AOs as important mitigations reducing the risk of undue assessment burden for both students and tutors, and as helpful in relation to the potential problem of inappropriate support. These aspects were also considered to mitigate the risks of lenience and malpractice by ensuring sufficient time to collect evidence, and then to assess and QA it.

The notion of implicit links across qualification content featured quite prominently in relation to several teaching and learning problems, in particular those of incoherent teaching programmes and lack of holistic learning. In relation to incoherent teaching programmes, several AOs suggested that implicit content links, or the implied best order in which to teach units or content within units, emanated from the nature of the domain of their qualifications, such as the creative process, and should be apparent to occupationally expert practitioners.

Other implicit aspects mentioned included a natural progression in the complexity of some of the tasks or skills, believed to be familiar to teachers, which also helped to ensure teaching coherence. Some AOs also discussed the benefits and drawbacks of implicit alignment between teaching and assessment, where the assessed curriculum provides insights into the taught curriculum, helping to ensure coherent teaching. However, they noted that this might also lead to the content that is taught being overly driven by what is assessed, potentially limiting the breadth of the curriculum.

One AO suggested that where units could be completed in any order, this might exacerbate the potential problems of superficial learning and lack of holistic learning, as it became more difficult to utilise certain links between units to revisit and embed knowledge and skills. Several AOs implicitly agreed with this view, arguing that their exemplar qualifications benefited from “organic”, implicit links between units, which fed into each other. One AO suggested that it was necessary, where possible, for qualifications to be designed to allow “units to integrate”, enhancing the implicit links between them. The same AO suggested that repeating similar criteria across units can help achieve this aim, particularly in larger qualifications with many related components.

In relation to the potential problem of hard-to-pin-down content getting missed, several AOs suggested that some of the more complex or esoteric outcomes were implicit in qualification levels even if not stated in the qualification specification. For instance, level 3 would imply a higher degree of expectation regarding a construct such as autonomy than level 2. More generally, according to some AOs, the fact that this content was implicit did not mean it was missing: such content would be acquired in the course of teaching and learning, due to the contextualised, holistic nature of delivery, as a value-added benefit of CASLO qualifications.

Attitudes and disincentives

In relation to the potential problems of lenience and malpractice, it was interesting to observe the extent to which AOs referenced the positive attitudes of different practitioners and stakeholders (tutors, assessors and employers). Their integrity, professional standards, high expectations of students and sense of pride or vocational passion were suggested as perhaps equally important as QA processes. This included the need for a degree of trust between these actors and the AOs. Relatedly, some AOs noted that despite perceptions that employers who deliver qualifications to their own staff (“employer centres”) might be incentivised to pass students who did not meet the standard, this was unlikely to be the case. This was attributed to the positive attitudes of employers towards employee upskilling and delivering qualifications “for the right reasons”, genuinely wanting to improve their employees’ skills. There was a sense in AO comments that these kinds of attitudes and QA processes had to go hand in hand to enable successful delivery of CASLO qualifications, and that positive attitudes to some extent compensated for limited resources on the QA side.

When discussing the potential problem of poorly conceived assessment tasks or events, some AOs pointed out that assessors often had positive attitudes towards creating engaging and high-quality assessment tasks, and towards their own professional development in assessment design. This was because assessors felt professionally invested in sharing their expertise and skills, both as practising professionals with their students and as assessment practitioners with other assessors. These kinds of attitudes were also said to help reduce the sense of undue assessment burden, as assessors took pride both in their assessor role and in seeing the achievement of their students, rather than seeing assessment as a burden.

Proactive attitudes of centres about doing “meaningful assessment” and teaching broader enriching content, irrespective of whether these were explicit in qualification specifications, were deemed helpful in mitigating some of the risks linked to the potential problem of hard-to-pin-down content getting missed. Some risks around downward pressure on standards were mitigated by AO integrity and an awareness of the requirements for student progression, which would not allow these organisations to deliberately dumb down standards.

Student engagement, vocational passion, and a sense of choice and agency were considered to play a helpful role in mitigating the risks related to superficial learning. AOs offering creative practice qualifications added that their subject matter created a “natural barrier” against superficial learning. This was because delivering poor creative pieces, even where these might meet the minimum standards of the qualification, would be an unpleasant experience for the students. Some AOs offering qualifications that support progression to HE suggested that students tended to know what grades they would need to achieve to progress to specific higher education courses and, therefore, would not be willing to “settle with just scraping through”. It was suggested that this attitude enabled students to challenge low-quality teaching where teachers might just be aiming to get them to achieve minimal standards to pass rather than to achieve higher grades.

Student agency and choice in pursuing the qualifications that they were interested in further reduced the sense of undue assessment burden and demotivation. Both students and tutors were believed by the AOs to accept assessment as integral to their experience and necessary for the qualification to achieve its broader purposes, which, despite assessment being fairly extensive, reduced perceptions of assessment burden.

Some AOs mentioned the notion of managing stakeholder expectations, as well as expectations of students in some cases, as a way of mitigating certain risks. For instance, qualifications where there was an emphasis on relatively narrow, core skills – to ensure that students mastered them in the allotted time while guarding against disengagement and dropout – limited the time devoted to other, more peripheral though potentially useful, skills. Therefore, employer expectations about how much additional training students might need to receive on the job in relation to such skills had to be managed. Simultaneously, student expectations had to be managed so that they accepted that such core skills, even if not stimulating in terms of creativity, needed to be sufficiently mastered. Helping students to realise that only well‑embedded core skills could later be personalised or built on creatively in higher-level qualifications mitigated the risks of personal irrelevance of content and demotivation or disengagement.

Similarly, in one ‘dual purpose’ qualification, the AO emphasised the need to manage the expectations of qualification users about the standards that students were likely to achieve. As these qualifications prepare students for entry‑level positions, further learning should be expected.

Prioritisation

AOs made reference to different aspects of prioritisation given resource limitations in several areas. For instance, QA processes as a mitigation of inaccurate judgements were based on risk-based sampling, and, therefore, focused more extensively on higher-risk areas. There were references to prioritisation in relation to other mitigations of this potential problem, for instance providing only limited student work exemplars to centres, as this was not possible for every unit or every AC. The AOs had to prioritise areas which were either perceived to be core, higher-stakes, or known to be more prone to inaccurate judgements. Similar approaches to prioritisation were mentioned in relation to providing task exemplars to mitigate the potential problem of poorly conceived assessment tasks or events and in relation to standardisation. For example, some AOs focused cross‑centre standardisation on graded qualifications rather than those that only required pass or fail decisions, because of the perception that grading consistently was more challenging for assessors. Similarly, in relation to monitoring potential lenience, some AOs suggested prioritising higher‑grade assessment decisions over those at the pass threshold during moderation activities.

There were also some references to prioritisation in the context of the potential problem of undue assessment burden. This problem was considered to be partly mitigated by ensuring that centres streamline and optimise the assessment evidence that they collect and provide to AOs. In relation to superficial learning, some AOs emphasised that the content in their qualifications is streamlined and prioritises core skills, which in turn helped mitigate this potential problem by ensuring student engagement through content relevance. Relatedly, in relation to demotivation, where there were time constraints on learning, core skills were prioritised to allow enough time for them to be mastered.

These mitigations for the latter 2 problems might raise some questions about whether, in CASLO qualifications with a broader educational purpose and content – which might not appear to students as immediately relevant – the potential problems of superficial learning or demotivation might be exacerbated, especially under the pressure of time-constrained courses. Furthermore, where content had to be slimmed down to the exclusion of potentially important, though not core, skills and knowledge, this might raise questions about how far a balance between specificity and breadth of content could successfully be struck under significant time constraints.

  1. These patterns were presented earlier in Figures 1 and 2. 

  2. For instance, across all assessment problems that were recognised by the AOs, references to support and guidance represented 32% of all the mitigations mentioned. In contrast, across all the problems that were not recognised, references to support and guidance represented only 15% of all the mitigations mentioned. 

  3. Appendix 4 contains tables with crosstabs showing the number of references to different mitigation types across individual problems. 

  4. To calculate these proportions, we first split the references to mitigation types for assessment problems and for teaching, learning and delivery problems, and calculated the number of references to each mitigation type for each group of problems. Then, for each group of problems, we calculated the proportion of references to each mitigation type from the total number of references across both groups of problems. Thus, for instance, across all assessment problems, references to support and guidance represented 17% of all the mitigations mentioned, while occupational/professional expertise represented 4%. Across all teaching, learning and delivery problems, references to support and guidance represented 5% of all mitigations mentioned, while occupational/professional expertise represented 3%. A brief, hypothetical sketch of this calculation is shown below.
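The following is a minimal sketch of the proportion calculation described in note 4, using invented reference counts purely for illustration (the actual counts are reported in Appendix 4). The point it illustrates is that each proportion uses the combined total of references across both groups of problems as its denominator.

```python
# Hypothetical reference counts per mitigation type, split by problem group
# (these numbers are illustrative only; real counts are in Appendix 4).
assessment_refs = {"support and guidance": 85, "occupational/professional expertise": 20}
tld_refs = {"support and guidance": 25, "occupational/professional expertise": 15}

# Denominator: total number of references across BOTH groups of problems.
grand_total = sum(assessment_refs.values()) + sum(tld_refs.values())

# For each group, express each mitigation type as a share of the grand total.
for group_name, refs in (("assessment", assessment_refs),
                         ("teaching, learning and delivery", tld_refs)):
    for mitigation, count in refs.items():
        print(f"{group_name} - {mitigation}: {count / grand_total:.0%}")
```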