Responding to Criticisms of the CASLO Approach (Report B): Method
Published 18 November 2024
Applies to England
Exemplar qualifications
Fourteen AOs responded to our call for participation and volunteered one exemplar CASLO qualification each, to be discussed in the interviews (one AO volunteered 2). This resulted in a sample of 15 exemplar qualifications. The qualification titles and some key information about each qualification (including the relevant AO, qualification level, purpose, size and grading patterns) are listed in Appendix 1. Note that throughout the report and in the quotations, we use qualification abbreviations (for instance, Creative_L3) rather than a full qualification title to denote each qualification. In our sample, we included qualifications across different levels (RQF levels 1 to 5), subject areas (for instance, health and social care, construction, hairdressing, business, creative and performing arts, public services, and hospitality), sizes and purposes.
With respect to qualification purposes, we divided the qualifications roughly into 2 categories – ‘confirm competence’ and ‘dual purpose’. We categorised as confirm competence those qualifications that are largely delivered in the workplace and/or can lead directly to employment or certify competent practice (for instance, Fenestration_L2, Hairdressing_L2 and First aid_L3). We categorised as dual purpose those qualifications that are largely delivered in college settings and prepare students for employment roles (usually entry-level) or for progression to higher levels of education (for instance, Creative_L2, Business_L3 or Construction_L5). Note that the categorisation into ‘confirm competence’ and ‘dual purpose’ qualifications is not an official Ofqual categorisation. It was established for the purposes of this study, to capture some of the key differences between broad qualification groups (on a different basis from the current Ofqual register categorisation).
The AOs that volunteered exemplar qualifications also provided us with relevant qualification documentation to help us to understand their design and how the qualifications work in practice. This information was reviewed ahead of the interviews and informed our discussions with the AOs.
Procedure and participants
Given the relatively broad research questions of this study, and the need to give scope to the interviewees to offer insights beyond the specific questions arising from our literature review, we adopted a qualitative approach using semi-structured interviews. Fifteen group interviews were conducted with the employees of the relevant AOs.[footnote 2] Interviews typically involved 3 or 4 Ofqual researchers and between one and 4 AO employees. The AOs selected their own panel of interviewees, typically consisting of members of their staff with direct experience in key development and delivery areas of their exemplar qualifications, and other members of staff with senior roles and broader experience across their wider qualification offering.
The interviews took place between October and December 2023 and were conducted via video conferencing software, typically lasting for 2.5 hours, after interviewees had given informed consent. Each interview was recorded (video and audio) and the audio elements were transcribed verbatim for analysis.
Taxonomy of potential problems and the interview schedule
Having reviewed the literature, we identified a taxonomy of potential problems for CASLO qualifications and grouped them into assessment problems, teaching and learning problems and delivery problems, as shown in the tables in the panel below.[footnote 3] This formed the basis for the interview schedule that guided our discussions with the AOs. The interview schedule also included questions about the reasons why the AOs adopted the CASLO approach in their exemplar qualifications and some clarification questions to confirm our understanding of their qualification design. The interview schedule is presented in Appendix 2.
At the start of the interviews the AOs were invited to share their general views about the benefits of the CASLO approach and explain why they adopted that approach in their exemplar qualifications. In the remaining part of the interviews, we asked each AO whether they recognised each problem identified in the literature as a potential problem in the context of their exemplar qualification.[footnote 4] If an AO did recognise the potential problem, we asked what mitigations they implemented to prevent or alleviate any potential risks. If the potential problem was not recognised, we asked why it did not seem relevant in their context. The discussions were often wide-ranging, providing insights into broader respondent conceptualisations of the problems we asked about, and some detailed descriptions of the relevant contexts and the sectors or markets that their qualifications were part of.
Potential problems
Potential assessment problems
Inaccurate judgements
Ineffective standardisation
Atomistic assessor judgements
Poorly conceived assessment tasks or events
Lenience
Malpractice
Inappropriate support
Potential teaching and learning problems
Local or personal irrelevance
Lack of currency
Content hard to pin down gets missed
Downward pressure on standards
Incoherent teaching programmes
Lack of holistic learning
Superficial learning
Demotivation or disengagement
Potential delivery problems
Undue assessment burden
Analysis
Data analyses involved a 2-pronged approach. The first phase of the analysis used a variant of the framework method (Ritchie & Lewis, 2003; Gale et al., 2013) to thematically analyse and categorise whether or not each problem was recognised, and to code and categorise the mitigations that each AO identified in response to the problems. This approach to thematic analysis was chosen because it was important to retain the link between the broader mitigation-related (and other) themes and the coding in relation to individual qualifications, problems and their recognition status, to enable relevant comparisons.
While categorising the mitigations that the AOs discussed, it became apparent that some of these seemed to be ‘active’ measures or processes – ones that the AOs (or others, such as centres) put in place to reduce the risk of certain problems arising or to increase the robustness of exemplar qualifications (for instance, quality assurance, support and guidance, or certain qualification design choices). However, AOs also sometimes referred to properties that seemed to be inherent in the nature of their cohort, practitioner profile or attitudes, or qualification sector (such as small cohort size, integrity of practitioners, or low level of the qualification). The AOs thought that these reduced some of the risks of problems arising, but they clearly did not actively put them in place to mitigate the risks. While we did not systematically distinguish between these in the coding and analysis, in the reporting we often refer to the latter as protective factors, reserving the term mitigation for the more active measures that the AOs referenced.
When categorising whether or not the potential problems were recognised, and thus whether the AOs saw them as potentially relevant to their exemplar qualifications, we used the following categories: ‘yes’, ‘not entirely’ and ‘no’. The decisions on how to categorise each response using these categories were driven by the combination of the initial reaction of the respondents when directly asked whether they recognised a potential problem or not, their overall position that emerged from the fuller discussion, and the nature of the mitigations that they spoke about.
Where the respondents initially explicitly said that they did recognise a potential problem and described mitigations or protective factors to reduce the associated risks, this was categorised as a ‘yes’ in terms of the recognition status. The responses that did not explicitly confirm they recognised a problem but discussed it in a way that clearly indicated they saw its potential relevance to their qualification, and suggested certain mitigations or protective factors, were also classed as ‘yes’. Responses that explicitly stated that a respondent did not recognise a potential problem and did not think there were any mitigations that should be put in place, perhaps only mentioning certain protective factors, were categorised as a ‘no’. Finally, those responses that explicitly stated a respondent did not entirely recognise a potential problem, or those that said they did not recognise it but went on to discuss significant mitigations for certain of its aspects, were categorised as a ‘not entirely’ in terms of the problem recognition status.[footnote 5]
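The decision rules above were applied as holistic researcher judgements, but their broad logic can be sketched as a simple decision procedure. The function below is a hypothetical illustration only: its name, inputs and mapping are our own paraphrase of the rules, not an algorithm used in the study.

```python
def categorise_recognition(initial_answer, discussion_indicates_relevance,
                           significant_mitigations_discussed):
    """Sketch of the recognition-status rules (illustrative paraphrase).

    initial_answer: the respondents' direct reply when first asked, one of
        "yes", "no", "not entirely", or None if no explicit answer was given.
    discussion_indicates_relevance: whether the fuller discussion clearly
        indicated the problem's potential relevance to the qualification.
    significant_mitigations_discussed: whether significant mitigations
        (beyond mere protective factors) were described.
    """
    if initial_answer == "yes":
        return "yes"  # explicit recognition, with mitigations or protective factors
    if initial_answer is None and discussion_indicates_relevance:
        return "yes"  # implicit recognition via the fuller discussion
    if initial_answer == "no" and significant_mitigations_discussed:
        return "not entirely"  # denied the problem but mitigated aspects of it
    if initial_answer == "no":
        return "no"  # no recognition, perhaps protective factors only
    return "not entirely"  # explicit partial recognition


# A respondent who said "no" but went on to describe significant mitigations:
print(categorise_recognition("no", False, True))  # prints "not entirely"
```

In practice, of course, the three inputs were not independent tick-boxes but were weighed together from the full discussion.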
As part of the framework method, in addition to coding individual mitigations and problem recognition status, the researchers summarised AO responses to each problem to give broader context to the associated mitigation codes and facilitate further analysis and write-up. The coding for mitigations was deliberately detailed, as the aim was to record both general and unique approaches to mitigating various risks. Ultimately, the codes for individual mitigations were grouped into broader themes, referred to as ‘mitigation types’ throughout the report, to enable comparisons and discussion at a higher level. This analysis was captured in Excel spreadsheets through matrices at qualification level and further subjected to quantitative analyses, conducted with RStudio software (version 2022.12.0), to help explore some of the patterns.
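To give a flavour of the kind of qualification-level matrix and higher-level counting this involved, the sketch below builds a small recognition-status matrix and tallies statuses per problem. It is purely illustrative: the qualification names and statuses are invented, and the actual analysis used Excel matrices and RStudio rather than this code.

```python
from collections import Counter

# Invented qualification-level matrix: recognition status per potential problem.
matrix = {
    "Creative_L3":  {"Lenience": "yes", "Malpractice": "not entirely"},
    "First aid_L3": {"Lenience": "no",  "Malpractice": "no"},
    "Business_L3":  {"Lenience": "yes", "Malpractice": "yes"},
}

# Tally recognition statuses per problem across qualifications – the kind of
# pattern exploration that the quantitative analyses supported.
status_by_problem = {}
for qualification, statuses in matrix.items():
    for problem, status in statuses.items():
        status_by_problem.setdefault(problem, Counter())[status] += 1

print(status_by_problem["Lenience"]["yes"])  # prints 2
```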
In addition to this, a flexible thematic approach was taken to analysing the data for broader themes, with the goal of identifying patterns beyond specific mitigations across the interviews (Braun & Clarke, 2006). This part of the analysis involved an inductive coding approach, with most of the codes established during the first phase. These general themes, alongside the contextual information we had about the exemplar qualifications, were used to help situate and interpret AO views of the significance of individual problems and the nature, profile and scale of any mitigations. This part of the analysis was conducted using NVivo (version 1.7.1) qualitative analysis software.
The coding was conducted by 2 researchers, who divided the transcripts between them in a split coding approach. Each researcher initially coded their own transcripts independently; each then read the transcripts coded by the other and reviewed the other’s coding. This proceeded in phases, with regular meetings and discussions to clarify the emerging codes until a joint set of codes was settled on. All transcripts were ultimately read, and the coding reviewed, by both researchers. This helped to ensure a reasonable degree of commonality in the interpretations and coding used, additionally fostered by detailed discussions throughout the process.
The researchers usually coded sections of the transcripts rather than individual sentences, to take account of the context of a comment. The same unit of text could be included under more than one code where appropriate. The researchers had access to the video files while doing this, allowing them to clarify the tone of an extract, or verify the accuracy of the transcription, if needed.
Ultimately, 3 separate coding frameworks were developed. The first framework captured the themes related to discussions about CASLO approach benefits and AO reasons for adopting the approach. The second framework captured the mitigation coding and broader mitigation types. The third framework captured the broader themes which we labelled ‘general AO reflections’, across the entire data set.[footnote 6]
Most of the data analysis and discussion considers potential assessment problems separately from potential teaching, learning and delivery problems, comparing and contrasting the patterns of AO responses in relation to them where appropriate. We also consider the interaction of these different problem types, especially given the often integrated nature of teaching, learning and assessment that is typical of CASLO qualifications. Although the delivery problems form a separate category in our taxonomy, we mostly discuss them and present them in charts alongside the teaching and learning problems. This was partly to avoid presenting separate charts with only 2 delivery problems in them, but also because these problems were often raised in relation to qualification delivery in the centres and were often said to affect teaching and learning in particular. This is partly reflected in the response patterns for these problems, which tend to align more closely with those for teaching and learning problems than with those for assessment problems.
In the following sections we present our analysis as it pertains to the benefits of the CASLO approach, the assessment problems, and the teaching, learning and delivery problems respectively. Given the complexity of the analysis and the resultant length of the report, we decided to limit the extent of quotations to the minimum that we thought necessary to illustrate key themes or points or, occasionally, some more controversial ones.[footnote 7] However, the prevalence of different themes and sub-themes in the data is further captured and illustrated through the analyses presented in the discussion section, as well as in Appendix 4, where there are tables showing the number of references to different mitigation-related themes.
Footnotes

2. Because one of these qualifications (Procurement_L4) only had a subset of core CASLO features, it was not included in the current report. However, its features are described and discussed in relation to several other CASLO qualifications from our sample within report 8. This example helps to illustrate that the distinction between CASLO and non-CASLO qualifications is not always clear-cut.
3. While this taxonomy suggests that the potential problems related to the criticisms in the literature are neatly separable, this is not always the case. This will become apparent in our discussion of AO views later in the report. For instance, atomistic AC specifications could potentially encourage atomistic approaches to judgements, atomistic assessment design and atomistic teaching and learning, but these potential problems can also interact, with atomistic assessment design encouraging atomistic judging, and so on.
4. Note that our interview questions were framed in terms of ‘potential’ problems rather than ‘actual’ problems. Therefore, by saying that AOs ‘recognise’ a problem, we mean that they recognise it as (at least) a potential problem (though some might also recognise it as an actual one).
5. Appendix 3 presents tables with problem recognition status by qualification.
6. The coding frameworks are available upon request.
7. Quotations are mostly presented as separate paragraphs, linked to specific qualifications. Occasionally, however, we present short quotations within the text, which are not linked to specific qualifications (where the identity of the qualification was not particularly relevant).