
Origins and Evolution of the CASLO Approach in England - Chapter 4: Dominance

Published 18 November 2024

Applies to England

The first decade or so of the new millennium was associated with step changes in the regulation of VTQs in England. By the middle of the 2010s, it had become clear that the CASLO approach now dominated the TVET qualification landscape in England. These 2 observations are related, as regulations covering VTQs came to prescribe more and more of the core characteristics associated with the approach.

The NQF

In April 1995, as concerns over NVQs and GNVQs were coming to a head, the Chairman of the School Curriculum and Assessment Authority, Sir Ron Dearing, was invited to consider and advise on ways to strengthen, consolidate and improve the framework of 16 to 19 qualifications in England, Wales, and Northern Ireland. At the heart of his subsequent report was a proposal for “a coherent national framework covering all the main qualifications and the achievements of young people at every level of ability” (Dearing, 1996, page 3). The intention underlying this new framework was to incorporate existing qualifications – rather than to engineer new ones – and, in particular, to bring the structure of A levels and GNVQs into closer alignment. This was to help secure parity of esteem across these qualifications, as well as to facilitate programmes of learning that incorporated both qualifications. Dearing recommended 3 distinct pathways, differentiated on the basis of their purpose:

  1. A level and GCSE – where the primary purpose was to develop knowledge, understanding and skills associated with a subject or discipline
  2. applied education (GNVQ) – where the primary purpose was to develop and apply knowledge, understanding and skills relevant to broad areas of employment
  3. vocational training (NVQ) – where the primary purpose was to develop and recognise mastery of a trade or profession at the relevant level

These pathways would operate at 3 levels – an Advanced level (corresponding to A level), an Intermediate level (corresponding to GCSE grades A* to C), and a Foundation level (corresponding to GCSE grades D to G). They would be supported by a common Entry Level.[footnote 1]

The Qualifications and Curriculum Authority (QCA) was established by the 1997 Education Act, the last education Act passed by the Conservative government before Labour came to power. The Act gave the QCA the power to accredit qualifications, which included developing and publishing accreditation criteria.

The National Qualifications Framework (NQF) was introduced in 2000 as part of the Curriculum 2000 reforms. NQF qualifications were accredited under 1 of 3 broad categories according to their primary purpose: General Qualification, Vocationally-Related Qualification, or Occupational Qualification (see Figure 11, reproduced from QCA, 2000, page 5).

Figure 11. The National Qualifications Framework
Qualifications framework, representing levels vertically and qualification types horizontally.

Although it was intended that the NQF should incorporate many existing qualifications – suggesting that it was more of a descriptive framework than a prescriptive one – it was deemed essential that all qualifications should satisfy accreditation criteria, to promote transparency, quality, and rigour. These were published in a document entitled ‘Arrangements for the Statutory Regulation of External Qualifications in England, Wales and Northern Ireland’ (QCA, 2000), which was known informally as the ‘Statutory Regulations’.

The Statutory Regulations explained that accreditation criteria consisted of: criteria setting out the required characteristics of qualifications, and criteria – normally in the form of a code of practice – setting out necessary procedural standards. Importantly, the newly regulated market was not to be restricted purely to GNVQs and NVQs. Any qualification that was submitted for accreditation, and that met the relevant accreditation criteria, would be accredited.[footnote 2] Of direct relevance to the CASLO approach, the Statutory Regulations specified that all Vocationally-Related Qualifications must:

be constructed of units with content expressed as assessable outcomes of learning which provide worthwhile learning goals in their own right

(QCA, 2000, page 17)

In addition, all Occupational Qualifications must:

be directly based on relevant national occupational standards […]

be constructed of units with content expressed as assessable outcomes of learning

(QCA, 2000, page 20)

The Statutory Regulations also made reference to assessment criteria, although there was no explicit requirement that they should be nested within learning outcomes. Nor was there any requirement that all learning outcomes must be achieved for a qualification to be awarded. However, the very fact that all Occupational Qualifications (OQs) and Vocationally-Related Qualifications (VRQs) had to be specified in terms of units and learning outcomes would seem to acknowledge the growing influence of the outcome-based approach.

When the second edition of the Statutory Regulations was published (QCA, 2004a), there were no longer separate criteria for OQs or VRQs. However, the criteria that were common to all qualifications specified that:

47 A qualification must normally be made up of units that can include a core of mandatory units and a range of optional units, except where the qualification is of an established type that has not historically been unitised, such as the GCSE.

(QCA, 2004a, page 18)

50 The subject matter of the units and/or the qualification as a whole must: […]

c) be expressed in terms of what a successful candidate will have learned or will be able to do

(QCA, 2004a, page 19)

So, although expressed slightly differently here, the requirement to specify OQs and VRQs in terms of units and learning outcomes continued. It is worth noting that the Statutory Regulations were quite explicit about quality assurance arrangements, for example:

59 The awarding body must take steps to ensure that internal assessment is carried out in the same way across centres by providing a full assessment specification, including, where appropriate, assessment criteria, mark schemes, exemplar material, and guidance on the use of witness statements.

60 The awarding body must have arrangements in place to enable internal assessors to meet their responsibilities. These arrangements must include, where appropriate, providing assessors with information on:

a) how to ensure that any tasks set are consistent with the specification;

b) the nature and type of acceptable evidence;

c) the extent to which candidates can be allowed to redraft work before it is assessed;

d) the limits on the assistance that can be given to candidates with work that is to be assessed;

e) how to ensure that assessment requirements can be interpreted consistently;

(QCA, 2004a, page 21) [footnote 3]

Credit

Although the NQF Statutory Regulations appear to have played an important role in embedding the idea of learning outcomes within regulated qualifications, it was QCF regulations that fully embedded the CASLO approach, from 2008 onwards. To understand why and how the QCF was introduced, we need to consider the growth of the Credit Movement in England, during the 1990s, which influenced practices in both higher education and further education, albeit in slightly different ways (Pollard, Hadjivassiliou, Swift & Green, 2017). In the further education sector, ideas that stemmed from the Open College Networks, which were subsequently developed by the Further Education Unit, were particularly influential.

Open College Networks

As Principal Advisor to the QCA on what was to become the QCF, Peter Wilson was highly influential in its design. He was also an influential figure in the Open College Network (OCN) community – as Co-ordinator of the Leicestershire Open College Network and Chair of the National Open College Network – so it should not be surprising that the QCF was heavily influenced by the OCN approach.

Wilson’s detailed account of the emergence of the QCF traces its origins to the development of Open Colleges, Open College Federations, and Open College Networks in different parts of the UK during the 1970s, organisations that provided alternative progression routes for adults into higher education. The first Open College Network (OCN) credits were awarded in 1983 (Wilson, 2010).

During the early days, these organisations operated locally, without national co-ordination, and credit simply reflected the successful completion of a recognised course of a certain duration at a particular level. With the establishment of the National Open College Network (NOCN) in the late 1980s – and its commitment to develop credit accumulation and transfer agreements during the early 1990s – there was an increased need to reach consensus over what these credits were actually being awarded for (Wilson, 2010). This facilitated the move to an outcome-based approach. Paralleling developments in the NVQ system, the award of credit became more closely associated with the achievement of learning outcomes than with the completion of a programme of study.[footnote 4] Yet, according to Wilson, the rationale for this transition was quite different, being grounded in the rights of individual learners, not in the expectations of employers:

programmes […] should be designed in such a way as to make explicit, and available for public scrutiny, the “hidden” and “intuitive” assumptions being made by teachers about what they expect students to learn/achieve

(Wilson, 2010, page 36)

The most important point to note about the OCN approach was that it was designed to cater for the adult learning sector, with particular reference to the needs of ‘returning’ adult learners:

Not only does the QCF originate in a policy context with a focus on the needs of adult learners, but the ‘credit’ strand of the QCF draws on a long and rich history of recognising the achievements of adult learners in community-based learning, in informal adult learning and in contexts totally outside the ‘mainstream’ development of qualifications during the same period.

(Wilson, 2010, page 2)

Further Education Unit

The credit movement expanded beyond the Open College Network when Conservative Prime Minister John Major initiated work on the modular curriculum in 1991. The Further Education Unit (FEU) was asked to produce advice relevant to the further education sector, which evolved into a blueprint for a credit framework. This was presented in ‘A Basis for Credit?’ (FEU, 1992).

The report formally integrated 3 key concepts: credit, level, and unit. Furthermore, borrowing the concept of learning outcome from the OCN tradition, the report added the concept of assessment criteria (to parallel the NVQ distinction between elements of competence and performance criteria). The FEU model provided the foundation for a national credit framework for OCNs in 1994, based on credits, levels, and units, although not qualifications (Wilson, 2010). This was entirely consistent with the OCN mission, which was to formally recognise and reward the small steps of achievement that would not be recognised through a system that revolved around end-point certification. Thus, OCNs awarded credits, not qualifications.

The FEU continued developing the idea of credit accumulation and transfer, in an attempt to make it more generally applicable and palatable. The OCNs awarded credits within systems that were specifically designed to be flexible, responsive to local needs, and easily accessible to local organisations, supporting tailored programmes and customised assessment arrangements (Wilson, 2010). Quite explicitly, the idea of being constrained by an overarching qualification specification was anathema to this philosophy. Proposals in the FEU report ‘A Framework for Credit’ (FEU, 1995a) attempted to bridge the divide between the unit-driven approach of the OCNs and the qualification-driven approach of the exam boards. It did so by making certain of the core concepts more amenable to qualification providers, including the idea of deconstructing qualifications into units with size-related credit values (which watered down the idea of actually awarding credits to learners).[footnote 5] Thus, the FEU hoped to achieve its goal of establishing:

a post-16 CAT framework encompassing all curriculum and qualifications from key stage 4 of the National Curriculum/adult basic education to post-graduate level qualifications in HE/professional qualifications

(FEU, 1993, page 2)

Of relevance to the direction that CASLO qualifications were beginning to take by the late 1990s, it is worth noting how this 1995 report defined assessment criteria:

Learning outcomes: what a learner can be expected to know, understand and do.

Assessment criteria: statements of more specific learning outcomes.

(FEU, 1995a, page 11)

The idea of assessment criteria as mini learning outcomes seems to be far looser than the idea of performance criteria advocated by the NCVQ. The FEU developed this perspective in supplementary guidance, explaining that criteria should achieve greater specificity by using a specific action verb, content, and qualifiers that make reference to complexity, and/or autonomy, and/or range (FEU, 1995b).

This guidance document also compared the pros and cons of the CASLO approach to qualification design (epitomised by NVQs) and the classical approach (epitomised by A levels). Drawing on conclusions from an FEU report written by Alison Wolf (1993), it proposed that a compromise could be struck within the proposed new credit framework:

In order to achieve consistency and effective communication of what learners know, understand and can do, FEU believes that some combination of three approaches is needed:

  • written specifications – of learning outcomes, assessment criteria, and level descriptors;
  • exemplars – indications of what should be taught and learned; programmes of study, test papers and their analyses, samples of students’ work, etc.;
  • networking – of unit writers, teachers, examiners and moderators.

The more widely and effectively exemplars and networking are used, the less specific the learning outcome statements or units need to be.

(FEU, 1995b, page 10)

In fact, the report went on to propose that, no matter how clearly learning outcomes were expressed, their interpretation for assessment purposes would usually involve both exemplar materials and professional networking.

Policy impetus

Despite following in the wake of these credit system developments, there was no attempt to make credit integral to the NQF.[footnote 6] It is worth noting, however, that the NOCN had decided (in 1998) to become an awarding organisation and to seek to develop its own credit-based qualifications within the NQF (Wilson, 2010). This made sense given the QCA’s aim to recognise all achievements within the NQF, and given the potential funding implications for providers whose qualifications remained unrecognised. To accommodate individual learner needs in a manner that could satisfy funding requirements, the NOCN introduced flexible qualification structures that offered a wide range of unit choices, with rules of combination designed to help ensure a degree of coherence for the overarching qualification (Wilson, 2010). In practice, local OCNs still continued to award unit credits. Officially, though, in terms of NQF recognition, the NOCN only awarded qualifications.

Enthusiasm for the idea of credit waned, in England, towards the end of the 1990s. By way of contrast:

  • Northern Ireland continued to develop its Northern Ireland Credit Accumulation and Transfer System project (Cook, 2001)
  • Scotland launched its Scottish Credit and Qualifications Framework in 2001 (Gallacher, Toman, Caldwell, Raffe, & Edwards, 2005)
  • Wales formally adopted its Credit and Qualifications Framework for Wales in 2002, which was then launched in 2003 (Arad Research Ltd, 2014)

Pressure from Wales, in particular, helped to reignite interest in credit in England. Wales had been developing the idea of credit for many years, which included the Wales Credit and Modularisation Project (later known as Credis). When English awarding organisations were brought into the planning process for a credit framework for Wales, during the late-1990s, they suggested that England would need to be on board with the approach to make engagement viable for them (Jill Lanning, personal communication).[footnote 7]

Policy makers in England responded positively. In the wake of its discussion document ‘Success for All: Reforming Further Education and Training’ (DfES, 2002), the Department for Education and Skills committed to working with the Learning and Skills Council and the QCA to review barriers to qualification uptake and to explore the feasibility of a credit-based approach. The white paper ‘21st Century Skills: Realising Our Potential’ (DfES, 2003a) subsequently confirmed government commitment to a credit framework:

The consultation on the Skills Strategy has shown widespread support for developing a national credit framework for adults. This is seen as a way of offering the greatest flexibility and responsiveness, with units of qualifications being assigned credit using a standard system. Supporters argue that adult learners can more easily build up units of credit over time towards qualifications, transferring that achievement between different providers if they wish, and having more choice in the units of qualifications they combine. Employers can put together units of qualifications drawn from different sources to form the training programme that best suits their needs.

(DfES, 2003a, page 84)

At this stage, however, government committed only to exploring the idea of a credit framework for adults, acknowledging that credit “frameworks for young people raise quite different issues” (DfES, 2003a, page 85). These were to be considered separately in the light of the forthcoming 14-19 review. Subsequently, the Working Group on 14-19 Reform endorsed the idea of credit for young people, proposing that:

achievement within 14-19 programmes should be certified by diplomas available at the first four levels of the National Qualifications Framework, and using a credit system compatible with that being developed by QCA for adult qualifications.

(Tomlinson, 2004, page 6)

The QCF

The QCA had been exploring the potential for a unitised credit framework since the late-1990s, in conjunction with the Further Education Development Agency (Unwin, 1999). With a new commitment from the DfES, it undertook to consult on the matter. By the end of 2004, the QCA had released a consultation document entitled ‘A Framework for Achievement: Recognising qualifications and skills in the 21st century’ (QCA, 2004c). This included radical proposals to replace the NQF with an entirely new regulatory framework – not simply a framework for adult returning learners, but a framework that could “encompass all formally assessed learners’ achievements outside higher education” (QCA, 2004c, page 3).

Reform

According to the QCA, the new framework would address concerns that the NQF was:

  1. too complicated and difficult to understand
  2. insufficiently responsive to the needs of individuals and employers
  3. insufficiently inclusive of post-16 awards and programmes
  4. too procedurally bureaucratic
  5. insufficiently inclusive of post-16 (short course) training providers

To solve these problems, the new framework would incorporate a wider range of units from a wider range of unit providers, including customised awards that would meet specific market needs:

Our proposed design for the framework will make it possible for many more employees to gain credit for in-house training. Private training providers that offer high-quality short courses will be able to participate. Outcomes relating to employment sectors or occupations will be driven by the needs of employers.

(QCA, 2004c, page 2)

The fact that it would be credit-based would mean that combinations of units could be accumulated and transferred easily between qualifications and awarding organisations. The consultation explained that all achievements would be structured as units – from which qualifications would be built – and each unit would be defined in terms of: a title, learning outcomes, assessment criteria, a level, a credit value, and a unique database code.
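Expressed as a data structure, that specification is easy to visualise. The following sketch is purely illustrative (the class and field names are ours, not drawn from QCF documentation), modelling a unit with its nested learning outcomes and assessment criteria:

```python
from dataclasses import dataclass, field

@dataclass
class AssessmentCriterion:
    """Specifies the standard required to show an outcome has been achieved."""
    code: str   # for example, "AC 1.1"
    text: str

@dataclass
class LearningOutcome:
    """What a learner is expected to know, understand or be able to do."""
    code: str   # for example, "LO 1"
    text: str
    criteria: list[AssessmentCriterion] = field(default_factory=list)

@dataclass
class QCFUnit:
    """A unit as described in the consultation: title, outcomes and
    criteria, level, credit value, and a unique database code."""
    title: str
    level: int          # Entry Level through to Level 8
    credit_value: int   # size, in credits
    unit_code: str      # unique database code
    outcomes: list[LearningOutcome] = field(default_factory=list)
```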

Defining credit as “an award made to a learner in recognition of the achievement of designated learning outcomes at a specified level” (QCA, 2004c, page 19) confirmed that this unit-based framework was conceptually aligned to the 1992 FEU report and to the approach adopted by OCNs.[footnote 8] As subsequently explained by Ofqual in the introduction to its evaluation of the first 2 years of the new framework, its aim was to provide: “a stable currency for learner achievement across the qualifications system through the award of credit” (Ofqual, 2009b, page 2). The full range of anticipated benefits for learners was summarised in the ‘Final Business Case’ for the QCF:

The QCF offers the opportunity for learners to build up achievements over time and at their own pace. It will allow individuals to achieve smaller packages of learning (units), and, where appropriate, accumulate the associated credits to gain qualifications. No learning will be lost in the QCF, nor will it need to be repeated, as all achievements will be recorded on an individual’s [Learner Record]. Learners will have more control over the routes or pathways that they take through learning, as units can be combined in different ways to meet individuals’ personal, professional or social needs. Learners will be able to transfer their achievements between all AOs and across all learning providers.

(LSC & QCA, 2008, page 12)

Transition

After a couple of years in development, the new framework – now known as the Qualifications and Credit Framework – was tested and trialled over a 2-year period, from April 2006 to May 2008. Following a decision to proceed, regulations governing the QCF were published by Ofqual in August 2008 (Ofqual, 2008a).[footnote 9] It was anticipated that all vocational qualifications would be accredited to the QCF by the end of 2010, at which point the QCF would replace the NQF.

Although the QCA had described the QCF as a framework for all achievements, this failed to materialise. It would certainly come to incorporate the vast majority of regulated vocational qualifications. However, certain key qualifications – including GCSEs and A levels – remained outside its orbit, continuing to be regulated under the Statutory Regulations of the NQF. Even the new Foundation, Intermediate, and Advanced Diplomas were regulated outside the QCF.

The situation for NVQs was ambiguous. Wilson argued that the objectives that underpinned the QCF – including simplicity, inclusivity, and responsivity – effectively undermined the strictures of the NVQ model (Wilson, 2010). QCF regulations permitted existing NVQs to be re-written and submitted for accreditation into the QCF without ‘NVQ’ in their title (Ofqual, 2008b). Yet, the same regulations also allowed for ‘NVQ’ to be included in a QCF qualification title as long as the qualification satisfied an additional set of operating rules (Ofqual, 2008b). It would then be regulated as a QCF qualification. In fact, NVQs also continued to exist outside the QCF – as a distinct qualification type – until their regulatory arrangements were finally withdrawn in 2015.

The framework

Across numerous guidance documents, the Qualifications and Curriculum Development Agency (QCDA) explained that the QCF was designed to recognise small steps, enabling students to build up their learning at their own pace, accumulating credit that could be built up into a full qualification (see QCDA, 2010a, for example).

The name of each QCF qualification was set out in exactly the same format, to explain how difficult it was (its level), how long it took to study (its size), and what it was about (its content description). This consistency was intended to ensure transparency for anyone who needed to use the information provided by a qualification, for example, an employer making a hiring decision.

There were 9 levels in the QCF, from Entry Level through to Level 8. The lower levels (Entry Level to Level 3) mapped directly onto the NQF. The QCF was also linked to the Framework for Higher Education Qualifications (FHEQ), and the higher levels of the QCF (Level 4 to Level 8) mapped directly onto the FHEQ. These linkages are illustrated below, in Figure 12, which is adapted from QCDA (2010b).

Unit size was expressed in terms of credits, with each credit corresponding to 10 notional hours of learning, and a qualification’s total credit value determined its classification as one of the following:

  • Award (1 to 12 credits – 10 to 120 hours of learning)
  • Certificate (13 to 36 credits – 130 to 360 hours of learning)
  • Diploma (37 credits or more – 370 or more hours of learning)
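This banding is simple arithmetic. As a worked illustration (the function is our own sketch, not an official algorithm), the classification and notional hours can be computed directly from the credit total:

```python
def classify_qualification(credits: int) -> tuple[str, int]:
    """Map a credit total to its QCF size band and notional learning hours.

    One credit corresponds to 10 notional hours of learning.
    """
    if credits < 1:
        raise ValueError("a qualification must carry at least 1 credit")
    if credits <= 12:
        band = "Award"
    elif credits <= 36:
        band = "Certificate"
    else:
        band = "Diploma"
    return band, credits * 10

# For example, 15 credits -> ('Certificate', 150)
print(classify_qualification(15))
```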

Examples of QCF qualification titles included:

  • Level 1 Certificate in sport and active leisure
  • Level 4 Diploma in buying and merchandising for fashion retail
  • Level 8 Award in strategic direction and leadership

Units were accredited to the QCF as either ‘shared’ (available to all awarding organisations), or ‘restricted’ (available only to a defined group of awarding organisations), or ‘private’ (available only to the submitting organisation). The idea of shared units underpinned the principle of Credit Accumulation and Transfer.

| Stages of education or employment | QCF levels (and NVQ-NQF levels) | FHEQ levels |
| --- | --- | --- |
| Professional or postgraduate education, research or employment | Level 8: Vocational Qualifications Level 8. Level 7: NVQ Level 5; Vocational Qualifications Level 7; Fellowships | Level 8: Doctoral Degrees. Level 7: Master’s Degrees; Integrated Master’s Degrees; Postgraduate Diplomas; Postgraduate Certificate in Education (PGCE); Postgraduate Certificates |
| Higher education. Advanced skills training | Level 6: Vocational Qualifications Level 6 | Level 6: Bachelor’s Degrees with Honours; Bachelor’s Degrees; Professional Graduate Certificate in Education (PGCE); Graduate Diplomas; Graduate Certificates |
| Entry to professional graduate employment | Level 5: NVQ Level 4; Higher National Diplomas (HND); Vocational Qualifications Level 5 | Level 5: Foundation Degrees; Diplomas of Higher Education (DipHE); Higher National Diplomas (HND) |
| Specialised education and training | Level 4: Higher National Certificates (HNC); Vocational Qualifications Level 4 | Level 4: Higher National Certificates (HNC); Certificates of Higher Education (CertHE) |
| Qualified or skilled worker. Entry to higher education. Completion of secondary education | Level 3: NVQ Level 3; Vocational Qualifications Level 3; Advanced Diplomas; GCE AS and A Level | N/A |
| Progression to skilled employment. Continuation of secondary education | Level 2: NVQ Level 2; Vocational Qualifications Level 2; ESOL Skills for Life; Functional Skills Level 2; Intermediate Diplomas; GCSEs at grade A*–C | N/A |
| Secondary education. Initial entry into employment or further education | Level 1: NVQ Level 1; Vocational Qualifications Level 1; ESOL Skills for Life; Functional Skills Level 1; Foundation Diplomas; GCSEs at grade D–G | N/A |
| Qualifications taken at any age in order to continue or return to education or training | Entry Level: Entry Level Certificates (1–3); ESOL Skills for Life; Functional Skills Entry Level | N/A |

Figure 12. Alignment of levels across the National Qualifications Framework, the Qualifications and Credit Framework, and the Framework for Higher Education Qualifications (adapted from QCDA, 2010b, pages 28 to 29).
Relationship between QCF levels and FHEQ levels.

The CASLO approach

Critical to this section on the dominance of the CASLO approach, QCF regulations now specified all 3 core characteristics as design rules, which meant that units (and qualifications) could not be accredited to the QCF unless they followed the CASLO approach. The blanket nature of this requirement is interesting, given the transition away from the approach in the final iteration of the GNVQ (the AVCE), and given that the approach was subsequently rejected as a design template for Applied A levels and for the subsequent Diploma qualification. In this context, the apparent absence of any debate over this blanket requirement seems surprising.

Rules concerning the specification of learning outcomes and assessment criteria were very clear in the new QCF regulations:

1.4 All units must contain learning outcomes that:

a    set out what a learner is expected to know, understand or be able to do as the result of a process of learning

b    are clear and coherent, and expressed in language that is understandable by the learners for whom the unit is intended or by a helper or adviser where the learners themselves are not able to understand the learning outcomes

c    are expressed in a manner that addresses individual learners in the third person and will make sense to a learner both before a unit is offered and after the learning outcomes have been achieved

d    are capable of assessment and, in conjunction with the assessment criteria related to that outcome, set a clear assessment standard for the unit.

1.5 All units must contain assessment criteria that:

a    specify the standard a learner is expected to meet to demonstrate that the learning outcomes of that unit have been achieved

b    relate to an individual learning outcome in language consistent with it

c    are sufficiently detailed to support reliable, valid and consistent judgements that a learning outcome has been achieved, without creating an undue assessment burden for learners or assessors

d    do not include any explicit references to the methods or instruments of assessment to be used.

(Ofqual, 2008a, pages 11 to 12)

Likewise, the idea of compensation, which underpins the classical approach to qualification design, was formally prohibited by the Regulatory Arrangements:

1.32 All awarding organisations recognised within the QCF award credits and qualifications (see Section 5).

1.33 Credits must be awarded to learners for the successful achievement of the learning outcomes of a unit. The number of credits awarded must be the same as the credit value of the unit. It is not possible for some credits to be achieved for partial completion of a unit or learners to be awarded credit when all the learning outcomes are not achieved by virtue of any ‘compensation’ for stronger performance in other areas of learning.

(Ofqual, 2008a, page 17)
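The contrast with a compensatory model can be made concrete in a few lines of code. The following sketch is our own illustration of the mastery rule in paragraph 1.33: a unit’s full credit value is awarded only when every assessment criterion of every learning outcome has been achieved, and nothing is awarded otherwise:

```python
def credits_awarded(unit_credit_value: int,
                    achieved: dict[str, dict[str, bool]]) -> int:
    """Apply the QCF mastery rule to a single unit.

    `achieved` maps each learning outcome code to a dict of
    {assessment criterion code: achieved?}. The award is all or nothing:
    the full credit value if every criterion of every outcome is met,
    otherwise zero. Partial completion earns no credit, and strong
    performance on one outcome cannot compensate for another.
    """
    all_met = all(criteria and all(criteria.values())
                  for criteria in achieved.values())
    return unit_credit_value if all_met else 0

# A learner who misses a single criterion receives no credit for the unit:
results = {"LO 1": {"AC 1.1": True, "AC 1.2": True},
           "LO 2": {"AC 2.1": True, "AC 2.2": False}}
assert credits_awarded(3, results) == 0
```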

Beyond Ofqual’s regulatory requirements, the QCDA produced a host of guidance documents to help qualification providers comply with the new QCF regulations. For instance, one guidance document, on assessment, included answers to questions like “Do all assessment criteria have to be met for credit to be awarded?” (QCDA, 2010c, page 8).[footnote 10] Another (QCA-developed) guidance document provided advice on how to articulate learning outcomes and assessment criteria within unit specifications (QCA, 2009). This included both tips, such as how many assessment criteria to write for each learning outcome, and warnings, such as the need to avoid compound statements.[footnote 11] By way of illustration, this document recommended re-writing the compound statement “know about computer hardware, software, and associated health and safety issues” as 3 discrete learning outcomes:

  • know about computer hardware
  • know about computer software
  • understand the health and safety issues associated with the use of computers

Unit title: Respond to customer requests for repairs (L34)

LO 1 Know the organisation’s housing stock and possible defects which require repair.

  • AC 1.1 Describe the types of properties which the organisation manages.

  • AC 1.2 Identify, using the appropriate terminology, the types of faults which can occur in these properties.

LO 2 Know organisational policies and procedures relating to requests.

  • AC 2.1  Describe the different types of service agreements with customers.

  • AC 2.2  Identify the organisational policies and procedures relating to repair requests.

  • AC 2.3  Identify repairs which require emergency action.

LO 3 Be able to establish and respond to customer requests for repairs.

  • AC 3.1  Deal courteously, sensitively and fairly with individuals.

  • AC 3.2  Clarify requests from customers to determine the exact nature of what is required.

  • AC 3.3  Accurately record the details of customers and their requests.

  • AC 3.4  Identify the other parties involved in the maintenance and repair of the organisation’s properties and whether there are any associated charges.

  • AC 3.5  Identify requests which are outside the organisation’s responsibilities.

  • AC 3.6  Explain how to refer customers to other organisations and individuals.

  • AC 3.7  Arrange inspection visits and repair work according to organisational procedures and policies.

  • AC 3.8  Prioritise urgent repairs.

Figure 13. Example of how to combine knowledge- and skill-based outcomes
How to combine knowledge and skill outcomes within a unit specification.

Figure 13 reproduces an example of where this document considered it appropriate to combine learning outcomes, that is, where “there is a clear relationship between knowledge and skills” (QCDA, 2010d, page 9). In this instance, the assumption is that the first 2 knowledge-based outcomes are required to underpin the third skills-based outcome. While there was no obligation for learners to achieve the learning outcomes in any particular order, the document added, a chronology of learning outcomes in which knowledge is followed by action can make the unit look more coherent. Incidentally, this document also noted that combining knowledge-based and skills-based outcomes can be useful in supporting a holistic and integrated assessment approach:

Where knowledge and skills are separated into different units, there is always the potential that one or the other may be lacking. The advantage of combining the two in a unit for learners is that they can satisfy the requirements to be competent to carry out the function in their job role and can demonstrate to an academic institution that they have the necessary underpinning knowledge and understanding. Designing units with this in mind is more likely to encourage a more holistic and integrated approach in the design of assessment activities for the unit.

(QCDA, 2010d, page 23)

Challenges

The guidance document on writing QCF units is interesting, in retrospect, for the way that it anticipates fundamental implementation challenges for the QCF. Annex D of ‘Guidelines for Writing Credit-Based Units’ (QCDA, 2010e) concerned how to develop units at Levels 4 to 8, exploring reasons why it might be more difficult to comply with QCF requirements when writing units at higher levels. This included problems with specifying standards (and the challenge of levelling) and problems with assessing standards (and the challenge of testing). In retrospect, it is fair to say that challenges like these threatened the credibility of the QCF in general and not simply at higher levels.

Specification

One set of challenges arose from the requirement that standards for all QCF units had to be specified in terms of both learning outcomes and assessment criteria. This proved to be less problematic when units were derived from older-style National Occupational Standards, typically related to lower-level jobs. However, particularly for units at higher levels, which were often based upon different kinds of standards set by professional bodies or associations, this necessitated a complex process of translation into the QCF format. Even when based upon NOS, it was no longer possible simply to ‘cut and paste’ statements into the QCF format, as many sectors had departed from the older-style approach to writing standards by then, often failing to articulate critical criteria (QCDA, 2010e). Consequently, even when utilising NOS, unit writing was far from a trivial process.

Levelling

During the early 1990s, when functional analysis was the recommended approach to developing NOS, the degree of challenge associated with any particular NVQ would have been neither more nor less than that of the occupational standard itself, that is, the degree of challenge associated with performing the role adequately. NVQs were assigned to a particular level in the framework, but this levelling was more nominal than substantive, linked to historical hierarchical distinctions between roles, such as between ‘technician’ and ‘craftsman’. It was not substantively important, within the NVQ framework, to be able to infer that all NVQ units at a particular level represented the same degree of challenge.

With the introduction of the NQF, and particularly with the introduction of the QCF, the idea of levelness and of levelling became far more fundamental. Indeed, establishing a degree of challenge for each unit – its level – was crucial to the underpinning logic of the QCF as a credit-based framework. This was because the combination of the specified challenge (level) and the specified size (credits) determined the currency of each unit within the system. This, in turn, was used to justify claims concerning the exchangeability of units within a framework premised upon being able to mix and match units to form bespoke qualifications.

Consequently, the QCF (more so than the NQF) required a mechanism by which the currency of each unit could be specified and verified. Level descriptors, which articulated 3 dimensions of competence for each QCF level, were fundamental to this mechanism. They were published as Annex E of the QCF Regulatory Arrangements (Ofqual, 2008a). The descriptors for Level 2 are reproduced below, for the purpose of illustration:

Summary

Achievement at Level 2 reflects the ability to select and use relevant knowledge, ideas, skills and procedures to complete well-defined tasks and address straightforward problems. It includes taking responsibility for completing tasks and procedures and exercising autonomy and judgement subject to overall direction or guidance.

1st dimension – knowledge and understanding

Use understanding of facts, procedures and ideas to complete well-defined tasks and address straightforward problems.

Interpret relevant information and ideas.

Be aware of the types of information that are relevant to the area of study or work.

2nd dimension – application and action

Complete well-defined, generally routine tasks and address straightforward problems.

Select and use relevant skills and procedures.

Identify, gather and use relevant information to inform actions.

Identify how effective actions have been.

3rd dimension – autonomy and responsibility

Take responsibility for completing tasks and procedures.

Exercise autonomy and judgement subject to overall direction or guidance.

These benchmarks were designed to enable awarding organisations to identify an appropriate level for each unit. AOs were expected to compare the learning outcomes and assessment criteria that had been written for each unit against adjacent level descriptors, applying a ‘best fit’ principle to find the best match (Ofqual, 2008a). Of course, this assumed that learning outcomes and assessment criteria had already been written in a manner that appropriately captured the intended degree of challenge for each unit, which made the process somewhat circular.

According to the QCDA, the language used in formulating outcomes and criteria was crucial to defining and communicating the level of a unit. This language should be capable of conveying the appropriate level without reference to a targeted group of learners or to an anticipated context of learning (QCDA, 2010e).

If the language used to write criteria could somehow go beyond the unit-specific details of each outcome to support comparison on a more generic basis, then this would certainly be helpful in warranting claims of unit comparability and, by extension, unit exchangeability. Benjamin Bloom’s Taxonomy of Educational Objectives (which we discussed in chapter 2) was seen as a solution to this problem – in particular, his hierarchical taxonomy of objectives for the cognitive domain, whose categories were ordered as follows:

  1. knowledge (lowest level of complexity)
  2. comprehension
  3. application
  4. analysis
  5. synthesis
  6. evaluation (highest level of complexity)

The QCF levels were based on the assumption that qualitatively different learning outcomes, from qualitatively different units, could be equated (roughly) with reference to the degree of cognitive complexity of their associated assessment criteria. In other words, units that were written using criteria of a similar level of complexity to ‘analysis’ – that is, criteria implying more complexity than ‘application’ but less complexity than ‘synthesis’ – could be considered to be at the same level. Which level would be determined by reference to the overarching QCF level descriptors.

This logic suggested that units with essentially the same learning outcomes could be written at multiple levels, differentiated only in terms of the level of complexity associated with their assessment criteria. Of particular relevance, here, was the command verb chosen to help determine the levelness of each assessment criterion, with more cognitively complex command verbs being selected for higher levels.
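The intended logic can be sketched in code, if only as a caricature. The mapping below uses a handful of verbs from the QCDA guidance reproduced later in Figure 15; everything else is our own illustrative construction, and, as the guidance itself stressed, a lookup of this kind is far too crude to determine levelness on its own:

```python
# Lowest level at which each verb first appears in the QCDA table
# (an illustrative subset only; Entry Level 3 is coded as 0 here).
FIRST_APPEARANCE = {
    "define": 0, "identify": 0, "outline": 0, "state": 0, "use": 0,
    "apply": 2, "compare": 2, "describe": 2, "illustrate": 2,
    "analyse": 3, "evaluate": 3, "explain": 3, "justify": 3,
}

def suggested_minimum_level(criterion: str) -> int | None:
    """Suggest the lowest level at which a criterion's command verb appears.

    Deliberately naive: it inspects only the first word and ignores the
    rest of the criterion, which is precisely why verb-based levelling
    cannot be treated as a strict rule.
    """
    verb = criterion.split()[0].lower()
    return FIRST_APPEARANCE.get(verb)

print(suggested_minimum_level("Explain the government thinking on the NHS"))       # 3
print(suggested_minimum_level("Identify the main points in government policies"))  # 0
```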

The approach is illustrated in Figure 14 (which is reproduced from QCDA, 2010e, page 31). Note how the command verb chosen for Level 1 (list) is less cognitively complex than the command verb chosen for Level 2 (identify), which is less cognitively complex than the command verb chosen for Level 3 (explain).

Unit title: Understanding health and well-being

Learning outcome (the learner will): Understand the political and social context of health and well-being

| Level | Assessment criteria |
| --- | --- |
| 1 | List the government priorities for health promotion and health education |
| 2 | Identify the main points in government policies to improve the effectiveness of the NHS, especially in relation to preventative health care and health education |
| 3 | Explain the government thinking on how to improve the effectiveness of the NHS, especially in relation to preventative health care and health education |

Figure 14. A suggested hierarchy of complexity of command verbs.
Different levels are associated with different command verbs, with more complex verbs at higher levels: example unit.

The QCDA guidance document went on to illustrate the association between command verbs and levels via Figure 15 (which is reproduced from QCDA, 2010e, page 34). This clarified that there would not be a one-to-one mapping between command verbs and levels.[footnote 12] Indeed, the document was clear that levelness was not determined by any particular command verb, since the meaning of a criterion depends on all of the words used to express it, not just the command verb.

| Level | Command verbs |
| --- | --- |
| Entry Level 3 | Define; Demonstrate; Give examples; Identify; Indicate; Locate; Outline; State; Use |
| Level 1 | Define; Demonstrate; Give examples; Identify; Indicate; Locate; Outline; State; Use |
| Level 2 | Apply; Assess; Classify; Compare; Define; Demonstrate; Describe; Differentiate; Distinguish; Estimate; Give (+/- points); Illustrate; Perform; Select; Use (a range of …) |
| Level 3 | Analyse; Apply; Clarify; Classify; Critically compare; Demonstrate; Develop plan/idea; Diagnose; Differentiate; Distinguish; Draw conclusions; Estimate; Evaluate; Explain; Extrapolate; Implement; Interpret; Judge; Justify; Perform; Review and revise; Summarise |

Figure 15. The (loose) association of command verbs with unit levels.
Different levels are associated with different command verbs, with more complex verbs at higher levels: illustrative verbs.

Having said that, it is also important to recognise that certain command verbs did not appear in this table until the higher levels, so their use within assessment criteria could indicate that a unit was pitched at a higher level.[footnote 13] It was exactly this (loose) association between command verb and level that provided a rationale for using command verbs to help convey the intended degree of challenge.

It is worth noting that all 3 of Bloom’s most complex categories of cognition – analysis, synthesis, and evaluation – appear among the command verbs listed for Level 3. This raises the question of how to write effective criteria for Levels 4 to 8, which brings us back to the guidance document on writing QCF units that anticipated fundamental implementation challenges for the QCF:

One possible impact of this shifting focus across different levels is that it becomes more difficult to develop precise and easily measurable learning outcomes and assessment criteria at higher levels of achievement. There is a danger that assessment criteria at higher levels either become repetitive, or that they fail to establish an explicit assessment standard for the unit.

(QCDA, 2010e, page 58)

[…] evidence from the QCF test and trial programme suggests that it is actually more difficult to establish meaningful distinctions between units at higher levels of the framework. […] It is possible that the reason why it becomes practically more difficult to distinguish between levels of achievement as one proceeds up the levels of the QCF is that the distinctions between levels 4 to 8 are theoretically less easy to establish. Although the levels of the QCF are nearly always presented as a neat and even set of ‘stages’ in a hierarchy of achievements, perhaps in reality these stages get progressively ‘narrower’ as one goes up through the levels. The difficulty in identifying the difference between a unit at level 6 and one at level 7 may actually be a reflection of reality.

(QCDA, 2010e, page 59)

Whereas the hierarchical structure of objectives appeared to provide a rough-and-ready solution to the challenge of defining and communicating levelness for lower-level units, the same does not seem to have been true for higher-level ones. There appears to have been genuine concern over the potential to capture the levelness of a unit via appropriately worded learning outcomes and assessment criteria.

Finally, it is worth mentioning a criticism that is sometimes voiced, but not well documented, related to the use of command verbs within QCF units. Problems arose when unit writers, who had been charged with developing qualifications at a particular level, interpreted the link between command verbs and levelness far more strictly than can possibly be justified (as though the use of certain command verbs within criteria for a unit straightforwardly warranted the claim that it was pitched at a certain level).[footnote 14]

Imagine, for instance, that we treated ‘analysis’ as a skill that resides inherently at Level 3, suggesting that when we see analysis occurring, we can safely infer that the analyser is operating at Level 3. If this were true, then there would be some legitimacy in treating units with assessment criteria framed in terms of command verbs like ‘analyse’ as though they were comparable and, therefore, exchangeable. If so, then command verbs would provide a simple – and easily verifiable – tool for engineering the kind of comparability required by the QCF. Unfortunately, this is not true, not even roughly. Command verbs are not that definitive. As recognised in the QCDA guidance, command verbs alone are insufficient to define the degree of challenge associated with a unit. Sometimes ‘analysis’ will be associated with a very low level of challenge, other times with a very high level. There is more to defining and communicating degree of challenge than can be captured by the use of a particular command verb. Exactly what that might be was assumed to be part of the body of expertise required to be a competent QCF unit writer.

Assessment

The QCDA guidance document on writing QCF units also grappled with the challenge of developing assessment methods at higher levels, where learning outcomes were often framed mainly in terms of knowledge and understanding, and especially for qualifications that had traditionally relied exclusively upon tests or exams. The document was clear that this was acceptable, in principle, although:

the requirements for assessment of such units are exactly the same as for other levels of the QCF: all the learning outcomes of the unit must be achieved to the required assessment standard in order for credit(s) to be awarded for that unit.

(QCDA, 2010e, page 58)

This raised (rather than answered) a fundamental question concerning the nature of testing within the QCF, which appears never to have been fully resolved. The question arises because written tests and exams tend to be associated with the classical approach to qualification design, which is based upon a compensatory (rather than a mastery) aggregation principle, and which is therefore far more open to sampling across learning outcomes. Under the QCF, not only did learners need to provide evidence of having achieved each outcome, they actually needed to provide evidence of having achieved each criterion for each outcome. First, this means no sampling. Second, this means that the assessment of each individual criterion needs to be sufficiently reliable in its own right. This contrasts with the classical approach to assessment design, where sufficient reliability only needs to be demonstrated at the highest level, that is, at the level of the total mark achieved across all qualification components.

Written testing

The dilemma for written testing within the QCF was that, to do it properly – consistent with QCF regulations and the underpinning mastery model – a mini test would need to be created for each criterion of each outcome. Furthermore, each student would need to pass the relevant mini test, for each criterion of each outcome, to pass the unit. It is certainly possible to imagine an assessment designed like this. However, it would probably end up as a mega test, with very many individual test items, and it would probably fail most if not all of the candidates who sat it (as sustaining that level of performance across a very long test is a very big ask). This, of course, is why the CASLO approach tends to be associated with continuous, or staggered, centre-based assessment, not written tests or exams.
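The arithmetic behind this claim is easy to sketch. Suppose, purely for illustration, that a unit has 30 assessment criteria, each assessed by its own mini test that a borderline-competent candidate passes 90% of the time, independently. Under a compensatory model with a 60% pass mark the candidate is virtually certain to pass, whereas under the mastery model the chance of clearing all 30 hurdles is about 4% (all figures here are assumptions for the sake of the illustration):

```python
from math import comb

P_CRITERION = 0.9  # assumed chance of passing any one mini test (independent)
N_CRITERIA = 30    # assumed number of assessment criteria in the unit

# Mastery model: every one of the 30 mini tests must be passed.
p_mastery = P_CRITERION ** N_CRITERIA

# Compensatory model: pass on 60% of the available marks overall,
# regardless of which criteria supplied them (binomial tail probability).
threshold = int(0.6 * N_CRITERIA)  # 18 of 30
p_compensatory = sum(
    comb(N_CRITERIA, k) * P_CRITERION**k * (1 - P_CRITERION)**(N_CRITERIA - k)
    for k in range(threshold, N_CRITERIA + 1)
)

print(f"mastery pass rate:      {p_mastery:.1%}")       # roughly 4%
print(f"compensatory pass rate: {p_compensatory:.1%}")  # effectively 100%
```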

Separate guidance on assessment confirmed that it was, in fact, permissible to assess units through tests and exams, for example:

The examination questions must be designed in such a way as to enable an assessor to make an assessment judgement about whether or not the learner has achieved the outcomes of a unit, but the questions themselves may be much more explicit than the learning outcomes of the unit. Indeed, where learners are offered a choice of examination questions, this separation of learning outcomes and examination questions will be essential.

(QCDA, 2010c, page 6)

Yet, it provided no guidance on exactly how this could be achieved without sampling or compensation. Note that the same document specifically ruled out both sampling and compensation. Finally, its guidance on using pass marks is worth reproducing in full:

  1. Can we set a percentage pass mark for a unit? If so, what should it be?

Yes, but the percentage pass mark must relate to all the learning outcomes of a unit. A learner cannot be awarded credit for a unit if only a proportion of the learning outcomes have been achieved. Again, the separation of test or examination questions from the learning outcomes of a unit enables such assessment judgements to be made easily.

In effect, a percentage pass mark reflects the level of confidence that the assessor has in the outcomes of the assessment process, not the proportion of the learning outcomes that a learner has successfully completed. The QCF sets no particular requirements about pass marks in such circumstances. Providing the assessor is confident that all the learning outcomes of a unit have been achieved against the stated assessment criteria, credit can be awarded for the unit.

(QCDA, 2010c, page 7)

It is hard to understand what this passage means. But it seems not to shed any light on the fundamental question concerning the nature of testing within the QCF. Confusion persisted.

Subsequent investigations by Ofqual occasionally surfaced this tension. For example, it arose within a thematic review of Level 6 and 7 qualifications that were available to international students on a ‘Tier 4’ study visa. The report observed that some awarding organisations used “a compensatory system of assessment for QCF qualifications, in breach of QCF arrangements” (Ofqual, 2014a, page 21):

This issue seems to stem from awarding organisations moving long-running qualifications onto the QCF, but continuing to use compensatory models of assessment (for example, a written exam with a 40 per cent pass mark).

(Ofqual, 2014a, page 21)

Recommendations from the report confirmed that all QCF qualifications must require all learning outcomes to be met for a pass to be awarded, but failed to explain how this might be accommodated within written tests or exams. Finally, the report also expressed concern that command verbs in assessment criteria were “not sufficient for qualifications at level 6 or 7” (Ofqual, 2014a, page 17), illustrating the other key challenge, of levelling.

Questionable transitions

Consistent with its ambition to rationalise the qualifications landscape, the QCA anticipated that all regulated qualifications would become part of the QCF. Pressure to incorporate GCSEs and A levels was resisted, but other long-standing, well-respected qualifications were forced to transition. This included graded performance exams, which had existed for well over a century, including graded exams in music, dance, speech, and drama.

These exams have always been, and still are, fairly unusual in various respects. For instance, they are unusual in terms of target cohort, being almost entirely elective – adults and young people choose to take them when ready, and typically fund themselves. In terms of qualification design, they are unusual in being based on a progressive mastery model – learners progress hierarchically up a suite of (normally) 8 qualifications, demonstrating skills of increasing technical difficulty and complexity. They are also used for an unusually wide range of purposes, from building confidence and self-esteem, to satisfying a personal hobby, to developing technical (occupational) competence.

This family of qualifications is described in a report by Rachael Meech, which is particularly interesting for its account of how they were forced to adapt to increasingly stringent regulatory requirements, especially the QCF (Meech, with McBirnie, & Jones, 2018). She noted that:

Many awarding organisations found elements of the new QCF regulatory criteria challenging to meet whilst simultaneously preserving the ethos and purpose of their graded examinations and associated processes. Organisations faced challenges with the conceptual framework of the QCF and how it would work in practical terms for their units and qualifications, in particular:

  • Requirements for centre approval when a centre-based model of assessment is not operated for graded examinations
  • Drafting procedures for units and rules of combination for qualifications which were already well-established, having evolved through a set of detailed syllabuses. This procedure proved challenging due to the requirements placed by Section 1 of the regulatory criteria which arguably undermined well understood processes already instituted.

(Meech, et al, 2018, pages 15 to 16)

These awarding organisations struggled to redevelop their qualifications in ways that complied with QCF regulations, while preserving the ethos and value of graded exams. This included having to square requirements for detailed learning outcomes and assessment criteria, plus mastery aggregation, with a well-established approach to assessment based upon compensation and a one-off external exam. Of course, it was not possible to square this circle, which suggests that it was never reasonable to expect to be able to regulate these qualifications effectively under the QCF.

Evaluations

In July 2008, Ofqual committed to evaluating its new regulatory arrangements, to determine whether they were supporting effective regulation. Evaluations of the first and second year of operation – based on QCF user feedback, scrutiny of sampled units, and outcomes from regulatory activities – were published in January 2010 (Ofqual, 2010b) and May 2011 (Ofqual, 2011b).

The interim report (2009 evaluation) expressed satisfaction that 57% of sampled units were fully compliant, with 32% exhibiting only relatively minor technical issues and just 10% requiring immediate review. The issues identified included mismatches between learning outcomes and assessment criteria, lack of clarity in how learning outcomes and assessment criteria were expressed, and manageability concerns related to the number of learning outcomes and assessment criteria within certain units.

Although the final report (2010 evaluation) found “broad support” for the framework (Ofqual, 2011b, page 12), it also recognised that some users felt that the QCF model and its regulatory arrangements were “fundamentally flawed” (Ofqual, 2011b, page 12). Ofqual accepted that the speed and nature of the introduction of the QCF had caused problems, acknowledging that many users who commented had criticised the “attempt to apply one set of design rules to a wide range of very different qualifications” (Ofqual, 2011b, page 12). There was concern that QCF design features were more suited to certain qualifications, students, and sectors than to others.

Even after 2 years of operation, it had become clear that opportunities for credit accumulation and transfer were lacking and, more importantly, that learners themselves had not yet expressed significant demand for them. Ofqual was frank that it needed to consider whether the benefits of regulatory requirements that provided for infrequently used flexibilities outweighed the costs that might be incurred. One particular threat arose from shared units being assessed in different ways, reducing the likelihood that comparability of standards could be maintained. A specific concern was that units devised by organisations with specialist expertise (for example, in conflict management training) could be used by organisations that lacked that expertise and failed to understand how the units ought to be assessed, presenting a major threat to standards.

In addition to cost and burden challenges related to having to assess all learning outcomes and assessment criteria, the final report also noted that qualifications that had traditionally been compensatory did not fit well into the QCF, particularly higher-level professional qualifications. Grading challenges were also noted. The report ended with a list of lessons learnt:

  • the potential pitfalls of a central policy that drives awarding organisations to redesign a large number of qualifications, with a range of different characteristics and purposes, to conform to one set of design requirements
  • the need for clarity in the lines of accountability in qualifications design, approval and delivery
  • the need for credit, which is the ‘currency’ of units, to be assigned consistently
  • the risk that detailed and/or poorly understood regulatory requirements can distract from, or overshadow, more important regulatory principles
  • the implications for commercial and/or competing organisations of sharing units and of collaborating with others to develop units; and
  • the challenge of imposing titling rules that do not align with established and understood titles.

(Ofqual, 2011b, pages 23 to 24)

Structure versus quality

Although the QCA anticipated that QCF regulations would ultimately displace NQF ones, this did not actually happen. Various qualification types continued to be regulated under the 2004 NQF Statutory Regulations until these were eventually displaced by Ofqual’s General Conditions of Recognition (GCR), which were first published in May 2011. When the GCR came into force, the 2008 QCF Regulatory Arrangements document was designated a subsidiary Regulatory Document (applying only to QCF qualifications), as was the 2006 NVQ Code of Practice (applying only to NVQs).

Bearing in mind the goals of the QCF reform process – to enhance simplicity, inclusivity, and responsiveness – it is worth considering whether anything important was lost when transitioning across successive regulatory documents. A particular question arises over the change in emphasis that occurred with the introduction of QCF regulations, and whether this elevated concern for unit and qualification structure over concern for assessment quality.

To be clear, it is not true that QCF regulations ignored assessment quality or took it for granted. For instance, they specifically stated that an awarding organisation must have in place the necessary systems, procedures and resources to ensure:

  1. assessment instruments and tasks are produced to the required quality standards
  2. assessment evidence produced by learners is authentic
  3. accuracy and consistency of standards in the assessment of units, across units and over time [and so on]

(Ofqual, 2008a, page 27)

However, it is true that QCF regulations focused primarily on structural design requirements. Furthermore, the plethora of QCDA guidance documents also prioritised unit and qualification structure over assessment quality. To some extent, this was a deliberate strategy. The new regulations were intended to focus on expectations of awarding organisations (regarding systems, procedures, resources, and so on) rather than expectations of practices. Hence, there was no QCF code of practice to specify, for instance, the methods by which learning outcomes ought to be developed or assessed.[footnote 15] Indeed, because the QCF recognised such a wide variety of qualifications, without specifying any distinct sub-types, it may not actually have been possible to produce regulations with that level of specificity.[footnote 16]

With this shift in emphasis, many of the detailed requirements that had been built into earlier regulations (to prevent or mitigate serious problems that had occurred) no longer featured explicitly in QCF regulations. This included, for instance, the requirement from the Statutory Regulations that an awarding organisation must take steps to ensure that internal assessment is carried out in the same way across centres, by providing support materials such as assessment criteria, mark schemes, exemplar material, and guidance on the use of witness statements (QCA, 2004a). Also no longer featuring explicitly in QCF regulations were requirements from the NVQ Code of Practice, such as the following, from sections headed ‘Internal verification’ and ‘External verification’ respectively:

56. Guidance produced by the awarding body must include exemplars of:

  • procedures for standardising assessment so that assessors are operating to the same standard
  • models for developing an internal verification sampling plan appropriate to the centre’s level of assessment activity. Models must ensure that over time all assessors, all assessment methods and all candidate units are included in the sample
  • procedures for standardising the judgements and decisions of internal verifiers operating in a centre
  • the types of records a centre must keep to demonstrate the effectiveness of its internal verification procedures.

(QCA, 2006, page 15)

60. Awarding bodies must require external verifiers to:

  • confirm that centres continue to meet the centre approval criteria
  • recommend the imposition of appropriate sanctions on centres that fail to meet the requirements
  • confirm that assessments are conducted by appropriately qualified and occupationally expert assessors
  • sample assessment decisions to confirm that they are authentic and valid and that national standards are being consistently maintained
  • confirm that assessment decisions are regularly sampled, through internal verification, for accuracy against the national standards
  • check that claims for certification are authentic, valid and supported by auditable records
  • [and so on]

(QCA, 2006, page 15)

By way of contrast, requirements in the 2008 QCF Regulatory Arrangements were far less detailed, for example:

The awarding organisation must ensure that it has arrangements in place for standardisation and quality assurance of assessment outcomes across centres and awards.

(Ofqual, 2008a, page 28)

Bearing these considerations in mind, it is reasonable to conclude that assessment quality took a back seat while the QCF regulations were being introduced, as unit and qualification structure took centre stage.[footnote 17] Of course, many awarding organisations would have continued to implement quality assurance arrangements of the sort embodied in earlier regulations. Yet the fact that they were not explicitly required to do so by QCF regulations left a door open to organisations that saw the potential to cut costs by not implementing important controls. Furthermore, this lack of explicit requirement might also have made it harder to justify control-related burdens to centres.

Dominance

Even though the QCF never subsumed all regulated qualifications, it did come to dominate the qualifications landscape in England (until it was withdrawn).

Qualification Type | 2010 to 2011 | 2011 to 2012 | 2012 to 2013 | 2013 to 2014 | 2014 to 2015
Qualifications & Credit Framework (QCF) | 54% | 62% | 71% | 78% | 85%
Vocationally-Related Qualification (VRQ) | 13% | 10% | 7% | 5% | 3%
National Vocational Qualification (NVQ) | 9% | 7% | 5% | 3% | 1%
Occupational Qualification (OQ) | 1% | 0% | 0% | 0% | 0%
Other General Qualification (OGQ) | 4% | 4% | 3% | 3% | 3%
A level | 2% | 1% | 1% | 1% | 1%
GCSE | 4% | 3% | 3% | 2% | 2%
Functional Skills | 1% | 1% | 1% | 1% | 1%
All other qualifications | 13% | 11% | 9% | 7% | 5%
Total number of available qualifications | 18,095 | 20,500 | 23,642 | 24,965 | 24,520

Table 5. Number of available qualifications as a percentage of the regulated qualification market

The figures in Table 5 were computed from data taken from the ‘Annual Qualifications Market Report’ for 2014 to 2015 (Ofqual, 2016). The percentage values were calculated by dividing the number of available qualifications of each type by the total number of qualifications available to learners in each year from 2010 to 2015, combined across England, Wales and Northern Ireland (a sketch of this calculation follows below). They illustrate how QCF (CASLO) qualifications became more and more dominant, increasing from 54% of the market (during the 2010 to 2011 academic year) to 85% (during the 2014 to 2015 academic year). Where the percentage of other qualification types decreased over time, this was largely a consequence of their transitioning into the QCF. Note that, even in 2015, there were CASLO qualifications regulated beyond the QCF across multiple qualification types (including NVQs, VRQs, and OQs). The 85% figure therefore understates the reach of the approach, and we can be confident that, by 2015, the CASLO approach completely dominated the regulated qualifications landscape.
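For transparency, a short sketch of the calculation follows. It is a minimal illustration, assuming hypothetical counts rather than the actual underlying data from the market report.

```python
# Minimal sketch of the market-share calculation behind Table 5.
# The counts below are hypothetical placeholders, not figures taken
# from the Annual Qualifications Market Report.

available_2010_11 = {
    "QCF": 9771,
    "VRQ": 2352,
    "NVQ": 1629,
    # ... remaining qualification types would be listed here
}

total_available = 18095  # total qualifications available, 2010 to 2011

for qual_type, count in available_2010_11.items():
    share = round(100 * count / total_available)  # percentage of the market
    print(f"{qual_type}: {share}%")
```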

Having said that, it is important to conclude by acknowledging that the approach may already have become dominant some time before the QCF was introduced, although it is hard to tell for sure. Bearing in mind the growing significance of BTECs in the regulated market – which had embraced the CASLO approach since the early 1990s – the approach may well have become dominant under the NQF, and maybe even earlier. The QCF regulations give us certainty that the approach had come to dominate by the mid-2010s, but many qualifications (not just NVQs) had adopted it long before these regulations came into force.

Having now considered nearly 3 decades of CASLO qualifications – from the late 1980s to the mid-2010s – we are in a good position to begin unravelling the multiplicity of goals that drove NVQ designers, and designers of other qualifications, to adopt the CASLO approach. The next chapter is devoted to this challenge.

  1. Although Dearing promulgated the idea of a formal national qualifications framework, it is worth noting that the NCVQ had developed (and was informally using) essentially the same framework structure some years prior to his report (see Hyland, 1994). 

  2. Criteria specific to the following were developed: Entry level Qualifications, General Qualifications, Vocationally-Related Qualifications (with additional criteria specific to GNVQs), Occupational Qualifications (with additional criteria specific to NVQs), National Occupational Standards, Key Skills. 

  3. The previous edition had specified a similar requirement within a section entitled ‘Internal assessment’ in its common code of practice: “An awarding body must set down assessment criteria, including mark schemes where relevant, to ensure valid and consistent assessment. The awarding body must provide centres with exemplar work showing clearly how defined standards are to be met.” (QCA, 2000, page 33). 

  4. Ironically, without the ability to cash in credit for a qualification, the credit accumulation and transfer system was actually pointless for students (Wilson, 2010). Its significance was more symbolic, establishing a national currency for achievement. 

  5. For the OCNs, unitisation meant that programmes were constructed on the basis of units, with personalisation in mind. For the FEU, unitisation simply acknowledged that qualifications could be deconstructed into their component parts. 

  6. The 2004 revision of the Statutory Regulations introduced the idea of assigning a ‘credit value’, although this seems to have been in the FEU’s (watered-down) sense of a size appraisal. The term ‘credit value’ was not formally defined in the 2004 regulations. 

  7. Ultimately, the QCF was incorporated as a component of the CQFW. 

  8. Bear in mind that not only was Peter Wilson the Principal Advisor to this programme, but the QCA also appointed key figures from the OCN community to lead development teams. 

  9. Three months earlier, the QCA had been split into Ofqual, the new regulator for England, and the Qualifications and Curriculum Development Agency (QCDA). The QCDA continued to develop support materials for the QCF (until it was wound up in 2010) while Ofqual focused squarely upon regulation. Technically, Ofqual was still part of the QCA until legislation came into force in April 2010. 

  10. Ironically, although the answer to this question confirmed a clear expectation that each learning outcome must be judged on evidence related to all of its associated assessment criteria, it was a little ambiguous over whether this meant that all assessment criteria actually had to be met for credit to be awarded. The guidance in QCDA (2010e, page 10) was clearer: “The learner must be able to demonstrate all of the assessment criteria for the judgement to be made that the learning outcome has been achieved.” 

  11. This guidance was continually being updated. For example, QCDA (2010e) was the 4th version of guidance on how to write units. 

  12. Note, for instance, that ‘demonstrate’ appears in all 4 columns and ‘define’ appears in 3 columns. 

  13. For example, ‘estimate’ appears in the Level 2 and Level 3 columns, while ‘analyse’ and ‘evaluate’ only appear in the Level 3 column. 

  14. Ofqual’s report on grading VTQs provided some evidence related to this criticism (Newton, 2018). Additional insights were provided by Barry Smith and Norman Gealy (personal communication). 

  15. Note that there were structural rules for writing and assessing learning outcomes. For instance, learning outcomes and assessment criteria needed to be written in a particular format, and judgements needed to be aggregated in a particular way. However, there were neither rules nor guidance on how to derive a set of outcomes for any particular domain of learning (which contrasts with the requirement to use functional analysis for NVQs during the early 1990s). And QCDA guidance was quite explicit that flexibility built into QCF regulations even permitted centres to use different assessment methods for the same unit offered to different groups of learners (QCDA, 2010c). 

  16. Regardless of whether this may also have been undesirable, from the perspective of enhancing flexibility. 

  17. It is worth noting that the 2001 NVQ Code of Practice was even stronger on the provision of exemplar work for standardising internal assessment: “The awarding body must provide centres with exemplar work showing clearly how defined standards are to be met.” (QCA, 2001, page 27). The use of “where appropriate” alongside similar requirements in the 2004 Statutory Regulations (and the absence of any comparable requirement in the 2006 NVQ Code of Practice) suggests that the requirement to provide exemplar work for all internal assessments may have proved too stringent. Note that the rationale for the 2006 Code of Practice revision was to focus on quality assurance rather than quality control and to: “reduce perceived bureaucracy and allow the controlled development of innovative ways of assessing and quality assuring NVQs” (QCA, 2006, page 1).