Origins and Evolution of the CASLO Approach in England - Chapter 3: Genesis
Published 18 November 2024
Applies to England
National Vocational Qualification framework regulations required that NVQs should be specified in terms of outcomes, with formal criteria for ascribing their acquisition, and with a requirement that all specified outcomes must be acquired. That is, they specified the CASLO approach in full. This chapter on the genesis of the CASLO approach describes the emergence of NVQs, including their design, implementation, and evolution. It also describes other key qualifications that came to adopt the CASLO approach, including General National Vocational Qualifications (GNVQs) and BTECs.
NVQs
In the following subsections we will explain how the NVQ model emerged, before considering how the first NVQs were received and, consequently, how they and the context within which they were situated changed over time.
Background
In April 1986, a system of National Vocational Qualifications was proposed by a Working Group chaired by the industrialist Oscar De Ville. It was originally conceived as a framework into which existing qualifications could be accredited. Ultimately, though, it became associated with a new approach to designing qualifications. The Working Group also recommended a National Council for Vocational Qualifications (NCVQ) to administer the new system.
The model that the NCVQ was soon to specify as a template for developing NVQs had its roots in research and development undertaken by the Manpower Services Commission (MSC). In December 1981, the MSC had published ‘A New Training Initiative: An Agenda for Action’ (MSC, 1981b) on the same day as the government had published its white paper ‘A New Training Initiative: A Programme for Action’ (Raggatt & Williams, 1999). These reports promoted the idea of ‘standards of a new kind’ – an idea that was soon to be unpacked by research and development teams working at the MSC and, subsequently, at the NCVQ.[footnote 1]
De Ville report
The 1985 white paper ‘Education and Training for Young People’ had concluded that future economic competitiveness depended on a coherent system for the assessment and certification of vocational competence. It set out proposals for a review of vocational qualifications in England and Wales. Under the chairmanship of Oscar De Ville, a Working Group was established by the MSC in conjunction with the Department of Education and Science (DES). Its remit was to recommend a structure of vocational qualifications that would:
- be relevant to the needs of people with a wide range of abilities
- be comprehensible to users
- be easy to access
- recognise competence and capability in the application of knowledge and skill
- provide opportunities for progression, including progression to higher education and professional qualifications
- allow for the certification of education, training and work experience in an integrated programme
In his preface to the Review of Vocational Qualifications (RVQ) report, De Ville explained that:
In our recommendations we have sought to build on what is already good in present arrangements. There are many examples where local co-operation provides freshness, vitality and relevance. But nationally there is a lack of pattern or coherence; no clear overall accountability for vocational qualifications or ensuring standards; no assurance of progression or transferability. In spite of a plethora of institutions there are gaps. In short there is no single focus for attention.
(De Ville, 1986, Chairman’s Preface)
The report began by highlighting a number of strengths and weaknesses of existing arrangements. Strengths included credibility, diversity, and partnership. Weaknesses included opaqueness, duplication, gaps, inaccessibility, lack of take-up, and insufficient recognition of informal learning. On the positive side, the report identified “generally dependable assessment procedures and testing arrangements” whereas, on the negative side, it identified “assessment methods which are biased towards the testing of knowledge rather than skill or competence” (De Ville, 1986, page 1).
The report also reiterated 4 weaknesses that had been identified in the 1985 white paper, which concerned inadequate opportunities for:
- individual achievement certified by one part of the system to be recognised by other parties or parts of the system
- testing of skills and competence as well as knowledge and understanding
- recognition of learning achieved outside formal education and training situations
- flexible patterns of attendance and learning
In response, the report offered numerous recommendations, which included establishing:
- a National Council for Vocational Qualifications
- a national framework for vocational qualifications – the National Vocational Qualification framework
- objectives for a vocational qualifications system
The report anticipated that the NCVQ would provide a much-needed focus for a sprawling tripartite sector, which then comprised:
- examining and validating bodies (including City & Guilds, BTEC, RSA, LCCI, Pitman, and the REBs)
- examining and accrediting professional bodies (around 250, 76 with Royal Charters)
- industry training organisations (ITOs) and statutory testing facilities (including around 120 largely non-statutory ITOs and around 85 joint industry councils)
The NCVQ would be given a remit to secure a comprehensible system of relevant, credible, accessible, and cost-effective qualifications. Its primary function would be to exercise a quality assurance role by accrediting qualifications that had been developed by approved organisations to the NVQ framework.
De Ville envisaged that the NCVQ would bring coherence to the system by incorporating existing qualifications. This would certainly require changes to those qualifications. However, there was a strong sense that existing qualifications should be “brought within” the framework, rather than the NCVQ designing a “completely new structure to replace existing qualifications” (De Ville, 1986, page 25).
Yet, the Working Group also made many quite specific recommendations, which were interpreted in a manner that would ultimately frustrate this ‘onboarding’ presumption. These included recommendations for awarding credit, for credit accumulation across qualification components, and even credit transfer. Critically, De Ville insisted that:
A certificate that indicates performance in a written examination which tests the ability to describe, to state facts or to develop a logical argument is valuable but it is not a statement of competence as we would wish to have it. Many existing vocational qualifications are of this type and most fail to give recognition to work-based learning. Likewise a certificate that indicates the ability to exercise a skill or to perform a limited and sometimes artificial task is useful, but it is not a statement of competence within our meaning. In satisfying the criteria for the National Vocational Qualification, accredited awards should not continue these deficiencies of most current certification.
(De Ville, 1986, page 33)
Instead, the Working Group believed that vocational qualifications should be defined as follows:
A vocational qualification is a statement of competence clearly relevant to work and intended to facilitate entry into, or progression in, employment, further education and training, issued by a recognised body to an individual. This statement of competence should incorporate the assessment of:
- skills to specified standards;
- relevant knowledge and understanding;
- the ability to use skills and to apply knowledge and understanding to the performance of relevant tasks.
(De Ville, 1986, page 17)
The report added that neither existing assessments of knowledge related to occupational skills, nor existing assessments of performance of skills, necessarily indicated occupational competence, by which the Working Group meant: “the ability to perform satisfactorily in an occupation or range of occupational tasks” (De Ville, 1986, page 30). To underpin the NVQ framework, the NCVQ would therefore need to work with sectoral standards-setting bodies to secure the specification of new standards of occupational competence.
It is worth pausing to reflect on the intended scope of recommendations from the Working Group. The report certainly reads as though it were proposing a general overhaul of technical and vocational education and training (TVET) qualifications – from craft and technician qualifications, through to higher education and professional qualifications, as well as the more specific problem of Youth Training Scheme (YTS) certification. Yet, based on interviews with key officials, Hargraves identified a more pragmatic perspective, which recognised that the new system was likely to be restricted mainly to qualifications within the influence of the Department of Employment, that is, the “operative and craft level awards of City & Guilds, RSA and the various industrial training boards” (Hargraves, 2000, page 294). In particular, he noted De Ville’s reluctant acceptance that BTEC technician awards were unlikely to be part of the new framework. The problem of how BTECs were to relate to the new framework “simmered below the surface” until the end of the 1980s before it “boiled over” into debates regarding the General National Vocational Qualification (Hargraves, 2000, page 295). We will consider the evolution of GNVQs and BTECs later.
NCVQ
The 1986 white paper ‘Education and Training – Working Together’ endorsed recommendations from the Working Group and, in October 1986, the NCVQ was established as a non-departmental public body responsible to the Department of Employment. Its remit was to implement recommendations from the Working Group (Jessup, 1991).
From 1987, with support from the MSC, Lead Bodies were appointed to design the standards of occupational competence that lay at the heart of the new NVQ framework. The framework itself took shape very quickly and the first NVQs became available in 1987. The NCVQ began with 2 directorates: quality assurance and accreditation. In 1987, a research and development directorate was added, headed by Gilbert Jessup.
Gilbert Jessup
Jessup is a key figure in this account of the genesis of the CASLO approach. As Director of Research, Development and Information at the NCVQ, he became the principal architect of the NVQ model. He gained considerable experience with outcome-based qualifications during his time as an occupational psychologist in the Royal Air Force (Tim Oates, personal communication). He was subsequently appointed Chief Psychologist in the Work Research Unit of the Department of Employment where, incidentally, he and his wife, Helen, authored a book on ‘Selection and Assessment at Work’ (Jessup & Jessup, 1975). His next role was in the Manpower Services Commission, where he worked on the Youth Training Scheme until he was invited by Margaret Levy to join the European Social Fund project team, which elevated his profile in relation to assessment and qualifications (Tim Oates, personal communication).
Jessup developed a blueprint for the NVQ model while working in the Quality Branch of the MSC. In the appendix of his landmark book – ‘Outcomes: NVQs and the Emerging Model of Education and Training’ (Jessup, 1991) – he reproduced a technical note from March 1985, which unpacked implications of the MSC (1981b) report. He began by revisiting the report’s influential statement on standards:
At the heart of the initiative lie standards of a new kind. Such standards are essential for the following reasons:
i. modernization of skills training including apprenticeship can only be achieved if we can replace time serving by standards of training achievement and ensure all those who reach such standards, by whatever route and whatever age, are recognized and accepted as competent;
ii. if all young people are to have access to basic training, they and employers will want to have a recognized record of skills, knowledge and experience gained and;
iii. if there is to be wider access to opportunities for adults, there must be a recognized system which allows the individual to build upon what he has and secure recognition for what he has gained to date.
(Jessup, 1985, reproduced in Jessup, 1991, page 166)
He then identified various implications for assessment and accreditation, which he believed followed from the idea of ‘standards of a new kind’, including the need to:
- accredit workplace learning, consistent with the emphasis on work-based learning within recent training schemes such as the YTS
- separate standards from training courses, to ensure that accreditation is accessible to all learners, regardless of the ‘route’ to that learning
- reference standards against job requirements, that is, to criterion-reference rather than norm-reference
- move from sample-based to comprehensive assessment, to make sure that individuals have acquired all of the specified standards
- specify standards in terms of both relevant activities (and the conditions under which they need to be performed) and criteria by which success will be judged
- specify overall requirements for competence in terms of relatively small units of activity, suggesting a modular structure for awards [footnote 2]
- incorporate assessment arrangements that can accommodate all learners, regardless of the ‘route’ of their learning, bearing in mind that specifying relatively small units of activity tends to lend itself to continuous assessment
These stated implications clearly prefigure the core characteristics of all CASLO qualifications, including the explication of both learning outcomes and assessment criteria, as well as the mastery requirement. They also highlight additional characteristics that were to become more specifically associated with NVQs.
Jessup championed a movement that wanted to turn education and training on its head. Further education institutions had been widely criticised for being too inflexible, too dependent on young people for their clientele, too focused on teaching rather than learning, and “too acquiescent in accepting as holy writ syllabuses handed down by others” (Boffy, 1990, page 185). Despite detailed scrutiny by awarding organisation committees, there were some very outmoded practices enshrined in the syllabuses of certain courses (Cantor, Roberts, & Pratley, 1995). In addition, the recession had increased demand for full-time provision. Colleges responded by increasing opportunities for college-based simulation of real work experiences, but these opportunities were often inadequate. This led to a situation in which many students ended up obtaining the same paper qualification as past craftspeople would have done, but without a fraction of their practical experience (Cantor, et al, 1995).
Jessup questioned the established role of colleges in upskilling the nation. He believed that education and training had to be refocused. In particular, he argued that attention needed to be diverted from traditional syllabuses, courses, and training programmes – which specified ‘inputs’ to learning – towards standards of a new kind that specified ‘outcomes’ from learning (Jessup, 1990).
In a Foreword to Jessup’s landmark book, John Burke, Senior Fellow at the University of Sussex, emphasised the extent to which Jessup’s “personal contribution has shaped so many developments in the emerging model” (Jessup, 1991, page ix). It is clear from Jessup’s own writing that he was passionate about this new model. In a Foreword to a book edited 2 years earlier by Burke – titled ‘Competency Based Education and Training’ – Jessup explained that he found it “exhilarating” to be grappling with “fundamental questions on the way we learn and behave” (Burke, 1989, page x). He believed that many learners had failed to achieve their potential under the old input-driven model and he saw the new outcome-driven model as a revolutionary antidote. This was particularly true in relation to assessment:
We shall, I hope, see the demise of the last minute swotting of information soon to be forgotten for examinations. We shall not need to play those games in the future – games which few enjoy and where the majority finish up losers. Assessment will be open (the word ‘transparent’ is coming into vogue when what it meant is simply ‘explicit’ – able to be seen, rather than seen through). What is assessed and the standards of performance required are open to both the assessor and the candidate alike. Learners will be able to make judgements about their own performance which will have implications for their own learning. Self-assessment will become an important component in learning. It will also often contribute to and initiate assessment by tutors and supervisors.
(Jessup, 1991, page 135)
Along with new standards must go new forms of assessment, very different from sitting examinations. The model only works if assessment can cover all the things we want people to learn (and, more important, what the learner wants to learn). It also only works if assessment is more friendly and facilitates learning rather than acting as a deterrent or just an obstacle to be overcome.
(Jessup, 1990, page 18)
Assessment practices such as sampling, providing a choice of questions and adopting pass marks of around 50 per cent, are all imports from an educational model of assessment, which have little place in the assessment of competence.
(Jessup, 1990, page 20)
The fact that this top-down revolution was openly critical of the educational establishment – and the fact that it was championed largely by the government’s Employment Department rather than its Department of Education and Science – must have felt infuriating to many TVET scholars and teachers.[footnote 3] Many book chapters and journal articles were written during the 1990s on the merits and (far more frequently) demerits of the NVQ outcome-based model. These academic debates were characterised by vigour, passion, hostility, and anger (Bates, 1995; Ecclestone, 1997; Hargraves, 2000).
Specifying the new standards
Influenced by the North American Objectives Movement, National Vocational Qualification standards would be derived from an analysis of outcomes rather than inputs. They would focus explicitly upon the competence certificated by the qualification, rather than doing so implicitly via lists of syllabus content. Yet, as clear-cut as that might seem, considerable ambiguity remained over how best to model the nature of occupational competence at the heart of each NVQ.
Mansfield (1989) identified 6 new models of occupational competence in use in England, which had been developed by various agencies from the early- to mid-1980s (the FEU, the NCVQ, the MSC, and so on). He noted that the existing training structure tended to be based on a narrow, task-based view of competence and standards, whereas he wished to advocate a far broader one. The NCVQ agreed. This lent support to a new approach to specifying occupational standards, based on functional analysis.
Functional analysis
The new approach that the NCVQ came to promote – based upon functional analysis – was heavily influenced by David Mathews and Bob Mansfield. They were inspired by their experience of working on the ESF Core Skills Project during the early 1980s (which had been jointly funded by the European Social Fund and the MSC). This was a YTS initiative, which aimed to unpack the idea of ‘core skills’ within a model of work-based learning, with a particular focus upon describing generic workplace learning opportunities. The project team relied upon existing approaches to describing work activities – based on job and task analysis – whereby any particular job or task could be deconstructed into the set of discrete activities that comprised it. These approaches worked well in contexts where the work was highly routinised and procedural, which was certainly the case for many YTS trainees.
Mathews and Mansfield were dissatisfied with these existing approaches, however. They recognised that the world of work was changing: the labour market was moving towards a situation in which few (if any) jobs would remain highly routinised and procedural. They fully agreed with the central message of the New Training Initiative that Britain needed to develop a flexible, adaptable workforce. And they believed that this required a new way of thinking about competence. In 1985, as an antidote to the narrow, atomistic conception of competence implied by job and task analysis, they proposed a new Job Competence Model. This explicated a far broader, holistic conception, based upon 3 interrelated components of competence:
- task skills (of the sort elaborated by task analysis)
- task management skills (co-ordinating activities, solving problems)
- role and job environment skills (working within the parameters of physical, inter-personal, organisational, or cultural constraints and expectations)
This model informed a new approach to describing the nature of work, based on functional analysis. Whereas job and task analysis focused squarely on activities, functional analysis focused instead upon the intended results of those activities. In other words, it focused on outcomes, whether products or processes. Furthermore, whereas job and task analysis failed to represent holistic aspects of competence, functional analysis aimed to represent those aspects explicitly, which followed from its fundamental principle of specifying a work role, that is, an occupational function.
The Job Competence Model was widely acknowledged for its influence on the development of NVQs (Jessup, 1990; Debling & Hallmark, 1990; NCVQ, 1995). Functional analysis was formally adopted as the basis for developing NVQ standards (Jessup, 1991; Mansfield & Mitchell, 1996). In 1988 and 1989, the Training Agency, which had assumed responsibility for developing those standards, published 6 ‘Guidance Notes’ under the heading of ‘Development of Assessable Standards for National Certification’, which explained this new conception of competence, and how it would be represented via functional analysis:
- ‘A Code of Practice and a Development Model’ (TA, 1988a)
- ‘Developing Standards by Reference to Functions’ (TA, 1989a)
- ‘The Definition of Competences and Performance Criteria’ (TA, 1988b)
- ‘The Characteristics of Units of Competence’ (TA, 1988c)
- ‘Assessment of Competence’ (TA, 1989b)
- ‘Verification or Monitoring of Assessment Practices’ (TA, 1989c)
From the outset, functional analysis was promoted as the best available method for developing NVQs, particularly because “other approaches do not fully reflect the broad concept of competence” (TA, 1989a, page 5). However, the Training Agency fully acknowledged that the approach was still being tested. Indeed, some 2 years later, Jessup also acknowledged that the technique was “still being developed” (Jessup, 1991, page 36). So, it is not actually straightforward to provide a definitive account of the nature of functional analysis that was supposed to underpin NVQ development. The following account is based upon that provided in 1996 by Mansfield & Mitchell in their book ‘Towards a Competent Workforce’.
Functional analysis operates by analysing the functions that are carried out by an occupational sector, as a whole, before drilling down into particular roles. This is represented in Figure 3, which adapts illustrations from Mansfield & Mitchell (1996, page 95 and page 281). In this example, the key purpose of the occupational sector in question – construction – is to “establish, maintain and modify the use of the natural and built environment…” This can be disaggregated into a number of key areas, such as “formulate and implement strategies and policies…” These key areas can then be further disaggregated into key roles and functional units.
The basic question at the heart of the disaggregation process goes like this: in order to achieve the outcome described by the key area, what needs to be done? At higher levels of analysis, this might describe the concerted activity of a team, while at lower levels it would describe what individuals were expected to do. The first outcome in the example from Figure 3 answers the question of what needs to be done like this: “monitor and review environmental changes and need” – thus providing a specification for the first functional unit (A1.1) of the first key role (A1). This analytical process continues by specifying further key areas (key roles and functional units) until the key purpose has been exhausted.
Figure 3. Illustration of a functional map derived from functional analysis
A map of an occupational area, which breaks its key purpose into key areas, key roles, and functional units.
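Viewed as a data structure, the functional map just illustrated is simply a tree rooted in a key purpose. The following minimal sketch (in Python, with class and field names of our own devising, not Mansfield & Mitchell’s) models the disaggregation described above, populated with the construction fragments from Figure 3:

```python
from dataclasses import dataclass, field

# Illustrative data model only: a functional map is, in effect, a tree
# rooted in a key purpose, disaggregated level by level.

@dataclass
class FunctionalUnit:
    code: str       # for example "A1.1"
    outcome: str    # answers "what needs to be done?" for the level above

@dataclass
class KeyRole:
    code: str       # for example "A1" (an idealised worker-level role)
    description: str
    units: list[FunctionalUnit] = field(default_factory=list)

@dataclass
class KeyArea:
    description: str
    roles: list[KeyRole] = field(default_factory=list)

@dataclass
class FunctionalMap:
    key_purpose: str    # sector-wide statement of purpose
    areas: list[KeyArea] = field(default_factory=list)

# Fragment of the construction example from Figure 3 (the key role
# description is not given in the source, so a placeholder is used)
construction = FunctionalMap(
    key_purpose=("establish, maintain and modify the use of the "
                 "natural and built environment"),
    areas=[KeyArea(
        description="formulate and implement strategies and policies",
        roles=[KeyRole(
            code="A1",
            description="(role description not given in the source)",
            units=[FunctionalUnit(
                code="A1.1",
                outcome="monitor and review environmental changes and need",
            )],
        )],
    )],
)
```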
The idea of a key role takes us to the level of an individual worker, although this is an idealised role, not necessarily a blueprint for a specific job in the sector. This helps to clarify an important distinction between occupational standards and occupational qualifications as represented in Figure 4. Occupational qualifications relate to actual jobs, as opposed to idealised roles. Although some jobs may inherit their standards directly from functional units associated with a key role, others may not. Figure 4 (adapted from Mansfield & Mitchell, 1996, page 133) illustrates a situation in which an NVQ has been created for a job that draws its standards from multiple key roles.
Figure 4. Relationship between occupational standards and qualification units
Illustration of how NOS functional units do not map directly onto NVQ units of competence.
The principles of functional analysis also determine how functional units are specified (for occupational standards) and therefore how units of competence are specified (for qualification standards). From here on, we shall consider how units are specified using the qualification standards nomenclature, that is:
- qualification title – analogous to the key role
- unit of competence – analogous to a functional unit
As Mansfield and Mitchell describe functional analysis, it requires each unit of competence to be characterised in 2 dimensions, which involves specifying:
- elements of competence (later to become known as ‘learning outcomes’), and
- performance criteria (later to become known as ‘assessment criteria’) [footnote 4]
Each element of competence captures an outcome that needs to be achieved, specifying what needs to happen as the result of performing the role in question. For each element of competence, performance criteria capture the quality of performance expected, specifying how the role needs to be performed.[footnote 5]
The following 2 examples are taken from Mansfield and Mitchell (1996, page 163) and derive from an occupational standard for the Plant, Animal and Land sector. They illustrate how units can be decomposed into elements of competence:
Unit 1: Monitor and coordinate the movement of people within sites
E1.1 Welcome and receive visitors to the site
E1.2 Care for visitors
E1.3 Monitor and control unwelcome visitors
Unit 2: Commission, monitor and evaluate projects
E2.1 Commission projects to enable objectives to be met
E2.2 Monitor and evaluate the process and progress of projects against targets
E2.3 Support project teams to enable them to achieve project objectives
Unit 1 indicates that an employee who is capable of monitoring and co-ordinating the movement of people within sites will be able to welcome and receive visitors to the site, care for visitors, and monitor and control unwelcome visitors. Note that these elements go beyond simply listing low-level, routine, procedural activities. E1.3, for instance, involves problem solving, that is, dealing with exceptions to the routine. E2.3 describes a high-level management function. Note also how E1.2 is fairly holistic, albeit perhaps fairly low-level, while E2.2 and E2.3 are both holistic and high-level. Mansfield & Mitchell (1996) suggest that there is an art to writing effective standards: the scope of an element should not be so narrow that it begins to look like a specific task, but it should also not be so broad as to encompass distinct competencies that would be better elaborated separately.
It is particularly important that performance criteria are written carefully and with precision, to minimise ambiguity concerning the required performance standard. Typically, they describe a critical outcome or process plus an evaluative phrase. The following examples are taken from Mansfield & Mitchell (1996, page 163) and illustrate performance criteria written for qualitatively different kinds of outcome:
Physical products
Drawings and associated graphical material produced are complete, accurate and comply with design information and relevant documentation.
Interactive processes
Oral presentations are complete, accurate and presented in a pace, style and manner which are intended to maximise the trust and respect of all parties and are appropriate to the level of formality of the hearing.
Planned courses of action
The aims and objectives of production are identified in sufficient detail to allow planning to take place.
Process stages/requirements
Established conventions and procedures are followed.
Contingent outcomes which only occur if certain conditions apply
Incomplete and inconsistent input information is clarified promptly and appropriate and accurate amendments are made.
Mansfield & Mitchell were clear that performance criteria need to be as precise as possible without unnecessarily constraining action. However, they also noted that:
Most of the evaluative terms used in performance criteria require careful human judgement and consideration – and that is as it should be. The evaluative terms provide a benchmark which prompts participative discussion, negotiation and judgement – human attributes for a human system.
(Mansfield & Mitchell, 1996, page 192)
Governance
Fundamental to developing these ‘standards of a new kind’ was the principle that they must be employer-led. The Training Commission – as the Manpower Services Commission had recently been renamed – was disbanded in September 1988. Its functions were absorbed into the Employment Department, where it was rebranded the Training Agency (TA). The Training Agency assumed responsibility for co-ordinating the development of National Occupational Standards (NOS) and NVQs. Each standard was developed by an employer-led representative body – an employer membership organisation, a professional body, or one of the many Industry Training Organisations that had been established since 1982 to keep training needs and standards under review (Debling, 1991; Laczik & Fettes, 2021).[footnote 6] These bodies were responsible for defining, piloting, and promulgating the standards.
Debling (1989) described how the 1988 white paper ‘Employment for the 1990s’ further delineated this process, by specifying that development would be spearheaded by Industry Lead Bodies. Since there would be no more than one set of standards per occupation or activity, there would also only be one Lead Body.[footnote 7] All new NVQs were to be approved by the relevant Lead Body, before final approval from the NCVQ (Gokulsing, Ainley, & Tysome, 1996).
Design
Substantial preparatory work at the MSC enabled the NCVQ and the TA to hit the ground running. This involved establishing the NVQ framework itself, specifying the NVQ design template, and promoting quality assurance processes.
Framework
A key objective for the NCVQ was to replace the array of widely divergent qualifications then available to learners with a single, coherent, national system. This required the creation of a framework – the NVQ framework – which would specify the structure of this new system. The NCVQ represented this new structure using the diagram that is reproduced in Figure 5 (adapted from Jessup, 1990, page 26). This suggested that NVQs would be available at multiple (although not necessarily all) levels across a range of sectors. Prospective NVQs would be accredited (by the NCVQ) within a particular sector at a particular level.[footnote 8]
Figure 5. The NVQ Framework
Qualifications framework, representing levels vertically and areas horizontally, indicating that some areas will not have NVQs at all levels.
Originally, only 4 levels were proposed, although this was soon raised to 5. Level 1 was associated with competence in activities that were mainly routine or predictable, or provided a broad foundation for progression. Level 4 was associated with competence in activities that were complex, technical, specialised, and professional, typically requiring a significant amount of autonomy and accountability. According to the white paper ‘Education and Training for the 21st Century’ (DES, DE, & WO, 1991) these levels would roughly map onto historical reference points as follows:
- Level 5 – Professional Qualification, Middle Management
- Level 4 – Higher Technician, Junior Management
- Level 3 – Technician, Advanced Craft, Supervisor
- Level 2 – Basic Craft Certificate
- Level 1 – Semi-skilled
A core function of the NCVQ was to specify the criteria that qualifications were required to meet in order to be accredited to the new national framework. The first incarnation of ‘The NVQ Criteria and Related Guidance’ document was published in January 1988 (and revised in March 1989). It was originally envisaged that existing qualifications might be ported into the framework, albeit with some tweaking to make them fit. However, because the criteria ended up being highly specific, they effectively defined a new approach to qualification design. Thus, contrary to sentiments expressed in the De Ville report, the NVQ constituted a completely new type of qualification.
Design template
In contrast to traditional approaches, the NVQ design template was intentionally prescriptive in terms of (tightly) specifying outcomes and standards, yet also intentionally permissive in terms of (minimally) circumscribing implications for teaching, learning, and assessment. Indeed, flexibility came to be something of a watchword for delivering NVQs, particularly in relation to teaching, learning, and assessment.
The most significant feature of the NVQ model was the breadth of the construct that each NVQ was intended to represent, which was defined as nothing more nor less than the ability to perform an occupational role competently. This meant that each NVQ was designed to certify full occupational competence. It also meant that qualifications that were designed to certify knowledge, skill, or understanding beyond full occupational competence – for instance mathematical understanding that might be useful for progression but not for the current job – would not be accredited to the framework (Stanton & Bailey, 2004).[footnote 9]
Competence
The foundation of each NVQ was a statement of competence. As noted earlier, this had 3 levels of detail:
- the NVQ title
- units of competence
- elements of competence with associated performance criteria
A guiding principle of functional analysis was that statements of competence should have a common structure and grammar. This came to be viewed negatively, as jargon. Yet, NVQ designers deemed this to be essential for ensuring clarity (Mansfield & Mitchell, 1996). Thus, the NCVQ and the TA were very specific about how elements of competence should be written, insisting that each element should have an active verb, an object, and conditions. Jessup (1991, page 32) quoted examples, including one from catering:
- maintain (active verb) standards of hygiene (object) in food preparation areas (conditions)
- assess (active verb) the physical condition of the patient (object) by inspection (conditions)
Performance criteria also had a common structure and grammar, which always included both a critical outcome and an evaluative statement (both grammars are modelled in a short sketch after the quotation below). These are examples from a Business Administration unit on filing (Marshall, 1991):
- all materials are filed (critical outcome) without undue delay, in correct location and sequence (evaluative statement)
- all documents are classified (critical outcome) correctly (evaluative statement)
The NCVQ specified that elements of competence and performance criteria should:
- be stated with sufficient precision to allow unambiguous interpretation by different users, eg awarding bodies, assessors, trainers, and candidates;
- not be so detailed that they only relate to a specific task or job, employer or organisation, location or equipment.
(Jessup, 1991, page 17)
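To make the 2 grammars concrete, the following sketch composes statements from their constituent parts. The types and field names are our own illustration, not the NCVQ’s or the TA’s:

```python
from dataclasses import dataclass

# Illustrative types only: the field names mirror the grammar described
# above; the representation itself is ours.

@dataclass
class ElementOfCompetence:
    active_verb: str   # for example "maintain"
    obj: str           # for example "standards of hygiene"
    conditions: str    # for example "in food preparation areas"

    def statement(self) -> str:
        return f"{self.active_verb} {self.obj} {self.conditions}"

@dataclass
class PerformanceCriterion:
    critical_outcome: str      # for example "all materials are filed"
    evaluative_statement: str  # for example "without undue delay, ..."

    def statement(self) -> str:
        return f"{self.critical_outcome} {self.evaluative_statement}"

element = ElementOfCompetence(
    "maintain", "standards of hygiene", "in food preparation areas")
criterion = PerformanceCriterion(
    "all materials are filed",
    "without undue delay, in correct location and sequence")

print(element.statement())    # maintain standards of hygiene in food preparation areas
print(criterion.statement())  # all materials are filed without undue delay, ...
```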
Range
Specifying statements of competence at just the right level of detail proved to be one of the most fundamental design challenges for NVQs. Research into criterion-referencing in the USA had already revealed the risks associated with wanting to make outcomes and criteria as unambiguous as possible. Wolf discussed this challenge at length, including the temptation for developers to be seduced into a “never-ending spiral of specification” resulting in standards that became unwieldy, unmanageable and, ultimately, unused (Wolf, 1995, page 55).
Element 1.1: Identify opportunities for improvement in services, products and systems
Performance criteria
- a) Relevant, valid, reliable information from various sources on developments in materials, equipment and technology is accessed and analysed for its significance at appropriate time intervals.
- b) Information on developments is disseminated to the appropriate people in a manner which is likely to promote its value.
- c) Information is related to current practices and used to identify opportunities for growth in operations and improvements in quality.
- d) Operations are continuously monitored and evaluated and where improvements can be made the necessary action is taken.
- e) Obstacles to change are accurately evaluated and measures to alleviate the problem implemented.
- f) Evaluation of the outcomes of previous developments is used for improvement.
Range indicators
Opportunities for improvement are identified:
- within the manager’s line responsibility
- outside line responsibility, but where the manager has an impact
Opportunities for improvement involve:
- personnel requirements/team composition
- employment/work practices
- work methods and patterns
- cost factors
- nature and availability of services and products
- quality of services and products
- methods to reduce waste
- new equipment/technology
- design of systems
Implications of change are in terms of:
- profitability
- productivity
- quality of service/product
- environmental impact
- working conditions
- working relationships
- reactions of individual employees
Analysis methods are:
- qualitative
- quantitative
Dissemination is to:
- higher level managers
- subordinates
- colleagues, specialists, staff in other departments
Obstacles to change are:
- internal to the organisation
- external
Figure 6a. Illustration of additional detail provided
A large amount of additional information is provided for each element of competence, including both performance criteria and range indicators.
Element 1.1: Identify opportunities for improvement in services, products and systems
Evidence required
Evidence must cover all those services, products and systems within the manager’s line responsibility and those outside the line responsibility where the manager has an impact. Evidence must include the following items from the range:
- identification of the opportunities for improvement in:
- personnel requirements/team composition
- employment and work practices
- work methods and patterns
- costs
- nature and availability of services and products
- quality of service and products
- methods to reduce waste
- new equipment/technology
- design of systems
[…]
Forms of evidence
Evidence can be outputs or products of performance, such as reports and documentation, supplemented by a personal report detailing actions that have been undertaken and why recommendations for action have been made. Evidence can also include extensive witness testimony from higher level managers, colleagues and subordinates.
In the absence of sufficient evidence from performance alone, questioning, projects and assignments based on real work situations may be used to elicit evidence of knowledge and understanding of the principles and methods relating to:
- accessing and analysing relevant information on changes to technology and resources
- analysing market need and marketing opportunities
- applying relevant items of legislation and organizational rules to actual/typical circumstances
- establishing, defining and reviewing objectives and performance measures
- informing and consulting others about problems and proposals
- monitoring resource utilization and costs and analysing efficiency and effectiveness
Figure 6b. Illustration of additional detail provided
A large amount of additional information is provided for each element of competence, including both evidence requirements and forms of evidence.
Despite recognising this risk, the NCVQ and the TA decided, early on, that elements of competence would be open to different interpretations unless more detail was provided on what each element was supposed to cover. This led to the development of range statements, which indicated the variety of contexts across which each element was supposed to apply and, therefore, the variety of contexts across which each learner was expected to be competent. An NCVQ working group started exploring their use in 1988 (Wolf, 1990), and they became a formal requirement when the March 1991 revision of the NVQ criteria came into force (Gokulsing, et al, 1996).
Figure 6a presents an example of performance criteria for Element 1.1 of the 1992 Management Charter Initiative (MCI) Management Standard, supplemented by range statements, described here as ‘indicators’ of range (see Melton, 1997).[footnote 10] These were amplified further by a statement of ‘evidence required’ and a list of ‘forms of evidence’ (see Melton, 1997), both of which are illustrated in Figure 6b. This level of detail would have been specified for each element of competence within each unit.
Bearing in mind the requirement for generality in the specification of competence statements – such that they should not be defined in terms of particular task requirements, specific employer requirements, or suchlike – Jessup noted that range statements might indicate all sorts of dimensions of divergence, or variation, including organisation, equipment, materials, customers, products, and so on (Jessup, 1991). The idea, here, was that a competent individual should be able to perform the certificated functions in any company or job where they happened to practise, which emphasised the importance of being able to transfer competence across contexts. Range statements therefore played an important conceptual role in demarcating the boundaries of an element of competence – such that competence should not be assumed to transfer beyond the specified contexts – and provided a mechanism by which those boundaries could be redefined, if necessary, to reflect changing workplace demands (Mansfield & Mitchell, 1996). Having said that, Lead Bodies were instructed to be selective when constructing range statements and only include contexts that were common and critical.
Assessment
The logic of the NVQ model prescribed just one absolute assessment requirement: to assess all elements of competence (against their associated performance criteria) to determine when all of the competence requirements for the job in question had been met. In other words, no sampling. Beyond this principled requirement, the way in which NVQ standards were defined – in terms of real-world performances – made other assessment decisions highly desirable or likely. For instance, it made sense for competence to be assessed in real-world environments, and for assessment to be a more continuous process, conducted hand-in-hand with learning. Unitisation of the statement of competence made unit-level certification seem quite natural too, with implications for assessment processes and timings.
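In computational terms, this ‘no sampling’ requirement amounts to a completeness check. The following sketch (in Python, with data shapes of our own devising) illustrates the logic only: a single unevidenced criterion, in any required context, blocks certification:

```python
# A minimal sketch of the 'no sampling' rule. The data shapes are
# illustrative: units map to elements, elements map to performance
# criteria, and each element carries a set of required range contexts.

def meets_mastery_requirement(
    units: dict[str, dict[str, list[str]]],  # unit -> element -> criteria
    range_contexts: dict[str, set[str]],     # element -> required contexts
    evidence: set[tuple[str, str, str]],     # (element, criterion, context)
) -> bool:
    """Return True only if every criterion of every element is evidenced
    in every required context."""
    for elements in units.values():
        for element, criteria in elements.items():
            for context in range_contexts.get(element, {"unspecified"}):
                for criterion in criteria:
                    if (element, criterion, context) not in evidence:
                        return False  # one gap is enough: no sampling
    return True
```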
Although the NVQ model was clearly oriented towards workplace assessment – and therefore to more naturalistic assessment techniques including observation and discussion of naturally occurring events – it was open to a variety of assessment approaches. Not only was simulation a pragmatic fall-back option if naturally occurring events could not be observed, so too were many other assessment approaches. Mitchell (1989) argued that using multiple methods was likely to be important, as the evidence available from one method alone was unlikely to be sufficient for inferring competence. Having said that, she also indicated that certain methods were preferable to others, in the following order:
1. naturally occurring evidence
    1.1. ongoing work
    1.2. predetermined samples set in the workplace
2. specially elicited evidence
    2.1. performance (for example, traditional skills tests, college assessments)
    2.2. knowledge and understanding (for example, written or oral assessments)
More specifically, she suggested that an assessor ought to start by considering whether it was possible to assess the element of competence in question via a naturally occurring event. If not, then they should move down the list to the first viable alternative, even suggesting that an assessor could make do with a written assessment of competence if absolutely necessary. Again, though, the NVQ criteria were quite explicit in stating that performance should be demonstrated and assessed under conditions as close as possible to those under which it would normally be practised. This also suggested that the most likely candidate for an assessor would be the learner’s direct line manager or supervisor. If so, then this also meant that the assessor would be likely to be very familiar with the specified elements of competence and performance criteria being judged.
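Her recommendation can be read as a simple ‘first viable method’ rule, sketched below. The method labels paraphrase the list above, and the viability predicate stands in for the assessor’s practical judgement; none of this is published NCVQ machinery:

```python
# Sketch of the preference ordering described above: start with the most
# naturalistic method and fall back to the first viable alternative.

PREFERENCE_ORDER = [
    "naturally occurring: ongoing work",
    "naturally occurring: predetermined samples set in the workplace",
    "specially elicited: performance (skills tests, college assessments)",
    "specially elicited: knowledge and understanding (written or oral)",
]

def choose_method(is_viable) -> str:
    """Return the most preferred assessment method that is viable."""
    for method in PREFERENCE_ORDER:
        if is_viable(method):
            return method
    raise ValueError("no viable assessment method for this element")

# For example, if only written or oral assessment is feasible:
print(choose_method(lambda method: "written or oral" in method))
```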
Knowledge and understanding
In January 1989, the TA hosted a symposium to consider the role of underpinning knowledge and understanding in competence-based vocational qualifications. Papers from the symposium were subsequently published in a booklet titled ‘Knowledge and Competence’ (Black & Wolf, 1990), with a Foreword from the Head of Standards Methodology at the Training Agency, Graham Debling, which read:
If good quality, cost effective vocational education and training is to be established we need a clearer insight into how and what knowledge and understanding underpins competence. […]
This book seeks to contribute to a debate and investigation which is both almost as old as man and yet is only just beginning.
(Debling, 1990, page 2)
Confusion over the significance of knowledge and understanding within NVQs proved to be a huge destabilising influence for far too long. In a sense, protagonists like Debling were right that centuries of philosophical thinking had not yet furnished a straightforward account of knowledge and understanding, so perhaps practitioners could be forgiven for embarking on the NVQ project without a watertight account. Yet, heated, unrelenting debate over the nature of the NVQ competence model complicated rollout and undermined confidence in the new system.
The importance of knowledge and understanding to NVQs had never been in doubt. What remained unclear, however, was how, or even whether, NVQ standards ought to represent these constructs. According to both the original and revised editions of the NVQ criteria document (NCVQ, 1988; 1989), underpinning knowledge and understanding ought to be encompassed within the NVQ statement of competence itself. This could include specifying knowledge and understanding requirements as discrete elements of competence. Yet, within a year or so, following activities such as the TA symposium, the NCVQ had taken a far stronger line.
The 1991 NVQ criteria document (NCVQ, 1991) indicated that an NVQ statement of competence ought to be specified purely in terms of competence, leaving underpinning knowledge and understanding requirements implicit. This embodied the principle that knowledge and understanding were logically distinct from occupational competence (Jessup, 1991). They underpinned occupational competence – in the sense of being applied during the demonstration of competent performance – but they were not what was meant by competence. What was meant by occupational competence would be explicated by functional analysis, in terms of outcomes that a competent employee ought to be able to demonstrate to specified standards across a range of contexts.
Having clarified what competence means – in terms of what a competent employee is capable of doing – assessment ought to be a fairly straightforward matter of observing candidates, in real working environments, to determine whether they are capable or not. From this perspective, assessment ought:
- to target ‘external’ workplace achievements – which are achieved by a competent employee when performing their role – which means targeting specified activities
- not to target ‘internal’ cognitive achievements – which are acquired by a successful learner when learning how to perform their role – which means not targeting specified elements of knowledge or understanding
In theory, then, NVQ standards ought not to be defined in terms of knowledge and understanding, but in terms of competent performance. Likewise, in theory, observations of competent performance ought to be sufficient to infer competence, as long as they are sufficiently exhaustive (across contexts, over time, and so on). Indeed, according to Mansfield (1989), this was the only way to guarantee that an employee was genuinely competent.
Although the priority given to observing competent performance made sense in theory, it was unclear how strictly the principle could be respected in practice. A serious issue had already emerged following the move towards specifying range statements. Jessup acknowledged that practical constraints within real working environments often limited demonstrations of competence to single contexts, due to constraints on roles within particular organisations, or due to servicing only a limited range of clients, or suchlike. So, how might it be possible to determine whether a context-bound demonstration of competence was robust enough to transfer across contexts?
Well, the employee could be asked questions. For example, do they know how the features of successful performance will need to vary across contexts? Or, do they understand the underpinning theory of their role sufficiently well to be able to infer how their performance will need to vary across contexts? Competence can still be defined in terms of what a competent employee is capable of doing. But we can shortcut the assessment process by sampling competence in a small number of contexts and then use evidence of relevant knowledge and understanding to provide a warrant for generalising the inference of competence to the full range of contexts. It is important to acknowledge that this idea of supplementing observation with questioning was part of the original TA guidance:
We do not want to attribute competence until we can be confident that they will be able to perform to standard consistently, or across the required range of situations. So before attributing competence we normally need evidence of repeated demonstrations to standard, possibly in a range of different situations, and we may want to draw on more than one source of evidence by supplementing the demonstrations with questioning.
(TA, 1989b, page 5)
This approach was consistent with Mitchell’s recommendations for triangulating multiple assessment methods (Mitchell, 1989). Jessup argued that it was necessary to develop NVQs along these lines (Jessup, 1991). Mansfield & Mitchell (1991) developed a flow chart to help identify when performance evidence alone would be insufficient, as well as the kind of knowledge evidence that could bolster confidence in the attribution of competence. Wolf also agreed, but went a step further by suggesting that it was entirely legitimate to extend the definition of competence itself to include knowledge and understanding (Wolf, 1989).
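We have not reproduced Mansfield & Mitchell’s flow chart, but the logic of the preceding paragraphs might be sketched along the following lines (the threshold and the return values are our own illustration, not the published chart):

```python
# A rough sketch of the decision logic described above: infer competence
# from performance evidence alone only when demonstrations are repeated
# and cover the full specified range; otherwise supplement with
# questioning to warrant generalising across contexts.

def competence_inference(
    demonstrations: int,          # satisfactory demonstrations observed
    contexts_observed: set[str],
    contexts_required: set[str],
    min_demonstrations: int = 2,  # 'repeated demonstrations to standard'
) -> str:
    if demonstrations < min_demonstrations:
        return "insufficient: gather further performance evidence"
    if contexts_observed >= contexts_required:  # covers the full range
        return "infer competence from performance evidence alone"
    missing = ", ".join(sorted(contexts_required - contexts_observed))
    return f"supplement with questioning to cover: {missing}"
```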
The Employment Department published a second compendium in its ‘Competence & Assessment’ series, in 1992, which contained a chapter on assessing competence at higher levels. This recounted the experience of the Management Charter Initiative (MCI) in developing and promulgating standards for management – arguably, according to the authors, the most complex area in which to specify competence (Edmonds & Stuart, 1992). They began their analysis with a quotation from a recently published NCVQ guide to NVQs:
The opportunities for assessing performance will normally be limited in context or location … and will not in themselves provide sufficient evidence relating to the full range of situations and contexts … It is therefore necessary to supplement the assessment of performance with assessments of knowledge and understanding for most elements of competence. … Assessment in NVQs requires evidence of competent performance, supplemented where necessary, by supporting evidence of underpinning knowledge and understanding. The importance of this is likely to be greater at higher levels.
(Reprinted in Edmonds & Stuart, 1992, page 49)
Owing to the particular challenges of assessing its management standards, guidance from the MCI proposed that there would not be a strong focus on direct observation. Instead, there would be an emphasis upon evidence arising from products, from witness testimony and, in particular, from personal reports. These reports might comprise written statements, or responses to oral questioning, which would focus on:
- details of actions taken
- reflections on actions taken, and
- knowledge of what was done and why
The MCI guidance emphasised the importance of employees being able to answer:
- ‘why’ questions (to explore understanding of underpinning principles), and
- ‘what if’ questions (to explore knowledge of variations across contexts)
Although, from very early on, it was accepted that it was often useful to assess underpinning knowledge and understanding independently of competent performance, the NVQ model (and the functional analysis methodology) still presumed that standards ought to be specified purely in terms of competence.
Having said that, the status of the ‘NVQ model’ itself was far from clear, as critical details evolved over time, including its stance on knowledge and understanding, as well as the addition of range statements. Indeed, it seems fair to say that theoretical details of the NVQ model were as much in flux, over the first few years of rollout, as practical ones. It is significant that Jessup’s book included a chapter on ‘The Problem of Knowledge’ in a section entitled ‘Outstanding Issues’ (Jessup, 1991). It is also significant that Black & Wolf italicised the following extract from the first ‘Guidance Note’ penned by the Training Agency in 1988:
Each element of competence should describe something that a person who works in the particular occupational area should be able to do; an action, behaviour or outcome which the person should be able to demonstrate. Or it should describe a knowledge or understanding which is essential in that it underpins sustained performance or facilitates the extension of skill to new situations within the occupation.
(Black & Wolf, 1990, page 6 - editor’s italics shown here in bold)
Of the various contributors to the TA symposium, Bob Mansfield was the most ‘fundamentalist’ in his stance on not incorporating knowledge and understanding within NVQ standards (Black & Wolf, 1990). Yet, by the end of the year, it would appear that he had brought both the NCVQ and the TA on board.[footnote 11] Indeed, in an edited book in the same series as the Guidance Notes (Fennell, 1991), even Wolf respected this model as she described skills, knowledge and understanding as preconditions for competence, rather than as part of competence itself:
If standards are well and fully specified, they should assist the clarification of the knowledge and understanding implied by a unit or element of competence, both for learning and assessment purposes.
(Mitchell & Wolf, 1991, page 25)
The NVQ model
It is worth pausing to summarise details of the NVQ model, circa 1991 to 1992, following this early debate over the role of knowledge and understanding. First, NVQs were based upon National Occupational Standards, which provided a comprehensive, outcome-based specification of an occupational role:
- full occupational competence was broken down into component elements (the ‘elements of competence’ or ‘learning outcomes’ in more recent CASLO terminology)
- further details were provided for each component element, to describe what performing competently looked like (the ‘performance criteria’ or ‘assessment criteria’ in more recent CASLO terminology)
- the standards were intended to embody a broad definition of occupational competence, to describe what it meant to perform a role intelligently rather than mechanistically (hence the use of functional analysis rather than task analysis) [footnote 12]
- the standards were intended to be supplemented by a separate description of underpinning knowledge and understanding (to support the development and revision of NOS and NVQs, as well as to influence the development of effective learning content and processes) [footnote 13]
Second, NVQs were based on an authentic approach to assessment:
- competence was intended to be inferred on the basis of evidence of consistently successful performance (the model distinguished clearly between performance that was observed and competence that was inferred, meaning that an isolated example of successful performance should be deemed insufficient, and evidence of successful performance should be required across a range of contexts)
- assessors were expected to prioritise the most authentic assessment evidence available (ideally this would come from extended workplace assessment, although this was often supplemented by oral questioning, and was sometimes supplemented by other assessment formats including written testing)
- evidence needed to be provided for all elements of competence (which is the ‘mastery’ requirement in CASLO terminology)
It is worth summarising these details because the original NVQ model has often been mischaracterised by critics. For instance, it has been said that:
- the NVQ competence model is inherently narrow and mechanistic [footnote 14]
- the NVQ competence model denies the utility of constructs like knowledge and understanding, substituting any reference to knowledge and understanding with a reference to performance [footnote 15]
- the NVQ delivery model eschews the very idea of a syllabus, course, or programme of learning [footnote 16]
As we will soon see, the NVQ model was sometimes (perhaps often) implemented in ways that corresponded to these mischaracterisations. However, key details of the model were reasonably clearly articulated, even in early accounts by the Training Agency (1988a; 1988b; 1988c; 1989a; 1989b).
Flexibility
One of the selling points of the NVQ model was that it was designed to flex to the different circumstances that learners found themselves in.[footnote 17] Thus, NVQ certification was intended to be independent of:
- mode of learning (no expectation of having followed a particular course with a particular provider in a particular location – indeed, no expectation of having followed any formal course of learning at all – which was associated with the principle of accreditation of prior learning)
- order of learning (no expectation of having followed a particular learning trajectory, which facilitated flexible teaching across learners and cohorts)
- age (no minimum or maximum age requirements, beyond legal ones)
- minimum study time (no requirement to have served as an apprentice for a specified amount of time)
- maximum study time (no requirement to have reached competence within a specified amount of time, which was facilitated by the ability to accumulate unit credits, as well as by the ability to transfer unit credits across providers)
- session (no expectation that either assessment or certification should be paced according to fixed calendar dates)
In other words, there should be no expectation, explicit or implicit, that all learners ought to experience essentially the same regimented course of learning, prior to certification. Conversely, transparent certification requirements – laid out as elements of competence with performance criteria – should make it easier for learning experiences to be tailored to the particular needs of individual learners.
The potential for a more extended (ideally work-based) assessment process also enabled flexibility. Assessment tasks were not controlled centrally, meaning that the same outcome could potentially be assessed in different ways for different learners, depending on the circumstances of their learning and work. More generally, teaching and learning would not be constrained by practical requirements associated with external assessment, including timetabling.
Finally, the requirement to specify elements of competence and performance criteria as generally as possible – to apply with equal relevance across a broad range of occupational contexts – provided some flexibility for learning providers to adapt their provision to meet local or personal demands.[footnote 18] Thus, training for a national qualification could be reconciled with the desire to develop courses that were tailored to local and individual needs (Burke, 1989; Jessup, 1991).
Reconciling the idea of a national qualification with the divergent needs of local employers proved to be particularly challenging. Historically, England had prioritised diversity over coherence, to such an extent that City & Guilds was able to boast that there was no such thing as a typical qualification, because their qualifications were tailor-made to satisfy a defined need and category of industrial employee (Wheatley, 1959). Of course, ‘bespoke provision’ is simply the ‘qualification jungle’ by another name, and it was that jungle which motivated the development of a single national system in the first place. So, there seems to be a sense in which the flexibility that was built into the NVQ model was an important concession to the very idea of a national qualification – a generic qualification not a bespoke one – that needed to possess a common currency despite divergent delivery contexts.
Ironically, despite flexibility frequently being cited as a selling point of NVQs, they also came to be criticised for their inflexibility. This tended to revolve around the detailed specification of learning outcomes in combination with the requirement that learners must achieve all specified learning outcomes for all units. This was problematic when standards were defined either too narrowly, where learners failed to acquire competencies that they actually needed, or too broadly, where learners were required to acquire competencies that they did not actually need (Debling, 1989; Field, 1995; Unwin, et al, 2004).
Quality assurance
A central argument in favour of the NVQ approach – when compared to a more classical approach to qualification design – was that it “demystified” assessment, because both outcomes and standards were stated clearly and comprehensively (Jessup, 1991, page 59). Transparency, so the argument went, builds validity into these qualifications. Wolf put it like this, albeit adding an important caveat:
As with all competence-based systems, the assumption has always been that assessment will be unproblematic because it simply involves comparing behaviour with the transparent ‘benchmark’ of the performance criteria. Unfortunately, in practice this turns out not to be the case.
(Wolf, 1995, page 64)
Although the NCVQ may well have oversold the transparency of NVQ standards and the validity of NVQ assessment, it never actually claimed that NVQ assessment would be unproblematic. Jessup, in particular, was quite open about the nature and scale of the challenges that would need to be faced in rolling out the new model (Jessup, 1989; 1990; 1991).
Particularly given how much flexibility the NVQ model afforded, the importance of establishing and following rigorous delivery processes could not be overstated. Although a little late to the party, publication of ‘The Awarding Bodies Common Accord’ (NCVQ, 1993) helped to address this challenge, significantly developing the original TA guidance on quality assuring NVQs (TA, 1989c). The main features of this Accord were:
- common terminology to describe the roles of individuals and organisations in the assessment and quality assurance system (including Assessor, Internal Verifier, External Verifier)
- certification to national standards for assessors and verifiers (units D32 to D35)
- defined roles in quality assurance for both awarding organisations and the centres they approve to offer NVQs (including Approved Centre, Awarding Body)
- explicit criteria for approving centres to offer NVQs (covering management systems, physical resources, staff resources, assessment, quality assurance and control, equal opportunities)
- quality assurance and control systems to ensure rigour and monitor equal opportunities implementation (including sample checking)
Implementation and evolution
It should already be clear that rollout of the NVQ model was far from straightforward. In fact, the very idea of a coherent rollout is misleading, as the NVQ model remained in flux throughout its early years. The following sections take up the NVQ story from the early 1990s, focusing on a number of key developments of particular relevance to the more general CASLO story.
Uptake
Having only been established in 1986, the NCVQ managed to accredit the first NVQs quite quickly, such that certificates were being awarded by 1988.[footnote 19] Figure 7 represents NCVQ data collated by Gokulsing, et al (1996, Appendix IV, page 87), which illustrate the number of certificates awarded from 1989 to 1993. It is clear that NVQs were largely catering for those working at Level 1 and Level 2. Across this period, the proportions of certificates awarded at different levels were 29% (Level 1), 58% (Level 2), 9% (Level 3), 5% (Level 4) and 0.2% (Level 5).[footnote 20]
Figures from Field (1995) suggest that there were 1,346 NVQs in place by the end of 1993. Of these, 42% were at Level 2, although there appeared to be more NVQs at Level 3 (28%) than at Level 1 (16%). Set against the certification figures above, this suggests that certifications per qualification were markedly higher at Level 1 than at Level 3 (29% of certificates from 16% of qualifications, versus 9% of certificates from 28%). Field noted that these certifications were largely in the areas of goods and services, construction, and health care – rather than in engineering and manufacturing – and that the demand for Level 4 NVQs related almost exclusively to accountancy (Association of Accounting Technicians awards in the main).
Field noted a powerful bias toward the mass purchase of low-skill entry level qualifications. The largest single market was among colleges offering full-time training to young people, where funding from the Further Education Funding Council favoured NVQ uptake. The second major market was among providers of training for unemployed people, where funding also favoured NVQ uptake. Field characterised this as largely state-led rather than employer-led uptake.
Figure 7. Number of NVQ certificates awarded 1989 to 1993
The number of NVQ certificates awarded rose steadily from 1989 to 1993.
Shackleton & Walsh (1995) identified a variety of means that were used to suppress competing certificates, for example:
- requiring awarding organisations to phase out non-NVQ awards in related fields
- restricting central funding for training schemes and tax relief for individuals investing in their own training to the pursuit of NVQs
- requiring individuals who performed certain roles, such as health and safety, to have acquired relevant NVQs
Theory versus practice
To say that the NVQ model remained in flux throughout its early years oversimplifies the situation. It is not simply that the model evolved through a succession of design templates over time. Rather, at any particular point in time, NVQs with quite different design characteristics coexisted, for a variety of reasons.
This was partly attributable to an early compromise known as ‘conditional accreditation’ which meant that certain qualifications were awarded NVQ status despite only having been partially reformed, largely to ensure an income stream for the NCVQ (Williams, 1999). By October 1990, out of about 240 accredited NVQs, only a handful were fully accredited (Raggatt, 1991). Conditional accreditation helped the NCVQ to establish the system more rapidly, but it was a risky strategy and these pseudo NVQs accounted for much of the early criticism (Gokulsing, et al, 1996).[footnote 21]
Beyond conditional accreditation, some organisations were granted permission to offer NVQs that departed substantially from the intended model in critical ways, including heavy reliance upon formal written exams. This continued over time, leading Young to conclude that bodies that occupied relatively powerful positions, such as the Association of Accounting Technicians, were able to shape the framework to suit their own needs rather than having to adapt to it (Young, 2011).
Divergence also occurred due to a lack of clarity and differences of opinion over the optimal approach for designing NVQs, which led to different organisations developing them in different ways. The Lead Bodies were provided with numerous ‘Guidance Notes’ on developing assessable standards for national certification, but they were also given considerable freedom to develop standards as they saw best, in collaboration with whichever individuals or organisations they chose (Debling, 1991). Furthermore, there was no statutory basis for requiring the Lead Bodies to comply with published guidelines (Ainley, 1990). Inevitably, some organisations developed standards far better than others, and some continued to plough existing furrows regardless of NCVQ or TA expectations (Ainley, 1990).
The recommended methodology also proved to be far from straightforward to apply. Standards that had supposedly been developed using (broad) functional analysis sometimes ended up looking like they had actually been developed using (narrow) task analysis. Arguably, the technical notes that Lead Bodies were required to follow when developing standards could be read in a way that appeared to justify the development of narrow, task-based competences (Raggatt & Williams, 1999).
Williams (1999) has argued that rollout moved increasingly towards a task-based orientation as NVQs catered increasingly for low-level jobs that reflected the impoverished content of many Youth Training Scheme programmes. Although NVQ standards were supposed to embody a broad definition of occupational competence – describing what it meant to perform a role intelligently rather than mechanistically – many NVQs failed to live up to that promise in practice.
Ultimately, the degree of mismatch between NVQ theory and NVQ practice makes it hard to judge the viability of the (intended) NVQ model on the basis of evidence from (actual) NVQ rollout.
Rollout
An early study conducted by Claire Callender, from the Institute of Manpower Studies at the University of Sussex, illustrated vividly many of the challenges encountered during the early years of implementation (Callender, 1992).
Commissioned by the Employment Department, the evaluation focused specifically upon the construction industry. This focus illustrated how successful implementation would inevitably have depended, at least to some extent, on employment and training structures within a particular sector. Rollout in the construction industry proved to be particularly challenging, given features such as: the fragmentation of the sector (related to the increasing prevalence of subcontracting and self-employment); the large number of narrowly defined professional organisations with limited mutual understanding; volatility of demand and a highly mobile workforce; and a conservative approach to training with a strong focus on craft skills and time serving. Additional industry-specific challenges included increased capital and ongoing costs associated with the new NVQ model, such as having to adapt buildings to incorporate more authentic training activities, and higher costs associated with increased use of consumable training materials. On top of this, many employers, employees, and trainees remained unaware of NVQs.
More substantively, Callender identified issues that threatened implementation of the NVQ model itself. A critical concern was the lack of co-ordination and co-operation between Lead Bodies in the construction sector. This led to unnecessary duplication, but also to inappropriately narrow standards.
Callender was particularly concerned that the emphasis given to employer ownership had been at the expense of educational considerations. The Level 2 NVQs were too narrow, simplistic, and mechanistic. There was a lack of integration between NVQs at different levels, which ought to have been developed as a progressive sequence. On this specific point, she noted that the views of the construction industry and the Construction Industry Training Board – which were informed by practical and industrial relations considerations rather than pedagogical ones – were contrary to those of the Training Enterprise and Education Directorate of the Employment Department, the NCVQ, and City & Guilds (Callender, 1992).
Callender attributed problems of this sort partly to conflicting interests within the industry – with different interest groups attempting to ensure that their particular skill needs were met by the standards – but also to the fact that standards had been specified before the process of functional analysis for the construction industry had even been completed. Problems of this sort increasingly demotivated trainers, who had been very wary of change from the outset.
Other factors threatened the very idea of workplace assessment, which was at the heart of the new NVQ model. Construction trainees would not necessarily be exposed to all of the required elements of competence and range. It was often not possible to pause progress for the sake of assessment. NVQ standards were sometimes higher than those expected by certain employers. Costs were high. Paperwork was laborious. Supervisors were resistant to the idea of having to retrain as assessors. Supervisor bias and inconsistency were real threats. And so on. Ultimately, the very idea of workplace assessment was resisted, leading to a general consensus that assessment needed to be undertaken by training providers.
Attack
It is fair to say that there was a lot of criticism of both the NCVQ and of NVQs during the early years (Unwin, et al, 2004). One of the most high-profile critiques was mounted by Alan Smithers who had been commissioned by Channel 4 with the Gatsby Foundation to investigate the situation. His 1993 report – ‘All our futures: Britain’s education revolution’ – was promoted via a Channel 4 Dispatches television programme.
Focusing on both NVQs and the more recently introduced GNVQs, the report cited “real fears that there are deep flaws in the detail of what is being attempted” (page 8) and concerns over a “disaster of epic proportions” (page 3). Smithers traced the root of the problem to NCVQ insistence that students should be assessed “solely on what they can do rather than including also what they know and understand”. He characterised this as “behavioural psychology ruthlessly applied” and claimed that NVQs disdained knowledge, further insisting that “the notion of a syllabus is seen as antipathetic to the spirit of NCVQ” (all 3 quotations from page 9). The report contained a wide variety of criticisms of the NVQ approach and rollout, including the:
- lack of external testing
- flexibility promoted by the system
- lack of educational experience among Lead Body consultants
- narrowness of the standards
- incomprehensibility of the standards
- financial pressure on colleges to pass students
The report’s recommendations included a number that were of particular significance to the CASLO approach:
That the content of NVQs and GNVQs should consist of an appropriate mix of skills, knowledge and understanding aimed at developing both vocational capability and educational achievement;
That in setting out the new content of NVQs and GNVQs, the schematic framework of “performance criteria” and “range statements” be superseded, and that course requirements be more simply and directly stated;
That the assessment of both NVQs and GNVQs should include written examinations as well as assessment of practical skills, independently set with marks externally verified;
(Smithers, 1993, page 43)
Gokulsing, et al (1996) described the Smithers report as an outlet for increasing demand for public debate over growing concerns with NVQs and GNVQs, with which the NCVQ appeared not to want to engage in public. Although its sensationalist reporting was prone to inaccuracy, bias and caricature – leading the NCVQ and even some of the people and organisations cited by Smithers to publicly denounce the report (Hodkinson & Issitt, 1995; Burke, 1995) – the attention that the report received led to further public scrutiny and reporting.
Inspection
The Further Education Funding Council was the inspectorate of its day. Its report on ‘National Vocational Qualifications in the Further Education Sector in England’ was, in effect, an extended response to the Smithers report, based on inspections during the 1993 to 1994 academic year.
The report claimed that strengths of the new learning programmes clearly outweighed any weakness for the majority of sessions observed (56%), a figure that was slightly better than for GCSE and slightly worse than for A level. It observed that the introduction of NVQs, with their emphasis on flexibility and responsiveness to individual students’ needs, had led to a strong trend towards student-centred learning approaches.
Contrary to the claim from Smithers that providers were caving in to funding pressures, inspectors found “no evidence” of students being certificated as “having competences they did not possess”. However, concern was expressed over “trainees’ understanding of the principles underlying job competences” and “the poor levels of literacy and/or numeracy of some trainees” (all quotations from page 5). While acknowledging that more could be done to explicate underpinning knowledge and understanding requirements, the report directly countered misleading claims in the Smithers report:
There is evidence of widespread misunderstanding of this view, and NVQs have been criticised for giving insufficient attention to knowledge acquisition. […]
The proportion of time devoted to underpinning knowledge and understanding in the NVQ programmes inspected ranged between 15 per cent and 50 per cent, depending on the level and type of programme. Inspectors were generally satisfied with the level and quality of underpinning knowledge in terms of meeting NVQ requirements, although there were a few instances where it was deemed inadequate. There were concerns, however, about access to underpinning knowledge for some of the small proportion of candidates based in the workplace.
(FEFC, 1994a, page 16)
The report suggested that the NCVQ should insist upon greater clarification of the knowledge, understanding and core skills elements of NVQs prior to accreditation.
Beaumont report
In May 1994, the white paper ‘Competitiveness: Helping business to win’ announced a review of 100 of the most used NVQs and their Scottish counterparts (SVQs). Nearly a year later, Gordon Beaumont, a former Chair of the Confederation of British Industry Training Committee, was invited to chair the Evaluation Advisory Group that was to conduct this review, with support from the NCVQ and its Scottish counterpart (SCOTVEC). The review incorporated a wide range of research methods, including literature review, document analysis, stakeholder surveys, interviews, consultations and consultancy projects. Beaumont reported in January 1996 (Beaumont, 1996).
Although he recognised widespread criticism of NVQ implementation, Beaumont emphasised that the review had found widespread support for the NVQ concept, including the idea of competence-based standards, noting that he had seen such qualifications working effectively. He characterised those who questioned the concept as a “minority” (page 12) and linked their criticisms to early versions of the NVQ model, which had failed to pay due attention to knowledge and understanding.
That said, Beaumont did identify numerous significant criticisms of the NVQ model, which included (of most relevance to the CASLO approach):
- standards being “marred by complex, jargon ridden language” (page 13) that leaves candidates unsure of the competencies they are expected to acquire and that leaves assessors and verifiers unsure of the standards they are required to apply
- insufficient attention to core skills, which are key to enabling transferability
- prohibition (until recently) of optional units, which are key to ensuring relevance and can help to prevent the proliferation of overlapping qualifications
- insufficient clarity over the centrality of knowledge and understanding requirements, typically where these are left implicit rather than stated explicitly
- the need to assess knowledge and understanding separately at higher levels
- excessive bureaucracy that leads to frustration, time wasting, unnecessary costs, and reduced uptake
- concern over lenient application of standards related to funding pressures, where funding requires completion within fixed time limits and assessors have a vested interest in timely completion
Although specifically asked to examine how external assessment might be included in NVQs, Beaumont identified mixed views and reached no clear conclusion other than that it is the “combination of methods which create rigour” (page 18) and that the NCVQ should therefore lay down the assessment methods appropriate to each situation with guidance on how to select them.
The most controversial recommendation from the Beaumont report was that ‘Part One’ NVQs should be developed. This was a response to ministers asking how knowledge and understanding might be separately certificated and whether this would be desirable. Beaumont noted that many employers already made use of traditional qualifications for this purpose. He suggested that greater use could be made of this approach if existing qualifications were made compatible with NVQs. Indeed, he formally recommended that traditional vocational and professional qualifications be made outcome-based and aligned to NVQs. Preparatory qualifications of this sort would also be useful to the many candidates who were not working. The controversial nature of this recommendation was captured by an article in the ‘Times Educational Supplement’ following publication of the report:
John Hillier, chief executive of the National Council for Vocational Qualifications, said: “We will never change the concept of the NVQ through a preparatory qualification. The NVQ remains the goal.”
In sectors such as construction, he suggested, trainees might be sent to college by employers to pick up theory, then put their skills into practice in the workplace. The system would formalise existing moves by some employers to link with colleges for some elements of training.
The proposal for the so-called NVQ part one could reconcile the difficulty of applying one qualification for both employees and jobless school-leavers and adults.
Those without work might also take the preparatory qualifications, but then adequate Government funding would be needed to ensure they moved on to a full NVQ, said Mr Hillier. “To maroon a young person with only a preparatory qualification is not satisfactory.”
The decision on the part one qualifications would be left up to employers during consultation on the report, he said.
(Ward, 1996)
Revised Criteria
The revised NVQ ‘Criteria and Guidance’ (NCVQ, 1995) had actually been published a full year in advance of the Beaumont report. So, many of the problems identified in the report had already been addressed. Introducing the revised Criteria and Guidance, John Hillier stated that:
As a result, the document captures advances in thinking and methodology, without, however, changing the fundamentals on which NVQs are based. In particular, we have been able to reflect more fully the character of NVQs at higher levels and the role of knowledge and understanding within NVQs.
(NCVQ, 1995, page 2)
His foreword also recognised the new approach to designing NVQs around a mandatory core of units, with optional units tailored to particular employment needs. This flexible structure was to be matched by flexible assessment and delivery arrangements, which were adaptable to different organisational circumstances.
While attempting to remain true to the principles of functional analysis, and to the components of the Job Competence Model proposed by Mathews and Mansfield, the Criteria and Guidance document stated that it was “also necessary” to consider and “to reflect in the standards” the knowledge, understanding, practical and thinking skills, which are required for effective performance (quotations taken from page 17). Therefore, in addition to specifying outcomes (within elements of competence) that reflected the practical consequences of applying knowledge and understanding, the NVQ statement of competence now had to be accompanied by a formal knowledge specification. The document did not promote a particular approach to specifying underpinning knowledge and understanding, explaining that examples of good practice would be developed and disseminated subsequently.
The Criteria and Guidance document was also explicit over the breadth that NCVQ expected Lead Bodies to build into their specifications of standards. Consistent with a broad, role-based approach (rather than a narrow, task-based approach) it noted that:
People need to be able to communicate effectively with colleagues, organise and prioritise their work activities, respond to contingencies, make decisions, solve problems, apply ethical judgements, work safely and so on. It is the ability to integrate these demands when performing in the work environment that defines the competent individual. Lead bodies, therefore, are required to take a wide view of national standards which incorporates these broader aspects of competence.
(NCVQ, 1995, page 16)
Breadth was also promoted by the development of separately specified and assessed core skills units in communication, application of number, information technology, working with others, improving own learning and performance, and problem solving.
Finally, of particular relevance to the CASLO approach, the Criteria and Guidance document now specified evidence requirements in addition to the statement of competence. These requirements would indicate, on an element-by-element basis, the evidence required for a satisfactory judgement of competence, which might include types of evidence, methods of evidence gathering, and so on. The document also addressed the potential for confusing performance judgements with competence judgements by clarifying that:
It is not expected that any single item of evidence will be sufficient to establish competence in even the smallest assessable component of an NVQ, the element. Instead, it is expected that combinations of evidence should be used to attest to competence.
Combinations of evidence should be used flexibly to suit individual circumstances. Performance evidence should be combined with evidence of knowledge to cover the whole of the element specification, including range.
(NCVQ, 1995, page 29)
Rigour and responsiveness
The Qualifications and Curriculum Authority (QCA) came into being on 1 October 1997, taking on responsibilities previously held by the School Curriculum and Assessment Authority and the NCVQ. With a statutory remit to promote quality and coherence in education and training, it took the lead in designing and developing a new national qualifications framework (which we will consider later) and in ensuring clear and high standards across the system.
Recognising the need to enhance both the rigour of NVQs and their responsiveness to qualification users, QCA conducted a series of forums throughout 1998 and 1999 (QCA, 1999a). This provided an opportunity to discuss the new criteria, a new code of practice, and the development of risk management strategies, all intended to enhance rigour.[footnote 22] The QCA forums also provided an opportunity to consider how NVQs could be made more flexible and how awarding organisations could reduce bureaucracy.
QCA published its ‘Arrangements for the Statutory Regulation of External Qualifications in England, Wales and Northern Ireland’ in 2000 (QCA, 2000), which set out how both NVQs and NOS would be accredited to the new National Qualifications Framework, contingent on having met necessary accreditation criteria. In addition to criteria that were common across all qualifications, NVQs had to meet type-specific criteria, which were “designed to allow flexibility in the format of qualifications, while strengthening the processes to be followed, including greater emphasis on external quality control of assessment” (QCA, 2000, page 21). In particular, an assessment strategy had to be provided for each NVQ to explain:
how external quality control of assessment will be achieved, normally through the use of independent assessment. Where independent assessment is not recommended by the standards-setting body, other equally rigorous measures must be specified;
which aspects of the standards must always be assessed through performance in the workplace;
the extent to which simulated working conditions may be used to assess competence and any characteristics that simulation should have, including definitions of what would constitute a ‘realistic working environment’ for the qualification concerned;
the occupational expertise requirements for assessors and verifiers;
the amount and type of evidence to be collected.
(QCA, 2000, page 22)
Concern over the independence of assessors had been identified by the Beaumont report, particularly in relation to funding pressures, and guidance on independent assessment had already been prepared by the NCVQ (1997a). A revised version of this guidance explained that a significant part of the assessment ought to be carried out in a manner that was demonstrably independent of anyone with a vested interest in the decision (QCA, 1999b). This could be achieved in various ways, including via:
- externally set and marked tests or assignments
- visits from an external assessor
- externally set assignments, internally marked with external moderation
In addition to its Statutory Regulations, the QCA also published a ‘Code of Practice’ that was specific to NVQs (QCA, 2001). This built upon, and superseded, the Awarding Bodies Common Accord (NCVQ, 1993), with detailed requirements outlined in sections that included:
- Assessment and awarding
- Internal assessment
- Assuring quality in internal assessment
- External quality control of assessment including independent assessment
- Internal verification of internal assessment
- Assuring quality in internal verification
- Support and guidance
- Record keeping
- Awarding body quality assurance and control arrangements
- External verification of internal assessment
- External verification
- Sampling
- External verifier reports
These regulatory documents were supplemented by guidance including the ‘External Verification of NVQs’ (NCVQ, 1997b), the ‘Internal Verification of NVQs’ (QCA, 1998), and the ‘Joint Awarding Body Guidance on Internal Verification of NVQs’ (Joint Awarding Body Steering Group, 2001).
Raggatt & Williams noted that increasingly flexible assessment approaches – supported by the claim, from the Beaumont report, that rigour is underpinned by a combination of methods – were becoming a matter of concern to the NCVQ by 1997 (Raggatt & Williams, 1999). Flexibility of this sort – which was perceived as laxity by the NCVQ – risked throwing the baby out with the bathwater. The promotion of independent assessment, with its emphasis on tests and assignments, reflected a further departure from the original NVQ ethos. Raggatt & Williams noted that this “relaxation of the NVQ criteria continued under the auspices” of the QCA (Raggatt & Williams, 1999, page 163).
QCA eliminated the detailed prescriptions that NCVQ had placed on NOS. The new Statutory Regulations made no reference to ‘functional analysis’ and contained only minimal prescriptions, including the requirements that NOS should:
show the outcomes of competent performance, including the essential knowledge and understanding required;
be written in plain language and in a format which is easily understood by those who will use the standards;
(QCA, 2000, page 24 and page 25, respectively)
The 2004 revision of the Statutory Regulations specified requirements at a similarly high level, for example:
National Occupational Standards must: […]
- describe the outcomes of competent performance; […]
- include the essential knowledge and understanding required, the relevant technical, planning and problem-solving skills, the ability to work with others, the ability to apply knowledge and understanding, and other skills which will enhance flexibility in employment and opportunities for progression;
(QCA, 2004a, page 43)
Definitions in the final version of the ‘NVQ Code of Practice’ (QCA, 2006) also illustrated this more relaxed conception:
Competence: The ability to carry out activities to the standards required.
Content: The coverage of a qualification, programme, module, unit or other component, expressed as the knowledge, understanding, skills or area of competence that is covered.
(QCA, 2006, page 37)
An internal QCA report noted that this new flexibility to adapt the form and structure of NOS in response to sector-specific needs – which represented a rejection of strictures previously associated with functional analysis – had resulted in “a plethora of models” and new problems (QCA, undated, apparently circa 2001, page 11).
Apprenticeship reform
Ambiguity over the assessment of underpinning knowledge and understanding continued throughout the 2000s across numerous apprenticeship reforms. Apprenticeship reform had begun in 1994, as the publicly funded Modern Apprenticeship (MA) scheme was rolled out in response to continuing concerns over uptake, completion rates, and failure to secure employment (City & Guilds, 2014). Launched nationwide in 1995, the scheme was intended to revitalise the idea of apprenticeship training (Maguire, 1998) and to increase the number of young people achieving Level 3 NVQs, plus key skills, within 2 or 3 years. By the time of the Dearing review, MAs covered two-thirds of the available NVQs at Level 3 (Dearing, 1996).[footnote 23]
The importance of requiring more than just an NVQ (for example, key skills too) was reflected in the idea of an apprenticeship framework for achievement. Dearing made a number of recommendations concerning the new scheme, including this one related to the framework concept:
Employers should ensure that apprenticeships provide not only the necessary skills, but sufficient underpinning knowledge and understanding to enable Modern Apprentices, having obtained the NVQ level 3, to go on if they wish to part-time, full-time, or sandwich courses leading to diplomas and degrees.
(Dearing, 1996, page 40)
A few years later, this issue was developed in reports from the National Skills Task Force (NSTF), which had been appointed by Secretary of State for Education and Employment, David Blunkett, in 1997. Its final report presented a vision, goals, and main components for a National Skills Agenda. This included the proposal that apprenticeship “programmes at Levels 2, 3 and 4 should be available to all who want them and include key skills, assessed knowledge and understanding, and options for general education, so as to maximise transferability of skills and progression opportunities” (NSTF, 2000, page 7). This reinforced recommendations from interim reports for separate assessment of underpinning knowledge and understanding via related vocational qualifications (including existing BTECs). These qualifications would allow institutional providers to deliver vocational courses that were “more directly complementary to apprenticeship training” (NSTF, 2000, page 38). Government responded by proposing that Technical Certificates (TCs) would be developed to assess underpinning knowledge and understanding (DfEE, 2000).
Blunkett subsequently appointed a Modern Apprenticeship Advisory Committee to develop a 3-year action plan. Chaired by Sir John Cassels, the committee reported in September 2001 (Cassels, 2001). A lack of clarity concerning the “fundamental content” that every apprenticeship should contain led Cassels to propose a national framework for apprenticeship, which would specify required content and expected duration of apprenticeship at Foundation level (Level 2, age 16 to 19 entry) and Advanced level (Level 3, age 16 to 24 entry). The committee proposed minimum standards for key skills qualifications at both levels and offered reflections on the new TCs, including a recommendation not to reinvent the wheel where minor adaptations to well-established qualifications could fulfil the brief. It even proposed consulting higher education institutions to determine what content could be imported into TCs to facilitate progression from the Advanced apprenticeship. Cassels recommended that a diploma should be awarded to recognise an apprentice’s achievement of:
- an NVQ
- key skills qualifications
- a Technical Certificate, and
- any other awards gained during the apprenticeship
In early 2004, the QCA reported on an evaluation of TCs, conducted via a survey of colleges and training providers (QCA, 2004b). Since centres had first started teaching vocationally related qualifications designated as TCs in September 2002, approximately 200 qualifications had received this designation. The majority of respondents indicated that TCs were useful for assessing knowledge and understanding related to NVQs. Most TCs were also available for teaching to candidates who were not following the apprenticeship route, and around half of respondents adopted this approach. Some respondents were concerned about the use of exams in qualifications, particularly for students who had chosen the vocational route specifically to avoid them. QCA recommended that awarding organisations should review the use of external assessment in TCs, to ensure that the demands it placed on learners were relevant and suitable.
Later that year, the Sector Skills Development Agency (SSDA) published research that had been conducted with employer-led Sector Skills Councils (SSCs) to investigate whether apprenticeship delivery was meeting employers’ needs. The report indicated an “overriding perception” that frameworks were “not sufficiently flexible or employer-centred” and that they straitjacketed diversification (Pye, Pye, & Wisby, 2004, page 5). In effect, this challenged the very idea of a national apprenticeship framework. The inclusion of NVQs proved least controversial, although some whole sectors eschewed them entirely where other qualifications were better regarded. Employers in England particularly disliked the mandatory external testing of key skills. In a survey of 37 providers, 97% wanted the external testing ended, and all of this group felt that key skills could be assessed more effectively via a portfolio approach. Many apprentices were simply failing to show up for the tests. More than half of the SSCs disliked TCs for a variety of reasons, including the perceived irrelevance of content, duplication of learning and assessment with NVQs, problems of releasing apprentices for off-the-job training, and the inflexibility of Guided Learning Hours across certain subsectors. Where apprentices saw the NVQ as the gold standard, they often dropped out of the scheme without bothering to complete additional framework requirements.
Recommendations from the report implied a massive deregulation of the approach:
- allowing NVQs to be replaced with other vocational qualifications (or units)
- flexibility on the level of NVQ or vocational qualification required
- choice of whether or not to include a TC, or to replace it with a different programme of learning
- no longer externally testing key skills (deemed the most important request)
- continuing to regard apprenticeship as a scheme to be completed rather than a qualification to be achieved (contra the Cassels recommendation for a diploma)
The Leitch review of skills recognised widespread employer concern that the apprenticeship system was “complex and bureaucratic and often does not meet their needs” and that this contributed to low achievement rates (Leitch, 2006, page 98). Yet, Leitch was very positive about the role of apprenticeships going forward and recommended “dramatically” increasing the number of apprentices in the UK to 500,000 by 2020 (Leitch, 2006, page 21), as well as strengthening the role of employers. Government subsequently commissioned a review of all aspects of the apprenticeship system in England, which confirmed that apprenticeships would continue to play a central role (DIUS & DCSF, 2008). Plans to strengthen the apprenticeship framework included:
- improving the ‘blueprint’ to incorporate expectations of mentoring, progression, entry requirements and time off-workstation to train
- issuing a national completion certificate (rather than a diploma) at the end of the programme
- robust quality assurance against the revised blueprint
- integrating apprenticeship component qualifications within the new Qualifications and Credit Framework (QCF) to improve transferability and transparency
- greater employer ownership of apprenticeships
It was assumed that the QCF would be key to improving employer ownership:
As set out in the Leitch implementation plan, in future all vocational qualifications will be based on updated national occupational standards and will fall to be approved by Sector Skills Councils before being entered onto the QCF. This will provide a readymade bank of qualifications and units that employers, through their Sector Skills Councils, believe are needed in the workplace. In future, any organisation wishing to offer an Apprenticeship simply needs to submit to the relevant Sector Skills Council a short description of its plans, the qualifications and units it wishes to utilise, and how they meet the requirements of the strengthened Apprenticeships blueprint. In the case of an employer, the qualifications or units may include its own, accredited on to the QCF, so as to tailor the Apprenticeship to its own way of training.
(DIUS & DCSF, 2008, pages 36 to 37)
The integration of apprenticeship programme qualifications within the QCF is not incidental to the present report, as it was to become a statutory requirement that all QCF units should adopt the CASLO approach.[footnote 24] We will discuss the impact of QCF regulations later in this report.
The basic structure of the apprenticeship framework remained intact until the end of the Labour administration.[footnote 25] This included the following separately certificated components (see DIUS & DCSF, 2008; Skills Commission, 2009):
- a competency-based element, indicating the ability to carry out a certain occupation (typically certificated by an NVQ)
- a knowledge-based element, indicating theoretical knowledge underpinning a job in a certain occupation and industry (typically certificated by a TC)
- functional skills in numeracy and literacy (and other personal skills in some frameworks) – which replaced key skills and core skills
- a module on employment rights and responsibilities (often integrated within the TC)
Later in this report, we will consider how these requirements changed following the Richard review of apprenticeships.
Quality
Responsibility for developing National Occupational Standards originally fell to the Industry Training Organisations. Subsequently, ITOs, Lead Bodies, and Occupational Standards Councils were merged to form a smaller network of National Training Organisations. NTOs operated from 1997 until 2002, when they were replaced by an even smaller network of Sector Skills Councils. The Sector Skills Development Agency was established simultaneously, to fund, support, and monitor the performance of the SSCs (DfES, 2001). In April 2008, in the wake of the Leitch Review, the SSDA was replaced by the UK Commission for Employment and Skills (UKCES), which took over responsibility for overseeing the production of NOS by SSCs (and other standards setting bodies).
In the autumn of 2008, the UKCES commenced a consultation into whether NOS were fit for purpose. It concluded that the system needed to be improved, noting that the majority of employers still did not use the standards (UKCES, 2011a). The UKCES proposed a strategy for change that involved improving NOS before promoting them more widely. A new set of quality criteria was central to this strategy, to ensure that:
high quality NOS, informed by a representative sample of employers, written in clear language and complying with common definitions are available for all significant functions in the workplace.
(UKCES, 2011a, page 11)
The resulting ‘Quality Criteria’ document (UKCES, 2011b) began by stating that NOS may only be developed by recognised bodies, that all personnel working on NOS in these bodies would need to be competent in their functions, and that any recognised body would be required to meet all of the specified quality criteria.
Significantly, the criteria required that NOS must be derived from functional analysis, and that all NOS should contain certain mandatory components: unique reference number, NOS title, NOS overview, performance criteria, specification of knowledge and understanding, and technical data. Certain optional components were also permitted: scope or range statements, values, behaviours, skills, and so on.
These requirements were elaborated in considerable detail in a ‘Guide to Developing National Occupational Standards’ (Carroll & Boutall, 2011), which was published by the UKCES alongside the criteria and strategy documents. The guide advocated a broad conception of competence, adapted from the Mathews and Mansfield Job Competence Model, and drew a clear distinction between occupational competence, per se, and the knowledge and skills that underpinned it. The guide focused upon the first 4 stages of developing and reviewing NOS: initial research, functional analysis, identification of existing NOS, and development of new NOS.
Although the guide was intended to promote quality, it is worth considering an example that the authors used to illustrate the articulation of performance criteria:
2R2/04 Deal with the arrival of customers
Performance Criteria
You must be able to:
1. Assist the customer to feel welcome in the hotel
2. Identify the customer’s requirements
3. Ensure customer details are correct on the booking system
4. Offer alternatives for any services that are not available
5. Make sure the registration document is completed as required
6. Give accurate information to the customer about their room and its location
7. Promote the services and facilities of your organisation
(Carroll & Boutall, 2011, page 61)
In this example, the intended outcome is to ‘deal with the arrival of customers’ so the title describes the element of competence in question. Two points are worth noting concerning the articulation of performance criteria. First, each begins with a verb, such as ‘assist’ or ‘ensure’ which gives the impression of listing stages in a task. Second, the criteria do not always contain clear evaluative statements, without which the occupational standard remains unspecified (for instance, criterion 6 provides some clarity, while criterion 7 provides less). Mansfield & Mitchell (1996) identified both of these issues as common mistakes that are made when drafting performance criteria. Indeed, they identified both as typical of approaches that tend towards task analysis rather than functional analysis.
Norman Gealy (personal communication) has argued that the adoption of this approach in the UKCES guide, an approach previously advocated by the FEU, illustrates a turning point in the articulation of criteria – one that generalised beyond NVQs to QCF qualifications – whereby criteria came increasingly to be written as mini learning outcomes, meaning that they no longer represented occupational standards. This, he believes, had a backwash impact on teaching and learning, such that mastering the domain became associated with having covered the necessary ground (a teaching expectation) rather than with having achieved the necessary standard (a learning expectation), which was precisely the problem that NVQs were originally designed to solve.
This is perhaps too harsh a criticism of the guide, per se.[footnote 26] The authors explicitly illustrated how their performance criteria looked quite different from a straightforward list of tasks. And they certainly did not ignore the evaluative dimension, as they wrote a full section on using evaluative words in performance criteria (although they did focus more on the risks of introducing ambiguity than on the need to achieve clarity). The authors were grappling with a widely recognised challenge (post-Beaumont): if employers and other users could not understand the NOS, then they were unlikely to make much use of them, and this affected the way that they wrote their performance criteria (Carroll & Boutall, 2011).
Finally, it is worth noting how knowledge and understanding were dealt with under the new quality criteria. Carroll & Boutall argued that knowledge and understanding requirements needed to be located within the NOS to indicate the breadth of competence expected by employers.[footnote 27] They provided the following illustration of requirements associated with the same element of competence described above (only the first 3 items have been reproduced):
2R2/04 Deal with the arrival of customers
Knowledge and Understanding
You must know and understand:
- Expectations which customers may have when visiting the hotel, including standards of service
- Why customers should be made to feel welcome in the hotel and the effect this has on their attitude to the business and the likelihood of repeat business and recommendations
- Different techniques to help the customer feel welcome in the hotel and how to do this in different situations, for example when there are delays at reception or when there is a failure of equipment or services
(Carroll & Boutall, 2011, page 69)
These requirements were to be derived directly from the element of competence in question, and were only to include what was essential. They would typically list critical facts (the what), principles (the why), or methods (the how). Carroll & Boutall explained that these knowledge and understanding requirements had to be formally assessed, and that they tended to be assessed via questions, reflective accounts, or professional discussions.
According to this analysis, it appears that knowledge and understanding were not included in NOS as distinct outcomes in their own right with their own criteria. Instead, for each element of competence, they sat alongside the performance criteria, playing a different kind of supporting role. Consequently, these statements provided little insight into the level of knowledge and understanding required.[footnote 28] Presumably, where these knowledge and understanding requirements were incorporated within qualifications, such as Technical Certificates, the awarding organisations would have had to transform them into discrete learning outcomes with associated assessment criteria.
Stocktake
The NVQ story is central to our account of the origins and evolution of the CASLO approach in England, which is why we have recounted it in considerable detail. It is interesting and important for a number of reasons:
- the NVQ was the first CASLO qualification of national prominence
- it survived a long time, outliving its CASLO cousin, the GNVQ, by decades, which suggests that it must have got something right (or, at least, not entirely wrong)
- it continued to embrace the CASLO approach until its eventual (official) demise
- yet, it also remained controversial until its eventual (official) demise [footnote 29]
In presenting this story, we have attempted to illustrate:
- how the CASLO approach was fundamental to the NVQ model
- how the NVQ model reflected a particular take on the CASLO approach, specifying additional features beyond the 3 core characteristics, some of which were critical to its fate (including its unique take on outcome specification)
- ongoing controversy that targeted both the model and its implementation
- continuing evolution of the NVQ model, while retaining the CASLO approach
This level of detail helps us to consider (if not definitively judge) the extent to which problems that beset NVQs might have been due to adopting the CASLO approach, per se, or to the particular version(s) of the approach adopted, to ancillary features of the NVQ model, or simply to poor implementation.
Principled design
Bearing in mind the amount of criticism levelled at NVQs over the past few decades, it is worth remembering that those responsible for designing the NVQ framework believed that they were providing a principled solution to some very serious problems with Technical and Vocational Education and Training provision in England. Problems with off-the-job education and training included a predominance of overly theoretical qualifications, which were plagued by drop out and failure. Problems with on-the-job training included there being no guarantee that employees were achieving the right competencies to a satisfactory standard. These problems were compounded by high levels of unemployment, which raised new education and training challenges. The principled solution to these problems drew upon insights from North American educational movements, specifically the Objectives Movement and the Mastery Movement.
The objectives that the NCVQ located at the heart of the NVQ model were competence standards, which represented outcomes that would be manifested through competent performance in an occupational role. National Occupational Standards were therefore intended to specify occupational competence comprehensively and authentically, which would provide a solid foundation for subsequent curriculum, pedagogy, and assessment planning. They aimed to be authentic in terms of specifying the elements of competent performance that comprised occupational competence. They aimed to be comprehensive in terms of specifying all critical elements. Because all of this was specified explicitly, as a foundation for all subsequent planning, there would be no excuse for misaligned teaching or assessment.
The central innovation, then, was to refocus attention on what learners needed to learn, rather than on what teachers wanted to teach or what exam boards chose to examine. Just as importantly, learners would have to master all of the specified outcomes. As we will explain in more detail below, this was not just a certification requirement, but an educational expectation, consistent with insights from the Mastery Movement. This would be key to minimising drop out and failure.
Revolutionary zeal
Gilbert Jessup promoted his new model of education and training with revolutionary zeal. Critics certainly responded as though England was in the throes of a revolution, and commentators described exchanges between the NCVQ and its critics using war-like metaphors. Cantor described a situation in which the “stage was thus set for some battles royal, with the control of further education curricula, and even the survival of colleges, as issues to be resolved” (Cantor et al., 1995, page 60).
The impression of NVQ policy making as fundamentally confrontational was no doubt reinforced by the fact that it was being driven by the government’s Employment Department, rather than its Department of Education and Science. It seems likely that a consequence of this confrontational stance was to polarise stakeholders, which was particularly evident within the academic literature (Hodkinson, 1992). This appears to have led to a situation in which certain mischaracterisations of the NVQ model became widespread, including the idea that it intentionally slew certain sacred cows of the education world.
For instance, critics like Alan Smithers promoted the idea that the NVQ model was fundamentally opposed to the development of syllabuses and learning programmes (Smithers, 1993; Smithers, 1997). This mischaracterisation was fuelled by the manner in which outcomes were specified in the NVQ model, as competence standards, which gave the impression that the model had no place for knowledge and understanding. This, in turn, seemed to question the role of colleges in delivering NVQs, with their traditional focus on theory rather than practice.
It is true that the NVQ model was intended as an antidote to problems identified with extant qualifications. These qualifications had traditionally catered for trainees and apprentices who needed off-the-job education and training, with a focus on theoretical foundations, and they had often failed to fulfil even this limited role effectively. NVQs were intended to certify full occupational competence – not just the knowledge and understanding that comes from studying a book – and the principal yardstick of occupational competence was to be competent performance of an occupational role. This meant that workplace assessment would occupy a central position in the NVQ model, which again seemed to question the role of colleges in delivering NVQs.
Despite this impression, it is not actually true that the NVQ model negated the role of colleges. Instead, it implied that their role would need to change (see, for example, Nash, 1995; Stanton, 1995). More fundamentally, the NVQ model did not eschew the very idea of a syllabus or learning programme. Those who designed the NVQ model were very clear about this, for instance:
It is a primary focus of the new system that learning programmes, qualifications and assessment systems will be derived from clear and precise occupational standards, rather than standards being embedded unstated as a feature of qualifications, or within the processes of learning and assessment.
(Mansfield, 1991, page 12)
The point of adopting an outcome-based approach to qualification design was to focus attention on the proper foundation for planning curriculum, pedagogy, and assessment, that is, to focus attention on the outcomes that a qualification would need to certify if it were to serve its purposes adequately. In other words, outcomes were never intended to substitute for syllabuses or learning programmes. They were intended to provide a foundation for them. Exactly the same logic applied to determining the nature and scope of the knowledge and understanding that underpinned competence (and that would need to be built into a learning programme), which was supposed to be inferred from detailed scrutiny of the intended outcomes.
Having said that, it is important to acknowledge a fundamental tension between NVQ theory and practice. Although the preceding analysis follows directly from the underpinning logic of the NVQ model – and although it was articulated explicitly by Mansfield (1989) and by others – it is still fair to say that NVQ rollout focused principally on assessment, and the importance of developing coherent learning programmes was often overlooked. Jessup insisted that the system needed to pivot to focus upon what gets learnt, not what gets taught, particularly as different learners might need different learning programmes. In practice, however, this refocusing shifted attention away from learning toward assessment, resulting in a system that was dominated by the voice of the assessor rather than by the voice of the teacher or trainer. In the introduction to this report, we explained our intention to explore whether problems that beset the CASLO approach were best understood as inevitable consequences of an unworkable model or as avoidable consequences of poor implementation. This would seem to be a good example of the latter. The very idea of an outcome-based approach is that it ought to provide a solid foundation for teaching, learning, and assessment. So, the fact that NVQ rollout was often associated with impoverished teaching and learning is unfortunate, if not ironic, but it was not inevitable.
Turning to the issue of underpinning knowledge and understanding – and bearing in mind that NVQ designers never denied the importance of these concepts to competent performance – it remains unclear why knowledge and understanding requirements were originally left implicit, rather than being spelt out explicitly, as they later came to be. At least in theory, the NCVQ could have required awarding organisations to be far more proactive in terms of NVQ syllabus development. Ainley observed that leaving so much work to educators and trainers alienated colleges and training providers (Ainley, 1990).
From a sociological perspective, Young (2008) has argued that NVQs were explicitly designed with a view to wresting control of the curriculum from providers (colleges) and placing it in the hands of users (employers). Consequently, NVQ designers were required to “avoid allowing the traditional syllabus-based approach to knowledge to return” to prevent colleges from reclaiming the curriculum (Young, 2008, page 141). This explanation might be overly simplistic.[footnote 30] But it does, once again, capture the idea of revolutionary policy making that unhelpfully polarised stakeholders, which seems undeniable.
It seems likely that part of the reason why the NVQ model appeared to eschew traditional college-based provision was the expectation that NVQs should be capable of servicing programmes like the Youth Training Scheme, which were designed to provide on-the-job training, often for fairly low-attaining young people within fairly low-level jobs (Hargraves, 1995). Many NVQs were developed for circumstances of this sort, where the need to unpack detailed underpinning knowledge and understanding requirements may have been less evident. And, of course, a fundamental tenet of the NVQ model was that it did not matter whether underpinning knowledge and understanding was acquired in college, in work, or anywhere else, just as long as it had been acquired.
The NVQ model can be seen as an extension – or perhaps even the culmination – of the approach pioneered by the Industry Training Boards during the 1960s. They, too, adopted an outcome-based approach to specifying training needs, albeit still recognising the traditional distinction between on-the-job (work-based) training and off-the-job (college-based) education. The NCVQ took this approach a step further by attempting to develop an even more comprehensive and authentic model of occupational competence, without having much at all to say about either on-the-job training or off-the-job education. However, its prioritisation of workplace assessment would certainly have appeared to downplay the importance of further education colleges. And the model itself, which lionised the concept of personalised learning programmes, would certainly have been challenging for colleges to implement. It seems fair to conclude that the NCVQ attempted to transform the concept of a TVET qualification from being focused squarely on education and colleges (Craft Certificates, ONCs, HNDs, and so on) to being focused squarely on training and employers (NVQs). In retrospect, this strikes us as an attempt to swing the pendulum of change from one inappropriate extreme to another.
Extreme critique
The theoretical basis of the NVQ model was heavily critiqued by scholars working in the TVET field, many of whom claimed that it was naïve. Although the present report is not the place for an extended evaluation of this literature, it seems fair to conclude that some of the most extreme criticisms were overstated, perhaps as a consequence of the kind of polarisation just discussed. This includes the claim that the NVQ model was behaviourist on at least 2 counts, and therefore fundamentally flawed.[footnote 31]
First, proponents of the NVQ model were said to have been influenced by behaviourist philosophy, causing them to reduce knowledge and understanding to nothing more than competent performance. In fact, the concepts of knowledge and understanding were not reduced to competent performance within the NVQ model. It is true that occupational competence was explicated in terms of competent performance. But knowledge and understanding were defined quite differently – and separately – as constructs that underpinned occupational competence. The model even acknowledged that knowledge and understanding sometimes needed to be assessed ‘directly’ to provide ‘indirect’ evidence of occupational competence. This is not philosophical behaviourism.
Second, proponents of the NVQ model were said to have been influenced by behaviourist learning theory, causing them to specify occupational competence in a manner compatible with behaviourist learning approaches, such as stimulus-response conditioning. Consider the following claim, for instance:
This procedure clearly draws upon the work of the classical behavioural school of psychology. The work of Watson, Guthrie, Thorndike and Skinner is strongly represented. The classical behaviourists also concentrated upon the outcome of learning and judged the success of learning entirely by the behavioural outcome. This simplistic view of learning is now only of historical interest. It is surprising that the NCVQ have based so much of their work in this orthodoxy because the theoretical consideration of learning has advanced considerably in the last twenty or more years. Even the most radical behavioural psychologist would not now subscribe to the traditional view of learning so evident in the work of the NCVQ.
(Marshall, 1991, page 61, footnote references removed)
As noted earlier, it certainly is true that behaviourists, including Thorndike and Skinner, recommended specifying intended learning outcomes in terms of behavioural objectives. Yet, non-behaviourists, like Tyler, also recommended this! Indeed, Tyler railed against the specificity of behaviourist objectives, citing for example a case from Thorndike in which each of the 100 addition combinations taking 1-digit numbers 2 at a time was separately specified as a distinct intended learning outcome: can add 0 + 0, can add 0 + 1, can add 0 + 2, and so on (Fishbein & Tyler, 1973). Tyler defined intended learning outcomes far more generally, at a much higher level, in a manner that would not have been amenable to behaviourist learning approaches. The NVQ model also recommended far more general outcomes. To put it simply, the NCVQ did not base any of its work on behaviourist learning theory, nor was the work of Watson, Guthrie, Thorndike, and Skinner represented at all in the NVQ model (see also Burke, 1995).
Having said that, it is important to appreciate that the radical critique of the NVQ model followed in the wake of a radical critique of the Objectives Movement, which included trying to argue that even Tyler was a behaviourist.[footnote 32] In other words, when England began experimenting with outcome-based qualification design in the 1970s and 1980s, there was already plenty of ammunition available to critics, arising from a vast amount of experience of the Objectives Movement in North America. Some of these criticisms were entirely legitimate, while others were clearly not.
Although this scholarly debate might seem a little arcane, it is important to engage with it explicitly. This is because many of the early criticisms of the NVQ model argued that outcome-based approaches were either fundamentally flawed (for example, the ‘naïve behaviourism’ critique from this subsection) or required sacred cows to be slain (for example, the ‘rejection of learning programmes’ critique from the previous section). If true, then this would render outcome-based models both implausible and unworkable. Indeed, this would challenge the very concept of an outcome-based model, which extends the critique of NVQs to all CASLO qualifications, as a matter of principle. However, we remain unconvinced that outcome-based models can be dismissed quite so easily.
Confusing model
Although we have argued that scholars sometimes mischaracterised the NVQ model, it was undoubtedly a confusing model, and its proponents did themselves no favours by failing to resolve fundamental ambiguities before rolling the model out.
The outcomes at the heart of the NVQ model were supposed to be explicated on a principled basis, using functional analysis, which drew upon the Job Competence Model. Unfortunately, the status of underpinning knowledge and understanding proved to be highly controversial, and this ambiguity was never entirely resolved.
According to NVQ theoreticians, it was incorrect to specify underpinning knowledge and understanding as part of the occupational competence construct. Nor, they argued, was it necessary to provide evidence of underpinning knowledge and understanding independently of competent performance. In other words, if that knowledge and understanding genuinely underpinned occupational competence, then evidence of competent performance would, by definition, imply possession of the underpinning knowledge and understanding. The logic seems undeniable.
Yet, when NVQ developers grappled with the pragmatics of assessing competent performance – both authentically and comprehensively – this logic began to unravel. The assessment of knowledge and understanding was soon introduced to the NVQ approach as a substitute for assessing competent performance, either when it was impossible to assess competent performance directly, or as a warrant for generalising competent performance from a single context to multiple contexts. Thus, knowledge and understanding came to play a more central role in the model, albeit even then not as a component of the occupational standard itself. Eventually, knowledge and understanding requirements became a mandatory component of occupational standards. Ultimately, they even came to be assessed via discrete qualifications, known as Technical Certificates.
Even today, it is still not obvious how knowledge and understanding requirements ought to be specified and assessed, although – given the complexity of inferring underpinning knowledge and understanding requirements from an occupational competence model – it would make sense for this not to be left entirely to teachers and trainers to work out. If so, then the burden must fall either to standards developers or to qualification developers. What remains particularly unclear is the extent to which underpinning knowledge and understanding need to be assessed directly, whether to provide indirect evidence of occupational competence, or simply for their own sake. Assessing knowledge and understanding separately, for their own sake, runs its own risks, particularly where knowledge and understanding requirements are separately certificated. Winch, for example, has noted the subtle absurdity of it being possible (by around 2010) for an apprentice to achieve an NVQ before being awarded the certificate that confirmed their acquisition of the underpinning knowledge and understanding required for that NVQ (Winch, 2011).
Shambolic rollout
Implementation of the NVQ model was shambolic. Even accounts provided by key protagonists, including Gilbert Jessup and Graham Debling, give the impression that the NCVQ and the Training Agency were to some extent making the model up on the fly. It is clear that all concerned were very aware of the size of the practical challenges that would need to be overcome (see Burke, 1989, for examples). Yet, the implementation timetable was extremely ambitious.
Implementing functional analysis proved to be especially challenging. Mansfield & Mitchell described it as “probably the most misinterpreted, misunderstood and most haphazardly applied method ever to emerge from the discipline of occupational analysis” (1996, page 98). Some years earlier, Mitchell had warned that “functional analysis is not a method (for the moment at least) which can be taken off the shelf by those with a little time and a handbook and applied well […] it is an expert system which requires [a] good deal of background understanding” (1989, page 59). Debling, in the same book, had warned that “there is very little expertise in defining explicit national standards” (1989, page 83). Raggatt & Williams summed it up like this:
The attractiveness of the approach notwithstanding, how realistic was it to expect individual lead bodies, even with the assistance of consultants, to be capable of implementing a full-scale functional analysis of the occupations within their respective sectors – or even understanding what it entailed?
(Raggatt & Williams, 1999, page 97)
Raggatt & Williams argued that officials underestimated the complexity and messiness of employment situations and structures (Raggatt & Williams, 1999). Where officials intended a rigid rollout of their highly technical model, employers pushed back. Perhaps inevitably within this largely voluntaristic context – one that was designed to be led by employers – the rollout proved to be very far from rigid. The breadth of the competence model often proved to be a casualty in this struggle between officials and employers, with powerful interest groups exerting a narrowing influence on standards (Callender, 1992).
The shambolic rollout of a confusing model provided much fuel for the fire of critics. Yet, there was still plenty of support for NVQs and the idea of competence standards more generally (FEFC, 1997). Although many employers did reject the NVQ model because of its rigidity, others argued the case for sufficient flexibility to make it work in their own particular circumstances, including the Association of Accounting Technicians (Purcell, 2001). And, while many scholars rejected the NVQ model outright, others felt that it would be wrong simply to abandon NVQs and start again (Hodkinson & Issitt, 1995). Ultimately, the NVQ train kept on rolling.
Rigour-ish
Interestingly, NVQs managed to resist the suggestion that they ought to locate external written exams front and centre. Government asked the Beaumont review to explore options for embracing externality, but no such recommendations followed. Statutory Regulations for the new NQF (QCA, 2000) introduced a requirement for independent assessment, which might include external exams. Yet, a subsequent QCA evaluation indicated the strength of feeling against exams within Technical Certificates (TCs), and a subsequent SSDA-commissioned evaluation indicated even stronger resistance to external testing of key skills. As TCs became integrated within the QCF, they would have been required to adopt the CASLO approach (as we will see later on). Even by the final edition of the NVQ Code of Practice, the model was still firmly grounded in internal assessment (QCA, 2006). That is not to say that NVQs never included external exams (more frequently known as tests). They sometimes did. Indeed, the original guidance clearly stated that a wide range of techniques could be accommodated, including paper-based questioning, computer-based questioning, and so on (TA, 1989b). But external testing was never central to the NVQ model, as it came to be for other CASLO qualifications, including GNVQs.
The call for more external testing is often associated with concern for greater rigour. Rigour was always a matter of concern for NVQs, which was recognised by Beaumont and echoed by Dearing. Yet, concerns for greater responsiveness – to employers, in particular – tugged in a different direction. For instance, it was clearly not straightforward to roll out a system based on functional analysis. It required considerable technical expertise to apply the technique when developing standards and qualifications. And these standards and qualifications – with their unique logic and grammar – also required a certain amount of technical expertise to facilitate effective use. Where developers saw necessary precision, employers saw unintelligible jargon. Toward the end of the 1990s, it seemed that the employers might have won out, resulting in a dilution of the approach. By 2000, QCA had eliminated the expectation that standards and qualifications should derive from functional analysis. This left a new elephant in the room: if not functional analysis, then what? A decade later, though, functional analysis was firmly back on the table, as a requirement of the UKCES quality criteria for NOS (although the UKCES version of functional analysis was not quite as stringent as the version pioneered during the early years).
Responsiveness was always going to be a major challenge for a national qualification that was intended to displace a so-called ‘jungle’ of disparate, bespoke ones. NVQs were designed to be flexible, but neither they nor the apprenticeship frameworks within which they came to be located were seen, by many employers, as flexible enough. In an inherently voluntaristic system where employer engagement was desperately required yet often not forthcoming, it is easy to see how demands for responsiveness may have weighed more heavily than concerns over rigour.
Conclusion
We can safely conclude that some of the problems that beset NVQs were due to the particular version of the CASLO approach that was adopted. Although it was quite neat, in theory, the exclusive focus upon competent performance failed to persuade stakeholders. The status of underpinning knowledge and understanding within the NVQ model continued to cause problems for decades. The focus on competence, and the concomitant emphasis on workplace assessment, was meant to provide a welcome antidote to an historical overreliance upon theory and written exams. Ultimately, though, it swung the pendulum too far in the opposite direction, leaving a ‘hole’ that needed to be ‘patched up’ by successive incarnations of knowledge and understanding requirements. It is important to stress that the decision to frame outcomes in terms of occupational competence alone – leaving knowledge and understanding requirements implicit – was unique to the early NVQ model. It is not a feature of outcome-based qualification models, per se, and it was not a feature of many subsequent CASLO qualifications in England.
We can also safely conclude that many of the problems that beset NVQs were due to rushed and poor implementation. In retrospect, the magnitude of the task that faced the NCVQ and associated agencies was mind-boggling. It is hard to imagine how a task of that complexity, in a context as messy as the one in which it was located, could ever have been implemented effectively in the space of just a few years.
What is unclear is the extent to which the problems that beset NVQs were due to adopting the CASLO approach, per se, which would be to question the viability of the approach itself in this context. It may have been a principled decision to base the NVQ model on the CASLO approach, but whether it was an optimal decision under the circumstances is not possible to judge on the basis of historical evidence alone.
GNVQs
The introduction of General National Vocational Qualifications can be understood as a pragmatic response to resistance to the proposal that all vocational and technical qualifications would need to be accredited to the NVQ framework (Sharp, 1998). As such, GNVQs were partly a response to concerns from stakeholders such as the Confederation of British Industry that NVQs were specified too narrowly. But they were also partly a response to concerns from stakeholders such as the Business and Technician Education Council that existing college-based vocational qualifications fulfilled a critical ‘middle way’ function – between general education and technical training – that NVQs were incapable of serving. By accepting the need for both NVQs and general NVQs, the National Council for Vocational Qualifications formally acknowledged the importance of this middle way.
In May 1991, a few years after the introduction of NVQs, the white paper ‘Education and Training for the 21st Century’ announced the introduction of General NVQs, high-quality vocational alternatives that would cater for the increasing numbers of young people who were staying in full-time education.[footnote 33] At Level 3, the system would comprise 3 distinct routes – the A level route, the GNVQ route, and the NVQ route – as well as providing opportunities to combine qualifications across routes.[footnote 34] The white paper explained its reasoning as follows:
Many young people want to keep their career options open. They want to study for vocational qualifications which prepare them for a range of related occupations but do not limit their choices too early. Some want to keep open the possibility of moving on to higher education. Employers, too, want to have the opportunity of developing their young recruits’ general skills, as well as their specific working skills. A range of general qualifications is needed within the NVQ framework to meet these needs. Some already exist which help to meet this need – including some offered by the Business & Technician Education Council (BTEC). But they need to be clearly related to the NVQ framework, to make it easier for people to progress quickly to occupationally specific qualifications.
(DES, DE, & WO, 1991, page 18)
Although this seemed to suggest the possibility of accrediting BTECs (and other vocational qualifications) as General NVQs, this proved not to be the case. As with NVQs, GNVQs ended up being developed from scratch. Indeed, under the auspices of the NCVQ, they were designed to be far more like NVQs than A levels. GNVQs were introduced at 3 levels:
- Advanced – comprising 8 mandatory vocational units, 4 optional vocational units, and 3 mandatory core skills units (at Level 3) – which was usually studied as a 2-year programme
- Intermediate – comprising 4 mandatory vocational units, 2 optional vocational units, and 3 mandatory core skills units (at Level 2) – which was usually studied as a 1-year programme
- Foundation – comprising 3 mandatory vocational units, 3 optional vocational units, and 3 mandatory core skills units (at Level 1) – which was usually studied as a 1-year programme [footnote 35]
As students were required to pass all units to achieve their GNVQ – mandatory, optional, and core skills alike – this meant that the GNVQ was effectively a grouped award (unlike A levels, for instance, which had delivered qualifications on a subject-by-subject basis since the 1950s).
Pilots for 5 Advanced and Intermediate subjects commenced in September 1992.[footnote 36] These qualifications were formally launched in September 1993, with Foundation GNVQs following a year later. Ultimately, GNVQs were made available in 14 subject areas by 3 awarding organisations – BTEC (later Edexcel), RSA (later OCR), and City & Guilds (later AQA, with whom City & Guilds had formed an alliance). According to statistics provided by the Further Education Funding Council for the 1995 to 1996 academic year, the relative split of GNVQ students across these 3 awarding organisations was: 74% (BTEC), 10% (RSA), and 16% (C&G).[footnote 37]
The Further Education Unit (FEU) prepared a comprehensive manual on how to implement GNVQs, which emphasised just how significant a change they represented, particularly for teachers who came from the A level route. Borrowing certain ideas from the BTEC model, GNVQs embodied a radical student-centred philosophy. The report characterised the ‘spirit’ of the GNVQ as follows (FEU, 1994, page 198):
- learners are responsible for producing and presenting evidence to show that they have met the performance criteria
- an approach to learning and assessment which is based on the application of skills, knowledge and understanding within ‘holistic’ learning experiences
- the concept of not-yet-achieved, rather than failed
- the concept of mastery learning
- an emphasis on the assessment of skills, knowledge and understanding through their application
GNVQs were intended to offer considerable scope for personalisation, including via optional units or additional units. In theory, at least, a GNVQ course was not time limited, suggesting that students could learn at their own pace, with the potential to enter and exit at different points in the year (which is how BTECs typically operated). Personalisation was also embodied in the principle that students should begin with an initial diagnostic assessment of core skills needs, resulting in an individual profile of strengths and weaknesses and an individualised action plan. This diagnostic assessment was also intended to explore the potential for Recognition of Prior Learning in relation to the broader GNVQ programme. Reflecting on the scale of change anticipated, the report observed that: “GNVQs may require a significant culture shift, as well as the development of new staff skills” (FEU, 1994, page 2).
Design
The GNVQ design process was led by Gilbert Jessup, who proposed essentially the same outcome-based approach as had been pioneered within NVQs. NCVQ Chief Executive, John Hillier, once claimed that the GNVQ model represented the “most extensive application of outcomes-based assessment in the world” (Ecclestone, 2002, page 3). GNVQs were based almost entirely on the CASLO approach.
Standards for all units – both vocational and core skills – were specified via elements (learning outcomes) and performance criteria (assessment criteria), which were related to National Occupational Standards in relevant sectors (Hodgson & Spours, 2003). Figure 8 illustrates an element, associated performance criteria, a range statement, and evidence indicators from the original specification of an Advanced vocational unit in Business (from Allen, 2004, Appendix 1). Figure 9 illustrates an element and associated performance criteria from the specification of a Level 2 communication core skills unit (from FEU, 1994, page 178).
Each element also had a statement of range, which indicated the significant dimensions that had to be covered and evidenced. The kinds of evidence required for each element were described via evidence indicators. According to the GNVQ implementation manual:
Assessors need to develop the ability to apply all the relevant criteria to a piece of candidate’s work as a global judgement, rather than as a process of ticking off disaggregated criteria.
(FEU, 1994, page 209)
Unit 1 Business in the economy
Element
1.1 Explain the purposes and products of business
Performance criteria
1 Demand for goods and services is identified and described
2 Demand in relation to particular product is described
3 Industrial sectors are identified and described
4 The product of businesses in different industrial sectors is identified and described
5 Purposes of selected business organisations are explained
Range
Demand: needs, wants and effective demand, consumption and income, demand and price, elastic and inelastic
Industrial sectors: primary, secondary, tertiary
Product: goods, services
Purposes: profit-making, public service, charitable
Evidence indicators
An analysis of selected businesses with an explanation of why businesses exist, an explanation of their product and an explanation of demand in general and demand in relation to a particular product. Evidence should demonstrate understanding of the implications of the range dimensions in relation to the element. The unit test will confirm the candidate’s coverage of range.
Figure 8. Advanced vocational unit in business
GNVQ units are specified in terms of elements, performance criteria, range, and evidence indicators.
2.2 Prepare written material on routine matters
PC 1 All necessary information is included and information is accurate
PC 2 Documents are legible.
PC 3 Spelling, grammar and punctuation are used correctly.
PC 4 The format used to present material is appropriate and information is ordered appropriately to maximise audience understanding.
Figure 9. Level 2 communication core skills unit
Core skills units list a small number of performance criteria.
Aware of earlier criticism of the NVQ approach, the FEU report was alive to the risk of fragmented learning arising, indirectly, from the manner in which criteria were nested within outcomes that were nested within units. It insisted that neither learning nor assessment needed to be similarly disaggregated, arguing instead for activities and assignments that represented coherent and rounded experiences for students. The report distinguished between unit-based and integrated delivery programmes, noting that unit-based ones were easier to deliver but more likely to lead to fragmented learning and assessment. Integrated programmes – which pulled together elements from a number of units – mitigated this risk, although they did make it harder for students to track their achievements.
Assessment typically involved a combination of extended assessment, with evidence accumulated in a portfolio, plus externally set tests.[footnote 38] Students would complete their portfolio largely on the basis of assignments and activities developed by their teachers. For the purpose of internal and external quality assurance – known as verification – students would index their portfolio to relevant elements and criteria. The portfolio would need to demonstrate how each performance criterion had been satisfied for each element (and therefore how all elements had been achieved for each unit) across all vocational and core skills units. Evidence might take a variety of different forms – assignment reports, artefacts, diagrams, videos, witness testimony, and so on.
To pass their GNVQ, a student would need to satisfy all performance criteria for all elements of each vocational and core skills unit. In addition, they were required to pass an external test for each of the mandatory units (although a few mandatory units were excluded from this requirement where content was thought to be inappropriate for testing). These multiple-choice tests were set by awarding organisations and either externally marked or internally marked with verification. They were intended to supplement the portfolio of evidence, assessing underpinning knowledge and understanding. The tests were designed to confirm broad coverage of range.[footnote 39] Technically, they were based on a compensatory principle, as passing the test depended solely on having achieved the overall pass mark. However, they still embodied a loose conception of mastery:
Because of the concept of ‘mastery learning’ a high pass mark has been set (currently 70%), and it is envisaged that students will repeat the test until they achieve it. Tests are available several times a year to enable students to take the relevant test when they are judged to be ready to do so.
(FEU, 1994, page 213)
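The interplay between the mastery requirement and these compensatory unit tests can be summarised in a short sketch. This is a minimal illustration only: the data structures, field names, and function names are our own assumptions, not drawn from NCVQ documentation.

```python
# A minimal, illustrative sketch of the original GNVQ pass rules.
# Structure and naming are assumptions, not NCVQ specifications.

TEST_PASS_MARK = 0.70  # the high pass mark cited by FEU (1994)

def unit_achieved(unit: dict) -> bool:
    """A unit is achieved only when every performance criterion of
    every element is satisfied (the CASLO mastery requirement)."""
    criteria_met = all(
        all(element["criteria_met"]) for element in unit["elements"]
    )
    # Most mandatory units also carried an external multiple-choice test.
    # The test itself was compensatory (only the overall mark mattered),
    # but it could be retaken until the 70% threshold was reached.
    test_met = (not unit.get("has_test")) or unit["best_test_score"] >= TEST_PASS_MARK
    return criteria_met and test_met

def gnvq_passed(units: list[dict]) -> bool:
    """The grouped award is achieved only if every unit is achieved."""
    return all(unit_achieved(u) for u in units)
```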
The idea of mastery learning was central to the original GNVQ model. It was embodied in the design principle of fusing formative and summative assessment, which was intended to empower students to maximise their achievements (Ecclestone, 2000).
Although individual units were not graded, there was a process for deriving an overall qualification grade, such that students would receive either a pass, a merit, or a distinction grade overall. Grades were based on the same portfolio of evidence that determined whether or not a student passed, although different criteria were used for awarding merit and distinction grades. Originally, the Advanced GNVQ had 6 grading criteria – that is, a set of 6 for merit and a parallel set of 6 for distinction – grouped within 3 themes. Because these criteria focused primarily on processes, it was soon recognised that another theme would be required to recognise the overall quality of the work produced. So, by September 1994, there were 8 grading criteria, grouped within 4 themes, as presented in Figure 10 (from NCVQ, 1994, page 27). As described in the booklet that specified these new criteria:
The grading criteria focus on students’ performance because GNVQs are designed to encourage active approaches to learning; how students tackle their work; how much responsibility they take for planning it; how they decide what information they need; how well they review and evaluate their performance; and the overall quality of the work they produce.
(NCVQ, 1994, page 8)
Recognising that a student would acquire these skills gradually throughout their course, and acknowledging that not all assignments would elicit evidence of higher-level performance, the NCVQ specified that criteria for merit or distinction grades only needed to be demonstrated across one-third of the portfolio of evidence. Thus, students could be awarded a merit grade if one-third or more of their evidence met all of the merit grading criteria, or a distinction grade if one-third or more of their evidence met all of the distinction grading criteria.[footnote 40] Again, the requirement that all criteria needed to be satisfied for the award of a higher grade was consistent with the idea of mastery, albeit operationalised pragmatically rather than absolutely.
Theme | Aspect | Merit criterion | Distinction criterion |
---|---|---|---|
Planning | 1. Drawing up plans of action | Student independently draws up plans of action for a series of discrete tasks. The plans prioritise the different tasks within the given time period. | Student independently draws up plans of action for complex activities. The plans prioritise the different tasks within the given time period. |
Planning | 2. Monitoring courses of action | Student independently identifies points at which monitoring is necessary and recognises where revisions to courses of action are necessary. Appropriate revisions to plans are made with guidance from teacher/tutor. | Student independently identifies points at which monitoring is necessary and recognises where revisions to courses of action are necessary. Appropriate revisions to plans are made independently. |
Information seeking and information handling | 3. Identifying and using sources to obtain information | Student independently identifies, accesses and collects relevant information for a series of discrete tasks. Student identifies principal sources independently and additional sources are identified by the teacher/tutor. | Student independently identifies, accesses and collects relevant information for complex activities. Student uses a range of sources, and justifies their selection. |
Information seeking and information handling | 4. Establishing the validity of information | Student independently identifies information which requires checking for validity. Student checks validity of information using given methods. | Student independently identifies information which requires checking for validity. Student independently selects and applies appropriate methods for checking validity. |
Evaluation | 5. Evaluating outcomes and alternatives | Student judges outcomes against original criteria for success; identifies alternative criteria that can be applied in order to judge success of the activities. | Student judges outcomes against original criteria for success and identifies and applies a range of alternative criteria in order to judge success of the activities. |
Evaluation | 6. Justifying particular approaches to tasks/activities | Student justifies approach used; indicates that alternatives were identified and considered. | Student justifies approach used, basing justification on a detailed consideration of relevant advantages and disadvantages. Alternatives and improvements are identified. |
Quality of outcomes | 7. Synthesis | Student’s work demonstrates an effective synthesis of knowledge, skills and understanding in response to discrete tasks. | Student’s work demonstrates an effective synthesis of knowledge, skills and understanding in response to complex activities. |
Quality of outcomes | 8. Command of language | Student’s work demonstrates an effective command of the language of the GNVQ area at Advanced level. | Student’s work demonstrates a fluent command of the language of the GNVQ area at Advanced level. |
Figure 10. Advanced GNVQ grading criteria
GNVQ grading grids were complex, with multiple criteria at both merit and distinction.
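To make the one-third rule concrete, the following sketch shows how an overall grade might have been derived from a portfolio under that rule. It is a minimal illustration under our own assumptions about structure and naming; footnote 40 qualifies the precise operation of the rule.

```python
# A minimal, illustrative sketch of the one-third rule for deriving an
# overall GNVQ grade. Field and function names are assumptions.

def overall_grade(evidence: list[dict]) -> str:
    """Derive an overall grade for a student who has already passed.

    Each piece of evidence records whether it satisfies all of the
    merit criteria and all of the distinction criteria (mastery is
    applied within each piece, but only across one-third of the
    portfolio overall). Assumes a non-empty portfolio.
    """
    total = len(evidence)
    if sum(e["meets_all_distinction"] for e in evidence) / total >= 1 / 3:
        return "distinction"
    if sum(e["meets_all_merit"] for e in evidence) / total >= 1 / 3:
        return "merit"
    return "pass"
```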
Evolution
The GNVQ model was both novel and complex. The fact that it was designed and implemented at speed resulted in numerous problems, and the model had to be reconfigured several times over a period of just a few years. In his Foreword to the GNVQ manual, FEU Chief Officer, Geoff Stanton, emphasised the “hectic pace” at which GNVQs had been introduced, adding that there was no good time at which to produce guidance because the continual revision of approaches and specifications quickly rendered any such guidance out of date (FEU, 1994, page 1).
It is hard to say exactly how many discrete models were implemented between its conception and demise – because it is hard to say which changes were mere refinements and which were substantive reforms, especially as implementation was staggered across subject areas – but Ecclestone (2002) proposed that there were 4 distinct models, with the following dates corresponding to their first teaching:
- September 1993 model
- September 1995 model
- September 1996 model
- September 2000 model
The final model involved a fully-fledged reform process, giving birth to a new qualification that was to become known as the Advanced Vocational Certificate of Education (AVCE).
The first model was heavily criticised from the outset, including within the high-profile critique of the NVQ-GNVQ system mounted by Alan Smithers, which we considered earlier (Smithers, 1993). Further Education Funding Council (FEFC) inspectorate surveys subsequently identified concerns over an unwieldy assessment system, inappropriate and unclear external test questions, poor teaching of key skills, inadequate internal and external verification, and some low completion rates (FEFC, 1994b; 1995). Office for Standards in Education inspection reports identified similar problems (Ofsted, 1993; 1994). Independent evaluations identified further concerns (Wolf, Burgess, Stott, & Veasey, 1994; Wolf, Scharaschkin, Meade, & Pettitt, 1994). These reports differed in their appraisal of how serious the challenges were. The FEFC, for instance, referred to “a number of teething problems” (FEFC, 1994b, page 5), whereas Wolf – focused specifically on grading – outlined the need for “major reconceptualisation and reform and not simply fine-tuning” (Wolf, Burgess, Stott, & Veasey, 1994, page 1).
As early as March 1994, Under-Secretary of State for Further and Higher Education, Tim Boswell, had set out a 6-point agenda for action for the NCVQ, to ensure quality and rigour in GNVQs. This was the beginning of an extended period of refinement, review, and reform. The 2 most important reviews of this period were the ‘GNVQ Assessment Review’ (Capey, 1995) and the overlapping ‘Review of Qualifications for 16-19 Year Olds’, which had a broader remit (Dearing, 1996). The Capey review was particularly significant for the future of GNVQ. It contained both immediate and longer-term recommendations. The longer-term ones, once piloted, resulted in GNVQ model 4 (the AVCE).[footnote 41] Its immediate recommendations influenced the design of GNVQ model 3.
The report of the Capey review strongly supported a number of GNVQ design features, including the specification of learning outcomes, the unit-based structure, the emphasis on active learning, and the inclusion of core skills. However, it also took issue with various aspects of the assessment regime, as this was configured in the September 1995 model. Changes for September 1995 had already included reducing assessment documentation to a minimum, with recording at the element level rather than the performance criterion level, and no longer requiring recording of range coverage. The report identified 4 concerns that were common to most submissions to the review:
- the (continuing) very serious burden of assessment and recording on teachers and students
- interpretation and application problems associated with the grading criteria
- difficulties in teaching and assessing core skills
- uneven quality and demand of the external tests
The problem of burden was related to a heavy continuous assessment load, associated with the CASLO approach. A survey by the Association for Colleges suggested that, while there was support for criteria-based assessment, there was also concern that too much time was spent assessing, which took time away from teaching and learning.
Of particular relevance to the CASLO approach, Capey recommended that:
- GNVQ assessment should (in the longer term) move from element-based to unit-based assessment
- NCVQ should investigate the feasibility of external tasks for core skills units
- the purpose of the tests should be reviewed, along with the 70% pass mark, and whether they should contribute to grading
A key consideration addressed by the review was the role of mastery in a general qualification such as the GNVQ. For instance:
The fundamental issue was whether the GNVQ model could move away from the conventional mastery model to one which would identify and assess the key knowledge and skills within a unit. The attendant problems of any such moves could be to reduce the transparency of the required outcomes for both students and users (ie what will the student have covered from the range?) The advantages are that the unproductive work of superficially covering all the range could give way to more in-depth work on a more focused range of skills and knowledge.
The group agreed that the move towards sampling rather than exhaustive coverage, which began in the September 1995 changes, should continue. However, this would not resolve the problem of the large number of evidence indicators prescribed for each unit (up to 50 in some cases). This could be eased by assessing at unit, rather than element level, leaving the content unchanged but reducing the number of evidence indicators by adopting a more integrated approach. The group thought that this might also lead to more effective learning. One implication is that assessments would not necessarily cover all aspects of the elements, performance criteria and, particularly, range. Another implication is that assessors would be making more generalised judgements about performance (see the level descriptions in National Curriculum assessment) when unit-referenced evidence indicators replaced element-referenced indicators.
(Capey, 1995, page 24)
The subsequent report of the Dearing review was less detailed but more blunt:
A qualification like the NVQ should not be granted unless a candidate has demonstrated all the competences necessary to provide a reliable service to a client. But the GNVQ is not a professional qualification: it covers a broad area of knowledge and understanding which underpins a range of trades and professions, and provides a basis for a practical education. The mastery model is therefore inappropriate to the GNVQ, so I welcome the proposal in the Capey Report that assessment should no longer cover every detail, but it should be based on an overall assessment of performance in defined areas known as units. This will reduce workload and avoid the risk that assessment may become a burdensome series of atomistic assessments.
(Dearing, 1996, page 77)
It is interesting to note how Dearing, in particular, dismissed (or simply overlooked) the potential significance of mastery learning in general education, as though the concept of mastery was relevant only to occupational qualifications as a certification requirement. We will return to this point later.
Completion rates were problematic from the outset, and continued to be so from one model to the next. For instance, although the majority of full-time students in further education colleges who completed GNVQ courses passed them, pass rates were not particularly high. For the 1995 to 1996 academic year, these ranged from 62% for Foundation GNVQs to 74% for Advanced GNVQs (FEFC, 1997). This FEFC report also raised concern over large numbers of students dropping out before even taking their final assessments. Commenting on this issue, Wolf (1998) argued that an important factor in explaining non-completion was the failure of many students to keep up a steady rate of portfolio completion. Echoing an earlier evaluation, she concluded that the GNVQ approach relied considerably on the ability of tutors to organise and manage student learning.
Standards remained an issue for GNVQ delivery, as discussed in some detail by the 1997 FEFC report, as well as by a 1998 Ofsted report on Advanced GNVQs (relating to the 1996 to 1997 academic year). While both the FEFC and Ofsted identified plenty of good practice, they both expressed concerns related to how consistently standards were being applied. The Ofsted report concluded that most students had enjoyed their courses, estimating that the majority had achieved more in GNVQ than they might reasonably have been expected to achieve at A level. However, the report also expressed continuing concern over the grading system and verification procedures, concluding that the:
most serious weakness which must be addressed is that the lack of clearly defined standards results in over-generous assessment and grading by some teachers and verifiers.
(Ofsted, 1998, page 5)
It added that external verification by non-subject specialists contributed to this unreliability.
Ofsted summarised what seemed to be a promising situation just prior to the introduction of model 4 in its 1999 Annual Report (Ofsted, 1999). GNVQs had improved, the new assessment regime was more rigorous, and standards of performance had improved accordingly. Students on Foundation and Intermediate courses were particularly motivated by the links with vocational sectors, although achievement was more consistent on Intermediate courses, where the highest attaining students benefitted from the independent style of learning promoted by the course. Good planning was judged to be an essential component of effective provision – where planning was poor, pupils frequently failed to complete their portfolio work. In school sixth-forms, the large majority of Advanced GNVQ students were committed and conscientious, with frequent contact with the world of work promoting a high level of self-reliance and interpersonal skills. The model 4 pilot had gone well: the new assessment approach was easier to manage and appeared to be encouraging greater rigour, leading to improved performance standards.
AVCEs
Following this period of piloting, the model 4 Advanced GNVQ was rolled out in September 2000, now with a new name, the Advanced Vocational Certificate of Education (AVCE). According to the QCA (2003), key changes involved separating out key skills, so that candidates no longer failed the GNVQ if they failed to achieve the key skills, and introducing a compensatory approach to assessment, such that candidates were no longer required to pass every unit to achieve the qualification.
The AVCE was introduced in tandem with new A level qualifications, as a key part of the Curriculum 2000 reform programme, which arose from the 1997 Department for Education and Employment ‘Qualifying for Success’ consultation. A key intention underlying this reform programme was to address undue narrowness and lack of flexibility in the post-16 curriculum (QCA, 1999c). Students would be encouraged to study for both GNVQ and A level within the same Level 3 programme, facilitated by comparably sized units and the new GNVQ grading scale (A to E).
The AVCE was made available in 3 sizes: 12 units, 6 units, and 3 units. The 12-unit AVCE, known as the double award, was intended to be of a standard equivalent to 2 A levels, comprising a minimum of 6 and a maximum of 8 compulsory units, with a maximum of 6 optional units (Ofsted, 2004).
According to the QCA (1999b), the new assessment arrangements were designed to increase rigour and manageability. One-third of the assessment would typically be external (involving set assignments or tests), while two-thirds would typically be internal with moderation (based upon a portfolio of evidence).
In relation to the CASLO approach, it is important to emphasise that AVCE units no longer specified learning outcomes as elements. As in their A level counterparts, units were specified primarily through syllabus content. In short, the new AVCE-GNVQ was no longer outcome-based.
Having said that, there were still echoes of the CASLO approach in the specification of unit standards via grading criteria. Grading was no longer a simple process of ticking off ‘the bullets’ as per previous models (see Ecclestone, 2002). Instead, points were available for performances within each unit, and these points were aggregated across units to form an overall point total, from which the qualification grade was derived. This also removed the formal requirement for students to pass all AVCE units for the award of an overall qualification grade, just as long as their overall point total exceeded the grade E threshold. In short, the new AVCE-GNVQ was no longer mastery-based.[footnote 42]
The process for deriving points for each centre-assessed unit was complex, and did actually retain an element of mastery. Criteria were specified for unit grades E, C, and A, and whether or not these criteria had (all) been satisfied determined the range of unit points available to students. For example, the following criteria were specified for an early Advanced business unit:
E1 classify the business according to its product or service
E2 describe and explain the objectives of the business
E3 describe the functional areas that exist in the business, and explain how they help the business to meet its objectives
E4 describe the management styles and cultures present within the business
E5 identify communication channels used by the business
E6 explain how the production process and quality assurance/control system used by the business helps it to add value to its product or service
(Allen, 2004, Appendix 2)
If a student failed to satisfy all of these criteria, then their unit points were limited to the 0 to 6 band, depending on how many of the grade E criteria had been satisfied. If they had achieved all of the grade E criteria, but not all of a separate set of grade C criteria, then points 7 to 12 would be available to them, depending on how many of the grade C criteria had been satisfied. For a full description of this process, see Greatorex (2001); a schematic sketch of the banding logic is given after the quotation below. The main point to note is the echo of the CASLO approach, as captured in the following evaluation conclusion:
On the positive side, a minority thought that the specifications were clearer and that the external assessment was fairer and more rigorous than the type of assessment in GNVQ. On the negative side, the majority of teachers found that the AVCE combined the worst of both worlds – external tests that demotivated learners and put them under pressure, combined with echoes of an NVQ assessment methodology which insisted on coverage of grade-related criteria and extensive portfolio evidence.
(Hodgson & Spours, 2003, page 112)
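To make the mechanics of this unit-points banding concrete, the following minimal sketch models the part of the process described above. It is illustrative only, and not a reconstruction of the actual awarding algorithm: the 0 to 6 and 7 to 12 bands come from the description of the grade E and grade C criteria, while the function name, its signature, and the treatment of the band above 12 are assumptions made for the sake of the example (see Greatorex, 2001, for the full process).

```python
from typing import Optional, Tuple


def available_points_band(e_met: int, e_total: int,
                          c_met: int, c_total: int) -> Tuple[int, Optional[int]]:
    """Return the band of unit points available to a student.

    The 0-6 and 7-12 bands follow the description in the text; how a
    precise score was fixed within a band, and the upper limit of the
    band unlocked once all grade C criteria were met, are not
    reproduced here.
    """
    if e_met < e_total:
        # Not all grade E criteria satisfied: unit points limited to
        # 0 to 6, depending on how many E criteria were met.
        return (0, 6)
    if c_met < c_total:
        # All grade E criteria met, but not all grade C criteria:
        # points 7 to 12 available, depending on how many C criteria
        # were met.
        return (7, 12)
    # All grade E and grade C criteria met: points above 12 became
    # available, governed by the grade A criteria (not modelled here,
    # hence the open upper bound).
    return (13, None)


# For the Business unit above (6 grade E criteria), a student meeting
# only 4 of them is confined to the 0 to 6 band (the count of grade C
# criteria passed in here is purely illustrative):
print(available_points_band(4, 6, 0, 3))  # (0, 6)
```

Unit points derived in this way were then summed across units, with the overall total, rather than unit-by-unit passes, determining the qualification grade on the A to E scale.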
Sharp characterised the transition from model 1 to model 4 as “a continual series of attempts to escape from the constraints of the original” model, moving gradually toward a “more conventional approach to content, knowledge, assessment and curriculum structure” (Sharp, 1998, page 309). This gave him hope for the long-term future of the GNVQ. Ultimately, though, the AVCE model failed. A damning Ofsted report concluded that:
The AVCE is not well designed. It is neither seriously vocational, nor consistently advanced. The aims of the AVCE are not clearly understood by many teachers and students. We observed a good deal of work that was trivial, as well as some that was excessively demanding.
(Ofsted, 2004, page 5)
Even after almost all of the remnants of the CASLO approach had been removed, problems persisted:
Teachers were particularly constrained by the AVCE assessment requirements. Both they and students regard the assessment regime as excessively complex, bureaucratic and hard to understand. They are right. The Qualifications and Curriculum Authority (QCA) has on several occasions attempted to address this issue, but teachers still spend too much of their time assessing, rather than teaching, students. For their part, students spend too much time completing assessments rather than learning.
(Ofsted, 2004, page 5)
AVCEs were quickly replaced by Applied A levels, which were introduced for first teaching in September 2005. As the name suggests, they were even more closely allied to A levels than AVCEs. Put simply, they were A levels. Subsequently, an entirely new vocational qualification, the Diploma, was introduced in September 2008. Like the original GNVQ, this was effectively a grouped award, which required successful completion of all of the components within the programme. Unlike the GNVQ, the CASLO approach was not a significant feature of vocational units within the Diploma programme.[footnote 43] A relatively small suite of Applied A levels continued alongside the Diploma.
Conclusion
Just as for NVQs, GNVQ rollout was highly problematic. This is perhaps not surprising, given that they were part of the same reform programme, and particularly given that GNVQs were something of an afterthought in this process. GNVQs transitioned through a succession of models until they were eventually withdrawn. But to what extent can we lay the blame for their ultimate demise at the feet of the CASLO approach?
Well, there is certainly some truth in Sharp’s observation that successive models represented a retreat from the CASLO approach to a more classical one. Under the CASLO approach, the assessment system was often felt to be unwieldy, if not unmanageable, and problems of inconsistently applied standards were frequently identified. This was particularly true of early implementation, leading Wolf, Burgess, Stott, & Veasey (1994) to some extreme conclusions, such as:
The unmanageability of the system is not simply a transitional phenomenon but derives from the nature of the assessment process.
(Wolf, Burgess, Stott, & Veasey, 1994, page 2)
The current system is so complicated that it is virtually impossible for all the players in the system – NCVQ, Awarding Bodies, their external verifiers and centres’ internal verifiers – to reach consistent conclusions and interpretations.
(Wolf, Burgess, Stott, & Veasey, 1994, page 2)
Overall, we conclude that current grading procedures are generating very different judgement and grading levels between project centres in at least some vocational areas.
(Wolf, Burgess, Stott, & Veasey, 1994, page 5)
On the other hand, it is also fair to conclude that some of the earliest problems would certainly have been due to rushed implementation, combined with insufficient piloting and insufficient attention to upskilling centres and other participants.
In addition, as for NVQ rollout, it seems safe to conclude that some of the problems that beset GNVQs were due to the particular version of the CASLO approach that was adopted. The complex CASLO-based grading scheme would certainly fall into this category.
Finally, it is important to stress that – despite all of the challenges associated with the rollout of GNVQs across multiple models – the qualification was reasonably popular with both teachers and students. Inspection reports consistently concluded that students who adjusted to the GNVQ approach actually performed very well, which included Advanced GNVQ students performing at a level comparable with their A level peers. Furthermore, it was not until almost all of the remnants of the CASLO approach had been eliminated from the model that the qualification ultimately failed. Its assessment regime was still judged to be excessively complex, bureaucratic, and hard to understand, yet its purpose was no longer clear, and its popularity had declined. This certainly points to wider problems for which it would be unfair to blame the CASLO approach.[footnote 44]
BTECs
The Haslegrave report anticipated that a single national council would be established at some point in the future, and this happened in 1983 when the TEC and the BEC merged to form the Business and Technician Education Council (which we shall refer to as ‘the Council’ in subsequent sections).[footnote 45] BTECs were to become the largest and, for many, the most familiar brand of CASLO qualification.[footnote 46]
In the autumn of 1984, following a public consultation that had been conducted during the spring, the Council published a document that set out its ‘Policies and Priorities into the 1990s’ (BTEC, 1984). It proposed to consolidate the strengths of TEC and BEC provision and to eliminate any weaknesses.
The Council would continue to exercise its authority by establishing qualifications and qualification standards, by approving centres, and by validating courses – just as the TEC and the BEC had done previously. However, there would be a number of evolutionary changes reflecting developments in industry and commerce, new approaches to pedagogy and assessment, and lessons learnt from monitoring, evaluation, and review. Five principles that underpinned existing TEC and BEC qualifications were formally endorsed:
(a) that one role of a validating body is to promote the important partnership between education, employers and professional interests in designing, developing and reviewing courses and units;
(b) that the main value of vocational study is demonstrated by what a student can subsequently do and achieve: this requires BTEC to specify the curriculum in terms of the intended outcomes of students’ learning;
(c) that the expectation is that a student who is recruited with integrity to a course should, with diligent study and application, attain a qualification;
(d) that the assessment of a student is subordinate to, but supportive of, the purpose of a course, so that assessment confirms achievement of learning;
(e) that the appropriateness of a programme of study lies principally in its relevance as a preparation for success at work, with progression to other studies being important but normally subordinate.
(BTEC, 1984, page 10)
The Council confirmed that units would continue to be designed around “knowledge and skills which the student must attain” (BTEC, 1984, page 13), although it committed to reviewing the way that learning outcomes were articulated.
Without wanting to impose uniformity for its own sake, the Council committed to exploring options for improving synergy across its commercial and industrial qualifications. This included moving towards a single unit and course structure, and adopting a common grading structure across National and Higher National qualifications (which involved grading units but not the overall qualification). Indeed, BTEC had already decided that all higher qualifications would reclaim the Higher National Certificate (HNC) and Higher National Diploma (HND) nomenclature, which had been associated with earlier (superseded) qualifications (Bourne, 1984).
The following assessment policies are worth highlighting:
82 Assessment is part of the learning process. It should be related both to the aims of the course as a whole and to the objectives of the course’s individual components. Assessment confirms the outcome of learning and is the professional responsibility of the teacher.
83 The assessment methods most appropriate to the units and courses are to be used. These may include examinations, tests, vivas, practical work, projects and assignments.
84 The assessment should relate to the student’s work throughout the course and should cover all the main elements of study.
85 There should be a sensible balance between intermediate and final assessment, and between formal examinations and other approaches.
(BTEC, 1984, page 19)
The centrality of real-world skills development was underlined by BTEC Chairman Neale Raine when he explained that: “the test of vocational education must be what a person can do as a result – not just what can be repeated in a written examination” (Raine, 1984, page 74). He lamented the tradition of forcing learners to acquire knowledge that could only be justified on the basis that it happened to appear in the syllabus of an examination course.
In 1986, the Council published general guidelines on ‘Assessment and Grading’, ‘Teaching and Learning Strategies’, and ‘Common Skills and Core Themes’. Principles outlined in these documents captured the emerging philosophy of BTEC qualifications, which embraced heavy use of projects and assignments, team working, work-related problem solving, active involvement in own learning, plus stimulating and personalised teaching and learning strategies.[footnote 47]
Useful insights into these approaches were provided by an NFER evaluation of BTEC Nationals in Business and Engineering that was conducted during 1987 (FEU & BTEC, 1990). The study found much good practice in adapting to this relatively new philosophy, but also many areas for concern. For instance, although the outcome-based approach to qualification design foregrounded competence, applied knowledge, and skill – facilitating an assignment-led approach and discouraging overload with non-essential knowledge – there was still a marked tendency for tutors to fall back on more traditional didactic approaches:
‘Getting through the course’ and giving ‘needed’ theoretical knowledge were often viewed as paramount, leaving little time for activity-based work. There was some evidence that theory is often included through habit rather than because it is essential for competence. This seems to be, at least in part, a habit inherited from past course objectives and difficult to break.
(FEU & BTEC, 1990, page 3)
The BTEC qualification suites proved to be very popular. Tables published in ‘BTEC Bulletin No. 3’ (BTEC, 1985) showed that BTEC registrations had increased at all 3 levels – General, National, and Higher National – from 1981 (151,660) to 1983 (181,513).[footnote 48] During the 1983 academic year, just over a sixth of registrations were for Generals, just under a third were for Highers, and just over a half were for Nationals. Across the sectors, 40% of registrations were in Business & Finance, 29% were in Engineering, 8% were in Construction, and the remaining sectors accounted for no more than 6% of registrations each. The 16 to 19 General award was soon to be replaced by the BTEC First. Registrations for Firsts, Nationals, and Higher Nationals continued to rise throughout the 1980s, and by the end of the decade their influence was such that it could be said that they “formed the heart of the curriculum of most further education colleges” (Higham, Sharp & Yeomans, 1996, page 83).
The RVQ, NVQs, and GNVQs
Just 7 months after the Council had set out its stall on the future of BTEC qualifications, ministers announced a wide-ranging Review of Vocational Qualifications, which would address what was described as a qualification ‘jungle’ (then presided over by a large number of examining and validating bodies, professional bodies, and other standards-setting bodies, including the BTEC). The scope of this review would range from qualifications that targeted 16-year-old school leavers to qualifications pitched at Higher National level.
Although, in theory, the review was intended to span all vocational qualifications in this range, the inclusion of BTECs was not uncontroversial. This appeared to reflect tension between the Department of Employment (which was driving the review) and the Department of Education and Science (which oversaw the Council). The chair of the review, Oscar De Ville, later suggested that the DES had influenced the Council to steer clear of this employment-driven initiative (Hargraves, 2000). Bear in mind that the TEC and the BEC had been established specifically to co-ordinate and sustain subsequent reforms on a national basis, so the very idea of an independent Review of Vocational Qualifications might have seemed like a vote of no confidence in the newly merged Council (Raggatt & Williams, 1999).
Once responsibility for developing the National Vocational Qualification framework had passed to the NCVQ, it became increasingly clear that existing qualifications would have to be radically reformed if they were to be accredited. Although, in theory, the NVQ model was not wedded to a specific delivery approach, in practice, it was clearly aligned to workplace training and assessment (as envisaged by Gilbert Jessup). As BTECs were normally provided in colleges rather than workplaces, this immediately put them under pressure. Was the NVQ framework designed to exclude or occlude the college-based space (midway between academic A levels and occupational apprenticeships) that BTEC Nationals had so effectively occupied in recent years? It was unclear.
In 1989, BTEC chief executive, John Sellars, was openly critical of early NVQ implementation efforts, which he claimed had resulted in narrow and mechanistic qualifications (Sharp, 1998). The Council was not entirely persuaded by the reform of vocational qualifications, despite other awarding organisations, including City & Guilds and the RSA, having largely bought into it. Nevertheless, the NCVQ and the BTEC published a joint statement in September 1988 explaining how they would work together. The Council then began submitting BTEC Firsts, Nationals, and Higher National awards in Business and Finance and Public Administration for conditional accreditation, explaining that:
Although the representation of course structures will obviously be affected by the project it is not expected to change the major principles and methods by which courses leading to BTEC awards are delivered.
(BTEC, 1989, page 2)
Unfortunately, these early awards were not reaccredited and relations between the Council and the NCVQ remained strained. Under political pressure, a formal agreement was reached in October 1990 whereby the Council agreed to revise its awards to satisfy NVQ accreditation criteria, while still retaining broader, more educational content characteristic of BTECs (Raggatt & Williams, 1999).
As it turned out, the middle route was not eliminated by the NVQ framework but reinforced, with the introduction of General National Vocational Qualifications. Yet this posed a different kind of threat. GNVQ specifications were to be drawn up by the NCVQ, and it was expected that the BTEC, City & Guilds, and the RSA would award them (Sharp, 1998). Although the expediency of conditional accreditation had given the impression that the BTEC model might continue under the new framework – and although early conversations around ‘general’ NVQs seemed to have left space for integrating BTEC Nationals within the framework (Raggatt & Williams, 1999) – it soon became clear that the BTEC model would have to be replaced by either the NVQ model or the GNVQ model. Indeed, the Council was required to report to the NCVQ annually on its progress in removing BTEC awards (Sharp, 1998).
The NCVQ had no formal powers to require awarding organisations to replace their existing qualifications with new ones that satisfied their accreditation criteria. But the Secretary of State for Education was able to exert pressure via funding authorisation. The 1992 Further and Higher Education Act instructed the Further Education Funding Council to support courses leading to NVQs and GNVQs. Having said that, Schedule 2A of this act permitted the Secretary of State to authorise funding for other vocational qualifications too. During the mid-1990s, this included BTEC awards, City & Guilds awards, RSA awards, Pitman awards, and others too. An extended quotation from Raggatt & Williams provides useful insights:
The awarding bodies found that their non-N/SVQ and GNVQ products attracted a considerable and continuing demand, and accordingly continued to offer them. […] BTEC had agreed to phase out its First and National Diplomas (and the part-time Certificates) in favour of GNVQs. Yet demand for these products remained high; indeed it was rising for the National awards. Perhaps unsurprisingly, then, BTEC, which had become independent from the government in October 1993 and was now operating as a more explicitly commercial organization, chose to keep its National Diplomas and Certificates as well as GNVQs. It put up various justifications for this reversal of policy, including the need for more time to effect the changes needed and the claim that GNVQs were not adequate replacements for some of its products […]
While the government could have made greater use of its powers to remove qualifications from the FEFC’s approved list, it was recognized that, because the market was expressing a clear demand for other awards, they clearly possessed attributes that N/SVQs and GNVQs did not.
(Raggatt & Williams, 1999, page 152)
Sharp (1999) noted that, by the mid-1990s, GNVQs and BTECs were attracting students with somewhat different aspirations, quoting statistics that showed that Advanced GNVQ students were considerably more likely than National Diploma students to stay in education, while Diploma students were considerably more likely to progress directly into employment. This provided an impetus to retain the BTEC route. Into the 2000s, the introduction of GNVQ model 4, the AVCE, provided further impetus to continue developing the BTEC National approach, particularly as the style of the new AVCE was now very much closer to the A level approach.
Whereas, at the outset of the 1990s, the NCVQ anticipated that all technical and vocational qualifications would eventually be subsumed within the NVQ framework, by the mid-1990s it was clear that distinctive awards from the major players – including the BTEC, City & Guilds, the RSA, and others – were continuing to attract large numbers of candidates. During the early-2000s, the 2002 BTEC National suite proved to be very popular, attracting many students back from AVCEs, particularly in curriculum areas like Creative Arts & Media, and Sports (Hodgson & Spours, 2003), as well as in Business (Torrance, Colley, Garratt, et al, 2005). The flight from AVCE also created a new client base for BTEC awards, comprising schools that had never offered vocational courses prior to the introduction of GNVQs.
Evolving qualification models
BTEC awards were conceived very differently from NVQs, being underpinned by a distinctive philosophy of teaching and learning. The following sections explain how TEC and BEC precursors to the CASLO approach were to morph into CASLO qualifications as the BTEC model evolved.
It is important to remember that the BTEC was established as a validating body, to provide national credibility for the work of local colleges. It achieved this through 3 main activities: validation, moderation, and monitoring (see BTEC, 1989). Validation established a ‘contract’ between the centre and the Council, to provide reassurance concerning delivery quality. It considered course aims and structure, how the course would be managed, run, and evaluated, and the involvement of local employers and organisations. Moderation was designed to observe this contract in action, providing a quality control function as well as a quality assurance function with effective feedback at its heart. It considered course management, teaching and learning strategies, assessment, course review and evaluation. Finally, monitoring was conceived as a quality audit of processes and decisions. This included analysis and evaluation of the work of centres, and of student achievement, via both random and non-random sampling.
In short, while the Council was responsible for the BTEC qualification model – its structure, content, standards, and approach – centres were responsible for developing BTEC programmes, including both teaching and assessment. In theory, BTECs offered considerable potential for creative course design. In practice, however, even during the early days, it is unclear to what extent this flexibility was capitalised upon. Fisher went so far as to claim that the opportunity for creativity and freedom may actually have been less than for a typical A level syllabus of its time:
Centres did not at the National level, at least in any meaningful way, have freedom regarding the content and implementation of the curriculum but merely explained to BEC/BTEC how they intended to operationalise that that was very clearly laid down. While centre-devised modules could be written, these were rare and had to be produced in accordance with specified formats (BEC, 1977c) and were often substantially amended by BEC/BTEC before receiving approval.
(Fisher, 2004, page 248)
Roy Fisher was a history graduate with considerable experience as a lecturer and curriculum developer in post-compulsory education. He was awarded a PhD in 1999 for his investigation into how the BEC-BTEC model evolved over time, studied through the lens of the National award in Business (see Fisher, 2003). His analysis focused upon 3 ‘generations’ of the model, from 1979 to 1992, just prior to the arrival of the Advanced GNVQ in Business. Table 2 is adapted from the appendix to this article and from a similar table in his PhD thesis (Fisher, 1999, page 196).
BEC National Diploma in Business Studies | BTEC National Diploma in Business and Finance | BTEC National Diploma in Business and Finance |
---|---|---|
Introduced Sept 1979 | Introduced Sept 1986 | Introduced Sept 1992 |
Module-based | Unit-based | Module-based |
General Objectives + Learning Objectives | General Objectives + Indicative Content | Outcomes + Performance Criteria + Range Statements + Evidence Indicators |
6 core modules + 6 option modules | 5 core units + 7 option units + business-related skills | 8 core modules + 8 option modules + common skills |
In-course assignments + internally set end-of-module exams (BEC moderated) | In-course assignments + internally set end-of-unit ‘final assignments’ (BTEC moderated) | In-course portfolio building recording evidence of achievement, including assignments, simulations, work placement tasks, etc. (BTEC verified) |
Table 2. Evolution of the BTEC Business National (from Fisher, 2003)
Fisher commented positively on the transition from Generation 1 (BEC) to Generation 2 (BTEC), which reduced the prescriptiveness of the specified learning outcomes (see also Ellison, 1987). He provided an example of an information technology outcome from a Generation 1 module, which listed specific outcomes below a general one:
- C Understand the importance of the computer as an information tool and be aware of its impact on administrative operations
  - C1. describe the main characteristics of the computer, including both hardware and software, recognizing the special need for relevant and accurate input data;
  - C2. identify the main commercial applications of computers from routine data processing to the provision of management information;
  - C3. outline the way in which specific administrative procedures have changed in response to the introduction of computer systems.
He contrasted this with essentially the same outcome from the Generation 2 version of the reconfigured module:
- D Assess the uses of electronic technology as a means of communication
This single, general outcome from the 1986 module was supplemented by 5 areas of ‘indicative content’ such as ‘main commercial applications of computers for routine data processing’ (Fisher, 2003).
Although all 3 generations were structured around learning outcomes of one sort or another, it was the 1992 model that unambiguously embraced the CASLO approach, influenced, of course, by the new NVQ regulations and negotiations with the NCVQ. Fisher’s comparable Generation 3 example now read like this:
- Outcome 7.3 Assess the applicability of, and where appropriate use, relevant technology in the operation of administrative procedures and systems

Performance criteria

- a major applications of technology in administrative operations identified and classified
- b factors affecting efficient and effective use of technology assessed
- c technology required for particular administrative operations recognized and, where appropriate, used effectively
- d introduction of new technology examined and evaluated
Fisher argued that Generation 2 represented the ‘Golden Age’ of BTECs, with broader objectives and merely ‘indicative’ content permitting an element of flexibility for staff and students. His analysis of the transition to Generation 3 was damning:
Between 1979 and 1992 further education colleges experienced a period of curriculum implementation and development when learning and teaching styles were transformed from the ‘chalk and talk’ model to student-centred approaches that were integrated and coherent (for a discussion of this see Fisher, 2003). The influence of the NCVQ in enforcing the instrumentalism of competence would replace this with a fragmented portfolio culture that buckled under the weight of its own monitoring and recording fetish.
(Fisher, 2004, page 252)
Carter (2012) described a similar cross-generational progression within Engineering qualifications. This included an extract from a 1986 BTEC National Science Level II unit which echoed the transition to general objectives and indicative content in the Business National. The unit was intended to occupy 60 hours in a part-time course (complemented by work-based learning) or up to 90 hours in a full-time course, and was to be delivered primarily through practical laboratory experimentation and modelling. It was suggested that the unit might be assessed by practical assignments (50%) and other types of test (50%).
The first element of the unit (which comprised 5%) was specified as follows (the numbers indicated general objectives, and the letters indicated indicative content):
1. Organise elements and information relating to engineering problems by identifying internal and external systems
a. system boundary
b. sub-system
c. interactional paths
d. effect of component interaction, eg interpretation of symptoms in fault diagnosis
2. Identify the significant features of systems and represent them by block diagrams
a. inputs and outputs
b. directions of signal flow
c. concept of signal modification and conversion
Systems referred to might include, for example, diesel engine-generator set, air compressor, machine tool, robotic arm, manufacturing coil, or vehicle drive system.
(Carter, 2012, page 226)
BTEC Firsts and Higher Nationals also came to adopt the CASLO approach. This can be illustrated using information from the BTEC Higher National Mechanical Engineering specification that was published for first teaching in September 2000, to be regulated under the new National Qualifications Framework.
These qualifications were offered as either Higher National Certificates or Higher National Diplomas, and both of these courses were designed to be taken over 2 years. Built from units of approximately 60 Guided Learning Hours each, the part-time HNC required 6 mandatory units plus 4 optional ones, while the full-time HND required the same 6 mandatory units plus 10 optional ones. HNCs were designed for those in work, who (it was assumed) would be gaining extra experience that would roughly equate to the extra learning time in HNDs (Judith Norrington, personal communication). All units were structured according to the CASLO approach, in terms of both learning outcomes and assessment criteria, as illustrated in Table 3.
Learning Outcomes | Content | Assessment Criteria |
---|---|---|
1. Select and apply costing systems and techniques | Costing systems: job costing, process costing, contract costing; Costing techniques: absorption, marginal, activity-based; Engineering business functions: design, manufacturing, engineering services; Measures and evaluation: break-even point, safety margin, profitability forecast, contribution analysis, ‘what if’ analysis, limiting factors, scarce resources | Identify and describe appropriate costing systems and techniques for specific engineering business functions; Measure and evaluate the impact of changing activity levels on engineering business performance |
2. Analyse the key functions of financial planning and control | Financial planning process: short, medium, and long-term plans, strategic plans, operational plans, financial objectives, organisational strategy; Factors influencing decisions: cash and working capital management, credit control, pricing, cost reduction, expansion and contraction, company valuation, capital investment; Budgetary planning: fixed, flexible and zero-based systems, cost, allocation, revenue, capital, control, incremental budgeting; Deviations: variance calculations for sales and costs, cash flow, causes of variance, budgetary slack, unrealistic target setting | Explain the financial planning process; Describe the factors influencing the decision-making process during financial planning; Examine the budgetary planning process and its application to financial planning decisions; Apply standard costing techniques and analyse deviation from planned outcomes |
3. Apply basic project planning and scheduling methods to a specified project | Project resources and requirements: human and physical resource planning techniques, time and resource scheduling techniques, Gantt charts, critical-path analysis, computer software packages, work breakdown structure, precedence diagrams | Establish the project resources and requirements; Produce a plan with appropriate time-scales for completing the project; Identify human resource needs; Identify approximate costs associated with each stage of the project |
Table 3. Outcomes from the 2000 Mechanical Engineering Higher National Unit 1 (Business Management Techniques)
For each core and optional unit, a content list was provided for each learning outcome. Although units were designed to be free-standing, centres were encouraged to be innovative in designing programmes that enabled integration and flexibility within and across outcomes from different units. All of the units were designed to recognise the importance of skills development through the integration of Common Skills.
Again, in accordance with the CASLO approach, students were required to achieve all of the specified learning outcomes (and assessment criteria) to be awarded a unit pass. Higher grades were also available for each unit, albeit determined using generic grading criteria rather than unit-specific ones. These were applied to the totality of assessment evidence provided for the unit. Centres were encouraged to incorporate a variety of traditional and innovative assessment methods, including case studies, assignments, time-constrained assessments, and work-based projects.
Whereas qualifications in the Higher National suite tended to grade units on the basis of generic criteria, qualifications from the National suite tended to take a different approach, specifying unit-specific criteria for pass, merit, and distinction. This is illustrated in Table 4 with the grading grid from Unit 1 of the BTEC Edexcel Level 3 in Business, which had been introduced for first teaching from September 2007. To be awarded a merit grade on this unit, students needed to satisfy all 3 merit criteria and all 5 pass criteria. If they also satisfied the single distinction criterion, then they would be awarded a distinction grade (a sketch of this decision logic follows Table 4).
Just like the 2000 Higher National in Mechanical Engineering, this 2007 National in Business also included a content list, bespoke to each learning outcome of each unit. Its specification also provided detailed unit-specific guidance for tutors on:
- delivery approaches
- assessment approaches (with criterion-specific tips) [footnote 49]
- links to occupational standards, other BTEC units, and other qualifications
- essential delivery resources (access to computers, books, and so on)
- indicative reading for students (textbooks, journals, websites)
- links to key skills
Learning outcomes | Pass criteria | Merit criteria | Distinction criteria |
---|---|---|---|
1. Understand the different types of business activity and ownership; 2. Understand how the type of business influences the setting of strategic aims and objectives; 3. Understand functional activities and organisational structure; 4. Know how external factors in the business environment impact on organisations | P1 describe the type of business, purpose and ownership of two contrasting organisations; P2 describe the different stakeholders who influence the purpose of two contrasting organisations; P3 outline the rationale of the strategic aims and objectives of two contrasting organisations; P4 describe the functional activities, and their interdependencies in two contrasting organisations; P5 describe how three external factors are impacting upon the business activities of the selected organisations and their stakeholders | M1 explain the points of view from different stakeholders seeking to influence the strategic aims and objectives of two contrasting organisations; M2 compare the factors which influence the development of the internal structures and functional activities of two contrasting organisations; M3 analyse how external factors have impacted on the two contrasting organisations. | D1 evaluate how external factors, over a specified future period, may impact on the business activities, strategy, internal structures, functional activities and stakeholders of a specified organisation. |
Table 4. Grading grid from the 2007 Business National Unit 1 (Exploring Business Activity)
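Read as a decision procedure, the grading grid in Table 4 implies the logic sketched below. This is a hedged illustration rather than anything taken from the specification itself: the criterion labels follow Table 4, but the function and its set-based formulation are assumptions introduced purely to show how the rule works.

```python
def unit_grade(criteria_met: set[str]) -> str | None:
    """Sketch of the 2007 Business National Unit 1 grading rule, as
    described in the text: each grade requires every criterion at that
    level plus every criterion at the levels below. Returns None when
    the unit is not achieved.
    """
    pass_criteria = {"P1", "P2", "P3", "P4", "P5"}
    merit_criteria = {"M1", "M2", "M3"}
    distinction_criteria = {"D1"}

    if not pass_criteria <= criteria_met:
        # Mastery requirement: all pass criteria must be satisfied.
        return None
    if not merit_criteria <= criteria_met:
        return "pass"
    if not distinction_criteria <= criteria_met:
        return "merit"
    return "distinction"


# A student satisfying P1 to P5 and M1 to M3, but not D1, earns a merit:
print(unit_grade({"P1", "P2", "P3", "P4", "P5", "M1", "M2", "M3"}))  # merit
```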
Finally, it is worth noting how the idea of locally-devised programmes – which was the principle that led to the TEC and the BEC being established as validating bodies – gradually fell out of favour over time. For instance, the specification for this 2007 BTEC Business National explained that centres would normally be able to meet local needs by selecting the most appropriate of the specialist optional units on offer. In certain circumstances, they might be able to make a case for incorporating units from other BTEC National specifications. But only in exceptional circumstances would they be permitted to develop their own units. Permission would only be granted on the basis of strong evidence that local needs could not be met using standard units.
Evaluations
The QCA (2005) investigated standards in the 2002 suite of Nationals, focusing on awards in Media, Business, and Maintenance and Operations Engineering. It noted evidence of good and poor practices, but concluded that national standards were being maintained overall with the transition to the new qualifications. The QCA report commented on 2 aspects of qualification design that set BTECs apart from NVQs in their adoption of the CASLO approach. The first was the inclusion of an Integrated Vocational Assignment, a holistic unit that aimed to synthesise learning from multiple units, to help students appreciate “the seamless relationship between units in an applied vocational context” (QCA, 2005, page 17). As for all of the other units, this was assessed by the centre against specified outcomes and criteria, and quality assured via internal and external verification. The second was unit grading, which required criteria to be specified at multiple levels for each unit outcome – pass, merit, and distinction – in the form of a grading grid. In theory, higher criteria were intended to reflect a qualitative improvement in performance, although the report noted that they sometimes required additional tasks to be undertaken, which was not intended. The report recommended a number of steps that (BTEC owner) Edexcel could take to improve the suite, including:
- clearer guidance on grade differentiation, together with a review of units to ensure qualitative rather than quantitative reward of performance
- regional events for internal and external verifiers, to standardise and maintain national standards and to provide contextualised guidance (focused on sufficiency of evidence, assessment design, grading and differentiation)
- good quality exemplar assignment material and specific guidance on how centres can develop the assignment writing skills of their teaching staff [footnote 50]
Challenge in interpreting, and differentiating between, grading criteria was a consistent theme in the QCA report, across all 3 qualifications. The percentages of centres experiencing difficulties of this sort were 68% for engineering, 31% for business, and 13% for media. Problems included business centres interpreting the terms ‘analyse’ and ‘evaluate’ in different ways, engineering centres expressing concern over vague or unclear criteria, and media centres noting repetition and overlap of criteria across units.
Challenge in interpreting, and differentiating between, grading criteria was also a theme in Ofqual’s follow-up monitoring report on the Edexcel Level 3 BTEC National Certificate in Manufacturing Engineering, which had been introduced in September 2007 prior to the introduction of the QCF (Ofqual, 2010a). Improvements were built into revised unit specifications that were being prepared for accreditation to the QCF. They provided a clearer structure to the units, and demonstrated more clearly what was required for learners to achieve a pass, merit, or distinction.
Conclusion
By the turn of the millennium, the CASLO approach had become the high-level design template for BTEC qualifications across all 3 of its principal suites: First, National, and Higher National. Unlike the NVQ approach, however, the ‘BTEC way’ paid more than lip service to curriculum issues, and BTEC programmes were associated with a strong philosophy of teaching and learning that elevated the role of projects, problem solving, team working, and student ownership of the learning journey.
Although it bore more than a passing similarity to the GNVQ model, the BTEC model proved to be far more successful. Ironically, while successive iterations of the GNVQ model became less committed to the CASLO approach, successive iterations of the BTEC model became more committed. When the non-CASLO AVCE was finally withdrawn, much of its market share went to the full-CASLO BTEC National, which then went from strength to strength.
This is not to say that the BTEC model was immune to CASLO-related problems of the sort that beset NVQs and GNVQs. For instance, case studies of particular BTEC Nationals have raised concerns related to the risk of poor-quality teaching and learning associated with the detailed specification of outcomes and criteria (see, for example, Ecclestone, 2010; Hobley, 2016; Carter & Bathmaker, 2017). Likewise, QCA evaluations have raised concerns related to the risk of BTEC standards not being applied consistently. However, by and large, BTECs have not received anywhere near the level of public critique that NVQs and GNVQs received in relation to their adoption of the CASLO approach. On the one hand, this may be at least partly due to a perception that the TEC, the BEC, and the BTEC councils were keen to collaborate with the educational establishment, while the opposite perception seems to have been true of the NCVQ, particularly in relation to NVQs. On the other hand, it is also probably at least partly due to their rollout never having been as shambolic as was the case for both NVQs and GNVQs.
Genesis
The CASLO approach took root in England as the NVQ framework was rolled out. Initially, it was anticipated that all technical and vocational qualifications would be accredited to the new framework and would come to embrace this new approach to qualification design. Indeed, the principal architect of the NVQ system, Gilbert Jessup, anticipated a time when all qualifications would adopt this new approach – technical, vocational, and general alike – which would embed it at the heart of education and training in England.
Although the CASLO approach was ultimately deemed unsuitable for general qualifications, the NCVQ did locate the approach at the heart of a new, middle route qualification, the General NVQ, or GNVQ. Rollout was highly problematic for both GNVQs and NVQs, yet with different consequences. The NVQ model evolved over time, but it remained firmly grounded in the CASLO approach. The GNVQ model also changed over time, but more radically. Having become increasingly general, it was ultimately replaced by the AVCE, which was then replaced by the Applied A level, and all vestiges of the CASLO approach were jettisoned.
The integration of the CASLO approach within BTECs provides for a more subtle and intriguing story. Both the TEC and the BEC had pioneered an outcome-based approach to qualification design during the 1970s. When the BTEC was established, in 1983, still prior to the introduction of NVQs, the new Council continued to promote an outcome-based approach, recognising this as a solution to problems identified with qualifications of the past, including some arbitrariness of syllabus content.
Despite pioneering outcome-based approaches, neither the TEC nor the BEC appeared to require stringent application of the mastery principle. The general idea of mastery was felt to be important, but tricky to operationalise. They also grappled with the challenge of how to pitch learning outcomes at an appropriate level of generality. The earliest TEC specifications presented fairly specific outcomes, while the earliest BEC outcomes were pitched at a slightly higher level of generality.
The most general outcomes appeared within the 1986 BTEC specifications, which applied a common approach across both technician and business awards. Outcomes were now specified at a high level, with each outcome linked to indicative content. The 1986 model also emphasised the importance of criterion-referencing, which seemed to recommend applying the mastery principle more stringently than in previous years. As such, the essence of the 1986 BTEC model was quite similar to the CASLO approach. However, as with the earlier TEC and BEC models, the 1986 model still appeared to permit a certain amount of flexibility in the approach that a college might adopt to criterion-referencing (see BTEC, 1986). For reasons of this sort, we decided not to describe BTECs as the first CASLO qualifications of national prominence.
We do, however, see the CASLO approach unambiguously embedded within the 1992 BTEC model, with its centrally specified learning outcomes and performance criteria, and its stringent application of the mastery principle. Influenced, of course, by NVQ framework accreditation criteria, these BTEC specifications would seem to be the first to bear all of the hallmarks of the CASLO approach. The CASLO approach would soon become one of the defining features of the ‘BTEC way’ across all 3 principal suites.
-
The idea of ‘standards of a new kind’ was first mooted in the 1977 MSC report ‘A Programme for Action – Training for Skills’ (Raggatt, 1991). A response to criticism of the extant system of apprenticeship-by-time-serving, the new approach would be based on clearly defined and testable standards of occupational competence, which would help to improve access for both young people and adults. Plans were laid out in more detail in ‘A New Training Initiative: A Consultative Document’ (MSC, 1981a). ↩
-
The MSC had been influenced by work on modularisation and credit undertaken by the Council for National Academic Awards and by the Further Education Unit, which had been influenced by developments in the USA (Tim Oates, personal communication). ↩
-
Williams & Raggatt described the Department of Education and Science as a “minor ‘bit player’ in the development of vocational education and training during the 1980s” (Williams & Raggatt, 1999, page 83). ↩
-
Later, we will see that a third dimension was subsequently specified for NVQs, to capture the expected range of application of competence. ↩
-
The nomenclature can be confusing, particularly when switching contexts from NOS to NVQ. In NOS nomenclature, elements of competence were nested within functional units. In this context, each element of competence was an occupational standard in its own right, as the terms were used interchangeably (Mansfield & Mitchell, 1996). In the qualification context, awards are often made at the unit level, which disposes us to think of the standard as a unit-level concept. Technically, though, NVQ standards were defined at the level of each individual element of competence, in terms of performance criteria. ↩
-
Consequently, the awarding organisations had no ownership of these standards and each National Occupational Standard was available to be incorporated within (equivalent) NVQs offered by multiple organisations. Wolf concluded that this led the ‘big 3’ vocational awarding organisations of the time – BTEC, City & Guilds, and RSA – to expand their activities, competing more overtly for trade instead of operating in a semi-monopolistic way (Wolf, 1995). ↩
-
By 1994, there were 160 Lead Bodies and 114 awarding organisations (FEFC, 1994a). ↩
-
Jessup (1991) identified 11 areas of competence, each subdivided into multiple subsectors: tending animals, plants and land; extracting natural resources; constructing buildings, highways and related structures; engineering; manufacturing processes; transporting; distributing and selling; providing leisure, accommodation and catering services; providing health, social and protective services; providing administrative and business support; developing and managing human resources. ↩
-
This proved to be a particular challenge for the Business and Technician Education Council, which sought accreditation for many BTEC qualifications (Raggatt & Williams, 1999). ↩
-
Mansfield & Mitchell (1996) drew a clear distinction between ‘range indicators’ (specified in occupational standards) that illustrate coverage for training purposes and ‘range statements’ (specified in qualification standards) that require coverage for assessment purposes. ↩
-
It is worth noting that this debate over the role of underpinning knowledge and understanding was distinct from a broader debate concerning the breadth of the competence model, which had already been a site of conflict between the Manpower Services Commission and the Further Education Unit. The FEU was prepared to support MSC endeavours to move towards a competence-based system, but only on the understanding that education should have a central role, requiring a broader competence model: “The quid pro quo of this is a wider definition of competence than that associated with working life: embracing formal and informal learning, and extending beyond occupational skills into life skills.” (FEU, 1984, i). ↩
-
Consequently, the elements of competence were intended to be general rather than highly specific, for example: “Edit existing text in a text processor.” or “Write a report which evaluates potential solutions against known technical limitations and user’s criteria.” (both taken from TA, 1988b, page 6). ↩
-
The place of knowledge and understanding in NVQ development is discussed thoroughly in Employment Department (1994). ↩
-
For instance: “But, in general, we believe that ‘competence’ is the embodiment of a mechanistic, technically-oriented way of thinking which is normally inappropriate to the description of human action, or to the facilitation of the training of human beings.” (Ashworth & Saxton, 1990, page 24). Or: “More specifically, in the English NVQ system, competence is understood as the performance of a narrow set of tasks to a defined standard, and is thus bound to and reflects particular outputs.” (Brockmann, Clarke, & Winch, 2009, page 790). Or: “Since learning outcomes are, by their nature, narrowly conceived, what they measure is also narrowly conceived. It follows that there are difficulties in specifying learning outcomes for activities that, by their nature, are broad in scope, require underpinning knowledge for their performance and more complex personal characteristics than simple, visually observable, skills.” (Brockmann, Clarke & Winch, 2008, page 106). ↩
-
For instance: “All this confusion and equivocation seems to be the outcome of attempting to capture and describe, in behaviourist terms, something which is essentially non-behaviouristic, namely the development of knowledge and understanding.” (Hyland, 1993, page 61). ↩
-
For instance: “Gilbert Jessup, the chief architect of NVQs, proudly wrote of doing away with ‘the syllabuses, the courses or the training programmes […]” (Smithers, 1997, page 56). ↩
-
Note that the idea of ‘qualification flexibility’ described in this section is different from the idea of ‘workforce flexibility’ that had featured heavily in the New Training Initiative reports of the early-1980s, which argued that workers and companies of the future needed to be flexible and adaptable to cope with the uncertainties of a rapidly changing world. Mansfield & Mitchell (1996) argued that the key to workforce flexibility was effectively specified training standards, based on broad role specifications rather than narrow task specifications. ↩
-
It is worth noting that flexibility was also consistent with certain educational movements of the period, including the drive for increased personalisation of learning and campaigns for retaining teacher control of the curriculum. ↩
-
Given the speed with which the NVQ system was up-and-running, it is interesting to note that the NCVQ still ended up being criticised for how slowly NVQs were being made available during the early years (Sharp, 1999). Williams (1999) explained that early expectations concerning rapid rollout, including among policy makers, were based on the assumption that many existing qualifications could be ported straightforwardly into the new framework, which proved not to be the case. ↩
-
Different sources record different values for the numbers of certificates awarded, while still indicating the predominance of certifications at levels 1 and 2 (see, for example, Field, 1995). ↩
-
The prevalence of external assessment within early NVQs is worthy of note in this respect. Steadman (1995) cited a study of the situation for NVQs as at October 1990. It indicated that almost all NVQs used supplementary evidence beyond observation of performance, and almost a quarter of unit assessments involved an externally set written test or exam. ↩
-
Note that Ron Dearing – in his ‘Review of Qualifications for 16-19 Year Olds’ (Dearing, 1996) – had echoed Gordon Beaumont’s expectation that there “must be an over-riding requirement to demonstrate rigour” in NVQ assessment (Beaumont, 1996, page 19). ↩
-
Note that the ‘core skills’ nomenclature formally morphed into ‘key skills’ in 1997 (Mansfield, 2004). ↩
-
The regulator subsequently published ‘Operating rules for using the term ‘NVQ’ in a QCF qualification title’ (Ofqual, 2008b) to explain how the transition from NQF to QCF regulation ought to be managed. ↩
-
The ‘Apprenticeship, Schools, Children and Learning’ Act of 2009 led to the Specification of Apprenticeship Standards for England (SACE) in 2010, without radically affecting this structure. ↩
-
It was certainly intended that: “performance criteria should be capable of distinguishing between satisfactory and unsatisfactory performance in the function covered by the NOS.” (UKCES, 2011b, page 13). ↩
-
Indeed, NOS were now defined as: “A statement of the standard of performance an individual must achieve when carrying out a function in the workplace, together with a specification of the underpinning knowledge and understanding.” (UKCES, 2011b, page 27). ↩
-
The authors noted that: “If qualifications are developed from NOS, it is necessary to insert certain verbs such as ‘explain’, ‘describe’, ‘list’ etc., but this is not required for the NOS itself.” (Carroll & Boutall, 2011, page 68). Yet, the simple addition of command verbs still does little to identify a boundary between having acquired the necessary level of knowledge and understanding and not having done so. This helps to bolster Gealy’s argument. ↩
-
The regulatory framework that underpinned NVQs (including the ‘Criteria for National Vocational Qualifications’ – Ofqual, 2011a) was withdrawn in 2015 and remaining NVQs were then regulated under Ofqual’s General Conditions of Regulation. Even prior to that, though, the NVQ system was gradually being dismantled, and many NVQs had already been replaced by or transformed into QCF qualifications. NVQ certifications in England fell from 77,580 in the 12 months up to quarter-4 of 2012 to 2,710 in the 12 months up to quarter-4 of 2015 (data taken from the Ofqual Analytics website). Ofqual still regulates a relatively small number of qualifications with ‘NVQ’ in the title, although NVQ is no longer recognised as a distinct qualification type. ↩
-
After all, colleges were not really in control of the curriculum before the introduction of NVQs, as syllabuses were traditionally specified by awarding organisations in partnership with multiple stakeholders, including employers as well as colleges. Indeed, even when empowered by the course validation model of TEC, BEC, and later BTEC awards, colleges often preferred to use centrally developed units (Cantor, et al, 1995). ↩
-
The suggestion that the NVQ model was ‘behaviourist’ seems to have become a matter of TVET dogma over the decades. See, for instance: Field (1991); Marshall (1991); Norris (1991); Ashworth (1992); Hodkinson (1992); Hyland (1993); Jones & Moore (1995); Tarrant (2000); Elliott (2001); Grugulis (2002); Halliday (2004); James (2006); Brockmann, et al (2009); Wheelahan, 2016; Murtonen, Gruber, & Lehtinen (2017). ↩
-
This is a curious literature, which is illuminated by an exchange between Hlebowitsh (1992), Kliebard (1995) and Hlebowitsh (1995), concerning an influential critique of the Tyler Rationale by Kliebard during the 1970s. ↩
-
Thus, GNVQs followed in the wake of pioneering programmes – such as the Certificate of Pre-Vocational Education and the Technical and Vocational Education Initiative – which had attempted to target ‘less academic’ young people who wanted to remain in education but who were not particularly well suited to A levels (Sharp, 1998). These qualifications, in turn, were influenced by an early report from the Further Education Unit (see Pring, 1995), which had been established in 1977 by the Secretary of State for Education and Science to facilitate a co-ordinated and cohesive approach to curriculum development in further education. Two years later, the FEU published a report on pre-employment courses for young people entering further education at 16. It was titled ‘A Basis for Choice’ because it aimed to develop a design template for pre-employment courses that would help young people to make an informed and realistic career choice (FEU, 1979). This would be facilitated by courses based upon a common core of learning – guaranteeing competence in basic skills – alongside vocational and job-specific studies. The report specified aims for this common core in terms of learning outcomes and learning experiences. Linked to a profile approach to reporting, this pre-employment course design template can also be seen as a precursor to the CASLO approach. ↩
- In his foreword to the white paper, Prime Minister John Major also announced the creation of an overarching Level 3 diploma: “With the introduction of a new Advanced Diploma, we will end the artificial divide between academic and vocational qualifications, so that young people can pursue the kind of education that best suits their needs. While A levels will remain the benchmark of academic excellence, we will raise the standard of vocational qualifications.” (DES, DE, & WO, 1991). Jessup (1993) explained that the purposes of the diploma were to signal parity of esteem for the vocational route and to broaden the post-16 curriculum more generally. Ultimately, the idea of an overarching diploma was abandoned. ↩
- Advanced GNVQs – more specifically, their 12 vocational units – were intended to be equivalent to 2 A levels (or 1 Level 3 NVQ). Intermediate GNVQs were intended to be equivalent to 4 or 5 higher-grade GCSEs (or 1 Level 2 NVQ). Foundation GNVQs were intended to be equivalent to 4 lower-grade GCSEs (or 1 Level 1 NVQ). In September 1995, Part One GNVQs were introduced (not to be confused with Part One NVQs). They were designed to provide a 2-year course for key stage 4 students or a 1-year to 2-year course for post-16 students. They required less teaching time than Foundation and Intermediate GNVQs, but had many characteristics in common with them (Frankland & Ebrahim, 2001). ↩
- Art & Design, Manufacturing, Leisure & Tourism, Business, and Health & Social Care. ↩
- This contrasted with essentially the same data for NVQs: 8% (BTEC), 24% (RSA), and 68% (C&G). Note that data for RSA and C&G related to entries, whereas data for BTEC related to awards (FEFC, 1997). ↩
- The NVQ model was not actually based upon continuous assessment per se, because there was no formal link between the chronology of learning and the chronology of assessment. Indeed, it would have been entirely in keeping with the model for all of the assessment to have occurred after all of the learning had been completed. However, it left open the possibility of a more extended, or continuous, assessment process where certain outcomes were mastered earlier or later than others. ↩
- Wolf (1998) described these simple tests (which played no part in grading) as little more than a concession to ministers, noting that the original NCVQ model for GNVQs was entirely portfolio based. ↩
- The idea of one-third of the evidence meeting criteria was not intended to be interpreted mechanistically, but heuristically. ↩
- The pilot of the fully revised GNVQ model began in September 1997 and reported 2 years later (FEFC & Ofsted, 1999). ↩
- Although not discussed in detail here, it is worth noting that Foundation and Intermediate GNVQs were not reformed on the same timescale, which meant that consultation exercises could draw upon experiences of AVCE implementation. In a report on exploratory work of this sort, the QCA noted: “There was a large body of opinion in favour of replacement qualifications having a compensatory assessment structure. It was regarded as a positive development within the 2000 GNVQ specifications. Many consultees were most concerned about having assessments that were fit for purpose as well as realistic and achievable. Some providers were concerned that too much teacher-led assessment would be unmanageable and that an appropriate balance between teacher assessment and external assessment would need to be achieved. A unitised approach was valued.” (QCA, 2003, page 12). Note that this work was undertaken in the context of a decision, in July 2000, by the Secretary of State for Education, to replace Part One, Foundation and Intermediate GNVQs with ‘GCSEs in vocational subjects’. Following concerns that these might not cater adequately for post-16 students, the decision to withdraw GNVQs was delayed until appropriate replacement qualifications could be identified (QCA, 2003). ↩
- The overall Diploma grade was determined on the basis of performance across Principal Learning components and the Project (Ofqual, 2008c). This involved aggregating points derived from marks; that is, each component was marked numerically, which precluded the direct grading of components. So, although there was a nod to the mastery principle in the requirement for candidates to pass all components of the Diploma, the CASLO approach did not feature in this model. ↩
- It is fair to say that, while the foregoing account has explained how the GNVQ-AVCE model changed over time, its account of why the model ultimately failed is limited. Responding to an early draft of this report, both Tina Isaacs and Barry Smith emphasised 2 key issues: debate over the nature of Part One (key stage 4) GNVQs, and the merger between the NCVQ and SCAA. Both of these were associated with classical views on qualification design becoming more forceful, resulting in the GNVQ model being bent into the shape of a traditional school-based qualification (see also Oates, 2010). In the case of Part One GNVQs, this ‘academic drift’ was linked to the need to comply with GCSE statutory orders, and to the inevitable challenge of qualifications being taught by school teachers who lacked experience in industry or commerce. ↩
- In 1991, this name was changed to the Business and Technology Education Council, at which point it was still a non-departmental public body. BTEC subsequently became wholly independent of government in October 1993 (Smith, 1994). In 1996, it merged with London Examinations to become Edexcel, which was later acquired by Pearson. The BTEC brand was retained throughout. ↩
- It is worth noting that only a few awards (including BTEC and NVQ) have ever entered the public vernacular as shorthand for a certain kind of vocational or technical qualification. ↩
- By 1986, the Council had established a Staff Development Unit to support colleges in getting to grips with these innovative teaching and learning approaches, which emphasised the role of teacher as facilitator and the role of student as explorer (Judith Norrington, personal communication). ↩
- These figures included both Certificate and Diploma registrations at all 3 levels. ↩
- For example: “In P4, explanations can be diagrammatic with suitable annotations as well as written or orally presented. The functional activity should be selected so that learners are able to demonstrate an understanding of the complexity and interdependency of functional areas and how they function in contrasting organisations.” ↩
- Activities and resources of this sort might have been more prevalent within earlier iterations of the BTEC model (Judith Norrington, personal communication). ↩