Research and analysis

Origins and Evolution of the CASLO Approach in England - Chapter 2: Pre-history

Published 18 November 2024

Applies to England

The National Vocational Qualification (NVQ) framework was developed during the late-1980s as a mechanism for ‘rationalising’ the so-called ‘jungle’ of Technical and Vocational Education and Training (TVET) courses and providers. We will argue that NVQs were the first CASLO qualifications of national prominence in England. To understand why the NVQ framework was introduced – and to understand the genesis of the CASLO approach more generally – we will need to appreciate the state of TVET provision during the 1960s and 1970s, and the role that TVET qualifications played within it.

The TVET training landscape changed significantly during the 1960s, as did the TVET qualifications landscape during the 1970s. Subsequently, the relationship between training and qualifications evolved significantly during the late 1980s, with the introduction of NVQs. This zeitgeist of change reflected increasing recognition that neither work-based (on-the-job) training systems nor college-based (off-the-job) qualification systems were functioning adequately, and that both needed to be fixed.[footnote 1]

Although training in England had traditionally been seen as a matter for industry and commerce, it had become clear by the end of the 1950s that this voluntaristic system – based largely on agreements between employer associations and trade unions (Hansen, 1967) – was not working, and that government would need to intervene to orchestrate a national solution (Sheldrake & Vickerstaff, 1987). Intervention in training was most evident during the 1960s as the 1964 Industrial Training Act rolled out. Intervention in qualifications was most evident during the 1970s as the Technician Education Council and the Business Education Council were established, and during the 1980s as the National Council for Vocational Qualifications was established.

Just as today, a distinction was drawn between professional qualifications and technical and vocational ones. Professional groups such as engineers, accountants, physicians, and so on – often with Royal Charters – had well-established entry requirements and strong links to universities that provided high-level qualifications tailored to those requirements. The qualifications at the heart of the present report, however, are the technical and vocational ones that were not provided by universities.

1960s

The following subsections describe the most prominent pre-university TVET qualifications of the 1960s, explaining how they were delivered and assessed. To help understand the origins of the CASLO approach, particular attention will be paid to problems that beset these qualifications. We will begin with a short discussion of the training context during the early 1960s, to which we will return at the end of the 1970s section because it constitutes a critical piece of the CASLO jigsaw puzzle.

Training

During the 1960s, many young people left school and entered the workforce at the age of 16, some having successfully completed General Certificate of Education Ordinary levels (GCE O levels) and some not. Many young people left school even earlier, at 15, typically with no formal qualifications.

Cantor & Roberts estimated that only a small proportion of school leavers entered some form of skills training during the early 1960s, perhaps fewer than 20% of boys and 3% of girls (Cantor & Roberts, 1972). Many trainees received no off-the-job training, and even those formally designated as apprentices often received inadequate training, as many employers saw apprenticeship as little more than a source of cheap labour, and as small firms were reluctant to invest in apprentices for fear of them leaving for bigger firms once qualified. In short, industrial training “was, to a large extent, obsolete and out-of-date” and “failing to produce the goods” (Cantor & Roberts, 1972, page 81).

These problems were recognised during the late-1950s and early-1960s within a series of policy reviews (including the Carr report: Carr, 1958) and policy statements (including the ‘Industrial Training’ white paper: Ministry of Labour, 1962). In response to this pattern of unco-ordinated and reluctant training provision by individual companies, the 1962 white paper identified 3 objectives for a national training policy (Sheldrake & Vickerstaff, 1987):

  1. to link training provision to wider economic and technological needs and developments
  2. to improve standards of training provision
  3. to spread the costs of training more evenly across companies

The subsequent ‘Industrial Training Act’ was enacted in March 1964, establishing a Central Training Council and associated Industrial Training Boards (ITBs). By the end of 1970, 27 ITBs had been created, representing industries as diverse as engineering, shipbuilding, agriculture, printing, catering, and knitting (Cantor & Roberts, 1972). The Boards comprised employers, trade union representatives, and educationists. Under the Act, the ITBs were required to:

  • publish recommendations on the nature, content, and length of training suitable in their industry, including associated further education
  • ensure that adequate facilities were available for the training required

Given the nature of the levy system set up to support this remit, Cantor & Roberts suggested that the function of the ITBs was, in effect, to persuade industry to train its employees better (Cantor & Roberts, 1972). This remit embraced industries with an established tradition of training (including engineering) and industries that did not have this tradition (including construction).

Although it has been criticised, and judged by some to have failed (see, for example, Sheldrake & Vickerstaff, 1987), the Act signalled a critical change of direction for intervention in training, and facilitated an increase in the quality and quantity of industrial training (Cantor & Roberts, 1972; Wheatley, 1976; Huddleston & Unwin, 2024). In subsequent sections, we will consider how the work of the ITBs related to the origins of the CASLO approach.

Apprentices

During the 1960s and 1970s, apprenticeship was the normal means of training for employment, based upon a more or less formal contract with an employer. Apprenticeship was the mechanism by which employers in many occupations exercised their generally accepted responsibility for training employees.[footnote 2]

Wheatley (1976) identified a variety of different forms of apprenticeship in England:

  • craft apprenticeships – for skilled manual occupations
  • technician (or technical) apprenticeships – for technician level occupations in industry
  • commercial apprenticeships – similar to technician apprenticeships, but in commerce rather than industry
  • student apprenticeships – for students at university or professional institution level, linked to sandwich courses (to qualify for membership of a professional institution)
  • graduate apprenticeships – for holders of a university degree or comparable qualification who were training in an industrial, commercial, or professional field (to qualify for membership of a professional institution)

A very large majority of apprenticeships were taken up as first employment by 16- and 17-year-old school leavers, and craft apprenticeships dominated the landscape. The apprenticeship model involved a combination of on-the-job training (provided by employers) and off-the-job training (provided by further education colleges). Off-the-job training mainly involved apprentices attending part-time day courses or block-release courses during working hours without loss of pay.

Colleges

Although the college system had already been ‘rationalised’ by the Ministry of Education by the 1960s, it was still possible to identify at least 5 broad categories: Colleges of Advanced Technology, Regional Colleges, National Colleges for specific industries, Area Colleges, and Local Colleges. These were in addition to Evening Institutes and English as a Foreign Language establishments (Pedley, 1964).

Those who attended college for off-the-job training tended to be classified into 1 of 4 categories, which reflected the ‘grade’ of their job – operative, craftsman, technician, or technologist (Peters, 1967).

Operatives were semiskilled workers who carried out specific operations using machinery or plant. Aspiring operatives would typically have left school at 15 and would generally not be employed as apprentices. They might be following a college course, but not necessarily one that led to a nationally recognised qualification.

Craftsmen were manual workers who carried out skilled practical tasks (for example, an engineering fitter, or a maintenance electrician).[footnote 3] Aspiring craftsmen would often be apprentices, attending college on a day-release scheme to study for a craft qualification.

Technicians were specialist assistants to technologists, requiring not only practical aptitude but also a good knowledge of relevant mathematics and science (for example, an assistant designer, an instrument artificer, or a skilled lab worker). Aspiring technicians would typically be apprentices. After having successfully completed a 5-year course of secondary education, they would attend college on a day-release, sandwich, or full-time scheme to study for a National award.

Technologists would have studied the fundamental principles of their subjects. They would be capable of initiating change, accepting a high degree of responsibility, and potentially pushing forward the boundaries of knowledge (for example, a university graduate in engineering). Aspiring technologists would typically be studying for a university degree, a Diploma in Technology, or a related qualification in a university or College of Advanced Technology.

Qualifications

Very many different kinds of qualification were offered during the 1960s.[footnote 4] In 1959, D.E. Wheatley, who was a Deputy Director at City & Guilds, published an authoritative overview of its qualifications, which he classified into 12 categories:

  1. Plant Operative
  2. Plant Operative, Higher Grade
  3. Junior Craftsman
  4. Average Craftsman
  5. Craftsman, Higher Grade
  6. Technician
  7. Technician, Higher Grade
  8. Technologist
  9. Threshold of Management
  10. Extension Subject
  11. Teacher’s Certificate
  12. Domestic Subject

Craft and technician qualifications were the pillars of TVET provision during the 1960s, particularly the Craft Certificates awarded by City & Guilds and the Ordinary and Higher Nationals awarded by Joint Committees. We will consider these 2 qualification suites shortly.

Entry to craft and technical courses was heavily triaged – to ensure that all students achieved as much success as their talents and inclinations permitted – with different pathways open to students entering after 4 versus 5 years of secondary education, preparatory courses where necessary, and ample opportunity for transfer across courses during the early stages (Pedley, 1964). Students embarking on the craft route after 5 years of secondary education might study for 3 years (part-time) before taking their Intermediate Certificate exams. If successful, they might then study for a further 2 years (part-time) before taking their Final Certificate exams. Similarly, students embarking on the technician route after 5 years of secondary education (and gaining 4 GCE O level passes) might study for 2 years (part-time) before taking their Ordinary National Certificate exams.[footnote 5] If successful, they might then study for a further 2 years (part-time) before taking their Higher National Certificate exams.

Major players

Although a large number of awarding organisations offered qualifications of this sort, some were more influential and dominant than others. City & Guilds of London Institute (hereafter City & Guilds) was the most important awarding organisation of this period, although other major players included:

  • the Regional Examining Unions (REUs) later known as the Regional Examining Boards (REBs)
  • the Royal Society of Arts (RSA)
  • the London Chamber of Commerce (LCC)
  • Pitman’s Institute

A book entitled ‘A Parent’s Guide to Examinations’ by F.H. Pedley (1964) is an excellent source of information on the range of TVET qualifications available during the 1960s, some of the most prominent qualification suites being the:

  • City & Guilds Intermediate, Final, or Advanced Certificate
  • City & Guilds Full Technological Certificate
  • Ordinary National Certificate (and Diploma)
  • Higher National Certificate (and Diploma)
  • Diploma in Technology

Pedley distinguished technical exams from non-technical ones, including exams in commerce and art. This resonates with the distinction between industry (technical) and commerce (business) that was later to structure recommendations from the Haslegrave committee, which we will consider shortly. Some of the most prominent non-technical exams included the:

  • Certificate in Office Studies
  • Certificate in Business Administration
  • Diploma in Management Studies
  • Diploma in Art and Design

Even today, it is notoriously hard to describe the TVET qualification landscape, which has evolved to cater for a wide variety of learners, with a wide range of learning needs, in a wide variety of learning contexts, across a wide range of sectors. The landscape of the 1960s was even more disparate and even harder to describe. Having said that, we can get a sense of the lie of the land by considering some of the most prominent TVET qualifications of the time and the learners that they catered for. Specifically, we will focus upon the:

  • very wide range of awards for craftsmen provided by City & Guilds, and the
  • more restricted range of national awards for technicians, including the National Certificates and National Diplomas at Ordinary and Higher level

Together, these qualification suites constituted the major pillars of college-based TVET provision in England during the 1960s. They were also the focus of much debate. The following sections describe these 2 major qualification suites and the debates that engulfed them.

Craft Certificates

During the 1960s, City & Guilds was the principal awarding organisation for craft qualifications. It was best known for its Intermediate Certificates and Final Certificates – which corresponded to basic and advanced craft syllabuses – although it also offered Full Technological Certificates, which were at technician level, as well as lower-level courses for operatives.[footnote 6] Cantor & Roberts described City & Guilds courses as the “meat” or the “staple diet” for the average student in further education (Cantor & Roberts, 1972, page 67).

City & Guilds qualifications were designed to respond to the distinct needs of each industry or setting, which meant that different courses might specify different entry standards, different course lengths, different numbers of grades, different amounts of practical work, and so on. Accordingly, there was no such thing as a ‘typical’ City & Guilds craft qualification (Wheatley, 1959; Cantor & Roberts, 1972).

City & Guilds differentiated between ‘basic’ courses (typically lasting 3 years) and ‘advanced’ courses (extending to 5). Learners who wished to follow an advanced course would first need to pass the Intermediate Certificate, before being allowed to work towards their Final Certificate. Peters (1967) noted that most craft courses were based on a time allowance of one day plus one evening per week, which amounted to around 240 hours per year (8 hours x 10 weeks x 3 terms).

Although City & Guilds awarded the Final Certificate in craft – and was the only organisation to do so – end-of-year exams leading up to the Final Certificate tended to be arranged by individual colleges or (where they existed) by Regional Examining Unions (REUs). To ensure consistency of approach, the colleges and REUs liaised with City & Guilds over entry requirements, course content, and exam standards. In fact, only a minority of craft apprentices progressed to the Final Certificate. Most finished their training at Intermediate Certificate level, after 4 or 5 years of study, to become competent ‘journeymen’.

City & Guilds believed that its primary purpose in holding exams was to promote the establishment of courses of study appropriate to the needs of industry. Wheatley proposed that the:

objective is to design a course in which the average student will find interest and stimulation and be able to make steady progress, so that he will be beneficially exposed to educational influences throughout the course and profit on both the technical and the general educational sides.

(Wheatley, 1959, page 41)

City & Guilds relied heavily upon advisory committees for guidance on preparing course syllabuses. These committees were widely representative of the education service (at national and regional level), employer associations, trade unions, and ITBs (Wheatley, 1976). They provided a forum for co-operation between industry and the education service in defining, monitoring, and developing further education courses. Syllabus content typically included: craft theory, practical workshop or laboratory activities, allied subjects, industrial studies, and general studies. Pedley noted that syllabuses were available for one shilling, providing colleges with details of content, expectations, recommended text books, teacher guidance, and so on (Pedley, 1964).

On assessment, Wheatley (1959) noted the importance of assessing practical ability. Practical tests were provided in addition to written exams. For instance, the Heating and Ventilating Operatives’ Practical Course (course 179) culminated in one written paper of 3 hours’ duration and one practical test of 5 hours’ duration. Pedley provided further insight into the assessment of craft courses by reproducing in full the 2 written papers from a City & Guilds (Intermediate) Craft Certificate in Plumbers’ Work. The first 2-hour paper comprised 25 questions, from which candidates were to answer 20. The first 3 questions read:

  1. State the normal height:
    (a) to the top front edge of a kitchen sink in a sink unit;
    (b) to the top front edge of a pedestal wash basin;
    (c) of a W.C. pan.
  2. If a rectangle is 4 ft by 3 ft, calculate the length of the diagonal.
  3. What is the difference between a “separate” and a “combined” system of underground drainage?
    (Pedley, 1964, page 164)

For the second 2-hour paper, candidates were provided with a drawing board, a sheet of paper, and logarithmic tables. They were required to answer 5 questions from 8, and the first 2 read:

  1. Describe, with the aid of sketches, the action of an automatic flushing cistern for use on a range of urinals.

  2. Define FOUR of the following terms: static head, vacuum, latent heat of fusion of ice, induced siphonage, specific gravity, maximum density of water. (Pedley, 1964, page 166)

An historical account of City & Guilds prepared (largely) by a former Secretary to the Institute, Peter Stevens, provides additional insight into its approach to grading craft exams (Stevens, 1993). He noted that, following the introduction of computer processing in 1968, a new grading scale had been adopted for City & Guilds certificates: passed with distinction, passed with credit, passed, or failed. Performance on individual papers was also recorded on a scale from grade 1 (high) to grade 8 (low), with grades 7 and 8 counting as fail grades.

It is worth noting the attention paid to tailoring syllabuses and assessments to the differing needs of different learner groups. For instance, when designing courses for low-level operatives (for example, boiler operatives) Wheatley noted that special attention should be given to:

(a) the provision of a course of limited duration (i.e. with the objective clearly visible ahead) and with a character and tempo attuned to the interests and capacity of unselected secondary modern school leavers and men who may have been away from any form of education for many years; […]

(d) strict limitation of mathematics and science; that which is included must have immediately obvious relevance to practical operation and usually be presented as an integral part of the technical syllabus;

(e) examination papers – these should not call for lengthy written answers nor deal with matters outside the candidate’s experience or responsibility, though they do not by any means require to be all of the ‘yes/no’ type;

(Wheatley, 1959, pages 38 to 39)

Ordinary and Higher Nationals

Following the first world war, a system of National awards was created on an industry-by-industry basis in response to perceived inadequacies of traditional ‘Science and Art’ technical exams (Foden, 1951). During the 1920s, arrangements were confirmed for the Institution of Mechanical Engineers, the Institute of Chemistry, the Electrical Engineering Institution, and the Institute of Gas Engineers. Schemes for other industries followed in the 1930s and 1940s. National awards in Business Studies were introduced during the early-1960s to replace earlier schemes, including those in Commerce, that had not been successful. By the mid-1960s, there were around 15 National awards or similar schemes (Foden, 1966).

Most schemes awarded 4 distinct types of qualification:

  1. Ordinary National Certificate (ONC) – roughly A level standard
  2. Ordinary National Diploma (OND) – roughly A level standard
  3. Higher National Certificate (HNC) – roughly pass degree standard
  4. Higher National Diploma (HND) – roughly pass degree standard

Certificate and Diploma courses were based upon similar syllabuses, although the Certificate route was only open to part-time students and the Diploma route was only open to full-time students. The Diploma route therefore provided for a much broader treatment of the learning domain. Certificate courses required at least 2 years of study and at least 150 hours of study per year. This typically meant studying at least one afternoon and 2 evenings per week, or one day and one evening (Peters, 1967).[footnote 7]

By 1969, around 3,000 of the approximately 5,600 candidates who entered HND exams were in engineering subjects, with the next largest entry being for Business Studies, with around 1,900 candidates. From 1960 to 1969, the number of full-time HND entrants had grown more than 5-fold, while the number of part-time HNC entrants – although slightly higher in 1969 (around 18,000) than in 1960 (around 16,000) – was in decline, having peaked at nearly 22,000 in 1968. Cantor & Roberts explained that this decline was likely to continue due to a change in policy regarding the currency of HNC exams with professional bodies (Cantor & Roberts, 1972). Similar patterns of uptake were evident for ONDs and ONCs.

National awards were administered under the authority of a Joint Committee that represented the relevant professional institution(s), the Ministry of Education, and teacher associations. For instance, National awards in business studies were made by the Joint Committee for National Awards in Business Studies, with representation from the major professional institutions in advertising, accounting, banking, secretarial work, building societies, sales managers, purchasing officers, and other bodies including the Ministry of Education (Pedley, 1964). Each Joint Committee was responsible for entry requirements, syllabuses, and national standards. Business studies exams, however, were developed and delivered jointly by the RSA and the LCC.

For other industries, it was often REUs or individual colleges that developed and delivered the exams. So, for most schemes, this was a system of local development and delivery, linked to national entry requirements, content, and standards. National standards were ensured via moderation:

The examinations are set and marked by college teachers or (commonly in smaller colleges) by a regional union, but they are assessed by an external examiner appointed by the professional institution concerned and acting under the control of the joint committee. Homework, drawings, notebooks and so on may be called for by the assessors.

(Peters, 1967, page 98)

Montgomery noted that end-of-year exams were governed largely by individual colleges (or REUs where they operated) although external assessors were appointed by the Joint Committees for the final exams (Montgomery, 1965). These assessors scrutinised draft exam papers before they were administered and scrutinised the marks that were awarded after exams had been sat.[footnote 8] Assessors might also request sight of classwork, including notebooks, drawings, and suchlike. Montgomery explained that this had the beauty of devolving considerable responsibility for assessment processes to colleges, which enabled local interests to be catered for while also giving the awards a national currency.

Although written exam papers were a very important part of the assessment process, other evidence was also taken into account for each award:

It is a most important feature of all technical college examination work that homework, class work and practical work are taken into account, as well as satisfactory attendance. Indeed in all these aspects the college must be satisfied before entering a candidate. For example, in the case of O.N.C. courses in engineering, certificates are awarded only to those who (a) have passed the examination in all subjects in each year of the course; (b) have made at least 60 per cent of the total possible attendances in each year and in each subject; (c) have obtained at least 40 per cent in homework, laboratory work and drawings separately in each subject and in each year of the course; and (d) have reached an overall average of 50 per cent of the marks.

(Pedley, 1964, pages 154 to 155)
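To make the quoted awarding rule concrete, the sketch below expresses one possible reading of those 4 conditions in Python. It is purely illustrative: the record structure and field names are invented, and the interpretation of condition (d) as the mean of the overall subject marks is an assumption, not a feature of any historical scheme.

```python
# Illustrative sketch only: a hypothetical encoding of the ONC (engineering)
# awarding conditions quoted above. All names are invented for illustration.
from dataclasses import dataclass


@dataclass
class SubjectYearRecord:
    """One subject studied in one year of the course."""
    exam_passed: bool        # (a) passed the exam in this subject this year
    attendance_pct: float    # (b) percentage of possible attendances made
    homework_pct: float      # (c) homework mark, per cent
    laboratory_pct: float    # (c) laboratory work mark, per cent
    drawings_pct: float      # (c) drawings mark, per cent
    overall_pct: float       # (d) overall mark for the subject, per cent


def onc_awarded(records) -> bool:
    """Apply the 4 quoted conditions across every subject in every year."""
    if not records:
        return False
    passed_all_exams = all(r.exam_passed for r in records)              # (a)
    adequate_attendance = all(r.attendance_pct >= 60 for r in records)  # (b)
    adequate_coursework = all(                                          # (c)
        min(r.homework_pct, r.laboratory_pct, r.drawings_pct) >= 40
        for r in records
    )
    # (d) read here as the mean of the overall subject marks reaching 50%
    adequate_average = sum(r.overall_pct for r in records) / len(records) >= 50
    return (passed_all_exams and adequate_attendance
            and adequate_coursework and adequate_average)
```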

Different accounts of the awarding process emphasise different details. For example, Montgomery noted that:

Students had to score at least 40 per cent of the total possible number of marks in each subject at the ‘finals’, and do likewise for homework in the last year. Course work and examination marks were to count towards the ultimate total in the ratio of 30 per cent to 70 percent. 50 per cent of the total possible number of marks were to earn a pass, 85 per cent would win a distinction. Such were the arrangements in a typical scheme, but it should be borne in mind that the system was extremely flexible, and catered for different courses in different parts of the country.

(Montgomery, 1965, page 215)
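Montgomery’s ‘typical scheme’ can also be illustrated with a short sketch. The function below applies the quoted 30:70 coursework-to-exam weighting and the 50% pass and 85% distinction thresholds. It deliberately omits the separate 40% hurdles for each subject and for final-year homework, and the assumption that marks are expressed as percentages is ours rather than the source’s.

```python
# Illustrative sketch only: one reading of the aggregation rule that Montgomery
# describes. It ignores the per-subject and homework hurdles quoted above.
def final_result(coursework_pct: float, exam_pct: float) -> str:
    """Combine marks in the quoted 30:70 ratio and band the total."""
    total = 0.3 * coursework_pct + 0.7 * exam_pct
    if total >= 85:
        return "distinction"
    if total >= 50:
        return "pass"
    return "fail"


# For example, 70% for coursework and 60% in the exams gives
# 0.3 * 70 + 0.7 * 60 = 63, a pass under this reading.
print(final_result(70, 60))  # "pass"
```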

The written exam format was highly valued. For example, the Business Studies committee was particularly insistent that even ONC students should demonstrate “logical thought and correctness in writing” and required them to produce an extended essay of 3,000 to 5,000 words, prepared under the guidance of their tutor, in addition to their final exams (Pedley, 1964, page 174).

Problems

The radical changes that occurred during the 1970s and 1980s – which included the introduction of an outcome-based NVQ model that emphasised workplace learning and assessment – were a response to earlier problems, some of which had come to a head by the end of the 1960s. The following discussion of challenges facing TVET qualifications during the 1960s is selective, but it identifies certain key issues, and a changing zeitgeist, which resulted in the development of outcome-based approaches to qualification design.

Crowther report

In March 1956, the Central Advisory Council for Education (England) was asked to advise the Minister of Education on the education of boys and girls between 15 and 18. Chaired by Sir Geoffrey Crowther, the committee reported 3 years later (Crowther, 1959), addressing a wide range of issues, which included:

  • why the school leaving age should be raised from 15 to 16 (which eventually became law in 1972)
  • the case for a lower-level examination below the O level (which was to become the Certificate of Secondary Education)
  • the sixth-form and problems of university entrance

Of particular relevance to the present report, it also discussed the neglected educational territory of around a quarter of the national cohort of boys and girls who finished school at 15 or 16, but who continued to spend a significant part of their time in further education, training, or instruction. These were typically technical apprentices and trainees who studied part-time in technical colleges. The report identified 2 great challenges for this sector, which the committee believed ought to be solved in tandem:

  1. many more skilled craftsmen and technicians were required to support the needs of industry and agriculture (via an alternative route to the grammar schools)
  2. there were far too few young people who stayed in full-time education from 16 to 18 (only 1 in 8)

The committee expressed particular concern over the effectiveness of the part-time courses that lay at the heart of provision for craftsmen and technicians. First, 270 hours per year (which they associated with one day release and one evening class per week) was insufficient to cover essential ground for the technical exams, let alone to enable additional studies. This tended to focus teaching narrowly on preparation for exams. The committee argued that aspiring technicians, in particular, should spend more time in college obtaining a deeper and broader education, ideally on ‘sandwich’ courses (Crowther, 1959).

Second, the committee was worried about failure and drop out rates in technical education, particularly for ONC and Intermediate Crafts courses, both of which (during the late-1950s) were 3-stage courses. The report provided evidence on the percentages of students successfully completing each stage, which were roughly similar for both courses. Only around two-thirds of those who started an ONC or Intermediate course completed their first stage successfully. Only around a half completed their second stage successfully. Less than a third completed their final stage successfully and were awarded a certificate.

Third, they were worried about ‘retardation’ problems beyond non-completion. The stages just mentioned were intended to correspond to a year of study, but “a good many students spend a good many years” on a single stage (Crowther, 1959, page 360). The report noted that, of those who were eventually successful on the 5-stage HNC, 19% had spent 7 or more years studying for it.

Finally, the committee believed that the answer to re-engaging boys and girls “who lose their intellectual curiosity before they have exhausted their capacity to learn” lay in an alternative approach to knowledge, a more ‘practical’ approach (Crowther, 1959, page 391). The idea of ‘practical’ education was not very popular during the late-1950s, but the committee wished to rehabilitate it. Education should include both practical and theoretical elements. Yet, young people of 15 or 16 ought to be given a choice between a predominantly ‘academic’ route and a predominantly ‘practical’ one. The committee ended by identifying 2 as-yet-unsolved problems:

First, how is the programme of practical work to be designed so that the intellectual stimulus and the theoretical knowledge arise out of it? We suspect that too often, even when both elements are present, they remain separate. Secondly, how can the practical work and the intellectual value deriving from it best be assessed? Any education in England which aims at equipping its pupils for a professional status has to conform to an examination system designed in relation to an educational curriculum of which both main subjects and the approach to them are academic. It will be apparent that it is not always easy to reconcile this parallel road with traditional examinations. Sometimes it can only be done with undesirable distortion. Some of the most valuable aspects of the education it can give would, we suggest, more naturally be tested by a scrutiny of work done during the course and by an oral examination upon it.

(Crowther, 1959, page 399)

Wastage, retardation, failure

The Crowther report highlighted very serious problems for TVET qualification systems that revolved around Craft Certificates and National awards. Many students dropped out before taking their exams. Others took their exams, failed, and then dropped out. Yet others spent many years repeating exams before succeeding. These problems were certainly not new. Indeed, Foden had discussed them in 1951, suggesting that many students on such courses were simply not suited to them, often failing repeatedly before ultimately giving up. The problem, he suggested, was that students were not being selected effectively (Foden, 1951). The Crowther committee observed that problems of failure were exacerbated by the requirement to pass National awards in all subjects each year before being allowed to progress (Crowther, 1959). Taylor & Beaumont (1967) echoed this concern, noting that this was why O levels had been transformed into single-subject awards, in contrast to the School Certificate grouped award that preceded them.

The 1961 white paper ‘Better Opportunities in Technical Education’ (Ministry of Education, 1961) set out to address problems of ‘wastage’ (drop out), ‘failure’, and ‘retardation’ (delay) by placing far more emphasis upon preparation and selection of students for courses, particularly for ONC and OND courses (Peters, 1967). This involved specifying the direct entry requirements mentioned earlier – including 4 O level passes – as well as preparatory courses for those who had not met such requirements. Thus, in 1962, the 3-stage (3-year) structure was replaced with a 2-stage (2-year) structure, preceded by a one-year or 2-year General course for anyone who did not meet the direct entry requirements. These General (G) courses were intended to be ‘diagnostic’ and provided a foundation for the careful triaging mentioned earlier (Morrison, 1966).

Unfortunately, reforms instigated by the white paper did not straightforwardly solve problems of wastage, retardation, and failure. Focusing on the situation for Engineering exams, Taylor & Beaumont observed that overall percentage pass rates for City & Guilds exams had hardly changed from 1956 to 1966 (Taylor & Beaumont, 1967). Pearce echoed this concern, noting that pass rates had “flickered upwards by only a few per cent” since the introduction of the revised schemes (Pearce, 1975, page 54).

The practical approach

Even during the early-1950s, it was often said that courses for National awards were “unduly academic” (Foden, 1951, page 43). It was entirely possible for a student to obtain a National Certificate in Engineering, for instance, without ever having been inside an engineering works. Courses for National awards tended to be “stereotyped and narrow” and concerned “mainly with the theory rather than the practice of industrial processes” (Foden, 1951, page 43). Part of the reason for their narrowness was the lack of time available to learners studying part-time (Peters, 1967). However, part of the reason was by design. It was recognised that employees often lacked the underpinning knowledge and understanding that had become increasingly critical to an effective workforce. So, this was what the part-time, day-release courses came to cater for, while practical ‘trade knowledge’ was generally assumed to be picked up on the job (Clegg & Jones, 1970; Unwin, Fuller, Turbin, & Young, 2004).

If the Crowther committee aspiration for far more technicians to begin their training on full-time college courses was to be realised, then this situation would need to change. Part of the explanation for their desire to rehabilitate the ‘practical’ approach would surely also have related to this.

Not only were courses for National awards unduly academic, but the main method of examining for National Certificates and Diplomas continued to be the written exam (Foden, 1966). Foden noted that “the only evidence of practical competence required […] is the record of laboratory work” (Foden, 1951, page 43).

The situation for craft certificates appears to have been somewhat less biased to the written exam. City & Guilds had grappled with the development of practical exams since the 1890s. Despite often being difficult to administer and invigilate, as well as often being expensive and inconvenient, they had become a significant feature of craft subjects by the 1910s (Foden, 1966). In a section on courses for ‘average craft apprentices’ Wheatley explained that:

It is regarded as important that success in the examination at the end of the course should have correlation with the student’s potentiality as a craftsman; this enhances the importance of proper assessment of practical ability by means of a practical examination or other methods, e.g. course work and specimen work. This practical element also has considerable importance in demonstrating to industry the value of a co-ordinated scheme of industrial training and related further education, especially as no scheme of part-time further education can attempt to provide a substitute for skill training, which is an industrial responsibility.

(Wheatley, 1959, page 42)

The 1964 Industrial Training Act was soon to raise awareness of the importance of effective training, providing a new impetus for integrating the practical approach (Clegg & Jones, 1970).

Continuous assessment

Although it is clear from Foden (1951) that students’ homework and classwork marks during their final year were already an important consideration in awarding National certificates, it is also clear from Foden (1966) that innovations during the 1960s embedded these ideas further, including the adoption of new methods of continuous assessment of practical work.

The incorporation of continuous assessment within national certification schemes had been discussed widely for some time. Arguing for reform of the School Certificate exam system, the Norwood report was very positively disposed towards greater reliance upon teacher assessment (Norwood, 1943). Relying on those who knew their students best would enable a much more comprehensive certification process, painting a broader picture of attainment than was possible with exams. This would be good for teachers (in terms of professional development) and for students (in terms of fostering a more student-centred approach). Although these ideas did not take root with the introduction of the General Certificate of Education Ordinary level (GCE O level) in the early-1950s, they were a central feature of the Certificate of Secondary Education (CSE) that was introduced during the mid-1960s.

Many assumed that devolution of responsibility for the assessment of student performance was at least as necessary in further education as in secondary schools (for example, Leese, 1966). Although the National award schemes were consistent with this devolutionary approach, Leese claimed that they had ossified, becoming just another externally controlled exam. Whereas, in theory, the REUs might prefer to base results on regionally moderated college assessments, in practice, they ended up paying examiners to set papers and do the marking. Leese argued that this situation ought to be reversed.

Like many others during this period, Leese argued the case for teacher assessment largely in terms of the benefits for teachers and students of following a locally-relevant, teacher-devised syllabus.[footnote 9] In a subsequent paper, Bacon (1969) also argued the case for increasing reliance on continuous teacher assessment within national certification schemes, albeit with less emphasis on the issue of syllabus control. Instead, he emphasised that continuous assessment ensures that certification is based upon work from across the entire syllabus and not just across the small sample of the syllabus that features in an exam, thereby echoing the Norwood report. Bacon noted that City & Guilds was already committed to extending the contribution of continuous teacher assessment to their Craft Certificates, on the basis of a wealth of experience with practical exams.

Haslegrave report

In May 1967, the Secretary of State for Education and Science invited the National Advisory Council on Education for Industry and Commerce to review the national pattern and organisation of technician courses and exams. The Council appointed a Committee on Technician Courses and Examinations, chaired by Dr H L Haslegrave, which reported 2 years later (Haslegrave, 1969). The committee had a fairly broad remit, which extended beyond technicians to “comparable occupations” (Haslegrave, 1969, page 4). This meant that it covered qualifications designed for commerce as well as for industry, although industry qualifications tended to predominate.

The review provided an opportunity to consider the success of reforms arising from the 1961 white paper.[footnote 10] This included changes to the structure of National awards, the introduction of pre-technician General (G) courses, an increase in the number of available subjects for National awards, and a substantial increase in the number of bespoke City & Guilds Technician (T) courses, which had become very popular.[footnote 11] Alongside National awards, the new T courses offered a less academic route to a technician qualification. G courses assumed a “prognostic” function, routing students one way or the other (Haslegrave, 1969, page 30).

Critical to the context of the review were training reforms that followed the white paper, which had been crystallised in the 1964 Industrial Training Act. These presented the “distinct possibility of profound changes over the whole field of further education” (Haslegrave, 1969, page 20). Indeed, the committee envisaged far greater collaboration between industry (including commerce) and further education, in producing the workforce that the country needed.

Even during the late-1960s, many (if not most) technicians had no relevant qualifications. The committee envisaged a future in which this would be unthinkable, which meant that appropriately specified qualifications would need to be developed. The committee acknowledged the work of the recently established Industrial Training Boards (ITBs) in specifying the kinds of skills that the country needed:

Clearly, a considerable task of job analysis must be undertaken as a first step towards this ideal state of affairs. In our view the ITBs must accept the major responsibility for seeing that this is done. […] We hope that in doing it, there will be full co-operation with the further education service, both in prescribing suitable courses for technicians of various kinds, and in analysing jobs and devising training programmes.

(Haslegrave, 1969, page 6)

Joint planning, the committee argued, could produce courses that were not only educationally sound, but also reflected the latest training needs. Technical developments in industry – which meant that products were more complex and that production required new applications of science and technology – underscored the importance of this forward-thinking approach. So, too, did the huge impact that computers were having on clerical and administrative jobs.

Although the committee believed that the new G and T courses had been fairly successful, it was less positive about the reformed National awards. Various adaptations to National courses had improved students’ chances of success, including: options for ‘lateral’ transfer to a course of a different standard without having to completely start again, no longer requiring students to pass exams at the end of each year of their course, and some relaxation of the requirement to pass in all subjects at one sitting (Haslegrave, 1969). Unfortunately, though, drop out and failure rates were still far too high.

The committee raised questions concerning the evolving purposes of these qualifications, asked whether Ordinary Certificate standards were too high, and suggested that the Higher Diploma might not even be necessary. Slightly different concerns were raised for the still relatively new Business Studies Nationals. Here, the Ordinary courses were felt to be satisfactory, but they tended to be used as a route to a specialised professional qualification rather than to a Higher course. In fact, the Business Studies OND had proved to be one of the most successful ONDs, appealing to many students, particularly girls, as an alternative to the A level route.

Before drawing conclusions, the committee returned to the continuing problem of drop out and failure (particularly for ONCs) and various potential solutions that had been proposed. These included more time for study, changes in the frequency and type of exams, and the creation of more opportunities for transfer. The committee accepted that students on part-time ONC courses needed more time to complete their studies satisfactorily, albeit also acknowledging that this could be achieved by rationalising the current courses. It also accepted that there should be even more flexible arrangements for transfer across courses, where students had found them to be either too difficult or too easy.

Importantly, the committee noted substantial criticism of the frequency of exams and of some of the methods used. These criticisms emphasised both the “confining influence on the teaching” and the belief that they provided a “false picture of the student and his real achievement” (Haslegrave, 1969, page 42):

Many suggestions were made about different kinds of examination techniques that might be adopted in connection with the examination of technicians. In general, there was a strong trend of opinion towards greater participation by the students’ teachers. Most of the following examination components suggested in the evidence give the opportunity for such participation, although the first and most commonly used does not do so:

(i) a written paper externally set and marked;

(ii) a written paper set and marked internally;

(iii) a written paper set and marked internally, with external assessment;

(iv) an objective test, externally set and marked;

(v) practical or oral examination, or both, dealt with externally or internally with external assessment;

(vi) course work assessment by the student’s teacher; and

(vii) project work, internally or externally assessed.

The general view was that technician examinations should include some or all of these components, as appropriate to the case, but that in all cases more weight should be given than at present to assessment by the teacher. In fact, there was a body of opinion that the teacher’s assessment should constitute the most important single component in the system of student testing, with other components, externally set or moderated, used as an independent check on the validity of the assessment. This, it was thought, would give the most reliable view of the student’s performance and ability.

(Haslegrave, 1969, pages 42 to 43)

The committee noted that students who were better at passing exams were not necessarily those who proved subsequently to be better technicians. Furthermore, technicians needed to be able to solve real-world problems that involved extracting information from multiple sources, analysing it, acting on it, reflecting on actions, and adjusting those actions as necessary. Yet:

The traditional external examination was an unsatisfactory way of testing ability of this kind. What should ultimately be aimed at was an end-of-course “profile” of a student by reference to which both his academic and industrial ability could be gauged.

(Haslegrave, 1969, page 43)

Finally, the committee argued that the weight of evidence from both industry and commerce indicated that the TVET qualification landscape was too complex and insufficiently standardised, being driven by a plethora of controlling bodies that operated without effective co-ordination.

Ultimately, the committee concluded that the reforms set in motion by the 1961 white paper had proved to be insufficient. Moreover, they had facilitated an unco-ordinated proliferation of new courses, which had made the system much harder to understand, let alone to plan, and had frustrated the development of effective educational resources (see also Bell, 1968). In short, more radical reforms were now required.

Summary of the 1960s landscape

Although we argue that the first CASLO qualifications of national prominence were introduced during the late-1980s, the following section will explain how the shift towards outcome-based and mastery-based qualifications actually occurred during the 1970s. So, although the CASLO approach crystallised during the 1980s, and was strongly influenced by the socioeconomic challenges of this period, its roots were grounded in attempts to solve a variety of problems that had become endemic within the TVET landscape by the end of the 1960s. These included:

  • the need for more young people to spend more time in post-compulsory education and training (to support the need for skilled craftsmen and technicians)
  • serious problems of wastage (drop out), retardation (delay), and failure associated with traditional TVET qualifications
  • a desire to rehabilitate the idea of ‘practical’ education and training as a valued (and motivating) alternative to ‘academic’ education
  • a belief that traditional TVET qualifications (notably Nationals and Higher Nationals) were insufficiently focused on practical skills, and were too dominated by the written exam format
  • a belief that greater reliance upon continuous assessment had the potential to improve the comprehensiveness and authenticity of TVET assessment, especially the assessment of higher-level cognitive competencies

1970s

The Haslegrave report laid foundations for radical change in the landscape of TVET qualifications in England. This change was led by 2 new bodies, as well as by the existing awarding organisations, all of which were exploring new approaches to qualification design. These included outcome-based and mastery-based approaches that foreshadowed the arrival of the CASLO approach (without yet quite constituting it).

Haslegrave recommendations

The principal solution to problems identified in the Haslegrave report was the establishment of administrative machinery capable of co-ordinating and sustaining subsequent reforms on a national basis. Although the committee anticipated that a single national council would be established at some point in the future, it recognised the utility of establishing 2 separate councils in the first instance – one for industry and one for commerce – a Technician Education Council (TEC) and a Business Education Council (BEC). These bodies would be responsible for planning, co-ordinating, and administering technician and technician-level courses, exams and educational qualifications of a national character:

The TEC would, as soon as possible after its appointment, assume policy and planning responsibility for examinations and qualifications in the whole of the technician field at present covered by joint committees, the CGLI and the REBs. In due course, it would become responsible for syllabuses, assessment, and the award of educational qualifications. It would therefore require a suitable sub-structure of advisory committees to do the detailed syllabus and assessment planning for particular subjects or groups of cognate subjects.

(Haslegrave, 1969, pages 53 to 54)

The committee anticipated a similar role for the BEC, albeit with some difference in emphasis regarding its functions. It recommended that strong consultative and operational links should be established between the 2 councils, and that common assessment policies should be adopted where appropriate. On assessment, the committee stated that:

The award of a technician or comparable qualification, whatever the level, should never depend solely on the student’s performance in a formal examination. […] we would assume that some sort of formal test, externally set or assessed, would continue to feature in all cases. It should not, however, be accorded the same degree of importance as in the past.

(Haslegrave, 1969, page 79)

we think the time has come to move away from the present emphasis on external examinations in favour of the introduction of continuous assessment and other internally applied techniques, with external checks kept to the minimum consistent with the attainment of broad national standards. We wish to see the TEC and BEC give a lead in this direction.

(Haslegrave, 1969, page 61)

Thus, while the councils would have responsibility for national standards, programme design and assessment delivery could largely (or at least partly) be devolved to local providers. In particular, the committee recommended exploring the modular, or credit-based, approach that had recently been developed by City & Guilds:

the Councils might consider in appropriate cases basing the grant of technician and comparable qualifications on studies undertaken under the “credit” system, i.e. the gradual accumulation of passes in subjects which have been studied separately and not as one of several forming a grouped course.

(Haslegrave, 1969, page 79)

Finally, in response to concerns over failure rates and unduly high standards, the committee stated that:

we are strongly of the opinion that any student who fulfils the entry requirements for his particular technician course, and works reasonably hard and well during the course, should be entitled to expect that he will pass the examination.

(Haslegrave, 1969, page 79)

The TEC and the BEC

At the heart of the Haslegrave recommendations lay concern over a lack of national co-ordination, which had led to the proliferation of courses and committees, and had resulted in a “bewildering picture of complexity to employers and students” to the point where even those concerned with providing and administering courses and exams found the system very hard to understand (Haslegrave, 1969, page 44). The TEC and the BEC were established, as independent bodies, to make and direct national policy in their respective fields, to rationalise and simplify provision.

Haslegrave recognised that these organisations would need administrative support, and proposed that City & Guilds should be invited to support them both. This was partly because City & Guilds already had exactly the right kind of machinery in place, dealing with many of the technician candidates who were to fall within the TEC ambit. But it was also so that the kudos of City & Guilds might rub off on the new-style national qualifications. This arrangement might also help to support progression from lower-level City & Guilds craft courses to higher-level technician ones, which had been recognised as a problematic transition for some time.

The TEC was established in March 1973, and the BEC in May 1974 (Birbeck, 1980). Set up as registered companies limited by guarantee, they were originally funded by a grant from the Department of Education and Science (DES), although registration fees would mean that they were able to become self-funding before too long (Roberts, 1988). In 1976, the BEC and the TEC formalised their close relations by constituting a joint committee. City & Guilds initially accepted the invitation to support both organisations, although the BEC severed contractual relations in 1980, and the TEC did likewise in 1981. Stevens stated that the relationship had not been constructive in terms of co-ordinating progression opportunities (Stevens, 1993).

TEC awards

The TEC aimed to replace around 90 City & Guilds or Joint Committee advisory committees with around 22 TEC Programme Committees, to achieve a major rationalisation of the technical qualification landscape (Wheatley, 1976). It developed a suite of awards based upon:

  • a TEC Certificate programme (around 12 to 15 units, and 900 hours, typically studied over 3 years by part-time day release) and
  • a Higher TEC Certificate programme (around 8 units, and 600 hours, typically studied over 2 years by part-time day release)

This roughly corresponded to the structure and standards of the ONC and HNC that were being replaced.[footnote 12] To indicate that learners could study at their own pace, each notional year was defined as a level – with 5 levels spanning the 2 programmes.[footnote 13] Diploma and Higher Diploma programmes extended the Certificate and Higher Certificate programmes, respectively, providing additional units of similar technical depth. This meant that a Certificate could be converted to a Diploma by completing additional units. Diploma awards comprised roughly twice as many units as Certificate awards.

The idea of a unit was an important feature of TEC awards. Bear in mind that dropout and failure had been huge problems for the outgoing Ordinary and Higher awards. Unitisation was thus intended to help address this by replacing an overarching course with a programme of study built from self-contained units, each of which could be passed in its own right, on a unit-by-unit basis. The potential of unitisation to help improve completion rates – by flexing to meet the needs of individual learners – was viewed very positively (Like, 1986).

Central to the idea of a TEC award was that it should be designed to meet local needs – that is, to satisfy local industries with specific technician jobs in mind. This translated into a model in which programmes were intended to be developed locally and validated nationally. Local programme development was supported by national Programme Committees, co-ordinated by broader Sector Committees.

Programme Committees also created ‘standard units’ (that is, off-the-shelf units) that could be incorporated into a locally developed programme. This was particularly useful for units that were common across programmes, like mathematics and electronics.[footnote 14] As they were intended to be national awards, the TEC was responsible for setting and calibrating standards, which was achieved by validation and monitoring. Each programme was planned and developed by a college working in partnership with local industry, and then submitted to the TEC for validation. Once validated by the relevant Programme Committee, it could then be delivered and assessed locally. TEC-appointed, regionally-based moderators would visit each college to approve the quality of assessment materials and the standard of student work.

A validated programme might comprise units written exclusively by a local college (or group of colleges) although it might also include standard units. Equally, it might include adapted standard units, or units that were written to include options for students with different progression needs. Units were written with progression in mind, such that, to study a Level 3 unit in a particular topic, a student would be expected to have achieved its Level 2 counterpart. Units were graded pass or merit, but the overall qualification was not graded. Certificates identified individual units by name.

Of particular relevance to the current report, all TEC programmes were specified in terms of learning objectives:

TEC believes that the specification of subject material by specific behavioural objectives gives validating committees information concerning not only the topics to be studied but also the depth to which they are to be studied, and thus gives them more information than the conventional syllabus on which to determine the validity of a proposed programme.

(Bolton, 1978, page 33)

These are written in the form of learning objectives, i.e. they specify exactly what the student should be able to do on completion of the unit, e.g. ‘State Ohm’s law’ or ‘Deduce the equivalent resistance of two known resistors connected in parallel’. In other words it is quite clear to the student, the lecturer and the employer what the student is expected to achieve in order to pass the unit. In many respects this changes the emphasis in the role of the teacher from someone helping the student to beat the system – through ‘question spotting’, etc – to someone working with the student to achieve the specified objectives.

(Riches, 1980, page 365)

An appendix to Hunter (1985) contains the specification for a modified standard unit in Electronics (Level 2) of 60 hours duration. Its content was specified in terms of 6 unit topic areas, each one of which was specified in terms of a small number of ‘general objectives’ (16 in total across the 6 topic areas). Each of these general objectives was associated with a small set of ‘specific objectives’. According to the accompanying guidance, general objectives specified teaching goals, while specific objectives specified the means by which a student should demonstrate their attainment. These objectives were referred to as ‘expected learning outcomes’.

The following example illustrates how the second topic area of this unit was specified:

B CATHODE RAY TUBE

7 Knows the principles of operation of a cathode ray tube.

7.1 Labels a diagram of a C.R.T.

7.2 Explains the functions of the following:

(a) electron gun

(b) focus control

(c) intensity control

(d) blanking pulses

7.3 States that deflection can be produced by electric and/or magnetic fields.

7.4 Demonstrates the use of timebases and of vertical and horizontal deflection controls.

(Hunter, 1985, page 285)

Colleges were responsible for developing an assessment plan and for assessing students. They were encouraged to use a variety of methods, including tests within units, end of unit tests, and more extended coursework and assignments. Each assessment would test the set of learning objectives that had been studied in the period since the last one.

As discussed in detail by Halliday (1981), TEC guidance on assessing learning objectives suggested that colleges could adopt one of 2 approaches:

  • design the assessment to show mastery of each objective
  • design the assessment to show adequate achievement averaged over a block of objectives [footnote 15]

The guidance noted that most colleges adopted the latter approach. Further guidance suggested that students should be achieving around 50% to pass a unit and around 65% for a merit. In short, although the TEC approach paid more than lip service to the idea of mastery, its stipulations were malleable, to say the least (see also Carter, 2012). As such, we might think of these TEC awards as directly prefiguring the CASLO approach, without quite embodying it.

BEC awards

With a slightly narrower remit, the Business Education Council was established to professionalise the less developed sectors of business and public administration, where the demand for further education and training was less well defined (Field, 2018). As explained by its chief officer, John Sellars (1977), BEC engaged a wide variety of stakeholders – further and higher education college staff and students, employers and trade unions, professional bodies, and others – and launched its plans in stages via an initial ‘Consultative Document’ (June 1975), a ‘First Policy Statement’ (June 1976), and ‘Initial Guidelines on the Implementation of Policy’ (May 1977), alongside detailed specifications of core studies for courses leading to BEC General and BEC National awards (October 1977).

The first new BEC awards were introduced in September 1978, with the full suite developed for 16 to 21-year-olds comprising:

  • BEC General Certificates – 1 year part-time
  • BEC General Diplomas – 1 year full-time or 2 years part-time
  • BEC National Certificates – 2 years part-time
  • BEC National Diplomas – 2 years full-time or 3 years part-time
  • BEC Higher National Certificates – 2 years part-time
  • BEC Higher National Diplomas – 2 years full-time or 3 years part-time

According to deputy chief officer, Janet Elliott, the new BEC General awards were designed “primarily as a ‘second chance’, to meet the vocational needs of 16 and 17 year olds, who did not excel in the school examination system and who have not more than three ‘O’ levels” (Elliott, 1979, page 227). This included students who might previously have attempted a Certificate in Office Studies. BEC National Certificates and Diplomas were phased in from 1978, as the ONC and OND in business studies and the ONC in public administration were phased out. Similarly, BEC Higher National Certificates and Diplomas were phased in to replace the old HNCs and HNDs. It is worth noting that these replacement HNCs and HNDs retained the same nomenclature, despite being completely new awards.[footnote 16] It was originally proposed to drop ‘National’ from the title, to avoid confusion, but this policy was reversed to help retain a level of recognition for the new awards (Hannagan, 1978).

Whereas course development for General and National awards was largely centralised – BEC published compulsory core modules for each course and an extensive range of optional modules – Higher awards were essentially designed by colleges and validated by BEC (Elliott, 1979). Having said that, colleges were required to follow BEC guidance on core content and course design. Elliott described one of the “most interesting features” of the new awards as:

the extent to which BEC has required all those involved in teaching and planning business studies courses in the non-degree sector to review their teaching methods and student-learning objectives

(Elliott, 1979, page 227)

More frankly, Morris (1977) characterised BEC policy as the enumeration of a radical change in educational philosophy, which involved:

  • manifesting a distinctly vocational purpose for these awards, inviting increased participation from employers, and identifying a prominent role for work experience (new vocationalism)
  • structuring all courses in terms of modules, none of which corresponded to traditional subject areas (modularisation) [footnote 17]
  • ensuring that all courses required students to integrate knowledge, skills, and understanding from across a range of disciplines (integration)
  • locating 4 themes at the heart of all courses – money, people, communication, a logical and numerate approach to business problems – to be developed throughout (thematic)
  • moving to student-centred, enquiry-based learning (progressivism)

Following a similar path to that trodden by the TEC, the BEC specified module content in terms of learning outcomes. For instance, the following extract, from an administration in business module issued in 1977, comprises a single general objective and 3 specific ones:

C Understand the importance of the computer as an information tool and be aware of its impact on administrative operations

[This was supported by 3 learning objectives as follows:]

C1. describe the main characteristics of the computer, including both hardware and software, recognizing the special need for relevant and accurate input data;

C2. identify the main commercial applications of computers from routine data processing to the provision of management information;

C3. outline the way in which specific administrative procedures have changed in response to the introduction of computer systems.

(Fisher, 2003, page 258)

Discussing the development of a communication module, Pearce suggested that the BEC had adapted its approach to specifying outcomes following criticism of early TEC units. Its objectives were effectively “one step down” in terms of specificity, meaning that the BEC specific objectives were more like TEC general ones (Pearce, 1978, page 7).

BEC policy firmly insisted upon a combination of in-course assessments and terminal exams, although arrangements differed at different levels. For instance, at General level, each core module was assessed by in-course assessment and by an externally set (national) exam paper. Optional modules involved in-course assessment only (Davies, 1981).

At National level, Fisher (2003) characterised assessment arrangements for a National Diploma as follows:

  • all general objectives, across all core and optional modules, had to be assessed via in-course assignments
  • a student would typically face 9 exams (3 at the end of year 1, 6 at the end of year 2)
  • exams were internally set, with approval from an external moderator, and extended case-study exams were encouraged [footnote 18]
  • exams were internally assessed, with external moderation
  • there was a strong commitment to criterion referencing – across both in-course assignments and external exams – with a focus on evidencing learning outcomes rather than awarding marks
  • modules were graded using a wide range of grades (A to E, or F) but the qualification was graded using only pass and distinction

In-course assignments were fundamental to the new BEC philosophy. They were expected to draw upon abilities developed through the objectives of 2 or more modules of the course (Sellars, 1977), and therefore came to be known as Cross-Modular Assignments. CMAs were specifically designed to help students to integrate knowledge, skills, and understanding from their core studies by applying them to practical business problems. As such, assignments were intended to function both as assessments and as sites of learning.

It is important to recognise how the emphasis on modularisation and integration presented a particular challenge to traditional approaches to teaching for business awards. These had traditionally been delivered on a disciplinary basis, with separate inputs from specialists in economics, mathematics, law, and so on. The new modules incorporated content from different disciplines and the new philosophy invited a quite different approach to teaching and learning that was explicitly premised upon cross-disciplinary integration across modules.

Reception

Both the TEC and the BEC championed radically new approaches. Ellison described the introduction of BEC awards as “a root and branch destruction of the old order” (Ellison, 1987, page 105). Some teachers welcomed these changes. Others bemoaned them. The opinions of scholars also varied. For instance, Franklin, Rawlings, & Craven presented results from a survey of college course leaders, which seemed to indicate that the new awards were failing to achieve their aims and objectives:

we argue that in reality, the old national certificate and diploma courses are being taught in the colleges with a thick BEC veneer applied upon them for external appearances

(Franklin, et al, 1983, page 54)

In response, le Roux (1983) argued that it was unrealistic to expect these aims and objectives to be achieved in full so quickly, and that, to the extent that ‘real change’ was occurring, even if somewhat slowly, the aims and objectives were genuinely being achieved.

In retrospect, it seems fair to conclude that both the TEC and the BEC seriously underestimated the challenge of bringing teachers up to speed with radically new approaches to curriculum, pedagogy, and assessment, particularly given the expectation that colleges would be responsible for developing, not simply delivering, the new programmes (Morris, 1977; MacRory, Beaumont & Taylor, 1977; Pearce, 1978; Humphreys, 1981; Lysons, 1982; Wilson, 1983; Anderson, 1984; Bourne, 1984; Hunter, 1985). Colleges would inevitably have struggled to catch up, and it should not have been surprising if traditional teaching approaches lingered within the constraints of the new model (O’Sullivan, 1987; Stevens, 1989). The scale of change must have been daunting, if not overwhelming. Recalling his own experiences of the introduction of BEC awards, Fisher noted that:

Those with a fondness for formal lecturing were appalled and, over the next few years, many would opt for early retirement. Leaving speeches would often include a side swipe at the new courses which had, it would be claimed, lowered academic standards and ushered in a new kind of student who would never have been allowed near college in the “good old days”.

(Fisher, 1999, page 24)

It seems that the TEC approach may have been closer than the BEC approach to prefiguring the CASLO approach, as TEC objectives were specified more tightly than BEC objectives, and the TEC strongly promoted the mastery principle even though it stopped short of insisting that this principle be applied stringently. The TEC also seems to have been more heavily criticised for embracing an outcome-based approach:

We make a plea for a more complete conception of curriculum development and for the need to find an appropriate role for objectives, where the approach can be regarded as one aid (among others) to design, rather than a strait-jacket on the teacher’s perception of what technician education is about.

(MacRory, et al, 1977, page 6)

It is fair to say, however, that similar criticisms were levelled at the new BEC awards:

Objectives like ‘list the main reasons why organisations are formed’ (objective A1) and ‘define the concept of “cost” distinguishing between different types of cost’ (objective E5) encourage an emphasis on description and lower-level cognitive skills [which is] found to encourage rote learning and to provide an inadequate basis for further study.

(Mace, 1980, page 65)

That said, it is clear that the impact of TEC and BEC awards on the further education sector – including the beginning of a shift towards learning outcomes – was profound and long-lasting. Evans (2009) described this as “possibly one of the most significant developments” in the sector, providing a massive impetus to staff development.

City & Guilds

In the late 1960s, City & Guilds established an advisory committee that met from 1968 to 1969 to advise on a number of issues arising from the Industrial Training Act and the work of the Haslegrave committee (Stevens, 1993). One issue was the conduct of tests of practical competence (see Jones, 1971). Toward the end of 1969, City & Guilds established an Examination Techniques Development Unit and a consultancy service in competence testing known as The Skills Testing Service. The service developed new approaches to testing industrial skills, including the idea of phased testing, which was judged to be particularly important for serving diagnostic and formative purposes.

Longbottom, et al (1973) described a programme of phased testing for trainee craftsmen in the shipbuilding industry, which had been developed in collaboration with City & Guilds. The development process began by using task analysis to identify what was involved in the normal course of production work for each of the main shipbuilding trades. For each identified task, a set of assessment points was then specified, to indicate critical features of effective task performance. This detailed procedural scaffolding helped to ensure that foremen would be able to assume the role of assessors, by observing trainees in action and putting a tick or cross against each of the specified assessment requirements. With the expectation that trainees ought to be able to perform satisfactorily across all of the important features, this was essentially a precursor to the CASLO approach.[footnote 19]

New approaches of this sort were described in detail in a book titled ‘Testing Industrial Skills’ written by 2 former members of the City & Guilds Skills Testing Service, Alan Jones and Peter Whittaker (1975).[footnote 20] Although most of their examples incorporated a classical approach based upon mark aggregation (in contrast to the CASLO mastery approach), the book emphasised the importance of basing test development upon a clear specification of behavioural objectives. It noted the inadequacy of relying upon ‘course content’ lists, explaining how they needed to be redescribed, first, in terms of a ‘statement of skills’ (much like CASLO learning outcomes) and, second, in terms of a ‘behavioural specification’ (much like CASLO assessment criteria).

During the late 1970s, City & Guilds formulated a new policy on training schemes, based upon:

the move towards a process-competence based approach to technical education as an alternative to the traditional subject-based approach. By September 1980 most of the existing Engineering Craft Studies schemes adopted as much as 12 years previously had been re-stated in terms of learning objectives

(Stevens, 1993, page 143)

City & Guilds continued to implement this policy into the 1980s, and from 1984 to 1985 collaborated with the Chemical Industries Association training organisation (with financial support from the Manpower Services Commission) on the development of new standards of competence.[footnote 21] Quoting a City & Guilds broadsheet, Raggatt & Williams explained that these standards attested to:

the technical performance expected on the completion of training; the precise criteria by which attainment of performance can be assessed; [and] the conditions under which the performance must be carried out

(Raggatt & Williams, 1999, page 38)

In an article entitled ‘Training for Competence’, the development officer at City & Guilds, Rob Christie, described an emerging zeitgeist:

Fortunately, it is becoming increasingly common practice for the designers of education and training events to specify in clear, behavioural, terms the outcomes which they intend to achieve by the event. Moreover, these outcomes are increasingly likely to be expressed as the results which a worker’s behaviour achieves rather than just the activity exhibited. This is an important point for the conception of competence. And particularly if the intended outcomes are skills – cognitive or physical – they are now more likely to be expressed in such a way that they indicate the degree of skill – or the level of competence – expected.

(Christie, 1985, page 30)

Not only did this article emphasise the detailed specification of outcomes, it also stressed the importance of performance testing – either practical or cognitive – and the importance of certifying total mastery of the specified domain. Just a few years later, this mastery-based conception of (training and) certification was to become the foundation for NVQ development. Note, in particular, the reference to how learning outcomes were increasingly being expressed in terms of what the worker’s behaviour would achieve rather than just the activity being undertaken. This approach was to become fundamental to the development of National Occupational Standards (Norman Gealy, personal communication).

The point of this section is to emphasise that the outcome-based approach to qualification design that was to become the template for building NVQs – which we identify as the first CASLO qualifications of national prominence – was not without precedent. Quite the opposite. The TEC, the BEC, and City & Guilds had all been developing similar approaches during the 1970s. Indeed, the Further Education Unit report ‘Assessment, Quality and Competence’ (FEU, 1986) noted that the BTEC, City & Guilds, and the RSA were all heavily invested in developing outcome-based qualifications during this period, to represent competence more comprehensively and authentically than had been the case in previous decades.

Summary of the 1970s landscape

At this point, it is useful to stand back and survey the Technical and Vocational Education and Training landscape towards the end of the 1970s. Perhaps the most important thing to emphasise is that, while formal qualifications played an important role during the 1970s, they were not as ubiquitous or as important as they are today. The school leaving age had been raised to 16 in 1972, but many young people still entered the job market with few (if any) academic qualifications. Furthermore, many became employed in jobs that provided little (if any) systematic education or training.

Wheatley (1976) emphasised that although all craft apprenticeship college courses led to final exams – notably City & Guilds Craft Certificates – apprentices were generally not required to pass these qualifications to complete their apprenticeship.[footnote 22] Traditionally, apprentices merely had to participate in training activities and ‘serve their time’ in order to be considered craftsmen. Other than within a small number of schemes, the overall apprenticeship was not assessed.[footnote 23] This lack of formal recognition had a negative backwash impact on industrial training, resulting in a situation in which the quality of training was extremely variable, both within and across industries (Wheatley, 1976).

The situation began to change when Industrial Training Boards (ITBs) were introduced to the training landscape. The ITB approach to improving the quality and efficiency of training included identifying training needs and training standards as a basis for designing and validating training programmes. This involved specifying: jobs (job title and job description), training programmes (with implications for work-based and college-based provision), and assessment procedures.

Nearly all industrial craft occupations had been catered for by 1971, and nearly all involved some form of phased (or staged) testing for diagnostic and formative purposes. To facilitate this, training objectives were “defined in behavioural terms” that specified what the trainee should be capable of, and these specifications led “to corresponding objectivity in the drawing up of phased tests” (both quotations from Wheatley, 1976, page 22). These specifications were developed on the basis of task or skills analysis, which involved deconstructing each craft into a series of component tasks or skills. The phased tests were developed by colleges, by employers, or by awarding organisations such as City & Guilds.

Prior to the ITB schemes being developed, apprentices were unlikely to undergo any systematic programme of on-the-job training. The ITB schemes changed this situation, specifying both training needs and training standards. Inevitably, these new schemes also required awarding organisations to undertake a major programme of syllabus redevelopment for their off-the-job training courses. Wheatley explained that:

In principle, the syllabus content of a course of associated further education is derived mainly from the job specification for the occupation concerned and, more directly, is based on the training and skill specification and the training programme developed from it. […] this was only rarely possible before the implementation of the Industrial Training Act 1964

(Wheatley, 1976, page 88)

It is important to note how the new ITB schemes incorporated an outcome-based approach to specifying training requirements. As we saw in the previous section, City & Guilds supported this approach, even for theory courses:

Syllabuses in craft theory have normally been set out in traditional ‘content’ form (e.g. ‘Principles of basic woodwork joints’). Increasingly in recent years they are coming to be expressed in behavioural terms (e.g. ‘The student should be able to explain and illustrate the principles involved in the construction of basic joints’). In the case of the relatively new schemes for the building crafts, syllabuses in traditional ‘content’ form are preceded by statements in behavioural terms of the ‘course objectives’; on the other hand, the most recent schemes – for printing crafts – have syllabuses exclusively in the form of ‘course objectives’, i.e., in behavioural terms. This is still an area of experimentation and development and there is a good deal of variation in style.

(Wheatley, 1976, page 89)

In short, the roots of the CASLO approach were already deeply embedded in the TVET landscape by the end of the 1970s, led by advances in the training field.

The influence of the ITBs extended beyond craft occupations. For instance, Wheatley noted how most ITBs also published technician training recommendations, providing examples of how firms could prepare their own technician job descriptions, based on task analysis, and then go on to develop suitable training programmes. The ITBs supported higher-level training too. However, whereas the new ITB schemes had begun to certificate the completion of craft apprenticeships – to recognise a satisfactory level of performance across their training programme – this tended not to be extended to higher-level apprenticeships, where the relevant educational qualifications (and training records) assumed greater significance.[footnote 24]

Wheatley ended his review of the state of apprenticeships in England by reflecting upon the many school leavers who did not join apprenticeship schemes. He made particular reference to an influential report on ‘Vocational Preparation for Young People’ which had been published by the Training Services Agency (of the Manpower Services Commission) in 1975.

The report argued that the current training system was failing in 2 respects. First, there was insufficient investment in training for craft and technician skills. Second, there was inadequate vocational preparation for the 300,000 young people who entered the labour market each year and received little or no training for their work. This included semi-skilled occupations; clerical, commercial, administrative, distributive, and services fields; and a high proportion of occupations that were dominated by women. These were fields where the ITBs had had least impact. Wheatley noted that legislation had provided for further education by day-release for all young people below the age of 18 since 1918. Yet, this had not been implemented. This meant that many young people in employment received little or no systematic on-the-job training, and no systematic off-the-job training.

The landscape within which TVET qualifications were situated during the 1970s can be summarised as follows. First, a substantial amount of technical and vocational education and training occurred through apprenticeships, although numbers had been declining since the late 1960s and this was a cause for concern. The quality and effectiveness of apprenticeship training continued to be highly variable, although the situation had improved through the work of the Industrial Training Boards. This was particularly important for improving the quality and effectiveness of on-the-job training, where the use of task and skills analysis had made training needs and standards far clearer. In the wake of these developments, outcome-based specifications became increasingly popular as the foundation for off-the-job college courses, pioneered by major providers including the TEC, the BEC, and City & Guilds. Off-the-job training courses – delivered primarily by further education colleges – provided the underpinning knowledge and understanding for apprenticeships. Yet, their importance should not be overstated. They were certainly very valuable in the labour market, but apprentices were generally not required to pass them in order to complete their apprenticeships, particularly within craft industries.

Second, many young people who had left school and entered work had limited or no access to education or training, let alone to qualifications. This theme will be developed toward the end of the next section.

1980s

The previous 2 sections (1960s and 1970s respectively) have explained:

  • the landscape of TVET qualifications during the 1960s
  • problems that plagued this landscape, and
  • how the landscape began to change during the 1970s

The present section provides broader and deeper insights into circumstances surrounding qualification developments in England from the late-1980s onwards. This includes insights into the influence of various North American educational movements, and insights into the sociopolitical context of qualification and assessment policy making. Although the educational movements influenced practices prior to the 1980s – including TEC and BEC initiatives, of course – it was during the 1980s that their influence peaked, as the principle of criterion-referencing became embedded in policies and practices across the board.

Roots in North American scholarship

In the following subsections, we will consider 3 educational movements that originated in the USA but that also became influential in England: the Objectives Movement, the Mastery Movement, and the Criterion-Referenced Measurement Movement. Having explained how these movements influenced adoption of the CASLO approach in England, we will then consider the wider sociopolitical context of the 1980s, prior to the introduction of NVQs.

It is hard to characterise movements like the following, which have all been influenced by scholars from a variety of backgrounds, working in a variety of contexts, and which have been operationalised in a variety of different ways, including very badly! However, the 3 movements discussed below are interrelated, and the links between them are significant. The following subsections capitalise on this, highlighting some of the most influential thinkers in each movement, as well as how each of these movements impacted on the next.[footnote 25]

Understanding these movements is critical to answering 3 fundamental questions concerning the genesis of the CASLO approach in England:

  1. where did the idea of specifying ‘learning outcomes’ originate?
  2. where did the idea of ‘mastering’ learning outcomes originate?
  3. how did both of these ideas take root in England during the 1970s and 1980s?

Objectives

The roots of the Objectives Movement are often traced back to seminal publications by Franklin Bobbitt (1918; 1924). Yet, the most lucid and straightforward account of the importance of objectives was provided a decade or so later by Ralph Tyler (Stenhouse, 1975). His book entitled ‘Basic Principles of Curriculum and Instruction’ (Tyler, 1949) has been described as the classic statement of the objectives approach (Kelly, 1982).

Tyler believed that effective instructional planning could not begin until a clear account had been provided of what the instructional process was intended to achieve, in terms of how a student was supposed to change as a result of the instruction. He observed that this critical first step of clarifying purposes (educational objectives) was typically sidestepped.

Tyler

To understand the significance of his contribution to curriculum and instruction, it is important to recognise that Tyler’s background lay in assessment, or ‘evaluation’ as he preferred to describe it (Newton & Shaw, 2014). During the early 1930s, his publications focused on the limitations of objective tests in educational contexts. The technology of objective testing had been honed during World War 1 as a practical tool for allocating recruits to roles in the armed forces. Owing to the simplicity of these tests (including, for example, multiple-choice tests), responses to objective test items could be marked objectively, in contrast to the traditional essay exam, which had been shown to have highly subjective marking. According to Tyler, the problem with applying this format to educational contexts related to the construction of test items, which would typically be derived from a topical outline – a content list – and not from an outline of objectives (Tyler, 1931). Tests constructed on the basis of a content list tended to end up measuring the acquisition of information, but little else:

Often, without recognizing it, test-makers have assumed that all the content treated in a course is to be remembered and that a test of the amount of this material which is remembered by the student is an adequate test of the subject. When the instructors of any college subject formulate their objectives, it is quickly evident that there are other mental processes which students are intended to develop.

(Tyler, 1932a, page 256)

Nowadays, we would refer to this limitation as construct underrepresentation. A ‘construct’ is how we define what our assessment needs to measure, and ‘construct underrepresentation’ indicates that the assessment measures only part of what ought to be measured. What we need is comprehensive, authentic assessment, which is faithful to the entirety of the construct, that is, to all intended learning outcomes. This idea of comprehensive authenticity was central to Tyler’s definition of validity:

the usefulness of the test in measuring the degree to which the pupils have attained the objectives which are the true goals of the subject

(Tyler, 1932b, page 374)

To capture educational objectives comprehensively and authentically, Tyler argued that it was essential to define them, not just in terms of content, but also in terms of behaviour, which Tyler interpreted in a broad sense “to mean any appropriate reactions, physical, mental, emotional, and the like” (Tyler, 1936, page 151). Hence, the idea of behavioural objectives. That Tyler described his approach in terms of behaviour is consistent with his background in assessment, which is concerned with criteria for establishing whether or not educational objectives have been achieved. These criteria always, ultimately, relate to performances – observable behaviours of one sort or another – which might include oral responses, physical demonstrations, written accounts, or suchlike.

Characterising cognition in terms of observable behaviours runs the risk of sounding reductive. In fact, nothing could be further from the truth as far as Tyler was concerned. The whole point of Tyler’s mission was to ensure that high-level objectives – including the least tangible and hardest to describe – were represented as comprehensively and authentically as possible from the outset, as a point of reference for comprehensive and authentic instruction, as well as for comprehensive and authentic assessment:

These educational objectives become the criteria by which materials are selected, content is outlined, instructional procedures are developed and tests and examinations are prepared.

(Tyler, 1949, page 3)

Behavioural aspect Nutrition Digestion Circulation Respiration Reproduction
Understanding of important facts and principles Yes Yes Yes Yes Yes
Familiarity with dependable sources of information Yes No No No Yes
Ability to interpret data Yes Yes Yes Yes Yes
Ability to apply principles Yes Yes Yes Yes Yes
Ability to study and report results of study Yes Yes Yes Yes Yes
Broad and mature interests Yes Yes Yes Yes Yes
Social attitudes Yes No No No Yes

Figure 1. Use of 2-dimensional chart to represent biological science objectives

Figure 1 adapts part of a table from Tyler (1949, page 50), to demonstrate how objectives can be represented more comprehensively and authentically by identifying the kind of behavioural change that is anticipated for each element of content identified. In this figure, the rows and columns have been reversed to save space. The content aspects of the objectives (within the subdomain ‘functions of human organisms’) are presented as columns, while the behavioural aspects are presented as rows.

What is clear from using a chart like this is that each area of content can be (and often will be) associated with a wide range of behavioural objectives. Indeed, the process of constructing a chart like this forces its developer to think long and hard about the kind of objectives that really do need to be included (marked ‘Yes’ in Figure 1) and those that might legitimately be excluded. This becomes the focus for effective curriculum planning, pedagogical planning, and assessment planning.

Classical approach in England

The dangers of construct underrepresentation of the sort identified by Tyler had been recognised in England for as long as exams had been in widespread use (see Latham, 1886, for example). By the 1940s, reform of the School Certificate and Higher School Certificate system was on the cards. The Norwood report, which led to the new General Certificate of Education Ordinary and Advanced level system, recounted concerns such as the following:

The subjects themselves are handled too rigidly; they make little contact with each other or with life or reality or future occupation or interests; examination requirements cast their shadow over all; the acquisition of information is given undue importance; a premium is put on memorisation; power of judgment remains untrained; second-hand opinions pass for knowledge.

(Norwood, 1943, page 10)

Although problems such as these echoed concerns expressed by Tyler in the USA, the Objectives Movement does not appear to have influenced qualification development in England during the 1940s and 1950s. Qualifications continued to be specified only partially, in terms of syllabus content complemented by the exam papers that were released each year. This partial specification, in terms of syllabus content and past exam papers, reflects what we refer to as the ‘classical’ approach to qualification design: rather than educational objectives being specified in terms of both content and behaviours, only content was specified.

Cambridge University Press & Assessment has published a useful archive of past exam material, which illustrates what early O and A level syllabuses and exam papers looked like. During the 1950s, the syllabus for a more technical subject, like O level physics, would simply have listed content. Table 1 reproduces an extract from the Cambridge 1957 O level physics syllabus, which was 9 pages long, listing 79 items of content plus notes on the scope of each item.[footnote 26]

Syllabus Notes
1. Measurement of length and of volume. Both f.p.s. and c.g.s. systems are expected. Candidates will not be asked to describe a vernier or a screw-gauge, but may be expected to use them in the practical examination.
2. Measurement of time by use of the simple pendulum. A knowledge of the formula relating periodic time to length of the pendulum will not be expected; if required in the practical examination, it will be given.
3. Densities of solids and liquids. Experimental determination of densities, e.g. by density bottle or by weighing and use of a measuring cylinder, is expected.
4. Pressure in liquids and gases; transmission of fluid pressure; the hydraulic press. Quantitative formulae required.
5. Boyle’s Law. Experimental demonstration for air is included.

Table 1. Cambridge 1957 O level physics syllabus

By the 1970s, little had changed. The Cambridge 1974 O level physics syllabus was now 12 pages long, but was laid out in essentially the same way. It was more clearly delineated into sections and subsections:

Section A (Items 1 to 20) Mechanics, Hydrostatics, Heat

Section B (Items 21 to 32) Waves, Optics

Section C (Items 33 to 51) Magnetism, Electricity, and Modern Physics

But it was still just a list of 51 items of content. It included a short description of the structure of the exam papers, with a brief introductory section that issued a warning that seemed (ironically) to hint at the perils of not stating objectives clearly:

The syllabus is not intended to be used as a teaching syllabus, or to suggest a teaching order. It is expected that teachers will wish to develop the subject in their own way.

In the examination, questions will be aimed more at testing the candidates’ understanding of fundamental physical principles, and the application of these principles to problem situations, than to their ability to remember a large number of facts and to perform numerical exercises. Some questions will, however, include appropriate calculations.

(UCLES, 1972, page 37)

This lack of detail was characteristic of qualification specification in England during the 1970s and 1980s, including for vocational qualifications (Blakey & Stagg, 1978; Black & Wolf, 1990). This is an important part of the context for the introduction of the CASLO approach, which was intended to help rectify problems associated with the classical approach, most notably its under-specification of educational objectives.

Bloom’s Taxonomy

Benjamin Bloom was a student of Tyler. He is most famous for developing and promulgating the behavioural objectives approach, through a book that was to become known as ‘Bloom’s Taxonomy’ (Bloom, Englehart, Furst, Hill, & Krathwohl, 1956).[footnote 27] The idea for a classification system of this sort had emerged in 1948, as a basis for facilitating communication among examiners. Again, this link to assessment is important: the framework was conceived as a means of enabling the exchange of test items among university teachers (Krathwohl, 2002).

A core feature of their classification scheme for the cognitive domain was its representation of levels of cognitive complexity, which reflected an assumption that simple behaviours become integrated to form more complex ones. They ordered 6 major classes of behaviour from least to most complex:

  1. knowledge
  2. comprehension
  3. application
  4. analysis
  5. synthesis
  6. evaluation

The authors of the Taxonomy treated these high-level classes as ‘descriptive’ rather than ‘explanatory’ constructs (Bloom, Hastings, & Madaus, 1971, page 24).[footnote 28] This oriented them towards further behavioural deconstruction. For instance, they deconstructed the ‘comprehension’ category into subcategories, describing the behaviours associated with each subcategory, thereby helping to render them (and the higher-level category) less covert:

2.00 Comprehension

This represents the lowest level of understanding. It refers to a type of understanding or apprehension such that the individual knows what is being communicated and can make use of the material or idea being communicated without necessarily relating it to other material or seeing its fullest implications.

2.10 Translation

Comprehension as evidenced by the care and accuracy with which the communication is paraphrased or rendered from one language or form of communication to another. […]

2.20 Interpretation

The explanation or summarization of a communication. […]

2.30 Extrapolation

The extension of trends or tendencies beyond the given data to determine implications, consequences, corollaries, effects, etc., which are in accordance with the conditions described in the original communication.

(Bloom, et al, 1956, pages 204 to 205)

The intention underlying this process of deconstruction was to unpack the meaning of the higher-level constructs by explaining what they were likely to entail in practice. This was facilitated by the use of “point-at-able” verbs (Bloom, et al, 1971, page 33), such as: to state, to match, to predict, or to compute. Following in the tradition pioneered by Tyler, these verbs explained what students needed to ‘do’ with the syllabus content they were studying. Also following Tyler’s lead, the framework was most commonly used to secure comprehensive authenticity: to shift curricula and tests away from less complex categories and towards more complex ones (Krathwohl, 2002).[footnote 29]

Behaviourism

Before explaining how education scholars in England reacted to the growing influence of the Objectives Movement during the 1970s, it is important to consider how the movement may (or may not) have been influenced by behaviourism. The Pan ‘Dictionary of Philosophy’ (Flew, 1979) characterises behaviourism as the theory that psychological functioning is definable in terms of observed behavioural data, citing the North American psychologist John B Watson as its progenitor (Watson, 1925).[footnote 30] Although, as a paradigm, its time has now passed, its impact was widespread, influencing both philosophy and education in the USA and internationally.

It is important to consider the (alleged) influence of behaviourism because – from the outset and to the present day – critics in England have panned the use of objectives, as though the Objectives Movement was, as a matter of principle, self-evidently misconceived: guilty by association with behaviourism. An early example of this comes from a paper by Bull, who maligned the use of objectives by the TEC and the BEC as a “manifestation of behaviourism” (Bull, 1985, page 74), going on to explain that:

The behaviourist approach which underlies the use of objectives is very suspect. It is based on experiments with animals and the last thing the behaviourists came to study was the actual behaviour of man. Fundamentally, in any case, the behaviourists were not really interested in explaining behaviour, or even learning: they were basically interested in conditioning – and it is debateable whether even animals learn much by conditioning in their normal, natural environment.

(Bull, 1985, page 80)

Hyland concurred, claiming that the competence-based approach was “founded on dubious and largely discredited behaviourist principles” (Hyland, 1993, page 66), and stating that:

This specific (and seemingly simple) conception of competence is founded squarely on behaviourist learning principles and suffers from all the weaknesses traditionally identified with such programmes

(Hyland, 1993, page 59)

There are, in fact, 2 important associations between the Objectives Movement and behaviourism, related to measurement and instruction, respectively.

The first association concerns the link between the Objectives Movement and a particular approach to the philosophy of science. We have already seen how behavioural objectives were fundamental to the Objectives Movement.[footnote 31] Yet, this insistence upon specifying objectives in ‘behavioural’ terms is sometimes taken to imply that the Objectives Movement is underpinned by a ‘behaviourist’ philosophy (see Melton, 1997, for instance).

A more accurate explanation is that both behaviourism and the Objectives Movement embraced the concept of operationalism, which had been introduced by the physicist Percy Bridgman (1927).[footnote 32] This philosophical claim insisted that, when speaking of measuring a particular construct, the construct is synonymous with the operational procedure that is used to measure it (Briggs, 2022). By the end of the 1930s, the principle of operational definition had become a matter of dogma within psychology (Leahey, 1992). What are we measuring when we measure, say, intelligence? Nothing more than the behaviour tapped by the particular intelligence test that we happen to be using.

The high-level ‘behavioural’ objectives described by both Tyler and Bloom were explicitly cognitive – ‘knowledge of’, ‘understanding of’, ‘mature interests’ – which implied that they were not directly observable and had to be inferred. This left them open to conflicting interpretation. By deconstructing and explicating these constructs in more overtly behavioural terms, they became communicable, and therefore assessable. This was a natural route for assessment specialists such as Tyler and Bloom to have pursued, as all assessments bottom-out in performances, or behaviours, of one sort or another. In other words, because they wanted to draw higher-level inferences concerning competence, they needed to elucidate the lower-level performances that might warrant inferences of that sort.

It was exactly this pragmatic stance that led Bloom and colleagues – in accordance with the zeitgeist of the day – to treat behavioural objectives as though they operationally defined the constructs that they were interested in measuring:

Using operational rather than nominal definitions will make statements of educational objectives clear and easier to communicate to others. Words like “understanding,” “comprehension,” and “appreciation” will take on more precise behavioral meanings and will not be open to various interpretations.

(Bloom, et al, 1971, page 24)

In short, rather than reflecting a deeper commitment to a (now outmoded) philosophy of science, the Objectives Movement adopted the ‘behavioural’ stance for largely pragmatic reasons, to provide a solid foundation for assessment.[footnote 33] The Objectives Movement was never logically bound to the concept of operational definition. But, for a period of time, at least, it did lend it some credibility.

The second, and more significant, association concerns the link between the Objectives Movement and Programmed Instruction, which was an approach that gained traction in the USA during the 1950s and 1960s. The roots of Programmed Instruction can be traced back to World War 2 (and the Korean conflict) when the military looked to North American personnel psychology for inspiration (Bloom, et al, 1971). Soldiers needed to be trained to perform fairly straightforward activities, like assembling and disassembling a rifle, as quickly and efficiently as possible. Psychologists approached this challenge by breaking these macro performances (the activities) down into structured sequences of micro performances, which became the building blocks for instruction. This was known as task analysis.

The renowned behaviourist B F Skinner was highly influential in the Programmed Instruction Movement, particularly through his controversial publication ‘Teaching Machines’ (Skinner, 1958). Reflecting on his own daughter’s experience of education, Skinner concluded that schools paid too little attention to principles derived from the scientific study of learning. So, he set out to identify a more effective instructional approach. Programmed Instruction capitalised on the potential of technology to deliver a structured sequence of instructional units through which a learner could progress at their own rate, engaging actively with little or no external assistance and receiving immediate feedback on the accuracy of their responses (Lockee, Moore, & Burton, 2004). Early incarnations, consistent with Skinner’s approach, presented learners with units that were so short, clear, and simple that the probability of error was extremely low, thus facilitating the delicate process of shaping appropriate behaviour (Klausmeier & Goodwin, 1966; Sutherland, 1988).

The link to the Objectives Movement should be fairly obvious. Necessarily, the first step in developing a sequence of programmed instruction involves specification of the intended outcomes of instruction. Robert F Mager was a leading figure in the movement, and his short, engaging book on ‘Preparing Instructional Objectives’ (Mager, 1962) was to become the bible for writing objectives (Lockee, et al, 2004).

So, does this provide evidence that the Objectives Movement was based upon dubious and largely discredited behaviourist principles? No. It simply indicates that a particular behaviourist approach to instruction, Programmed Instruction, was premised upon a precise specification of behavioural objectives. In fact, the kind of objectives required for a behaviourist instructional approach were very “specific ones, very numerous and of the nature of specific habits” (Tyler, 1949, page 42). They bore little resemblance to the generalised objectives preferred by Tyler and Bloom, or to those produced during the 1970s by the TEC and BEC.[footnote 34]

It is worth noting that, although Mager’s book was originally published under the title ‘Preparing Objectives for Programmed Instruction’, this was changed within a year to remove reference to Programmed Instruction. Likewise, a preface that originally read “It is assumed that you are interested in preparing materials for auto-instructional presentation” (Mager, 1962, page x) had evolved into “It is assumed that you are interested in preparing effective instruction” by the revised second edition (Mager, 1984, page vi).[footnote 35] After all, the book was about how to write objectives, not about how to implement a particular instructional approach derived from behaviourism.

At the heart of Mager’s simple proposal was the idea that objectives should be stated in terms of a desired behaviour, that is, what the learner will be doing when they demonstrate their learning. According to the revised second edition, this provided answers to 3 questions (see Mager, 1984, page 87): [footnote 36]

  1. What do I want students to be able to do?
  2. What are the important conditions or constraints under which I want them to perform?
  3. How well must students perform for me to be satisfied?

The following example illustrates this approach:

Without regard to subject matter or grade level, be able to describe ten examples of school practices that promote learning and ten examples of school practices that retard or interfere with learning.

(Mager, 1984, page 108)

In this example, the performance element concerned ‘describing’, the conditions involved ‘any subject and any grade level’, and the criterion for both categories was ‘ten examples’. Although disassociated from the conceptual baggage of Programmed Instruction, it is clear that objectives of this sort were still far more specific than Tyler’s generalised ones.

In conclusion, although there are links between the Objectives Movement and behaviourism, they have certainly been overstated and overgeneralised. The idea that the movement is somehow fundamentally undermined by association with behaviourism is misguided.

Academic debate in England

When the TEC, the BEC, and (later) the National Council for Vocational Qualifications (NCVQ) began to apply principles from the Objectives Movement to the specification of VTQs in England, they immediately became the target of heavy criticism from scholars of education. MacRory criticised early TEC innovations, warning of the “incomprehensible” “mystique” of objectives (MacRory, et al, 1977, page 4). Bull claimed that objectives of the sort adopted by the TEC and the BEC were “inimical to the real structure of knowledge” (Bull, 1985, page 77). Norris, questioning the new emphasis upon competence – in the wake of the De Ville report, which led directly to the development of NVQs – argued that competence models “distort and understate the very things they are trying to represent” (Norris, 1991, page 334).

Significantly, all 3 of these early critiques referenced Lawrence Stenhouse. The point, here, is that – even before the TEC, the BEC, and the NCVQ began to apply principles from the Objectives Movement to qualification design – the movement had already received substantial criticism from education scholars in England, who seemed intent on heading the Objectives Movement off at the pass in its march from the USA to England. The book ‘An Introduction to Curriculum Research and Development’ (Stenhouse, 1975) was particularly influential in this respect.

Kelly (1982) provides an informative overview of this period, explaining that there had been little interest, in England, in specifying objectives until the mid-1960s. Yet, as problems of curriculum planning came to the fore, interest began to grow, particularly under the aegis of the Schools Council, which was established in 1964. Most of its projects began with the development of clear course objectives.

Although the objectives approach had its supporters in England, including Hirst (1969), other scholars were more critical. They included Pring (1971), Sockett (1971), and Ormell (1974), all of whom criticised Bloom’s Taxonomy specifically.[footnote 37] Wesson (1983a; 1983b) later criticised the use of behavioural objectives more generally, referencing TEC developments specifically. Kelly (1982) concluded that criticisms of the objectives model were “as strong as, if not stronger than” the case for its use (Kelly, 1982, page 108). In fact, to many scholars of education, the case against the objectives model seemed incontrovertible.[footnote 38]

An article by Christopher Ormell provides an interesting perspective on this period, written by one of the original critics, albeit some decades later (Ormell, 1992). He described how the Stenhouse critique became the “official story” among progressive academic educationists, such that opposing behavioural objectives became, after 1975, the “badge of progressive educationalism” the world over (Ormell, 1992, page 23). By the 1990s, however, the tide had turned. The progressive principles of the 1970s – open problem solving, value free approaches, creativity, optionality, child-centred work, culturally permissive approaches – now seemed out of date. Ormell therefore argued that a new case against Bloom was needed, as the old case, the Stenhouse case, had become totally ineffective.

Ormell directly challenged both of Stenhouse’s principal objections to behavioural objectives. First, Stenhouse claimed that objectives provided a straitjacketed account of knowledge, as though knowledge could only be demonstrated in a discrete, pre-specified, manner. Ormell replied that this critique overstated the significance of creative (unpredictable) performances. Second, Stenhouse claimed that objectives provided a straitjacketed account of education, as though we should be able to specify in advance, with some clarity, what students ought to learn. Ormell replied that, nowadays, it seemed inconceivable that we should not even try to clarify educational objectives. In short, according to Ormell, the Stenhouse critique embodied values “possibly accepted in the 1970s, but certainly out of favour now” such that there is “no mandate today for unpredictable students and obscurantist teachers” (Ormell, 1992, page 27).[footnote 39]

The purpose of this section is not to try to do justice to the arguments for and against the Objectives Movement, whether prior to the 1970s, during that decade, or subsequently. The purpose is simply to illustrate the nature of the debate among academic educationists in England during the 1970s and 1980s regarding the Objectives Movement. This provides an important backdrop to the adoption of the CASLO approach by assessment organisations from the 1970s to the 1990s, and to how the NVQ model, in particular, was received by many scholars of education.

Mastery

The Mastery Movement became influential in the USA during the 1970s, as both a philosophy of, and a methodology for, teaching and learning. Two parallels with the history of the Objectives Movement are worth noting. First, the roots of mastery learning can be traced back to influential North American scholars working during the early decades of the 20th century. Second, mastery learning was adopted as an organising principle of the Programmed Instruction Movement. The Mastery Movement itself, however, began during the 1970s, in the wake of a report written by Benjamin Bloom, entitled ‘Learning for Mastery’ (Bloom, 1968). According to Gagne – who had previously been associated with the Programmed Instruction Movement and who later developed his own version of mastery learning – Bloom raised the idea of mastery “to a new level of generality” (Gagne, 1988, page 108).

Bloom

The idea of mastery represented a new philosophy of teaching and learning because it rejected the standard assumption that: for each new cohort of students, only about a third will adequately learn what has been taught, about a third will learn a good deal but not enough to be considered good, and a third will fail or just get by (Bloom, et al, 1971).[footnote 40] This assumption was embodied in the standard practice of grading on the normal curve, which led to the highest achieving students receiving the highest grades, and to the lowest achieving students being failed. This was not so much a problem with assessment – the lowest achieving students might legitimately have been categorised as having failed. Instead, what was at fault was the expectation of failure, which created a self-fulfilling prophecy of failure. Bloom argued that this assumption was not simply wasteful and destructive but unnecessary. On the contrary, he claimed that:

Most students (perhaps more than 90 per cent) can master what we have to teach them, and it is the task of instruction to find the means which will enable them to master the subject under consideration.

(Bloom, et al, 1971, page 43)

Drawing on work by Carroll (1963), Bloom argued that this was possible as long as the quality of instruction was high enough, and as long as students who needed additional time were provided with it. Indeed, he proposed a relationship between these 2 variables: with effective instruction, we can reduce the amount of time required by slower students to the point where this is not prohibitively long.

Without wanting to be too prescriptive, methodologically, Bloom recommended the approach that he and colleagues had been developing at the University of Chicago. Central to this approach was the idea of formative evaluation.[footnote 41] Starting from a clear and comprehensive specification of learning outcomes, a course could be broken down into units of learning of perhaps a week or 2 in duration. These units could then be broken down into a number of elements, and diagnostic progress tests could be developed to determine whether a student had mastered the elements, or if not, then what they still needed to learn.

Frequent formative evaluation, the Chicago group argued, helped to pace student learning, and helped to motivate students. For students who had mastered a tested element, formative evaluation would help to reinforce their learning. For the remaining students, the test would provide critical feedback to reveal their particular points of difficulty. Upon the foundation of this diagnosis, a teacher would then prescribe an appropriate instructional intervention to help close the gap in learning.

One obvious challenge associated with this personalised approach to teaching and learning – which was premised on the idea that pace of progression will differ across students – is how it can be accommodated when teaching whole classes. Although this might be achieved in various ways, Bloom recommended that enrichment, or extension, activities should be used with faster students, enabling them to deepen their learning while slower students were still acquiring the required breadth of learning (Guskey, 2023).

Central to the idea of mastery learning is the impact that it is presumed to have upon the slowest learners within any cohort. This impact derives from using evaluation formatively, that is, integrating assessment within teaching and learning rather than concentrating it all at the end of a course. If students are supported to achieve mastery in this fashion, then summative evaluation (summative assessment) should become a positive, reinforcing experience:

If the system of formative evaluation (diagnostic-progress tests) and summative evaluation (achievement examinations) informs the student of his mastery of the subject, he will come to believe in his own competence.

(Bloom, et al, 1971, page 56)

Classical approach in England

England and the USA have always had quite different assessment cultures. For instance, England never bought into multiple-choice testing with quite the same fervour as the USA. Yet, below the surface, their working assumptions and models have actually remained quite similar in many ways, and this was certainly true of grading practices during the middle of the 20th century. As such, it should not be surprising that the 2 nations experienced similar assessment and learning challenges, including how best to recognise success and to prevent failure.

Pedley, for example, described arrangements for grading regional technical exams in England, during the 1960s, as follows:

In all subjects the pass mark is 40 per cent […] In most subjects the credit mark is 65 per cent and the distinction mark 85 per cent.

(Pedley, 1964, page 154)

So mastery of the whole domain of learning was not built into the grading model for these TVET qualifications, as a high pass mark might otherwise have signalled. Moreover, we have already seen how failure was a major concern for TVET qualifications during the 1950s and 1960s. Taylor & Beaumont noted that the typical failure rate for a City & Guilds or Regional Examining Board exam during the 1950s and 1960s was approximately a third (Taylor & Beaumont, 1967). Very similarly, O level pass rates tended to fluctuate around the 60% mark, from the 1950s to the 1970s, while A level pass rates tended to fluctuate around the 70% mark (Newton, 2022).

Neither were these exams designed to certify mastery of specific elements of competence. Instead, they were designed according to the classical approach, whereby exam marks were aggregated to a mark total, with candidates’ final grades determined by how many marks they achieved in total. That is, these exams adopted a compensatory (as opposed to a mastery) approach to aggregation.
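To make the contrast concrete, here is a minimal sketch (in Python) of the two aggregation rules. The marks, elements, and maximum mark are invented for illustration; the grade thresholds simply echo the 40%, 65%, and 85% figures quoted by Pedley above.

```python
# Illustrative sketch only: invented marks and elements; the thresholds echo the
# 40/65/85 per cent figures quoted by Pedley for the regional technical exams.

def compensatory_grade(marks, max_marks, pass_pc=40, credit_pc=65, distinction_pc=85):
    """Classical approach: aggregate marks first, then grade the total.
    Weak performance on one element can be offset by strength on another."""
    percent = 100 * sum(marks.values()) / max_marks
    if percent >= distinction_pc:
        return "distinction"
    if percent >= credit_pc:
        return "credit"
    return "pass" if percent >= pass_pc else "fail"


def mastery_result(elements_mastered):
    """Mastery approach: every element must be achieved in its own right.
    There is no trading-off between elements."""
    return "pass" if all(elements_mastered.values()) else "not yet achieved"


# Strong theory marks compensate for a weak practical under the classical model,
# but not under a mastery model.
marks = {"theory": 52, "calculation": 38, "practical": 10}
print(compensatory_grade(marks, max_marks=150))   # "credit" (100/150 = 67%)
print(mastery_result({"theory": True, "calculation": True, "practical": False}))   # "not yet achieved"
```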

In short, all of the challenges that led Bloom to propose a new philosophy of teaching and learning were just as evident in England during the 1970s as they had been in the USA.

Criterion-Referenced Measurement

In the introduction to his 1978 book ‘Criterion-Referenced Measurement’, Ronald Berk described the shift from norm-referenced to criterion-referenced testing as the “most dramatic” to have occurred over the past decade in the field of educational measurement and evaluation in the USA. He explained that an “increasing emphasis on mastery-proficiency-competency is permeating all levels of education and other professions, particularly medicine and the allied health fields” (both quotations from Berk, 1978, page 3).

The distinction between norm-referenced and criterion-referenced measurement had been introduced during the early 1960s, by Robert Glaser. Glaser had been a leading figure in the Programmed Instruction Movement and – like both Tyler and Bloom – his expertise in teaching and learning was informed by his background in educational measurement.

Glaser observed that most existing educational attainment measures were norm-referenced, that is, they embodied relative standards, indicating the proficiency of any particular student relative to their peer group. He argued, instead, for criterion-referenced measures, which embodied absolute standards, to indicate the proficiency of any particular student along a “continuum of knowledge acquisition” (Glaser, 1963, page 519). In addition, he argued that educationists needed to:

specify minimum levels of performance that describe the least amount of end-of-course competence the student is expected to attain, or that he needs in order to go on to the next course in a sequence.

(Glaser, 1963, page 520)

The link to the Mastery Movement was quite explicit. As Jim Popham put it some years later, when the intention is to bring large numbers of learners to levels of competence not previously seen, relative comparisons are no longer meaningful because we want all learners to end up performing at a high level (Popham, 1994). Glaser’s take on summative assessment was very similar to Bloom’s take on formative assessment. Both involved new ways of thinking about assessment, driven by new ways of thinking about teaching and learning, plus the need for clarity concerning what students have actually learnt, or not yet learnt.
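The practical difference between the two kinds of interpretation can be sketched as follows (using invented cohort scores and an invented cut-score): the same raw score is reported relative to a peer group under norm-referencing, but judged against a fixed performance standard under criterion-referencing.

```python
# Illustrative sketch only: invented cohort scores and cut-score.

def norm_referenced_report(score, cohort_scores):
    """Relative standard: where does this learner sit within the peer group?"""
    below = sum(1 for s in cohort_scores if s < score)
    percentile = round(100 * below / len(cohort_scores))
    return f"Scored higher than {percentile}% of the cohort"


def criterion_referenced_report(score, cut_score):
    """Absolute standard: has the specified minimum level of performance been reached?"""
    return "criterion met" if score >= cut_score else "criterion not yet met"


cohort = [34, 41, 47, 52, 55, 58, 63, 66, 71, 78]
print(norm_referenced_report(58, cohort))              # relative interpretation
print(criterion_referenced_report(58, cut_score=60))   # absolute interpretation
```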

Berk (1978) explained how Glaser’s original conception of criterion-referenced measurement had subsequently been developed (in the USA) in 2 different directions: domain-referenced measurement and mastery testing. Mastery testing was closely linked to the idea of mastery learning, whereby tests were designed to measure particular instructional objectives, and cut-scores were established to distinguish between learners who had mastered those objectives versus those who had not yet mastered them. This had been the dominant direction of travel.

Domain-referenced measurement described the other (more complicated) direction of travel. It aimed to take testing beyond traditional educational objectives, to provide what Jim Popham described as an “unambiguous” definition of each domain of learning (Popham, 1978, page 13). Developing this line of reasoning, Popham had concluded that traditional objectives were simply too vague. He argued that the first step in developing any criterion-referenced test ought to involve a more precise definition of the domain of content or behaviours that needed to be assessed, which could be defined operationally as the specification of all possible test items.[footnote 42]

In fact, neither of these 2 directions of travel strongly influenced developments in England. What was influential, however, was the more general idea of moving away from norm-referencing and towards criterion-referencing. We have already seen how this influenced the work of the technical and vocational awarding organisations during the 1970s, including the TEC, the BEC, City & Guilds, and the RSA. But the idea of criterion-referencing became increasingly popular during the 1980s, with scholars and politicians alike, and came to influence qualification and assessment practices in England far more widely. Ultimately, the ideas that drove the development of technical and vocational qualifications in a certain direction during the 1970s – the direction that was later to take root in the CASLO approach – ended up driving the development of general qualifications in a somewhat different direction. The important point, however, is that practices evolved across the board under the influence of criterion-referencing during the 1970s and 1980s. In the following subsections, we will see how the legacy of criterion-referencing extended well beyond the TVET landscape.

Records of Achievement

A speech by the Secretary of State for Education and Science, Sir Keith Joseph, encapsulated the zeitgeist of the early 1980s. Joseph announced a variety of new policy goals that were premised on more clearly specified educational objectives and a move towards criterion-referenced assessment. This included the introduction of Records of Achievement:

But despite these difficulties no one of us can be satisfied with what our pupils attain by the time that they are allowed to leave school. Some of these attainments are not at present systematically assessed or acknowledged where they can be and ought to be. That is why I see as important the development of records of achievement on which I have recently issued a draft statement of policy.

(Joseph, 1984, page 139)

This draft statement inspired a conference in 1984, which resulted in a book that was edited by Patricia Broadfoot (1986). In her introduction to this book, Broadfoot explained that there was now a considerable groundswell of support among educationists for “a more comprehensive and curriculum-integrated approach to assessment” (Broadfoot, 1986, page 2), an idea that Records of Achievement clearly embodied. Indeed, she characterised England as being on the brink of an assessment revolution, confronting a longstanding tradition of overreliance upon external exams. Having said that, Broadfoot acknowledged that, while trailblazer schemes from the 1970s and early-1980s had now become a matter of government policy, there were still deep divisions in the movement concerning how best to achieve their mutually accepted goals.

One of these deep divisions concerned the degree to which Records of Achievement ought to be subjective and personal versus objective and comparable. Chapters within the edited book illustrated both extremes. Located at the more objective end of this continuum were schemes that City & Guilds had developed in the wake of the Further Education Unit report ‘A Basis for Choice’ (FEU, 1979), which were firmly grounded in criterion-referencing. Nick Stratton, a Senior Research Officer at City & Guilds, described the schemes that they had developed with particular reference to a general vocational preparation course known as ‘course 365’ (Stratton, 1986).

Outcome area: Practical & Numerical (each objective has 5 ‘steps’, listed here from first to fifth)

Safety:

  1. Can explain the need for safety rules
  2. Can remember safety instructions
  3. Can spot safety hazards
  4. Can apply safe working practices independently
  5. Can maintain, and suggest improvements to, safety measures

Using equipment:

  1. Can use equipment safely to perform simple tasks under guidance
  2. Can use equipment safely to perform a sequence of tasks after demonstration
  3. Can select and use suitable equipment and materials for the job, without help
  4. Can set up and use equipment to produce work to standard
  5. Can identify and remedy common faults in equipment

Numeracy:

  1. Can count objects
  2. Can solve problems by adding and subtracting
  3. Can solve problems by multiplying and dividing
  4. Can calculate ratios, percentages and proportions
  5. Can use algebraic formulae

Figure 2. Extract of profile grid from Stratton (1986, pages 110 to 111)

City & Guilds was keen to provide a scheme that would record student progress in a manner that would support and motivate students, as well as culminating in a reliable end-of-course attainment profile. Course 365 embodied this idea of capturing both progress and end-of-course attainment by using a profile grid. The mark 3 version of this grid was split into 4 generic outcome areas: communication, practical & numerical, social, and decision-making. Rows within each outcome area identified more specific objectives, such as, for the ‘social’ outcome area: working in a group, accepting responsibility, and working with clients. Alongside each of these rows were 5 columns that exemplified progress in the form of criterion statements that increased, from left to right, in terms of autonomy, complexity, and variety of application (albeit with the caveat that they were not necessarily organised in a strict logical hierarchy). Figure 2 recreates an extract from this profile grid relating to the ‘practical & numerical’ outcome area.[footnote 43]

Clearly, this was an outcome-based approach to recording progress and end-of-course attainment, and each of the criterion statements within this profile grid was amplified using concrete examples. City & Guilds saw each of the 5 criterion statements associated with each row as a ‘stepping stone’ towards maturity, rather than a formal level or grade.

Although perhaps not quite a direct precursor to the CASLO approach, the City & Guilds profiling schemes clearly reflected a similar ancestry, and were designed in the same spirit. It is worth noting that the Certificate of Pre-Vocational Education (CPVE) qualification model was based upon essentially the same kind of profiling scheme:

It embraces a fully fledged formative and summative profiling system based on both a personal reviewing system and a bank of summative ‘can do’ statements. The structure of this bank reflects the ten core areas of CPVE. There will be a dozen or so statements for each area and these will be organised as several sets of hierarchically related statements, with some left over ‘stand alone’ statements. Thus each printed-out profile report will contain only those statements corresponding to best performance.

(Stratton, 1986, page 124)
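As a rough sketch of how such a profiling scheme might record and report attainment, the example below (in Python) uses two objectives with criterion statements abbreviated from Figure 2. The recording format itself is hypothetical rather than City & Guilds’ own, but it illustrates the idea of printing only the statement corresponding to best performance on each objective.

```python
# Illustrative sketch only: statements abbreviated from the Figure 2 extract;
# the recording format is hypothetical, not City & Guilds' actual scheme.

PROFILE_BANK = {
    "Safety": [
        "Can explain the need for safety rules",
        "Can remember safety instructions",
        "Can spot safety hazards",
        "Can apply safe working practices independently",
        "Can maintain, and suggest improvements to, safety measures",
    ],
    "Numeracy": [
        "Can count objects",
        "Can solve problems by adding and subtracting",
        "Can solve problems by multiplying and dividing",
        "Can calculate ratios, percentages and proportions",
        "Can use algebraic formulae",
    ],
}


def profile_report(best_step_reached):
    """Report only the statement corresponding to best performance on each objective."""
    report = {}
    for objective, step in best_step_reached.items():
        statements = PROFILE_BANK[objective]
        report[objective] = statements[step - 1] if step >= 1 else "no step yet achieved"
    return report


# A learner part-way through the course:
for objective, statement in profile_report({"Safety": 4, "Numeracy": 2}).items():
    print(f"{objective}: {statement}")
```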

In the same speech, Joseph emphasised that external exams – including the soon to be launched General Certificate of Secondary Education (GCSE) – should also move towards a greater degree of criterion-referencing:

The existing system tells us a great deal about relative standards between different candidates. It tells us much less about absolute standards. […] We need a reasonable assurance that pupils obtaining a particular grade will know certain things and possess certain skills or have achieved a certain competence.

(Joseph, 1984, page 142)

His broader policy goal was to raise educational standards, and there were concerns that norm-referenced grading made it impossible to determine whether or not educational standards were actually rising or falling, whether at the national level or for individual schools.[footnote 44] He believed that a move towards criterion-referencing would render results capable of measuring improvements in educational standards over time. Results would also be better designed to hold individual schools to account, should they fail to raise educational standards.

In fact, his 1984 announcement largely reiterated sentiments expressed in a DES policy statement from 1982. This statement had acknowledged attempts to develop GCSE Grade Descriptions, which were intended to indicate the likely levels of competence and the knowledge that might be expected from those who obtained a particular GCSE grade in each subject area. The Secretary of State hoped that these Grade Descriptions would be a step towards the longer-term goal of developing Grade-related Criteria that would render: “the award of all grades conditional on evidence of attainment in specific aspects of a subject” (DES & WO, 1982, page 10). The exam boards had been asked to direct their attention to this longer-term goal.

Despite a protracted period of research and analysis, the longer-term goal of establishing Grade-related Criteria was never achieved. It was concluded that strong forms of criterion-referencing were incompatible with public examining in England (Cresswell, 1987; Gipps, 1990; Tattersall, 2007). The boards were, however, successful in developing Grade Descriptions, which helped to exemplify attainment standards at the subject level for both GCSE and A level exams (Kingdon & Stobart, 1988; Kingdon, 1991).

Assessment Objectives

Another legacy of enthusiasm for criterion-referencing during the 1980s was the development of assessment objectives, although this can equally be seen as a longer-term legacy of enthusiasm for educational objectives during the 1970s. The Schools Council appears to have been particularly influential in this respect.

The Schools Council was established in 1964 to assume responsibility for most of the work previously carried out by the Secondary School Examinations Council and the Curriculum Study Group of the Department of Education and Science (see Schools Council, 1965). This included a coordinating and advisory function in relation to O and A level examining. It very soon decided that rapid changes in schools and society demanded “a complete reappraisal of the sixth-form curriculum and examinations” (Schools Council, 1972, page 7). A paper on sixth-form examining methods indicated a need to revisit the fundamental principles of examining:

The traditional pattern of examinations based primarily on syllabus content has plainly undesirable and constrictive effects on teaching and learning. A valid test of the success of pupils in following a course of study requires not merely that it should test content but that above all it should be related to the aims and emphasis of the teaching that preceded it. […] Certainly the analysis of educational aims would seem to be the prerequisite of examination reform.

(Schools Council, 1968, page 6)

The concern, here, was that existing exams focused too much on testing “factual knowledge” and too little on testing the “ability to think” (Schools Council, 1968, page 6), with an inevitable negative backwash impact on teaching and learning. In response to this report, the Schools Council invited all of its subject committees to reconsider their examining techniques (Schools Council, 1973).

The report from the science committee noted that curriculum changes had progressed more rapidly in the sciences, influenced by the Nuffield Science projects (see Schools Council, 1970, for additional insights). It concluded that it was no longer acceptable for teachers and examiners to rely purely on syllabus content lists and past paper precedents. Its formal recommendations on the incorporation of clear and detailed objectives included:

The objectives should be explicit and should match the objectives of the curriculum.

The move towards higher mental objectives and away from questions demanding only the ability to remember should continue.

(Schools Council, 1973, page 44)

The science committee welcomed the introduction of ‘examination specifications’ that were based upon the idea of a 2-dimensional chart from Tyler (illustrated earlier in Figure 1). Appendix E from their report included a specification that had been developed for the 1969 Nuffield A level chemistry exam – which represented a full range of topic areas (as columns) alongside a full range of Bloomian cognitive behaviours (as rows) – with cell values indicating intended weighting. Already, these specifications were shifting emphasis “away from the ability simply to recall towards the ability to comprehend and apply” (Schools Council, 1973, page 25).
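The general form of such a 2-dimensional specification can be sketched as follows. The topic areas and weightings here are invented, chosen only to show the row-by-column structure and the expectation that the intended weightings account for all of the marks.

```python
# Illustrative sketch only: invented topic areas and weightings. Rows are
# cognitive behaviours, columns are topic areas, and cells give the intended
# share of marks (as percentages).

exam_specification = {
    "Knowledge":     {"Topic A": 10, "Topic B": 10, "Topic C": 5},
    "Comprehension": {"Topic A": 10, "Topic B": 10, "Topic C": 10},
    "Application":   {"Topic A": 5,  "Topic B": 10, "Topic C": 10},
    "Analysis":      {"Topic A": 5,  "Topic B": 5,  "Topic C": 10},
}

# The intended weightings should account for all of the marks.
total = sum(sum(row.values()) for row in exam_specification.values())
assert total == 100, total
```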

An authoritative review of O and A level physics syllabuses toward the end of the 1970s underlined the extent to which they differed in style across the 8 boards (Crellin, Orton, & Tawney, 1979). This report also noted that, while the examining boards had certainly engaged with the debate over objectives, they tended to prefer less detailed specifications: rather than developing long lists of behavioural objectives, they preferred more holistic approaches. For instance, a publication by the Joint Matriculation Board (JMB, 1970) included an illustration of how the objectives of a science exam might be represented in terms of 6 dimensions and associated weightings:

Knowledge: 40%
Comprehension: 30%
Application: 20%
Evaluation and investigation: 10%
Expression
Experimental skills

In this example, expression was not weighted independently as it would be taken into account across all of the questions, and experimental skills would be dealt with separately in the practical exam. This approach is quite similar to how assessment objectives are expressed nowadays, although it took some time before this became standard practice.

A step in that direction occurred during the early 1980s, with the specification of Common Cores for A level subject areas (GCE Boards, 1983). These specifications indicated what all subject syllabuses ought to have in common, restricted to not more than half of any particular syllabus (Kingdon, 1991). Common Cores were intended to be useful for higher education selectors – clarifying what might be expected of prospective applicants – as well as having the potential to improve comparability of standards within and across examining boards. This was in the wake of widespread concern over the proliferation and increasing divergence of A level syllabuses.

Subject working groups were given considerable leeway in developing their Common Cores and – of relevance to the present report – some of them explicated aims and objectives as well as content. For example, as the first of 3 aims, the geology working group specified that the core syllabus will: “provide knowledge and practical experience of geology, both in the field and the laboratory, and demonstrate the use of this knowledge and experience”. In a subsequent section on objectives, it stated that students will be expected to demonstrate the ability to “perform basic tests and use elementary techniques in the field and the laboratory” and to “describe and understand geological processes both present and past” and to “formulate and test hypotheses in a geological context” and so on (all quotations from GCE Boards, 1983, page 72). Over time, A level syllabuses incorporated increasingly clear statements of aims and objectives (Kingdon, 1991).[footnote 45]

Specifications of this sort were developed more systematically within similar regulations for GCSEs, the National Criteria (DES, 1985). The Criteria specified a common structure for the major GCSE subject areas: permitted titles, general aims, assessment objectives, proportions of marks allocated to those objectives, schemes of assessment, and descriptions of standards at key grades (Kingdon & Stobart, 1988). In accordance with their roots in the Objectives Movement, GCSE assessment objectives were intended to indicate that GCSEs were about more than mere recall of factual knowledge, emphasising the importance of higher-order abilities and skills (Butterfield, 1996).[footnote 46] Indeed, part of the rationale for putting coursework at the heart of the GCSE model was to improve the assessment (and thereby the teaching) of these higher-order competencies. Significantly, though, GCSE assessment objectives were intended to be used for designing syllabuses and assessment procedures, rather than to be used directly by teachers when assessing students (Butterfield, 1996).

As far as GCSE and A level exams were concerned, assessment objectives had a significant role to play in warranting comparability claims across the examining boards. Scrutinising syllabuses through the lens of both breadth and depth – supported by tools like Bloom’s Taxonomy – had raised significant comparability concerns. For instance, an analysis by Crellin, et al (1979) suggested that O level physics syllabuses were fairly similar in terms of breadth of coverage, but varied considerably in the depth with which particular topics were treated. Based on a similar analysis for A level physics, they found it difficult not to conclude that the syllabuses differed significantly in demand. If, instead, all of the boards were required to allocate a certain proportion of marks for knowledge, comprehension, application, and so on, then this would help to deflect criticisms of this sort. Thus, GCSE and A level assessment objectives came to function as a tool for calibrating standards, rather than as a framework for teaching, learning, and assessment.

National Curriculum Assessment

Plans to assess the new national curriculum can be understood as the peak of enthusiasm for criterion-referencing in England. In December 1987, Professor Paul Black submitted proposals from the Task Group on Assessment and Testing (TGAT) to the Rt Hon Kenneth Baker MP. The TGAT report began by explaining that certain design principles would be prioritised: assessments should be criterion-referenced, they should be used formatively, and they should relate to progression through the curriculum:

More generally, the combination of a norm-referenced system with age-specific scaling would not be consistent with the proposals in the national curriculum consultative document. The overall national purpose is to work for achievement of the attainment targets of the curriculum. Assessment, whether for feedback to pupils or overall reporting and monitoring, should therefore be related to this attainment i.e. it should be criterion-referenced. Given this, it follows that different pupils may satisfy a given criterion at different ages: to tie the criteria to particular ages only would risk either limiting the very able, or giving the least able no reward, or both.

(TGAT, 1988, paragraph 99)

The report also emphasised that criterion-referencing in this manner would make it possible to monitor changes in national educational standards over time.

Ultimately, the goal of developing a tightly criterion-referenced assessment system proved to be very challenging to implement, and the model was radically loosened, if not entirely abandoned, during the mid-1990s (Dearing, 1993; Daugherty, 1995; Shorrocks-Taylor, 1999).

Scholarly roots

The roots of the CASLO approach are not straightforward to uncover. The architects of the new model – including Gilbert Jessup, of whom more will be said later – tended not to dwell on historical matters. Indeed, although the first major book on the adoption of the new model did include a chapter on its ‘background and origins’ (Tuxworth, 1989), it provided an oddly truncated account, which made it sound as though NVQs were little more than an adaptation of a North American approach to teacher education from the 1960s.

The roots of the CASLO approach were certainly North American, although they stretched back further into the first few decades of the 20th century. These roots were fundamentally educational, but it is interesting to note how they were pioneered by scholars who had a particular desire to improve assessment. The dominant theme here – from the Objectives Movement through to the Criterion-Referenced Measurement Movement – was the need to ensure that assessment is as comprehensive and authentic as possible, given the negative consequences associated with partial and inauthentic assessment. These concerns were just as salient in England as they were in the USA, which is why these movements migrated. In particular, there was a strong appetite in England for tackling over-reliance on the written exam format, widespread concern over the lack of attention to higher-level skills, and a strong appetite for addressing the prevalence of failure.

As the Objectives Movement began to influence educational thinking in England, particularly during the 1970s, there was a certain amount of resistance from scholars of education. This was a period prior to the introduction of the national curriculum, during which teacher control of the curriculum was hotly debated. It is easy to see how the idea of prespecifying educational outcomes might have appeared to embody one side of this debate, while many academic educationists continued to argue for retaining teacher autonomy, which was more consistent with the other side of the debate. Conversely, as the Criterion-Referenced Measurement Movement began to influence educational thinking in England, particularly during the 1980s, the idea of clarity over what students needed to learn and be assessed on seems to have been somewhat less controversial. Clarity would provide a necessary foundation for effective formative assessment as well as for improving summative assessment.

Wider sociopolitical context of the 1980s

As we have already noted, academic debate surrounding the introduction of NVQs, and the CASLO approach more generally, tended to focus on its conceptual basis. Williams & Raggatt (1998) chose instead to focus on the economic, institutional, and political factors that helped to explain the origins of these new competence-based vocational qualifications. From documentary analysis and interviews with policy makers, officials, and consultants, they identified 4 interrelated beliefs that seemed to capture the underlying rationale for reform. Their analysis extends ideas from our earlier discussion of the 1970s landscape.[footnote 47]

First, education, generally, had failed to prepare learners for the needs of employers and employment. The economic recession of the 1970s had focused attention on the degree to which education was preparing young people adequately for the world of work, particularly low-achieving students. Critics claimed that many students were leaving school essentially unemployable owing to a lack of basic skills, and with anti-industrial attitudes cultivated by the education system itself. Policy makers decided that this must change, and that education – particularly further education – should be refocused to respond to the needs of employers and employment. This became known as the ‘new vocationalism’ of the 1970s and 1980s, to some extent an expression of distrust of the educational establishment. This new policy stance led to the development of curriculum initiatives such as the Technical and Vocational Education Initiative (TVEI) during the early 1980s, which required co-operation with local industries to provide occupationally relevant school- and college-based programmes for 14 to 18-year-olds (Ainley, 1990; Stanton & Bailey, 2004).

Second, the employment market (and therefore skills requirements) had changed over time, yet education and training provision had failed to keep up with these changes. England had transitioned into a post-industrial age: some sectors were advancing rapidly (including services), while others were in decline (including manufacturing). Entrants to the new job market required new skills, including expertise in new technologies, as well as higher levels of skill. Yet, in many of the advancing sectors – retail, hotel and catering, and caring, for instance – levels of skill were low, with an absence of appropriate apprenticeships and qualification suites.

Third, the apprenticeship system of the 1980s was not consistently delivering the goods. It was predominantly focused on craft industries, and apprenticeship numbers were continuing to decline. We noted earlier that the quality of education and training delivered to apprentices was highly variable during the 1970s. Williams & Raggatt explained that apprenticeships were unknown in many advancing sectors, and were typified by restrictive practices. In short, the apprenticeship system needed to be disrupted rather than reinforced. The move towards competence-based vocational qualifications was therefore associated with an attempt to improve quality, to improve coverage, and to free the system from artificial barriers. Most importantly, apprenticeship should no longer be associated with serving time, but with acquiring competence.

Fourth, rising levels of youth unemployment raised new challenges and opportunities related to getting young people off the streets and into employment. The Manpower Services Commission (MSC) was a non-departmental public body, responsible to the Department of Employment, with a remit to co-ordinate employment and training services, and the work of the Industrial Training Boards. Active from 1974 to 1988, it published a number of influential reports from the late-1970s onwards, which outlined a new conception of training standards that was to become the foundation upon which NVQs were built.

As unemployment rose during the 1970s, the focus of the MSC shifted towards the short-term needs, and then to the long-term needs, of the unemployed. Building upon the Labour-initiated Youth Opportunities Programme (YOP), the Conservative-initiated Youth Training Scheme (YTS) was introduced in 1983 to provide a programme of integrated education and training for school leavers. It comprised a one-year programme of on-the-job training, designed as a new model of apprenticeship that was intended to replace apprenticeship-by-time-serving. Although originally intended to cater for both employed and unemployed young people, very few employed trainees were enrolled on the scheme (Ainley, 1990). It soon became seen as a low paid, low status, last resort option for young people, with no clearly defined objectives, no recognised certification, and low completion rates (Ainley, 1990).

There was clearly a pressing need to enhance the status of the YTS system, and formal certification was deemed critical to achieving this goal. As policy officials began to design the principles of a 2-year programme of education and training, the need for a review of the qualification system became increasingly apparent (Williams & Raggatt, 1998). The white paper ‘Education and Training for Young People’ (DES & DE, 1985) announced that all trainees should have the opportunity to work towards a recognised qualification, and set in train a Review of Vocational Qualifications. As we will soon see, this review led to the National Council for Vocational Qualifications, and to National Vocational Qualifications. In 1990, the YTS was replaced by Youth Training, which specified that all trainees must follow a training programme that leads to a Level 2 NVQ (Raggatt & Unwin, 1991).

Summary of the pre-history

National Vocational Qualifications – which we have identified as the first CASLO qualifications of national prominence in England – embodied an approach to qualification design that departed radically from traditional TVET qualifications, including Ordinary National Certificates and Diplomas, Higher National Certificates and Diplomas, and Craft Certificates. As NVQs began to be rolled out, critics began to compare them unfavourably with the traditional qualifications that they were replacing. In a radical critique, Smithers adopted exactly this strategy, arguing that “well-known and respected” qualifications were being replaced by low quality NVQs (Smithers, 1993, page 10 and section 6). Indeed, he took this one step further by strongly implying that the awarding organisations responsible for existing qualifications were fundamentally opposed to the new NVQ model, but were unable to express their true feelings owing to pressure to buy into the new system (Smithers, 1993, paragraph 4.13).

In fact, the story is more complicated than this. The qualification systems that existed prior to the introduction of the NVQ framework were far from perfect. There would, undoubtedly, have been examples of qualification suites that operated very effectively within particular sectors for certain purposes. However, the huge diversity of TVET qualification provision during the 1960s and 1970s would be enough, in itself, to raise questions concerning quality across the board. There was certainly concern that the content of many existing qualifications was driven more by what teachers and trainers felt comfortable delivering than by what apprentices really needed to learn (Raggatt & Williams, 1999).

Furthermore, there were widely recognised, long-standing, problems with existing qualifications – related to wastage, retardation, and failure – which helped to dispose commentators towards greater reliance upon centre-based assessment. As well as helping to ensure that TVET qualifications were tailored to local needs, greater reliance upon continuous centre-based assessment might also help to improve the comprehensiveness and authenticity of these qualifications, enabling them to target the higher-level competencies that written exams had often failed to reach.

Beyond concerns related to the contents and processes associated with existing qualifications, it is important to remember that formal qualifications were often not required by employers, even to complete an apprenticeship. Apprenticeship was primarily a matter of serving time rather than acquiring competence. In fact, the Haslegrave report estimated that the majority of technicians in the workforce had no relevant qualification at all (Haslegrave, 1969). Furthermore, even well into the 1980s, there were large areas of the economy where no relevant qualifications existed, or where uptake was very low (Raggatt & Williams, 1999).

Turning from off-the-job qualifications to on-the-job training, the situation had improved since the 1964 Industrial Training Act, but it was far from perfect. At least partly because there was no formal certification of on-the-job training, quality varied widely both within and across sectors. The apprenticeship system of the 1960s and 1970s clearly had serious flaws (Peters, 1967; Oates, 2004; Fuller & Unwin, 2009; Mirza-Davies, 2015).

Finally, the economic downturn meant falling apprenticeship opportunities and rising unemployment. Very many young people had no opportunity to receive either on-the-job or off-the-job training. The Youth Training Scheme was a key part of the policy solution to this problem, and NVQs had to work in synergy with this initiative. Given this trajectory, the NVQ model is sometimes said to have: “evolved out of the need to validate work-based training programmes for young people” (Lester, 2011, page 206). Yet, although YTS certification might well have ended up as the focal problem for NVQ designers, it is important to appreciate that NVQs were also designed with a variety of peripheral problems in mind, including all of those summarised above.

In the wake of the Haslegrave report, 2 new organisations – the TEC and the BEC – had begun to address many of these peripheral problems as part of their attempts to rationalise existing qualification systems. In particular, they had attempted to improve the authenticity and comprehensiveness of qualifications that served industry and commerce. A key part of their solution to this challenge was the adoption of insights and methods from the North American Objectives Movement. Thus, TEC and BEC awards were based on outcome-based qualification models, which clearly prefigured the CASLO approach and NVQs more specifically. When these organisations merged to form the BTEC, this outcome-based tradition continued, although qualification models evolved following their merger, and continued to evolve over time. The fact that the other principal awarding organisations were also experimenting with outcome-based qualification models during the 1970s indicates that hostility within the sector was clearly neither as deep nor as wide as the Smithers critique appeared to imply.[footnote 48] Indeed, it seems reasonable to conclude that outcome-based approaches had become mainstream by the mid-1980s, both as a foundation for TVET training programmes and as a foundation for TVET qualifications.

  1. During the 1960s and 1970s, there was a clear distinction between ‘further education’ (which happened in colleges) and ‘training for skill’ (which happened at work, although sometimes also in colleges, albeit somewhat differently). As explained by Cantor & Roberts (1972), the purpose of further education was to provide the underpinning knowledge and understanding required for successful job performance, as well as to enable employees to cope with change, or to support progression to more advanced study. Conversely, the purpose of training for skill was to develop the skills required to perform a job competently in situ. This formal separation between education and training in England can be traced back to the 1870s (Hansen, 1967). 

  2. The following analysis of the nature of apprenticeship during the 1970s borrows heavily from a detailed European Commission report by Wheatley (1976). 

  3. We have decided to use the gendered term ‘craftsmen’ in this section of the report because this was the term that was used in the 1960s. The vast majority of learners studying for City & Guilds awards for craftsmen, or for National awards for technicians, were boys or men. Pedley (1964) provided related statistics for 1961. Just over 28,000 girls or women were studying for City & Guilds awards, compared to just over 317,000 boys or men. Just over 4,000 girls or women were studying for the Ordinary National Certificate, compared to just over 144,000 boys or men. 

  4. In 1966, City & Guilds offered exams in 282 subjects, from 19 subject groups. The subject groups included Mechanical Engineering, Mining & Quarrying, Vehicles, Textiles, Building, Distributive Trades, and so on (Peters, 1967). 

  5. If entering after 5 years of secondary education, but without the exemption granted by 4 GCE O level passes, they would generally be expected to study a General course for a year. This would have been a normal route into ONC or OND study during the mid-1960s. 

  6. Pedley noted that the Intermediate Certificate had recently been relaunched as the ‘Craft Certificate’ and the Final Certificate had recently been relaunched as the ‘Advanced Craft Certificate’ (Pedley, 1964, page 141). 

  7. Accounts seem to vary in detail. For instance, Haslegrave observed that: “At the time the white paper was written, the minimum specified time for technician courses was 180 hours in each year, although 220 was more usual for part-time students” (Haslegrave, 1969, page 15). 

  8. Montgomery noted that up to 40% of questions might be replaced by the assessors, and up to 40% might be made compulsory. 

  9. The GCE examining boards had experimented with this idea, by providing ‘mode 2’ and ‘mode 3’ syllabus options, which offered increasing amounts of teacher control (compared with the default ‘mode 1’ traditional syllabus option). Despite the strength of the case in favour of increasing teacher control, very few schools actually opted into these schemes (Montgomery, 1965). 

  10. The white paper addressed 4 main objectives: (i) to broaden students’ education and provide maximum continuity between school and technical college, (ii) to adapt the system more closely to the needs of industry, (iii) to increase the variety of courses available for students, and (iv) to reduce wastage substantially (Haslegrave, 1969). Bourne (1984) argued that the interaction of (ii) and (iii) pointed to the creation of narrowly specialised courses that were tailored to meeting the immediate needs of industry rather than the long-term career prospects of students, and that City & Guilds had responded to this brief by setting up many new technician courses in specialised fields. 

  11. These new T courses had been developed for industry but not for commerce. 

  12. The information in this section is drawn from a number of publications, including: Pearce (1975), Anson (1978), Blakey & Stagg (1978), Bolton (1978), Birbeck (1980), Riches (1980), Halliday (1981), Bourne (1984), Hunter (1985). 

  13. TEC awards could be obtained by studying full-time, part-time day, part-time evening, block, sandwich or any combination. 

  14. TEC relied heavily upon the idea of common units in its attempt to rationalise course provision. 

  15. A contemporaneous briefing note on TEC (and BEC) awards expressed the expectation slightly differently: “A student is assessed at regular intervals throughout the unit and is required to pass each assessment” (Bracknell/Wokingham School-Industry Partnership, 1980, page 3). 

  16. Morris (1977) estimated that this would have affected about 40,000 candidates per year on HND or HNC and OND or ONC courses in Business Studies, and about 25,000 candidates per year on a variety of other further education courses in the business field. 

  17. The modules were intended to occupy from 75 to 90 hours of guided study (Fisher, 1999). Note that BEC ‘modules’ corresponded to TEC ‘units’ (and later became known as units). 

  18. Milloy & Saker illustrated the evolution of an end-of-course cross-modular exam, from 1981 to 1983, which became increasingly grounded in real-life problem-solving: “Students were again placed in a fictitious company for two days and were asked to respond to specific situations. These took the form of role plays, memorandums, reports and, for the first time, computer response and group work. People from local business and education were invited to take part throughout the two days. Students brought their set texts, notes and graded in-course assignments and were encouraged to confer during breaks and, if they wished, to re-submit work at any time during the two days.” (Milloy & Saker, 1984, page 24) 

  19. Lacking a clear distinction between learning outcomes and assessment criteria, this is probably best considered a precursor to the CASLO approach rather than the approach itself, as we have defined it. Ellis (1979) provided a slightly different example of a similar kind of test developed by City & Guilds, this time assessing the task of grilling steaks. The important features identified in this test were indicated as either desirable or essential. For a candidate to pass, all 15 essential points and 8 out of 12 desirable ones had to be ticked. So, again, this sort of test was an important precursor to the CASLO approach. 

  20. They defined skill as “a complex goal directed sequence of activities with a high level of organization and making extensive use of feedback” (Jones & Whittaker, 1975, page 9), distinguishing between motor, perceptual, and language skills, the latter including basic language skills as well as decision making and planning. They noted that it was “probably generally accepted” (page 2) that written tests of trade knowledge alone were not valid for measuring job competence, and that measures of actual performance were required, whether direct (observations) or indirect (effectively simulations, of higher or lower fidelity). 

  21. These were standards of competence for workplace assessment, intended to complement (rather than replace) further education qualifications (Norman Gealy, personal communication). 

  22. Achieving the qualification would certainly have added status, though, and may have led to a salary increase. Indeed, many craft apprentices actually chose to enrol on technician level courses, which emphasises the value attached to off-the-job training and associated qualifications. 

  23. There were important exceptions to this general rule, which included City & Guilds qualifications for gas fitters, for instance, which certified full competence across both practical and theoretical aspects (Wheatley, 1976). 

  24. Note that, even within these new ITB schemes, craft apprentices did not have to pass their college-based qualification to be awarded the certificate of completion (Wheatley, 1976). 

  25. Involving so many movements of such large scale, different accounts will inevitably emphasise different historical pathways. For instance, the general roots of Competence Based Education and Training have been traced in slightly different ways by Davies (1976), Neumann (1979), Brown (1994), and Nodine (2016), to name just a few authors. Links to developments in England during the 1980s have been traced by Tuxworth (1989) and more broadly by Burke (1995). There could be no definitive family tree of influences on qualification designers in England during the 1970s and 1980s. However, the influences foregrounded in the present report appear to be particularly salient in making sense of the uptake of the CASLO approach, given the particular shortcomings of extant technical and vocational qualification systems, and given the growing appeal of outcome-based education and training, generally, in England during the 1970s and into the 1980s. For instance, while some might start an account of this sort from Frederick Taylor, the present account starts from Ralph Tyler, particularly given Tyler’s influence on the work of the Schools Council, in England, during the 1960s (Davies, 1976). Tyler’s emphasis on specifying general objectives – which he contrasted with the highly specific objectives favoured by behaviourists – also chimes with the subsequent ambition of NVQ designers to roll out a broad model of competence linked to the Job Competence Model. 

  26. It also provided a description of the practical exam, which would “test whether the candidates have worked through a satisfactory course in the laboratory and are capable of handling simple apparatus” (UCLES, undated, page 37). 

  27. This was published as the first in a series of handbooks, this one focusing on the cognitive domain. Others would focus on the affective domain and the psychomotor domain (although the taxonomy for the psychomotor domain was never published). 

  28. Accordingly, we say that evidence of having solved a particular problem in chemistry permits us to attribute a certain level of understanding to a student (a descriptive analysis of understanding), rather than saying that having a certain level of understanding enables a student to solve a particular problem in chemistry (an explanatory analysis of understanding). 

  29. The taxonomy was revised nearly half a century after its original publication (Anderson & Krathwohl, et al, 2001). Rather than referring to ‘behaviours’ (which had often been misconstrued reductively), the new publication referred to ‘cognitive processes’ and the cognitive complexity dimension was reconfigured slightly: remember, understand, apply, analyze, evaluate, create. (Incidentally, we make no apology for repeatedly referring to ‘behavioural objectives’ within this section of the present report, as explaining what was originally meant by the term helps to illustrate why those who initially criticised the Objectives Movement for being naively behaviourist were wrong to have done so.) Anderson & Krathwohl, et al, also added another dimension to the revised taxonomy, the ‘knowledge’ dimension, which transformed it into a 2-dimensional framework. The knowledge dimension ranged from concrete to abstract: factual knowledge, conceptual knowledge, procedural knowledge, and metacognitive knowledge. 

  30. Leahey (1992) provides a more detailed and subtle account, which helps to unpack many of the complexities, as well as the disagreements, that underlie this much-mythologised paradigm. 

  31. The authors of the Taxonomy described behavioural objectives as the “ways in which individuals are to act, think, or feel as the result of participating in some unit of instruction” (Bloom, et al, 1956, page 12). 

  32. Albeit, in the case of the Objectives Movement, only really during a period spanning the middle of the twentieth century. 

  33. As explained some years later by another author of the Taxonomy, the behavioural approach is the “only viable alternative” when required to assess otherwise unobservable “processes and states” (Furst, 1981, page 442). This is exactly how the Training Agency was later to describe the development of standards for National Vocational Qualifications: “The exercise of developing standards for a particular occupational area is equivalent to developing an operational definition of competence in that area.” (TA, 1988a, page 1). 

  34. See Bloom, et al (1971) for an example of the minute level of detail associated with objectives developed for Programmed Instruction, including the structural sequencing of these objectives, which was critical to guiding the instructional process. 

  35. Popham recounts a personal exchange with Mager, in 1961, in which Mager presciently explained that once “all the furor about teaching machines and programmed instruction had died down, the single most important contribution of the movement would be the attention it directed to the form in which objectives should be formulated.” (Popham, 1978, page 14). 

  36. This was originally stated as: (a) Identify and name the over-all behavior act. (b) Define the important conditions under which the behavior is to occur (given and/or restrictions and limitations). (c) Define the criterion of acceptable performance. (See Mager, 1962, page 53.) 

  37. See Furst (1981) for an illuminating response from one of the team that originally produced Bloom’s Taxonomy. 

  38. Having said that, even within these circles, it was often accepted that outcome-based approaches can work very well in courses that focus on training rather than education, that is: “in courses which are essentially vocational” (Kelly, 2009, page 86). 

  39. Ormell’s alternative case against Bloom (that is, against Bloom’s approach to specifying behavioural objectives) argued for “whole” objectives for education that “encompass both behavioural and mental objectives” (page 30). He argued that we want students who “actually do understand, actually do think, actually do take safety utterly seriously in the laboratory” (page 31). These students “do the appropriate things, as well as possess the mental states” (page 31). That this should be offered as an alternative to Bloom seems a little odd, to say the least. 

  40. Bloom’s 1968 report was reproduced, with minor editorial amendments, in both Block (1971) and Bloom, et al (1971). 

  41. The idea of formative assessment, which is now internationally recognised, can be traced back to their ‘Handbook on Formative and Summative Evaluation of Student Learning’ (Bloom, et al, 1971). 

  42. Determining an appropriate level of precision proved to be the most challenging aspect of this approach (Popham, 1978; Popham, 1994). 

  43. The profile grid actually incorporated an initial ‘half’ column, which acknowledged that some students (often those with a learning difficulty) would finish the course still working towards the first criterion statement for one or more of the objectives. 

  44. In fact, O and A level exams had never been norm-referenced, despite what many stakeholders had presumed (see Newton, 2022, for a more nuanced analysis). 

  45. Despite substantial progress in developing aims and objectives, it is interesting to note how the 1988 Higginson report on A levels echoed exactly the same concerns as had been expressed 2 decades earlier by the Schools Council: “As we have said, there is a need for leaner syllabuses in which the proportion of factual content has been reduced and in which the accent is on higher level skills and making sense of the facts” (Higginson, 1988, para. 5.2). 

  46. The following quotation illustrates how GCSE assessment objectives were originally formulated (reproducing a quotation from the National Criteria for English): “The Assessment Objectives in a syllabus with the title English must provide opportunities for candidates to demonstrate their ability to: (i) understand and convey information; (ii) understand, order and present facts, ideas and opinions; (iii) evaluate information in reading material and in other media, and select what is relevant to specific purposes; (iv) articulate experience and express what is felt and what is imagined; (v) recognise implicit meaning and attitudes; (vi) show a sense of audience and an awareness of style in both formal and informal situations; (vii) exercise control of appropriate grammatical structures, conventions of paragraphing, sentence structure, punctuation and spelling in their writing; (viii) communicate effectively and appropriately in spoken English.” (Abbott, McLone, & Patrick, 1989, pages 3 to 4). 

  47. Further insights into this sociopolitical context are provided in Raggatt & Williams (1999, chapter 2). 

  48. It is still true to say that BTEC was more vocal in its opposition to aspects of the NVQ model than were the other awarding organisations (see Sharp, 1999, for instance). The Council developed its qualifications more in keeping with the BEC tradition than the TEC tradition, emphasising the centrality of integrated learning. Conversely, learning, per se, received little attention within the NVQ model (Cantor, et al, 1995). Moreover, as we shall soon see, both the nature of the outcomes that were specified for NVQs and the manner in which they were specified were quite different from the approach adopted for BTEC awards, which proved to be a bone of contention. Yet, neither of these issues should detract from the fact that BEC, TEC, and BTEC awards had always adopted outcome-based approaches to qualification design.