Guidance

Guidelines for examining patent applications relating to artificial intelligence (AI)

Updated 30 January 2025

1. This document provides guidelines for examining patent applications relating to artificial intelligence (AI) inventions at the Intellectual Property Office (IPO).

1.1. The relevant law is the Patents Act 1977 and the Patents Rules 2007. The act and rules are interpreted in accordance with judgments of the UK courts. Judgments concerning the Patents Act 1977 are binding on the IPO. The act implements provisions of the European Patent Convention (EPC). Although decisions made by the European Patent Office (EPO) Boards of Appeal concerning the EPC are not binding on the IPO, they are persuasive.

1.2. The guidelines explain how the exclusions to patentability of the act apply to AI inventions. They also look briefly at the requirement for sufficiency of disclosure of AI inventions.

1.3. The guidelines include an accompanying set of scenarios. Each scenario briefly describes an AI invention and includes a non-binding assessment of the invention in relation to the exclusions to patentability.

1.4. The guidelines and the accompanying scenarios should be read as a supplement to the Manual of Patent Practice (MoPP).

1.5. Any comments or questions arising from these guidelines should be addressed to:

Nigel Hanley
Intellectual Property Office
Concept House
Cardiff Road
Newport
South Wales
NP10 8QQ

Telephone: 01633 814746

Email: nigel.hanley@ipo.gov.uk

Dr Stephen Richardson
Intellectual Property Office
Concept House
Cardiff Road
Newport
South Wales
NP10 8QQ

Telephone: 01633 813725

Email: stephen.richardson@ipo.gov.uk

Summary

2. These guidelines are summarised as follows.

2.1. In the UK, patents are available for AI inventions in all fields of technology.

2.2. AI inventions are computer-implemented inventions. They rely on mathematical methods and computer programs in some way. UK patent law excludes from patent protection inventions relating solely to a mathematical method or a program for a computer. However, the exclusions are applied as a matter of “substance not form” by considering the task or process an AI invention performs.

2.3. When the task or process performed by an AI invention makes a technical contribution to the known art, the invention is not excluded and is patent-eligible.

2.4. An AI invention makes a technical contribution if:

  • it embodies or performs a technical process which exists outside a computer, or
  • it contributes to solving a technical problem lying outside a computer or lying within the computer itself, or
  • it is a new way of operating a computer in a technical sense

2.5. An AI invention is only excluded from patent protection if it does not make a technical contribution. An AI invention does not make a technical contribution if:

  • it relates solely to items listed as being excluded (for example a business method) and there is no more to it, or
  • it relates solely to processing or manipulating information or data and there is no more to it, or
  • it is just a better or well-written program for a conventional computer and there is no more to it

2.6. The conditions set out above apply to inventions that may be categorised as “applied AI” or “core AI”. They also apply to the training of AI inventions.

2.7. Hardware implementations of AI are assessed in the same way as software implementations of AI, as a matter of substance not form, by considering the task or process they perform.

2.8. Patent protection is available for training data sets when they are used as part of an invention making a technical contribution. However, claims to data sets characterised solely by their information content are excluded as the presentation of information as such.

2.9. The sufficiency of disclosure of an AI invention or a data set is assessed in the same way as any other invention. The relevant legal principles for assessing sufficiency are set out in Eli Lilly v Human Genome Sciences.

What is an AI invention?

3. There is no single agreed-upon definition of artificial intelligence. The UK government has defined AI as:

technologies with the ability to perform tasks that would otherwise require human intelligence, such as visual perception, speech recognition, and language translation. 

3.1. Although there is no accepted definition of AI, the features and functions of AI inventions can be summarised simply as follows.

3.2. AI inventions are based on models and algorithms such as neural networks, genetic algorithms, machine learning algorithms, and other similar approaches. These models and algorithms are inherently mathematical in nature. However, their practical implementation typically relies on computers and computer programs in some way. AI inventions are therefore examples of so-called ‘computer-implemented inventions’. Practical implementations of AI inventions operate upon input data and provide various forms of output data.

3.3. AI inventions may take many forms. They may be implemented in computer hardware, in computer software, or as a mixture of computer hardware and computer software. The computer hardware used may be a conventional digital computer or it may be some other type of special-purpose hardware.

3.4. An AI invention may be implemented as an application program for a computer system, as part of the internal workings of a computing system, as physical computer hardware, or as an emulated (or simulated) computer running on another computer.

3.5. Artificial intelligence inventions are found across all fields of technology. Patent protection may be sought for many aspects of AI inventions. For the purposes of these guidelines, AI inventions are categorised as either ‘applied AI’ or ‘core AI’.

Applied AI

3.6. An applied AI invention applies AI techniques to a field other than the field of AI. An applied AI invention may carry out a specific process, or solve a specific problem, lying outside the computer on which it is implemented. Alternatively, an applied AI invention may make a computer work better in general. For example, it may perform a process or solve a problem concerned with the internal workings of a computer or a system of computers.

Core AI

3.7. In contrast to an applied AI invention, a core AI invention defines an advance in the field of AI itself. A core AI invention involves, for example, an improved AI model, algorithm, or mathematical method.

Other aspects of AI

3.8. Regardless of whether they might be categorised as applied AI or core AI, certain types of AI invention require training. For example, some AI models are trained using specific training data sets. These kinds of training methods may be referred to as ‘machine learning’, amongst other names.

3.9. AI inventions may also be implemented in a fixed or permanent form in which the program is ‘hard-coded’ and cannot be changed. For example, an AI invention may be embedded in an electronic chip or in special-purpose or fixed-function electronic circuitry.

The law

Conditions for the grant of a valid patent

4. Section 1(1) of the act sets out four conditions that an invention must satisfy for the grant of a valid patent.

A patent may be granted only for an invention in respect of which the following conditions are satisfied, that is to say—

(a) the invention is new;

(b) it involves an inventive step;

(c) it is capable of industrial application;

(d) the grant of a patent for it is not excluded by subsections (2) and (3) or section 4A below; and references in this Act to a patentable invention shall be construed accordingly.

4.1. These four conditions apply to all inventions in all fields of technology. A patent may only be granted if an invention is new, has an inventive step, is industrially applicable, and is not excluded from patent protection.

4.2. Patents are available for AI inventions provided they satisfy these four conditions. These guidelines are primarily concerned with the requirement of section 1(1)(d) that an AI invention must not relate to so-called excluded subject matter.

Excluded subject matter

4.3. Section 1(2) of the act implements Article 52 of the European Patent Convention. Section 1(2) declares that certain things are not inventions for the purposes of the act.

It is hereby declared that the following (among other things) are not inventions for the purposes of this Act, that is to say, anything which consists of—

(a) a discovery, scientific theory or mathematical method;

(b) a literary, dramatic, musical or artistic work or any other aesthetic creation whatsoever;

(c) a scheme, rule or method for performing a mental act, playing a game or doing business, or a program for a computer;

(d) the presentation of information;

but the foregoing provision shall prevent anything from being treated as an invention for the purposes of this Act only to the extent that a patent or application for a patent relates to that thing as such.

The meaning of ‘program for a computer’

4.4. In Emotional Perception the Court of Appeal considered the meaning of “program for a computer” as it appears in section 1(2).

4.5. The Court held that a computer is a machine which processes information. It held that a computer program (which is the same thing as a program for a computer) is a set of instructions for a computer to do something.

4.6. These definitions work together, so one can say that a computer is a machine which does something, and that the thing it does is to process information in a particular way. The program is the set of instructions which cause the machine to process the information in that particular way, rather than in another way.

4.7. These definitions mean that the exclusion to a “program for a computer” is not limited to programs for digital computers. Its scope encompasses programs for other kinds of computers, for example analogue computers, artificial neural networks, hybrid computers, and quantum computers.

AI inventions and section 1(2)

4.8. AI inventions are based on mathematical models and algorithms for their implementation. This means examiners should consider the “mathematical method” and “program for a computer” exclusions carefully when examining AI inventions.

4.9. The words “as such” appearing at the end of section 1(2) qualify the extent of its exclusions. For example, it is only a program for a computer “as such” that is excluded from patent protection. A computer-implemented invention is not excluded if it relates to something more than a program for a computer “as such”.

4.10. Whether a program for a computer is caught by the exclusion is decided by asking whether it makes a relevant ‘technical contribution’.

Technical contribution and the Aerotel approach

4.11. According to binding precedent of the UK courts, a computer-implemented invention avoids exclusion under section 1(2) if it makes a technical contribution to the state of the art. However, a contribution that consists purely of excluded subject matter, or that is not technical in nature, does not count as a technical contribution.

4.12. Whether an invention makes a relevant technical contribution is decided by following the approach approved by the UK Court of Appeal in Aerotel/Macrossan. This approach is known as the ‘Aerotel test’ and it has four steps. In its judgment in Emotional Perception the Court of Appeal stated the four steps of the Aerotel test as:

(1) Properly construe the claim.

(2) Identify the actual contribution (although at the application stage this might have to be the alleged contribution).

(3) Ask whether it falls solely within the excluded subject matter.

(4) If the third step has not covered it, check whether the actual or alleged contribution is actually technical.

4.13. Examiners should use the Aerotel test to determine whether an AI invention is a patentable invention for the purposes of section 1(2). A full explanation of the four steps of the Aerotel test is found in the Manual of Patent Practice.

When does a computer-implemented invention make a technical contribution?

4.14. The answer to this critical question has been considered in many judgments of the Court of Appeal. They include the judgments handed down in Merrill Lynch, Gale’s Application, Aerotel/Macrossan, Symbian, HTC v Apple, Lantana, and Emotional Perception.

4.15. There is no precise test or clear rule for determining when a computer-implemented invention makes a technical contribution. Instead, the UK courts have developed five so-called ‘signposts’ which may be used as a helpful guide in considering whether a claimed invention has a relevant technical effect.

4.16. The signposts were first set out in AT&T/Cvon and later refined by the Court of Appeal in HTC v Apple where it expressed the fourth signpost less restrictively. The five signposts are:

i) whether the claimed technical effect has a technical effect on a process which is carried on outside the computer;

ii) whether the claimed technical effect operates at the level of the architecture of the computer; that is to say whether the effect is produced irrespective of the data being processed or the applications being run;

iii) whether the claimed technical effect results in the computer being made to operate in a new way;

iv) whether the program makes the computer a better computer in the sense of running more efficiently and effectively as a computer;

v) whether the perceived problem is overcome by the claimed invention as opposed to merely being circumvented.

4.17. Examiners may use the signposts to assist them in answering steps (3) and (4) of the Aerotel test when examining AI inventions. Examiners should note that the “computer” mentioned in the signposts need not be a single computer; it may be an arrangement of two or more computers. The “computer” may be a physical computer implemented in hardware or it may be an emulated or simulated computer implemented in software.

4.18. When examining AI inventions under section 1(2), examiners should treat each case on its own merits.

4.19. Examiners should remember the signposts are not necessarily determinative in every case. Examiners should not treat the signposts as prescriptive conditions. Conversely, if a signpost is found to apply to an invention, the invention is not automatically patentable (paragraph 149 of HTC v Apple).

4.20. Examiners should also bear in mind the guidance given in the decisions from which the signposts were distilled. The guidance includes the judgments of the Court of Appeal in Merrill Lynch, Gale’s Application, and Symbian and the decisions of the EPO Boards of Appeal in Vicom T 0208/84, IBM T 0006/83, IBM T 0115/85, and Hitachi T 0258/03. These decisions constitute reliable guidance on whether an invention makes a technical contribution, and examiners should follow them unless there is a very strong reason not to do so.

‘Substance not form’

4.21. The principle of “substance not form” means the mere fact that an AI invention involves a computer (or several computers) does not make its contribution technical. Instead, when applying steps (2), (3), and (4) of Aerotel, examiners should assess AI inventions by considering the task or process they perform. When the task or process performed makes a technical contribution, the invention is a patentable invention.

4.22. The need to consider the claimed task or process of a computer-implemented invention in this way was explained by the High Court in Halliburton:

[32] Thus, when confronted by an invention which is implemented in computer software, the mere fact that it works that way does not normally answer the question of patentability. The question is decided by considering what task it is that the program (or the programmed computer) actually performs. A computer programmed to perform a task which makes a contribution to the art which is technical in nature, is a patentable invention and may be claimed as such. Indeed (see Astron Clinica [2008] RPC 14) in those circumstances the patentee is perfectly entitled to claim the computer program itself.

Artificial neural networks

4.23. In Emotional Perception, the Court of Appeal considered whether an artificial neural network (or ‘ANN’) engages the program for computer exclusion.

4.24. At paragraph 68, the Court held that, whether it is implemented in hardware or software, an ANN is a computer – it is a machine for processing information. The Court held that, irrespective of its implementation in hardware or software, the weights and biases of an ANN are a computer program. The weights and biases are a set of instructions for a computer to do something. For a given machine, a different set of weights and biases will cause the machine to process information in a different way. The weights and biases instruct the machine to process the information it is presented with in a particular way.

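The Court’s reasoning can be pictured with a short sketch. The Python fragment below is purely illustrative (the one-layer model, numbers, and activation function are assumptions made for this example): a single “machine” processes the same input differently depending solely on the weights and biases it is given.

```python
import numpy as np

def ann(x, weights, biases):
    # The same machine: one layer that processes an input vector.
    return np.tanh(weights @ x + biases)

x = np.array([1.0, -0.5])

# Two different sets of weights and biases act as two different sets of
# instructions, causing the machine to process the input differently.
print(ann(x, np.array([[0.2, 0.8], [0.5, -0.1]]), np.array([0.0, 0.1])))
print(ann(x, np.array([[-0.7, 0.3], [0.9, 0.4]]), np.array([0.2, -0.3])))
```
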
4.25. The Court concluded that, however it is implemented, the weights and biases of an ANN are a program for a computer and are within the purview of the computer program exclusion. The Court also expressed its opinion that the weights and biases may be a mathematical method, so the mathematical method exclusion may be relevant to ANNs.

4.26. Examiners should consider the program for a computer and mathematical method exclusions carefully in respect of ANN-implemented inventions.

4.27. The Court emphasised that, although the exclusions of section 1(2) apply to ANN-implemented inventions, this does not mean they are unpatentable (paragraph 71 of Emotional Perception). Like other computer-implemented inventions, an ANN-implemented invention is a patentable invention if it makes a relevant technical contribution. The Emotional Perception judgment simply means ANN-implemented inventions are in no better and no worse position than other computer-implemented inventions.

4.28. As with other computer-implemented inventions, examiners may use the signposts as guidance in considering whether an ANN-implemented invention makes a technical contribution.

Applied AI: performing processes or solving problems lying outside the computer

5. Signpost 1 asks whether a computer-implemented invention “has a technical effect on a process which is carried on outside the computer”. In other words, signpost 1 asks whether an invention produces an ‘external technical effect’.

5.1. For example, in paragraph 38 of Halliburton the High Court observed that:

… if the task performed by the program represents something specific and external to the computer and does not fall within one of the excluded areas … that circumstance is likely to indicate that the invention is patentable.

5.2. An external technical effect may be shown in two ways. Firstly, an external technical effect may be seen in performing or controlling a technical process which exists outside the computer on which an invention is implemented. Secondly, an external technical effect may be shown if a computer-implemented invention contributes to the solution of a technical problem lying outside the computer on which it is implemented.

5.3. Examiners should consider carefully whether an AI invention shows an external technical effect in either (or both) of these ways. If it does, then it is very likely the AI invention meets signposts 1 and/or 5.

Technical processes outside the computer

5.4. The paradigm computer-implemented invention embodying a technical process outside a computer is found in the decision of the EPO Board of Appeal in Vicom. The invention in Vicom concerned a method of image convolution which involved repeated selection and application of convolution filters using a mathematical error-minimisation technique. The convolution filters had a smaller dimension than those used by a known method. When the inventive image convolution was run on a conventional computer, it required far fewer calculations to give approximately the same result as the known method, and so produced image convolutions at an increased processing speed.

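The computational saving underlying Vicom can be illustrated with a short sketch. The Python fragment below is an illustration only (it is not code from the decision, and the mask sizes are assumptions): repeated passes of a small averaging mask give a broadly similar smoothing effect to a single larger mask while requiring fewer multiplications per pixel (9 + 9 = 18 rather than 25).

```python
import numpy as np
from scipy.signal import convolve2d

image = np.random.rand(256, 256)

# Known method: one pass of a 5x5 averaging mask
# (25 multiplications per output pixel).
large_mask = np.ones((5, 5)) / 25.0
smoothed_known = convolve2d(image, large_mask, mode="same", boundary="symm")

# Smaller mask applied repeatedly: two passes of a 3x3 averaging mask
# (9 + 9 = 18 multiplications per output pixel) give a broadly similar
# smoothing effect with fewer calculations.
small_mask = np.ones((3, 3)) / 9.0
smoothed_small = convolve2d(image, small_mask, mode="same", boundary="symm")
smoothed_small = convolve2d(smoothed_small, small_mask, mode="same", boundary="symm")
```
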
5.5. As the court explained in AT&T/Cvon, the Board held that the inventive algorithm in Vicom was more than a mathematical method “as such” because:

… the mathematical method which underlay the invention was being used in a technical process which was carried out on a physical entity by technical means.

5.6. And it was more than a program for a computer “as such” because:

… what was claimed was not the computer program at all, but the process of manipulating the images. That process was a technical process and hence made a technical contribution.

5.7. The computer-implemented invention in Vicom made a technical contribution because it performed a specific technical process (image processing) lying outside the computer itself. The essential reasoning of Vicom is reflected in signpost 1.

5.8. Examiners should consider whether an AI invention performs or controls a technical process which exists outside the computer (or computers) on which it is implemented. If the examiner finds that it does, then the AI invention is not excluded.

5.9. For example, if an AI invention makes a technical contribution to the field of image processing, then it is not excluded under section 1(2).

5.10. In BL O/296/21 (Imagination Technologies) the invention involved a combination of a general-purpose computer and a deep neural network (DNN) accelerator. The combination processed image data for computer vision problems more efficiently. The hearing officer found that this was a relevant technical effect and that a technical problem was solved. Signposts 1 and 5 pointed to patentability.

5.11. In BL O/0007/23 (Teledyne FLIR Commercial Systems) the invention included a synthetic image generator for generating a set of synthetic images. The synthetic images were used, along with a subset of images retained from a previous training data set, to generate an updated training data set. The updated data set was used for retraining a neural network to improve its performance in identifying and classifying objects present in images. The hearing officer found the claimed invention performed a specific task external to the computer, namely image object detection. The hearing officer found the contribution was made to the technical field of image recognition and that signposts 1 and 5 both pointed to allowability.

5.12. Image processing is just one example of performing or controlling a technical process outside a computer. Another example is seen in Koch & Sterzel T 0026/86 in which a program for a computer was held to be a patentable invention because it performed steps necessary to control an X-ray machine.

5.13. Further illustrative examples of external technical effects are given in the scenarios, for example:

  • scenario 1 (ANPR system for recognising a vehicle registration number)
  • scenario 2 (monitoring a gas supply system for faults)
  • scenario 3 (analysing and classifying movement from motion sensor data)
  • scenario 4 (detecting cavitation in a pumping system)
  • scenario 5 (controlling a fuel injector in a combustion engine)
  • scenario 6 (measuring percentage of blood leaving a heart)

Solving technical problems lying outside the computer

5.14. In Halliburton the High Court considered the patentability of computer simulations by applying the Aerotel approach. The invention in Halliburton was concerned with modelling the design of drill bits for drilling oil wells to improve their drilling efficiency and operational life. The claimed invention included iteratively modelling multiple designs of a drill bit (including its cutting elements) using finite element methods. The drilling of earth formations was simulated using the multiple designs to determine an optimised design for the drill bit.

5.15. The court held that the invention in Halliburton was more than a computer program “as such” because it was:

… a method of designing a drill bit. Such methods are not excluded from patentability by Art 52/s1(2) and the contribution does not fall solely within the excluded territory.

5.16. And it was more than a mathematical method “as such” because:

the data on which the mathematics is performed has been specified in the claim in such a way as to represent something concrete (a drill bit design etc.).

5.17. Finally, the court confirmed the contribution was technical in nature because:

Designing drill bits is obviously a highly technical process, capable of being applied industrially … The detailed problems to be solved with wear and ability to cut rock and so on are technical problems with technical solutions. Accordingly finding a better way of designing drill bits in general is a technical problem. This invention is a better way of carrying that out. Moreover the detailed way in which this method works – the use of finite element analysis – is also highly technical.

5.18. Hence, the invention in Halliburton made a technical contribution. As well as being a technical process, it solved a technical problem (with drilling efficiency and operational lifespan) lying outside the computer. The reasoning in Halliburton indicates the presence of signposts 1 and 5.

5.19. Vicom and Halliburton are similar in that the data processed by the computer represented a physical entity external to the computer. In Vicom the data represented an image and in Halliburton it represented a drill bit. However, examiners should note that a computer-implemented invention may make a technical contribution even if the data it processes does not represent a physical entity.

5.20. For example, Protecting Kids the World Over (PKTWO) concerned an alert system for monitoring electronic communications data (for example email) for particular words and phrases. The system worked to ensure that users (for example children) were not exposed to inappropriate content or language. The High Court held that the speed and reliability of the algorithm used for sampling and analysing expressions found in the electronic communications was improved. This in turn improved the speed and reliability of an alarm notification for alerting a user (for example a parent) to inappropriate communication.

5.21. The court held that, when viewing the claim in PKTWO as a whole, providing the alarm in this way was a relevant technical process. Thus, the court held:

The effect here, viewed as a whole, is an improved monitoring of the content of electronic communications. The monitoring is said to be technically superior to that produced by the prior art. That seems to me to have the necessary characteristics of a technical contribution outside the computer itself.

5.22. Accordingly, the court held that the contribution made by the invention in PKTWO

… does not reside wholly within the computer program as such exclusion. I think that conclusion is in accordance with the AT&T signposts. In particular I would say that the invention solves a technical problem lying outside the computer, namely how to improve upon the inappropriate communication alarm provided by the prior art.

5.23. Thus, PKTWO solved a technical problem lying outside the computer, indicating the presence of signposts 1 and 5.

5.24. Examiners should consider whether an AI invention contributes to the solution of a technical problem lying outside the computer (or computers) on which it is implemented. If the examiner finds that it does, then the AI invention is not excluded.

5.25. One example of applied AI being used to solve a technical problem outside the computer is BL O/296/21 (Imagination Technologies) which was discussed earlier in paragraph 5.10.

5.26. Further illustrative examples of AI inventions solving technical problems outside a computer are given in the scenarios, for example:

  • scenario 1 (ANPR system for recognising a vehicle registration number)
  • scenario 2 (monitoring a gas supply system for faults)
  • scenario 3 (analysing and classifying movement from motion sensor data)
  • scenario 4 (detecting cavitation in a pumping system)
  • scenario 5 (controlling a fuel injector in a combustion engine)
  • scenario 6 (measuring percentage of blood leaving a heart)

Non-technical processes

5.27. The analysis presented above does not mean that every process (or solution of a problem) lying outside a computer reveals a technical contribution. Computer-implemented inventions which perform non-technical processes, or that fail to solve a technical problem, do not involve a technical contribution.

5.28. For example, in paragraph 33 of Halliburton the court observed that:

If the task the system performs itself falls within the excluded matter and there is no more to it, then the invention is not patentable …  Clear examples are from the cases involving computers programmed to operate a method of doing business, such as a securities trading system or a method of setting up a company (Merrill Lynch and Macrossan) … When the result or task is itself a prohibited item, the application fails.

5.29. When the contribution made by a computer-implemented invention consists purely of excluded matter, and there is no more to it, then it does not count as a technical contribution. In such cases the contribution made is non-technical and it will be excluded at steps (3) and (4) of the Aerotel test.

5.30. A clear example is found in Merrill Lynch which related to a data processing system for making a trading market in securities. The court acknowledged that the invention had the real-world effect of producing an improved trading system. Although this was arguably an effect outside the computer, it did not count as a technical contribution. This was because the contribution consisted of nothing more than a method of doing business as such. The actual contribution was not technical.

5.31. Similarly, in Macrossan, the invention involved producing the documents for use in forming a corporate entity such as a company. The court held that the task of the invention was “for the very business itself, the business of advising upon and creating an appropriate company”. This task consisted solely of a method of doing business “as such” and was non-technical.

5.32. Examiners should consider whether an AI invention relates to a purely excluded process or non-technical process. If the examiner finds that it does, and there is no more to it, then they should raise an appropriate objection under section 1(2).

5.33. An example of an AI invention found to perform no more than a non-technical process is the invention in Emotional Perception. It provided music track recommendations to a user by passing the tracks through a trained ANN. This allowed the invention to make suggestions of similar music in terms of human perception and emotion, irrespective of the genre of the music. To achieve this, the ANN of the invention was trained in a special way.

5.34. It was trained by using the output of a second ANN as a fixed reference. In the training phase, the inventive ANN processed data representing extracted physical properties of the music files. The physical properties of the music files related to properties such as the tone, timbre, speed, and loudness of the music they contained. By contrast, the second ANN processed data representing natural language descriptions of the music files. The natural language (semantic) descriptions, for example the metadata relating to the music files, conveyed how they might be perceived by a human.

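A minimal sketch of this training arrangement is given below, assuming PyTorch. The architecture, dimensions, and loss function are illustrative assumptions and are not taken from the patent application: the point is only that the first ANN is trained towards the fixed (frozen) outputs of the second.

```python
import torch
import torch.nn.functional as F

# Illustrative stand-in for the ANN that processes extracted physical
# properties of music files (dimensions are assumptions).
property_ann = torch.nn.Sequential(
    torch.nn.Linear(16, 32), torch.nn.ReLU(), torch.nn.Linear(32, 8)
)
optimizer = torch.optim.Adam(property_ann.parameters(), lr=1e-3)

def training_step(physical_properties, semantic_reference):
    # semantic_reference is the output of the second ANN (which processed
    # the natural language descriptions) for the same track; detach()
    # keeps it as a fixed training target.
    output = property_ann(physical_properties)
    loss = F.mse_loss(output, semantic_reference.detach())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```
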
5.35. Once the inventive ANN was trained, the extracted properties of music tracks were passed through it. This produced outputs which were compared to a database from which recommendations of semantically similar tracks were produced. The recommendations were provided to an end user by sending a file and message.

5.36. In its judgment in Emotional Perception, the Court of Appeal held the recommendation message was non-technical. What made the recommended file worth recommending was its semantic qualities. The Court held this was a matter of aesthetics or, alternatively, subjective and cognitive in nature (referring to paragraph 5.2 of the decision of the Board of Appeal in Yahoo T 0306/10). The Court also opined that, unless it involves a technical contribution, the provision of a recommendation message is an excluded presentation of information.

5.37. The fact that the recommendation message in Emotional Perception was transmitted over a network did not help. What distinguished the invention from conventional file transmission was that it provided a better recommendation, for example a song which the user was likely to enjoy. This distinction was merely aesthetic, so the file recommendation was non-technical in nature. The invention did not produce any relevant technical effect and it was excluded as a program for a computer as such.

5.38. Several examples of applied AI inventions have been refused for performing no more than a method of doing business as such. BL O/0517/23 (Nielsen Consumer) related to a machine learning model whose outputs adjusted a marketing strategy. In BL O/0315/23 (WNS Global Services (UK) Ltd) a classifier was used to output a “social equity index” measuring the social reputation of brands. BL O/0284/23 (Kinaxis) involved selecting machine learning models for forecasting sales of products. BL O/811/22 (Halliburton Energy Services Inc.) used AI for planning oilfield exploration.

5.39. A further example is BL O/0566/23 (Tomas Gorny) which used an unsupervised deep learning model to assign tickets to agents in a customer service centre. In BL O/0988/23 (Conquest Planning Inc.) an AI module was used to generate and rank modified financial plans for achieving certain financial goals. In BL O/774/22 (Flatiron Health Inc.) a machine learning model processed unstructured medical information and produced outputs indicating the suitability of an individual for participation in a medical trial. The invention was refused, among other things, as a method of doing business.

5.40. Additional examples of AI inventions performing excluded or non-technical processes are found in the scenarios, for example:

  • scenario 7 (automated financial instrument trading)
  • scenario 8 (analysing patient health records)

Non-technical processes within the computer program exclusion

5.41. Any effect which is no more than one produced by merely running a program (such as the manipulation of data) does not count as a technical contribution. Such effects fall solely within the program exclusion and are non-technical.

5.42. For example, in Autonomy one aspect of the claimed invention involved automatic text analysis. The text in an active window of a computer was analysed automatically and a list of links related to that content was generated. This aspect was held to be a paradigm example of what is caught by the exclusion to a program for a computer “as such”.

[40] In my judgment … automatic text analysis, comparison and results generation is a paradigm example of a case in which the contribution falls squarely within excluded matter, i.e. a program for a computer. The claimed contribution, so far as the first element is involved does not exist independently of whether it is implemented by a computer. On the contrary, it depends on a computer processing or displaying information in an active window, and on a search program to analyse it and to compare and generate results. The only effect produced by the invention is an effect caused merely by the running of the program, which consists of the manipulation of data. It is in short a claim to a better search program.

5.43. The invention in Autonomy (automatic text analysis) would not produce a technical effect external to a computer in accordance with signpost 1.

5.44. Examiners should consider whether an AI invention makes a technical contribution beyond the mere manipulation, analysis, or generation of text data. If the examiner finds the invention does not, and there is no more to it, then they should raise an objection under the program exclusion.

5.45. Examples of applied AI inventions involving nothing more than the analysis or generation of text have been refused under the computer program exclusion. In BL O/1047/22 (Hunter IP) a trained machine learning model was used to determine sentence style type. In BL O/829/22 (Google LLC) a trained machine learning model suggested to a user how they might reply to a received communication.

5.46. A further example of an AI invention performing a non-technical process falling within the program exclusion is found in scenario 9 (identifying junk e-mail using a trained AI classifier).

Applied AI: making computers work better

6. Signposts 2, 3, and 4 each ask whether a computer-implemented invention results in a ‘better computer’. Although each asks the question in a slightly different way, each is asking whether the computer (or computers) which implements the invention produces an ‘internal technical effect’.

6.1. For example, paragraph 37 of Halliburton explains that making computers work better is not excluded by section 1(2).

The “better computer” cases - of which Symbian is a paradigm example - have always been tricky however one approaches this area. The task the program is performing is defined in such a way that everything is going on inside the computer. The task being carried out does not represent something specific and external to the computer and so in a sense there is nothing else going on than the running of a computer program. But when the program solves a technical problem relating to the running of computers generally, one can see that there is scope for a patent. Making computers work better is not excluded by s1(2).

6.2. A computer-implemented invention may reveal an internal technical effect, indicating it is a better computer, in two ways. Firstly, a better computer may be seen when the invention solves a technical problem lying within a computer. Secondly, a better computer may be seen when the invention defines a new way of operating a computer in a technical sense. A better “computer” in this context need not be a single computer; it may also be a better system of two or more computers.

6.3. Examiners should consider whether an AI invention produces an internal technical effect, indicating a better computer or system of computers, in either of these two ways. If the examiner finds that it does, then it is likely the AI invention meets one or more of signposts 2, 3, 4, and 5.

6.4. Examiners should bear in mind it is possible that an AI invention may itself be a better computer. Alternatively, an AI invention may make the computer on which it runs a better computer, or it may improve a system of computers.

Solving technical problems lying within the computer

6.5. The paradigm computer-implemented invention solving a technical problem lying within a computer is Symbian. It concerned the programming of a dynamic link library (DLL) for storing functions common to the applications running on the computer’s operating system. In certain circumstances, adding functions to the DLL meant that existing applications using the DLL were unable to link to the added functions correctly, causing a malfunction. The inventive program had an “extension part” for the DLL which ensured that any application could select and link correctly to the desired functions in the DLL.

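A loose, heavily simplified illustration of the linking problem is sketched below. It is not the actual Symbian mechanism; the lists and lookup table are assumptions made purely to picture why fixed links break and how an indirection layer avoids the malfunction.

```python
# Version 1 of the library: an application that linked to "read" by its
# fixed position (ordinal 1) works correctly.
dll_v1 = ["open", "read", "close"]

# Version 2 adds a function, shifting the positions: ordinal 1 is now
# "flush", so the old fixed link would malfunction.
dll_v2 = ["open", "flush", "read", "close"]

# An "extension part" resolving names to current positions lets any
# application select and link to the desired function correctly.
extension = {name: position for position, name in enumerate(dll_v2)}
assert dll_v2[extension["read"]] == "read"
```
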
6.6. The court held the invention was a program which made the computer operate on other programs faster than prior art operating systems enabled it to do, by virtue of the claimed features. This solved a technical problem lying within the computer because it had the knock-on effect of the computer working better as a matter of practical reality.

6.7. Thus, Symbian indicates the presence of signposts 4 and 5. Signpost 2 may also be present since the effect was achieved irrespective of the nature of the data being processed and the applications being run.

6.8. Symbian also approved two IBM decisions of the EPO Board of Appeal, T 0006/83 and T 0115/85. These decisions are two further examples of programs solving technical problems concerned with the internal workings of a computer itself. The inventions in both decisions produced technical effects in a computer system which operated irrespective of the nature of the data being processed or the applications being run. Their essential reasoning is reflected in signpost 2 (as explained in paragraphs 21 to 31 of AT&T).

6.9. In marked contrast to Symbian and the two IBM decisions, Gale’s Application is the paradigm computer program that does not solve a technical problem with the internal workings of a computer. The invention was a new way of calculating square roots which was sympathetic to the operation of the underlying architecture of the computer.

6.10. For example, prior art methods of calculating square roots were said to rely on binary division operations. The problem with these prior methods was that, conventionally, division operations were not directly provided within binary systems. Conventionally, they had to be implemented by combining various binary “add”, “subtract”, “test” and “shift” functions. In Gale, the inventive method of calculating square roots eschewed division operations, relying instead on multiplication operations. The multiplication operations were inherently easier and faster to implement using the registers of a conventional computer, for example by using binary “shift” operations.

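To illustrate the kind of approach described, the sketch below computes an integer square root using only shift, add, subtract, and compare operations. It is a generic, well-known bit-by-bit method offered for illustration only; it is not the specific method claimed in Gale.

```python
def isqrt(n: int) -> int:
    """Integer square root using only shift, add, subtract, and compare
    operations - no division. A generic illustration, not Gale's method."""
    result = 0
    # Start with the highest power of four not exceeding n.
    bit = 1 << ((n.bit_length() - 1) & ~1) if n > 0 else 0
    while bit:
        if n >= result + bit:
            n -= result + bit
            result = (result >> 1) + bit
        else:
            result >>= 1
        bit >>= 2  # move to the next lower power of four
    return result

assert [isqrt(n) for n in (0, 1, 15, 16, 1000)] == [0, 1, 3, 4, 31]
```
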
6.11. Yet the Court of Appeal concluded the invention in Gale was no more than a program for a computer as such. This was because its instructions did “not solve a “technical” problem lying within the computer.” The Court held that the instructions did no more than:

prescribe for the cpu in a conventional computer a different set of calculations from those normally prescribed when the user wants a square root.

6.12. The court also held that the instructions did not:

define a new way of operating the computer in a technical sense.

6.13. So, the court decided that the effect produced in Gale did not count as a technical contribution. It was no more than an effect produced by merely running a new program. In other words, the effect of the invention in Gale extended no further than it being just a better or well-written program. This is an effect which is excluded as a program for a computer as such under section 1(2).

6.14. Viewed through the modern-day signposts, the invention of Gale would not indicate any effect that is “technical” in the sense of signposts 2, 3, 4, or 5.

6.15. Examiners should consider whether an AI invention solves a technical problem lying within a computer (or a system of computers) which leads to a better computer (or a better system of computers). If the examiner finds that it does, then the AI invention is a patentable invention.

6.16. Illustrative examples of AI inventions producing a better computer (or system of computers) may be found in the scenarios, for example:

  • scenario 10 (cache management using a neural network)
  • scenario 11 (continuous user authentication)
  • scenario 12 (virtual keyboard with predictive text entry)

A new way of operating a computer in a technical sense

6.17. As we have seen, Gale’s Application was refused (in part) because its program for calculating square roots did “not define a new way of operating the computer in a technical sense”. This principle was distilled into the wording of signpost 3 which asks:

whether the claimed technical effect results in the computer being made to operate in a new way.

6.18. Thus, a computer-implemented invention indicates signpost 3 when it is a new way of operating a computer in a relevant technical sense.

6.19. However, it does not follow that merely running a new program on a computer is a new way of operating a computer in a technical sense. The new program must produce some sort of technical effect which is not caught by the program exclusion. As explained in Aerotel, a “technical effect which is no more than the running of the program is not a relevant effect”. In AT&T, the High Court stated that a new way of operating a computer in a technical sense points towards some generally applicable method of operating a computer rather than a way of handling particular types of information.

6.20. The new program for calculating square roots in Gale did not operate the computer in any relevant technical sense that avoided the program exclusion. Nor was it a generally applicable method; it concerned only the calculation of square roots. Gale did not operate a computer in a new way beyond just being a better or well-written program for a conventional computer. Gale is a negative example of signpost 3.

6.21. Examiners should consider whether an AI invention is a new way of operating a computer in a technical sense. If the examiner finds that it is, then the AI invention is a patentable invention meeting (at least) signpost 3.

6.22. Illustrative examples of AI inventions considered to meet signpost 3 are shown in the scenarios, for example:

  • scenario 16 (processing a neural network on a heterogeneous computing platform)
  • scenario 17 (special purpose processing unit for machine learning computations)
  • scenario 18 (a multiprocessor topology adapted for machine learning)

Core AI

7. In contrast to applied AI inventions, the advance a core AI invention makes is necessarily limited to the field of AI itself. The advance made will lie in the models, algorithms, or mathematical methods constituting the core AI invention. Core AI inventions are not concerned with the real-world application of those models and algorithms to technical problems external to, or lying within, a computer.

7.1. Examiners should assess core AI inventions in the same way as any other computer-implemented invention. Each case should be considered on its own merits. The guidelines for assessing applied AI inventions that are set out above apply equally to core AI inventions. If a core AI invention reveals a relevant technical contribution, then it will not be excluded under section 1(2). For example, in principle, it is possible that a computer-implemented core AI invention may meet one or more of the signposts.

7.2. The models and algorithms on which a core AI invention is based are inherently mathematical and abstract in nature – they are not technical. If a claim is merely directed to the core AI model itself – for example when the claim is not explicitly (or implicitly) limited to some form of computer-based implementation – then examiners should raise an appropriate objection under the mathematical method exclusion.

7.3. However, when a claim to a core AI invention is limited to its practical implementation – for example in hardware and/or software – the examiner’s focus should shift to consideration of the program exclusion. Examiners should consider whether such claims make a technical contribution falling outside the excluded subject matters of section 1(2). If no such technical contribution is revealed, then examiners should raise an objection under the program for a computer exclusion.

7.4. Examiners should note that several recent decisions of the EPO Boards of Appeal have rejected examples of core AI inventions as being unpatentable. Mitsubishi T 0702/20 is an example and was mentioned with apparent approval by the Court of Appeal in Emotional Perception.

7.5. In Mitsubishi T 0702/20 the contribution lay in using a “check matrix” having a particular form to remove weights from a fully connected neural network model to create a sparsely connected model. Thereafter, the modified neural network model was trained in the conventional way. The Board accepted the storage and computational requirements of the modified model were reduced. However, this reduction arose merely because the modified model was smaller than the fully connected network and it did not do the same thing as the fully connected network. The Board held the invention did not establish a technical effect necessary for a patentable invention.

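The effect of such a check matrix can be sketched as an element-wise mask over a layer’s weights, as below. This is purely illustrative and assumes NumPy: in the decision the check matrix had a particular prescribed form, whereas here an arbitrary binary mask stands in for it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Weights of a fully connected layer.
dense_weights = rng.normal(size=(8, 8))

# Stand-in "check matrix": a binary mask whose zero entries remove
# connections, leaving a sparsely connected (smaller) model with lower
# storage and computational requirements.
check_matrix = (rng.random((8, 8)) < 0.3).astype(dense_weights.dtype)
sparse_weights = dense_weights * check_matrix

print(f"connections kept: {int(check_matrix.sum())} of {check_matrix.size}")
```
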
7.6. In Google T 1425/21 the invention involved using a “cumbersome” (for example, a larger) neural network model as a fixed reference for training a “distilled” (smaller) neural network. The “distilled” neural network had fewer layers or parameters compared to the “cumbersome” neural network. In addition, the “temperature” parameter of the networks was set to a higher value during training compared to the value that would be used for inference.

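The arrangement described is a form of knowledge distillation. A minimal sketch of a generic distillation loss of this kind is shown below, assuming PyTorch; the function and its parameters are illustrative assumptions, not code from the application. The “temperature” T is set above 1 during training to soften the two output distributions, and returned to 1 for inference.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=4.0):
    # Soften both distributions with temperature T (> 1 during training).
    soft_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_soft_student = F.log_softmax(student_logits / T, dim=-1)
    # Match the "distilled" network to the fixed "cumbersome" reference;
    # T*T rescales gradients to compensate for the softening.
    return F.kl_div(log_soft_student, soft_teacher,
                    reduction="batchmean") * (T * T)
```
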
7.7. The Board rejected the invention in Google T 1425/21 as being unpatentable for essentially the same reasons as Mitsubishi T 0702/20. Although the “distilled” model had reduced memory requirements compared to the “cumbersome” model, this was due solely to the distilled model being smaller, and having lower accuracy, compared to the “cumbersome” model. This was not enough to establish a technical effect necessary for a patentable invention.

7.8. Bosch T 1952/21 was an example of a particular kind of machine learning known as reinforcement learning. The advance related to the features of an intermediate layer provided between a pre-processing stage (which included a convolutional neural network and a recurrent neural network) and an output stage (which included separate “policy” and “value” neural networks). In particular, the intermediate layer was a feed-forward neural network which was provided with “stochastic” units which introduced randomness into the intermediate layer in some way.

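One plausible reading of the intermediate layer described is sketched in PyTorch below, purely for illustration: the decision does not specify this implementation, and the form of randomness is an assumption.

```python
import torch

class StochasticFeedForward(torch.nn.Module):
    """Feed-forward layer with 'stochastic' units: each unit perturbs
    its activation with random noise. An illustrative guess only."""

    def __init__(self, dim: int):
        super().__init__()
        self.linear = torch.nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = torch.relu(self.linear(x))
        return h + torch.randn_like(h)  # inject randomness into the layer
```
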
7.9. The Board remarked that the system for reinforcement learning in Bosch T 1952/21 was claimed as a neural network comprising various sub-networks implemented on a computer. However, the Board held the functioning of the computer was not adapted in any technical way and the claim did not involve any further technical use of the system. The Board concluded the invention did not solve a technical problem. Accordingly, the invention was not a patentable invention.

7.10. The IPO believes it is likely that application of the UK Aerotel approach to the inventions in Mitsubishi T 0702/20, Google T 1425/21, and Bosch T 1952/21 would show they do not provide any technical contribution outside the program exclusion of section 1(2). The IPO believes it is unlikely that these inventions would meet any of the signposts.

7.11. An example of a core AI invention being excluded by the High Court is found in Reaux-Savonte v Comptroller General of Patents. The invention in Reaux-Savonte was an “AI genome”. This was understood to be a hierarchical or modular arrangement of computer code facilitating its evolution over time. The inventive computer code could modify, adapt, change, and improve over time in the same way that biological code evolves.

7.12. The court upheld the hearing officer’s finding that the invention was a program for a computer “as such” and was not technical in nature. Amongst other things, the court upheld the hearing officer’s findings that signposts 3 and 5 were not shown. For example, the hearing officer found that even if the applicant’s program was new, it did not meet signpost 3:

… a computer system operating on new code does not imply that the system works in any way differently to how it would with the old code. I have been unable to find anything in the application that suggests that a computer system is being made to operate in a new way.

7.13. In BL O/390/22 (IBM Corp.) the invention was aimed at improving active learning of entity resolution rules to scale better over large data sets. Entity resolution relates to finding records in a data set that refer to the same entity across different data sources. It may be used for deduplication in a single database or for matching entities of different databases. The claimed invention included conventional hardware elements (distributed memory and disk cache hierarchy) to perform entity resolution more efficiently. However, the operation of these hardware elements was unchanged, so a computer did not operate in a new way and signpost 3 did not assist.

7.14. BL O/1045/22 (Institute for Information Industry) related to a federated machine learning system. A host device updated a model based on training results received over multiple training rounds from many client devices. The training speeds of the client devices were different, so the host device updated the model using a threshold value. The value indicated a difference in the number of training rounds between client devices. When the difference was below the threshold, the host updated its model without waiting for results from the client having a lowest training round.

7.15. In addition, the time taken to receive a training result from a client was considered. If it was greater than a pre-set time value, then the host would no longer receive training results from that client. The hearing officer found these features were policy-based decisions about when to update the host model. They were not concerned with a technical process and did not solve a technical problem. The invention was found to be excluded as a program for a computer as such.

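The update logic described in paragraphs 7.14 and 7.15 can be paraphrased in a few lines of Python, as below. The names and structure are assumptions made for illustration, not code from the decision.

```python
def host_should_update(client_rounds: dict, round_threshold: int) -> bool:
    """Update the host model without waiting for the slowest client when
    the spread in completed training rounds is below the threshold."""
    gap = max(client_rounds.values()) - min(client_rounds.values())
    return gap < round_threshold

def clients_to_drop(response_times: dict, time_limit: float) -> set:
    """Stop receiving results from clients whose training results take
    longer than the pre-set time value to arrive."""
    return {client for client, t in response_times.items() if t > time_limit}

# Example: a gap of 2 rounds is below a threshold of 3, so update now.
assert host_should_update({"a": 12, "b": 10, "c": 11}, round_threshold=3)
```
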
7.16. The scenarios illustrate examples of core AI inventions which make no technical contribution, for example:

  • scenario 13 (optimising a neural network)
  • scenario 14 (avoiding unnecessary processing using a neural network)
  • scenario 15 (active training of a neural network)

7.17. Scenarios 13 to 15 may be contrasted with scenarios 16 to 18 which meet signpost 3.

Training AIs and data sets

8. An AI invention may rely on a model that requires training with a set of training data before it can be used for its intended application or purpose. This process of training may be referred to as machine learning. Training methods and machine learning methods may also be categorised as being either applied AI or core AI inventions. It follows that they should be examined in the same way as applied AI or core AI using the guidelines above. Inventions involving the training of AI models are not excluded if they reveal a relevant technical contribution to the known art.

8.1. A useful analogy for thinking about inventions involving the training of AIs is that of calibration. Certain technical devices or functions may require calibration before they can be used accurately for their intended technical purpose. Examples are devices having a sensor such as a thermometer or touch-sensitive display. These sorts of devices require calibration to provide accurate estimates of physical parameters (for example to measure temperature or detect touches on a touch-sensitive display). This is true whether the devices are implemented in hardware or software (or some combination of both).

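The analogy can be made concrete with a short sketch. The fragment below (illustrative values, assuming NumPy) fits a gain and offset so that raw thermometer readings map onto known reference temperatures, much as training fits a model’s parameters to training data.

```python
import numpy as np

raw_readings = np.array([102.0, 205.0, 298.0, 401.0])  # uncalibrated sensor
reference = np.array([10.0, 20.0, 30.0, 40.0])         # known temperatures

# "Training" the device: fit calibration parameters to reference data.
gain, offset = np.polyfit(raw_readings, reference, 1)

def calibrated_temperature(reading: float) -> float:
    # "Inference": apply the fitted parameters to new raw readings.
    return gain * reading + offset
```
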
8.2. Under the Aerotel approach, a computer-implemented method of calibration that makes a technical contribution to the art would be a patentable invention. By analogy, it follows that methods of training AI models or machine learning for achieving a specific technical purpose may also make a technical contribution. Methods of training may, therefore, meet one or more of the signposts. Each case should be examined on its own merits.

8.3. In practice, there are many ways in which methods of training AIs and machine learning may be claimed. For example, an optional training step or method may be claimed in a dependent claim of a patentable AI invention. An independent claim directed to training an AI may share an inventive concept with another independent claim for a patentable use of the trained AI. Alternatively, a method of training an AI model may make a technical contribution in its own right and may be claimed as such.

8.4. The scenarios illustrate examples of training AI models that reveal a technical contribution, for example:

  • scenario 4 (detecting cavitation in a pumping system)
  • scenario 6 (measuring percentage of blood leaving a heart)
  • scenario 11 (continuous user authentication)
  • scenario 18 (a multiprocessor topology adapted for machine learning)

8.5. However, where an AI invention is trained for a non-technical purpose, and there is no more to it, then it is not a patentable invention. A clear example is Emotional Perception where part of the contribution lay in a special method for training a neural network for the purpose of providing music recommendations to a user. The Court of Appeal held this purpose was aesthetic in nature and was non-technical. The invention in Emotional Perception did not make a technical contribution beyond the computer program exclusion.

8.6. In the context of a neural network, the Court of Appeal held that, for the purposes of analysing patentability, its training is in effect part of the creation of its program. It follows that, if an invention involving machine learning fails to define a technical purpose or does not contribute to the solution of a technical problem, then it is not a patentable invention. In such cases, the invention is excluded as a computer program as such. Clear examples of this are seen in Mitsubishi T 0702/20 and Google T 1425/21, discussed in the previous section at paragraphs 7.4 to 7.10.

8.7. The scenarios also include illustrative examples of non-technical methods of training, for example scenario 13 (optimising a neural network) and scenario 15 (active training of a neural network).

Data sets

8.8. Methods of training AI models and machine learning rely on training data, often referred to as a “data set”. There are several ways in which patent protection for data sets and their constituent features might be considered.

8.9. Firstly, the use of a data set may be explicitly (or implicitly) claimed as a constituent feature of a training method. If the training method makes a technical contribution, then the data set will be protected by being part of the patentable method. Secondly, a patentable innovation may lie in a method of generating or improving a data set. If the method makes a technical contribution, then it is patent-eligible.

8.10. Thirdly, the constituent features of a data set itself may be claimed directly as a data set characterised by its content. This may involve claiming the information the data represents, its organisation, or its delivery (for example on paper or in some computer-readable form). However, it is unlikely that a claim to a data set itself can be shown to meet all four requirements of section 1(1) for a patentable invention. For example, claims to data sets characterised solely by their information content are likely to be excluded as the presentation of information as such.

8.11. The scope of the presentation of information exclusion was considered by the High Court in Gemstar v Virgin. The court noted that “if the presentation of information has some technical features over and above the information and its delivery, then it might be patentable”. The court distinguished “the content or its mere delivery”, which is excluded, from “that material plus some additional technical aspect of its delivery”, which may be patentable. The court concluded, “So what achieves patentability is some real world technical achievement outside the information itself.”

8.12. The conclusions in Gemstar are consistent with the subsequent judgment of the High Court in Garmin v Philips. In Garmin, the court held that, “the key point is to ensure that the claimed feature is not in substance a claim to information content.”

8.13. Thus, if a claim to a data set is only characterised by its content and a conventional means or process for presenting it (for example, an electronic file), then it is likely excluded as presentation of information. It is unlikely that claiming a data set in this way provides any real-world technical achievement outside the information itself.

8.14. Examiners should treat any claim to a data set on its own merits. However, unless there is a persuasive reason that the claimed data set provides, as a matter of substance, a real-world technical achievement outside the information it holds, such a claim is likely to be excluded under section 1(2)(d) as the presentation of information as such.

8.15. By contrast, BL O/0007/23 (Teledyne FLIR Commercial Systems) involved a patentable method of generating an updated training data set for retraining a neural network so as to improve its performance in identifying and classifying objects present in images.
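By way of illustration only, the following hypothetical sketch (the model interface and all names are invented, and it is not taken from the Teledyne FLIR application) shows one way a training data set might be “generated or improved” in the sense of paragraph 8.9: examples the current model misclassifies are added to the set, so that retraining on the updated set improves performance.

```python
# Purely illustrative: improving a training data set by adding "hard"
# examples the current model gets wrong. The model interface (predict)
# is assumed for the purposes of this sketch.
def improve_training_set(model, training_set, candidate_pool):
    """Return an updated training set augmented with hard examples."""
    updated = list(training_set)
    for example, true_label in candidate_pool:
        if model.predict(example) != true_label:   # model misclassifies it
            updated.append((example, true_label))  # keep as a hard example
    return updated

# The model would then be retrained on the updated set, with the aim of
# improving its performance on the kinds of example it previously missed.
```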

Hardware-only implementations of AI inventions

9. AI inventions may be implemented in computer hardware and/or computer software. Accordingly, claims for AI inventions may be drafted to seek protection for embodiments which implement the invention in hardware and/or software.

9.1. However, merely drafting a claim so that it is directed only to a hardware implementation (or implementations) is not sufficient to avoid the exclusions of section 1(2).

9.2. In Gale’s Application and Emotional Perception the Court of Appeal held that there is no distinction between a program for a computer implemented in hardware and one implemented in software. Hardware implementations of computer-implemented inventions are still within the purview of the program for a computer exclusion.

9.3. In considering the nature of the program for a computer exclusion in Emotional Perception, the Court of Appeal held:

Another distinction which I believe is irrelevant relates to permanence. There are some computers with programs which cannot be changed – e.g. the chips embedded in a payment card or a washing machine – but it remains meaningful to draw the same distinction between the program in that case and the computer itself. Whether the program for a given computer is fixed in a permanent form or not does not, in my judgment, alter the fact that the program represents a set of instructions for a computer to do something. The result in Gale, which involved rejecting a distinction between the permanence of instructions in ROM circuitry as opposed to those stored in other media would have been quite different if this distinction was relevant.

9.4. Earlier, in Gale’s Application, the Court of Appeal had approached the issue of whether the invention in suit was a program for a computer as such:

on the footing that it is right and convenient to strip away, as a confusing irrelevance, the fact that the claim is for “hardware”. The claim in the specification is, in substance, a claim to a series of instructions which incorporate Mr. Gale’s improved method of calculating square roots. It is a claim to electronic circuitry in the form of a ROM which is only distinguishable from other electronic circuitry in the form of a ROM by the sequence of instructions it contains. As such those instructions are not patentable, because they constitute a computer program.

9.5. Thus, the fact that the inventions in Gale’s Application and Emotional Perception took “hardware” form was irrelevant to the question of excluded matter. Instead, the Court approached the issue in the same way as for programs implemented in software. The correct question to be answered is whether, as a matter of substance, the program makes a relevant technical contribution, irrespective of whether it takes software or hardware form.

9.6. Examiners should assess claims to hardware-only implementations of AI inventions at steps (2), (3) and (4) of the Aerotel test by considering the task or process they perform. If the task performed makes a technical contribution, for example when one or more of the signposts is met, then the invention is a patentable invention.

Sufficiency

10. Section 14(3) of the Patents Act 1977 requires that:

The specification of an application shall disclose the invention in a manner which is clear enough and complete enough for the invention to be performed by a person skilled in the art.

10.1. The relevant principles to be applied when determining whether a patent application satisfies this section are summarised in Eli Lilly v Human Genome Sciences:

The specification must disclose the invention clearly and completely enough for it to be performed by a person skilled in the art. The key elements of this requirement which bear on the present case are these:

(i) the first step is to identify the invention and that is to be done by reading and construing the claims;

(ii) in the case of a product claim that means making or otherwise obtaining the product;

(iii) in the case of a process claim, it means working the process;

(iv) sufficiency of the disclosure must be assessed on the basis of the specification as a whole including the description and the claims;

(v) the disclosure is aimed at the skilled person who may use his common general knowledge to supplement the information contained in the specification;

(vi) the specification must be sufficient to allow the invention to be performed over the whole scope of the claim;

(vii) the specification must be sufficient to allow the invention to be so performed without undue burden.

10.2. Examiners should apply these requirements when considering the sufficiency of disclosure of AI inventions. Whether an AI invention meets these disclosure requirements is decided by considering each case on its own merits.

10.3. For example, in BL O/0007/23 (Teledyne FLIR Commercial Systems) the hearing officer considered sufficiency and concluded:

I am in no doubt that performing such a technique in a practical embodiment of the invention would clearly be within the realms of what the relevant skilled person could achieve without further instruction.

10.4. Likewise, the extent to which a training data set should itself be disclosed is a matter to be decided by considering each case on its own merits. For example, the EPO Board of Appeal considered whether an invention relying on a data set was sufficiently disclosed in T 0161/18 (Äquivalenter Aortendruck/ARC SEIBERSDORF). This decision both reflects, and is consistent with, the principles set out in Eli Lilly v Human Genome Sciences. The decision in T 0161/18 highlights the importance of sufficiently disclosing how an AI invention relies upon a training data set. A patent application should teach those details in a manner that enables the invention to be worked across its scope without undue burden.

10.5. For example, in paragraph 2.2 of T 0161/18 the Board stated that:

the application does not disclose which input data are suitable for training the artificial neural network according to the invention, or at least one dataset suitable for solving the present technical problem. The training of the artificial neural network can therefore not be reworked by the person skilled in the art and the person skilled in the art therefore cannot carry out the invention.
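By way of illustration only, the following hypothetical sketch (loosely modelled on the subject matter of T 0161/18, with every field name and value invented) shows the sort of explicit description of suitable training inputs whose absence the Board found fatal. A disclosure at this level of detail, whether in prose or otherwise, helps to enable the skilled person to rework the training.

```python
# Purely illustrative: an explicit specification of the training data an
# invention relies on. Every field name and value here is hypothetical.
from dataclasses import dataclass

@dataclass
class TrainingDataSpec:
    input_features: tuple[str, ...]  # what each training input records
    sampling_rate_hz: float          # how the input signals were acquired
    subject_population: str          # whose data is suitable for training
    target_variable: str             # what the network learns to output
    minimum_examples: int            # roughly how much data is needed

spec = TrainingDataSpec(
    input_features=("peripheral_pressure_curve",),
    sampling_rate_hz=100.0,
    subject_population="adult patients, resting state",
    target_variable="aortic_pressure_curve",
    minimum_examples=500,
)
```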