Artificial intelligence call for views: patents
Updated 23 March 2021
Introduction
The origins of patents for invention are obscure. No single country can claim to have been the first to have a patent system. However, Britain may have the longest continuous patent tradition in the world. Its origins can be traced back to the 15th century. The Crown started making specific grants of privilege to manufacturers and traders signified by Letters Patent, open letters marked with the King’s Great Seal.
The earliest known English patent for an invention was granted by Henry VI in 1449. The patent gave John of Utynam a 20-year monopoly for a method of making stained glass, required for the windows of Eton College. This method had not previously been known in England.
The law on patents has developed since then, and today we use the UK Patents Act 1977 (“the Act”). At the time, this was the most radical piece of patents legislation to be passed for nearly 100 years. The questions posed in this section of the call for views explore how this legislation meets the challenges of AI.
The Act tells us how patents should be granted, who a patent belongs to and what constitutes an infringement. Some of these principles may not be appropriate for AI technology. We also consider how the Act meets the government’s policy objectives for AI, and whether the current approach is the right one.
The aims of the patent system
Patents are used to protect inventions. They provide the right to take legal action against anyone who makes, uses, sells or imports the invention without permission. To be granted a patent, the invention must be new, not obvious and something that can be made or used.
Patents incentivise invention in two main ways. They encourage investment in research and development because, as exclusive rights, they give the owner the opportunity to secure a return. The publication of the patent allows others to learn of technical advances, so an invention can be further developed or worked around.
The government wants to make the UK a global centre for AI and data-driven innovation. It aims to make sure that the UK has the best environment for developing and using AI. It would be useful to hear your views on the value of patent incentives for AI research and development in the UK.
Questions
1. What role can/does the patent system play in encouraging the development and use of AI technologies?
The nature of AI systems themselves and their potential to innovate independently might present challenges to the current patent system. Some of these challenges are discussed in more detail below. If there are challenges that are not listed, please tell us in your response. We would also like to hear if you think the current patent system is flexible enough to accommodate these challenges.
AI as an inventor
Even if we accept that AI could or will devise an invention, alone or with a human, recent decisions have found that current law requires the inventor to be a natural person. One of these decisions followed applications to the UK Intellectual Property Office where the inventor was identified as an AI machine. The UK hearing officer found that the definition of inventor in the Act was pivotal. They commented that there should be a debate on whether patent law is fit for purpose given the increasing likelihood of AI playing a role in the invention process.
Box A: the DABUS hearing decision by the UK Intellectual Property Office.
The applicant for two UK patents had asserted that the inventor was an AI machine named DABUS. This raised the issue of inventorship and ownership. The hearing decision concluded that a non-human inventor cannot be regarded as an inventor under the Act. This decision has been appealed to the High Court. The European Patent Office has similarly rejected patent applications from the same applicant where the AI system is named as inventor.
There is a long history of inventors having the right to be identified. A patent will not be granted under current UK law unless the person believed to be the inventor is identified (s7(1) of the Act). The right to be granted a patent is usually given to the inventor(s) (s7(2) of the Act), although several specific situations may override this (s7(2)(b) & (c) of the Act).
The applicant for a patent is required to identify the person(s) who they believe are the inventor(s) (s13(2) of the Act). Failure to name an inventor will lead to a UK patent application being treated as withdrawn.
As the inventor is considered “the actual deviser of the invention” (s7(3) of the Act), the question is whether AI could be considered as the inventor and should be named on a granted patent or on a patent application.
Commentators have said that current AI systems are not “inventors”. They are no different to other tools that are used by people to invent. Questions have been asked about why AI algorithms are different from other platform technologies, such as the combinatorial chemistry used to produce novel drug compounds. We would welcome your views on the current level of human input in designing AI systems that can produce an inventive output - such as taking decisions on choice of algorithms, the selection of parameters and the design and choice of input.
Questions
2. Can current AI systems devise inventions? Particularly:
a) to what extent is AI a tool for human inventors to use?
b) could the AI developer, the user of the AI, or the person who constructs the datasets on which AI is trained, claim inventorship?
c) are there situations when a human inventor cannot be identified?
3. Should patent law allow AI to be identified as the sole or joint inventor?
4. If AI cannot be credited as inventor, will this discourage future inventions being protected by patents? Would this impact on innovation developed using AI? Would there be an impact if inventions were kept confidential rather than made public through the patent system?
It is the patent owner, not the inventor, who controls the rights. This includes choosing whether to assign or license a patent, or whether to sue for infringement. Anyone can apply for a patent in the UK, but not everyone can be granted one. The law assumes that the inventor is entitled to be granted a patent, followed by someone claiming through them, such as an employer.
Even if an inventor does not qualify to own a patent, for example as an employee, it is considered natural justice for the inventor to be named in the patent. This is because publication of the patent can provide the inventor with some kudos.
Question
5. Is there a moral case for recognising AI as an inventor in a patent?
In the UK DABUS hearing decision, the applicant agreed that an AI system was a machine and could not own intellectual property. If an AI could be named as inventor in a patent, the question arises of who would own the patent.
Patent entitlement could rest with people other than the owner of an AI inventor. This could include the creator of the AI algorithm or the user of the algorithm setting it to a task. It could also include the creator of the dataset used for training the algorithm or the owner of the AI tool.
Question
6. If AI was named as sole or joint inventor of a patented invention, who or what should be entitled to own the patent?
Conditions for grant of a patent
Encouraging invention is the main justification for providing patents. We want to find out whether the legal requirements for the grant of a UK patent are set up to provide the best level of incentive for AI inventions. The term “AI inventions” covers both inventions in the AI system itself, for example a neural network, and inventions involving the use of an AI system.
The current legal requirements for granting a patent are designed to be as technology neutral as possible. This is important for effective operation of the law across all technologies - both known and yet to be developed. We need to know whether the law operates, and will continue to operate, effectively for AI inventions.
We are aware of potential problems caused by certain legal requirements for the grant of a patent for an AI invention. These are set out in more detail below. You do not have to confine your comments to these areas.
Questions
7. Does current law or practice cause problems for the grant of patents for AI inventions in the UK?
8. Could there be patentability issues in the future as AI technology develops?
Exclusions from patent rights
Only inventions can be protected by a patent. UK patent law provides a list of things which are not considered inventions (s1(2) of the Act) and so are not patentable.
Some of these exclusions could be relevant for AI patent applications since AI is computer implemented. Exclusions such as those for computer programs, mental acts and mathematical methods ensure that exclusive rights do not stifle research and development.
Mathematical algorithms, which lie at the heart of AI systems, are not patentable. But inventions involving mathematical methods, along with computer programs, are patentable if shown to make a technical contribution.
The last independent review of the UK patent system (chapter 6) recommended keeping the line as currently drawn between those computer programs that are excluded from patent rights and those that are not. We should consider whether this conclusion is still appropriate for AI technologies.
Questions
9. How difficult is it to secure patent protection for AI inventions because of the list of excluded categories in UK law? Where should the line be drawn here to best stimulate AI innovation?
10. Do restrictions on the availability of patent rights cause problems for ethical oversight of AI inventions?
Disclosure of the invention
A patent is a temporary exclusive right granted in exchange for making the details of an invention available to the public. This benefits society: it helps research efforts avoid wasteful duplication, increases the opportunity for inventions to be improved, and incentivises competitors to develop new inventions in the same technical area.
This is why UK law requires that a patent provides enough detail so that the invention can be worked by a skilled person (s14(3) of the Act). It must allow a skilled person to perform the invention to the full extent as claimed in the patent. This may pose problems for patent applicants when disclosing AI inventions.
It may not be an issue if the invention involves AI techniques which can be adequately described, for example if it uses well-known off-the-shelf AI algorithms. This may not be true for all AI techniques. There could be questions around the need to disclose details of the AI algorithms for the invention to be performed by a skilled person. We would be interested to hear your views about whether and when this would be necessary.
Do questions arise if the invention involves AI making decisions in a black box? We may understand the inputs and know the outcomes, but the process in between cannot be understood by a human, or at least not without considerable difficulty. There is also the issue of how much detail of the invention should be given if the AI, acting as a black box, does not always produce the same result.
Questions
11. Does the requirement for a patent to provide enough detail to allow a skilled person to perform an invention pose problems for AI inventions?
In answering this question, you may wish to consider:
- is it clear how much information would be sufficient for a skilled person to be able to work the invention?
- could there be uncertainty knowing when an AI could be obtained by a skilled person to achieve the specific purpose of a patent claim and when an AI would need to be specified in a patent application?
- what are the consequences if the details of AI algorithms need to be disclosed?
- if AI is making decisions in a black box:
- Could there be a need to disclose more than a basic trained AI model, for example training data or the coefficients or weights of the model? If yes, is it clear how much information would be sufficient for a skilled person to be able to work the invention? Are special provisions needed for this information to be filed and stored?
- What would be the effect if competitors could use data to quickly train a different AI model?
- How would the skilled person know whether the invention could be repeated across the breadth of the patent claims or whether a claimed result could be achieved?
12. In the future, could there be societal reasons, going beyond the current purposes of patent law, for requiring sufficient detail of an AI invention to be disclosed?
Inventive step
As AI develops, further uncertainty could arise over traditional definitions and interpretations of patent law. One requirement for the grant of a patent is that the invention is non-obvious to a person skilled in the art (s3 of the Act). This is the inventive step test.
The person “skilled in the art” has the ability to make routine workshop developments. If more sophisticated AI tools were available to the person skilled in the art as part of their standard workshop toolkit, the result may be that more inventions are considered obvious.
There is also a question whether the concept of “the person skilled in the art” needs to extend to “the machine trained in the art”. It may become obvious to arrive at something new when AI identifies gaps or trends following lengthy but essentially routine analysis of data. For example, AI could be used to understand, learn and review all dosage regimes for different pharmaceutical products. The products and dosage regimes it suggests might involve work too complex and difficult for a person skilled in the art. However, it could be routine to the “machine trained in the art”.
If AI tools are part of the standard toolkit for the person skilled in the art, this raises questions about where the threshold for inventive step should be set. Views on the inventive step threshold may change if the whole process of invention only involves AI, with no human input.
Questions
13. Does or will AI challenge the level of inventive step required to obtain a patent? If yes, can this challenge be accommodated by current patent law?
14. Should we extend the concept of “the person skilled in the art” to “the machine trained in the art”?
Infringement
UK law defines the activities that are under the control of a patent owner and which, if carried out without the patent owner’s consent, will infringe a patent. The law only recognises “a person” as infringing a patent. It does not set out how liability works when a person is not involved.
Infringing activities (s60(1) of the Act) include the making or using of a product or a process. The activity must occur in the territory of the UK. It is only an infringement to use a patented process if the infringer knows that it is an infringement. The law assumes that people can predict when their actions will infringe somebody else’s rights. For infringement of patented products there is no need to establish knowledge.
With the increasing use of AI, there is an increased chance of patent infringement by an AI machine rather than a human. This could raise a number of issues.
Discouraging the use of AI to infringe patents means we must consider who should be held responsible for any infringing actions. It is not obvious that naming the AI itself would achieve this. If a person is to be held liable, it is not immediately clear who this should be. There may be a number of humans behind infringement by AI, including the developer, the manufacturer, the owner or the product end user.
Knowledge is not a requirement for infringement of a patent. With the increasingly independent operation of AI systems, the human behind the AI may be unaware that they are infringing a patent. There may be resource and reputational considerations for patent holders if the law holds legally unsophisticated individuals liable for infringement. This may suggest that the AI developer or manufacturer, who has made money from the sale of the AI, should be accountable.
The question of legal liability for the actions of AI is being considered much more widely than just in the area of IP, for example for damage that may be caused by autonomous vehicles. This wider legal approach will have to be taken into account when considering liability for patent infringement by AI: for example, whether AI is recognised as a legal person, or whether compulsory insurance or contractual terms should deal with damage caused when AI acts autonomously.
Question
15. Who is liable when AI infringes a patent, particularly when this action could not have been predicted by a human?
There is also the question of how to establish whether a patent has been infringed. This could be a problem if AI processing occurs in a black box. It may be difficult, if not impossible, for the patent holder to work out if AI used a process protected by their patent. The patent holder may not be able to prove infringement without a court order requiring a third person to reveal information. The courts do not freely grant orders requiring significant disclosure because their decisions balance the rights and interests of all parties.
There could be a further issue in trying to establish whether an AI black box process has infringed a patent. An infringing act must take place in the territory in which the right applies. If an AI’s neural network extends across different territories, in the cloud or on servers, it may be difficult to establish whether the AI process has infringed in a particular territory.
There is the risk that these problems increase the likelihood of litigation. Alternatively, AI inventors could choose to avoid the use of the patent system altogether and rely on other means of protection, such as trade secrets.
Question
16. Could there be problems proving patent infringement by AI? If yes, can you estimate the size and the impacts of the problem?
Respond to the call for views by emailing AIcallforviews@ipo.gov.uk