Robot Rights and the Emergence of the “Electronic Person”

By Kyle Bowyer

Current laws may soon be inadequate for regulating increasingly sophisticated and intelligent robots. As machines approach and even surpass human physical and cognitive abilities, is there a need for the creation of limited robot rights? If so, what would these be? These questions may need answering sooner than you think.


Robots and artificial intelligence are becoming increasingly sophisticated, with advances in innovation and application occurring at a remarkable pace. This progress, however, raises concerns about the impact of intelligent machines on human society and culture. Most concerns deal with the impact of machines on industry, the workplace, and the professions, but important legal questions also arise in areas such as liability, consumer protection, ownership, intellectual property, safety, privacy, and security.

The increased use of advanced machines in many aspects of human life and endeavour certainly calls for the appropriate regulation of their manufacture and use. However, should this regulation be taken a step further to include the creation of a set of distinct rights in relation to, or for, certain robots? Is it possible that in the near future a robot may be recognised as a separate legal entity or “electronic person”? Preposterous as the question of robot rights may have seemed a decade ago, and perhaps even today, it is not inconceivable that it will be seriously considered in the near future.

Opposition, usually vehement, to the notion of robot rights often appears to be based on the following objections:

1. How can we even consider robot rights when the rights of animals and, indeed, some humans are not adequately or universally recognised?

2. How can we assign rights to an inanimate machine lacking true consciousness (whatever that is)?

Unfortunately, the assumption is often made that any proposal of robot rights necessarily elevates the legal status of machines above that of animals and to the level afforded to humans, and is therefore a blatant, almost blasphemous, disregard for the plight of the rights-poor. This specious argument seems to treat rights as a finite resource: that affording rights to one group must come at the exclusion or detriment of others, and that any time or resources devoted to exploring robot rights are time or resources taken away from animal or human rights.

The simple fact is that machines are becoming more complex, with the ability to reason and act autonomously in response to their environment to the extent that traditional legal rules may not be adequate. For example, traditional laws on negligence or manufacturers’ product liability often require the manufacturer to foresee the type of harm that could occur through the use of their product. This becomes more difficult if autonomous, self-learning machines act independently of a manufacturer’s or owner’s control and in response to changing environments. Regulation in this developing area is thus arguably inadequate and may be improved via the creation of a limited legal status for certain machines with resultant rights and duties akin to those applying to other non-living creations such as corporations.



Furthermore, a requirement of “consciousness” need not be an impediment to the granting of some type of legal status or the granting of appropriate rights or duties in relation to appropriately sophisticated machines. Corporations are considered legal entities, with consequent rights and the conferral of duties upon their directors, despite the corporation lacking any physical body or consciousness. Is it really a stretch of the imagination that an autonomous machine approaching, equalling or exceeding the physical and cognitive abilities of humans be recognised as a legal entity for the purpose of creating limited rights and duties in relation to the machine itself, its manufacturers or users and for the benefit and protection of all?

The increased complexity of robots and artificial intelligence and the myriad possible and actual applications of advances are undeniable. The concept of autonomous vehicles, no longer relegated to the realms of science fiction, is becoming almost mundane and accepted by many as inevitable.1 Intelligent machines are used for much more complex tasks such as surgery.2 In terms of cognitive abilities, machines that can mimic human handwriting,3 fool software designed to distinguish machine from human input,4 create sounds indistinguishable from natural sounds5 and “feel” pain6 are just some of the significant developments now routinely occurring. The notion that machines may one day be able to think like humans is perhaps not so fanciful. Alan Turing, the father of computing, stated that, “I believe that at the end of the century the use of words and general educated opinion will have altered so that one will be able to speak of machines thinking without expecting to be contradicted.”7


Naturally there are concerns about the impact of intelligent machines on human society, particularly in the workplace. A recent report by major accounting firm Deloitte, entitled “From Brawn to Brains”, suggests that around a third of jobs in the UK could be automated within the next decade or two. On the bright side, the report also suggests that technology will create far more higher-skilled (and better paid) jobs.8

Some are concerned that the pace of development is too fast. Microsoft co-founder Bill Gates has suggested a tax on robots to slow the spread of automation and protect areas of human employment likely to be significantly impacted.9 For others, the concern is even greater: Gates, physicist Stephen Hawking, and innovator Elon Musk have warned that artificial intelligence (AI) could be humanity’s greatest threat.10 At the recent opening of the Leverhulme Centre for the Future of Intelligence, a research collaboration aimed at ensuring that humans make the best use of artificial intelligence, Hawking said that AI “could develop a will of its own – a will that is in conflict with ours. The rise of powerful AI will be either the best, or the worst thing, ever to happen to humanity.”11

Governments too are beginning to grapple with the consequences of living in an increasingly automated society. In May 2016, the European Parliament’s Committee on Legal Affairs raised the issue of the legal status of robots. In its draft report and motion for a European Parliament Resolution, the committee declared that:

“Whereas now that humankind stands on the threshold of an era when ever more sophisticated robots, bots, androids and other manifestations of artificial intelligence (“AI”) seem poised to unleash a new industrial revolution, which is likely to leave no stratum of society untouched, it is vitally important for the legislature to consider all its implications.”12

The Committee considered various implications of recent progress in artificial intelligence and the use of robots and called for a set of civil law rules on robotics encompassing their manufacture, use, autonomy, and impact upon human society. Of the legal solutions proposed, perhaps none was more thought-provoking than the call to create a legal status of “electronic persons” for the most sophisticated, autonomous robots, although what this would mean in practice remains a subject for future consideration.

On 16 February 2017, the European Parliament adopted this resolution by 451 votes to 138, indicating the strong level of concern and support for improved regulation of robots. A directive will now be proposed covering subject matters such as the registration of smart robots, ethical principles for the development, design, and modification of robots, employment implications, and a code of conduct for robotics engineers. With respect to liability issues, it is proposed that the Parliament explore solutions such as compulsory insurance schemes, compensation funds, and a specific legal status for robots.13


It is important to reiterate that a call for robot rights does not prima facie mean that robots should be afforded the same rights as humans. The nature and extent of any rights or legal status will undoubtedly be the subject of much debate. It may however be useful to conceptualise a framework of rights based on what we already do with corporations. This framework would still need to be heavily influenced by and dependent on the regulation of property (including intellectual property), product liability, tort law, insurance, consumer protection, and perhaps aspects of contract, bailment, and agency law.

The likening of possible robot rights to corporate rights may be a good place to start. Corporations, like robots, are not “natural” persons but they are considered legal entities. Accordingly, corporations can own property, sue or be sued, and are entitled to protection in relation to how they are used (or treated) by their directors. However, a corporation cannot act by itself but through the actions of its directors who have duties and obligations to act in the best interests of the corporation.


Thus a suitably sophisticated robot may be able to own assets or sue or be sued. Its manufacturers may still be bound by traditional product liability and consumer protection laws, but with additional ethical codes of conduct applied to the design and manufacture of robots. Owners, who could be the manufacturers themselves or other purchasers of robots, may have obligations (other than those arising from their own property interests) to protect the physical and software integrity of robots and to act to ensure, to some extent, that they do not cause harm. The robot itself would also be liable for its own acts or omissions that are beyond the scope of its owners to control, and its assets or insurance would cover any liability.

Some distinctions are, of course, also important: corporations do not have physical bodies; they cannot independently cause physical harm or damage, or interact with their environment in the way that living things and physical objects do. Corporations function only in relation to the business they were created for. A corporation is not going to have a car accident or commit an assault, but a robot may interact with its environment in myriad ways that are not part of its work function.

Any framework of rights at this stage is therefore merely a suggestion, and if robots are afforded any type of legal status or rights, determining their actual nature and extent will be a challenge. However, it is becoming increasingly likely that traditional legal rules will need to be revised, and the question of whether inadequacies in the law may best be resolved with a very limited rights-based approach needs to be seriously considered, probably very soon.

Featured image: Man with robot by Matt Cardy/Stringer © Getty Images


About the Author

Kyle Bowyer is a Lecturer at the Curtin Law School, Curtin University, Western Australia. Kyle teaches a range of Law and Business Law subjects. A qualified and admitted lawyer, Kyle has degrees in Law and Commerce, a Master’s Degree in Education Management and is currently undertaking a PhD in Law.


1. See for example, KPMG’s report on autonomous vehicles and the UK economy at–-The-UK-Economic-Opportu…1.pdf.
2. See for example,
3. See for example,
4. See for example,
5. See for example,
6. See for example,
7. A. M. Turing (1950). “Computing Machinery and Intelligence”, Mind, New Series, Vol. 59, No. 236, October, pp. 433–460, at 442.
10. See
11. See and the Leverhulme Centre at
12. European Parliament Committee on Legal Affairs, Draft Report with recommendations to the Commission on Civil Law Rules on Robotics, 2015/2103(INL), 3.

The views expressed in this article are those of the authors and do not necessarily reflect the views or policies of The Political Anthropologist.