A Review Paper: Robots Must Follow Ethics

DOI : 10.17577/IJERTCONV8IS10009

Chandana1, Taranjeet Singh2, Lalita Verma3

1, 2, 3 Assistant Professor, Mangalmay Institute of Engineering & Technology

Abstract: Ethics is the branch of philosophy that deals with human conduct, moral assessments, and right and wrong behaviour. The same concept has been proposed for robots. The term robo-ethics refers to a fundamental ethical reflection on the particular issues and moral conundrums generated by the development of robotic applications.

Robot ethics, often known by the short expression "roboethics", concerns the ethical problems that arise with robots. Roboethics is also referred to as machine ethics, which deals with the code of conduct that robot designers must implement in the artificial intelligence of a robot. Through this kind of artificial ethics, roboticists must guarantee that autonomous systems will be able to exhibit ethically acceptable behaviour in situations where robots, or other autonomous systems such as autonomous vehicles, interact with humans.

In this review paper, we also discuss the Three Laws of Robotics.

Keywords: Robot-ethics, Laws of Robotics

  1. INTRODUCTION

    In recent years, there has been increasing attention on the possible impact of future robotics and AI systems. Prominent thinkers have publicly warned about the risk of a dystopian future as the complexity of these systems progresses further. These warnings stand in contrast to the current state of the art in robotics and AI technology.

    Authors and filmmakers have, since the early days of modern technology, been actively predicting how the future would look with the appearance of more advanced technology. One of the first, later regarded as the father of science fiction, was the French author Jules Gabriel Verne (1828-1905). He published novels about journeys under water, around the world (in 80 days), from the Earth to the Moon, and to the centre of the Earth. The remarkable thing is that within 100 years of publishing these ideas, all except the last were made possible by the progression of technology. Although it may have happened independently of Verne, engineers were certainly inspired by his books (Unwin, 2005).

    In contrast to this mostly positive view of technological progress, many have questioned the negative impact that may lie ahead. One of the first science fiction feature films was Fritz Lang's 1927 German production, Metropolis. The movie's setting is a futuristic urban dystopian society with machines. More than 180 similar dystopian films have followed, including The Terminator, RoboCop, The Matrix, and A.I. Whether these are motivating or discouraging for today's researchers in robotics and AI is hard to say, but at least they have put the ethical aspects of technology on the agenda.

  2. ETHICAL CHALLENGES AND COUNTERMEASURES OF DEVELOPING ADVANCED ARTIFICIAL INTELLIGENCE AND ROBOTS

    Ethical perspectives on AI and robotics should be addressed in at least two ways. First, the engineers developing the systems need to be aware of the ethical challenges that should be considered, including avoiding misuse and allowing for human inspection of the functionality of the algorithms and systems [1]. Second, as we move toward advanced autonomous systems, the systems themselves should be able to perform ethical decision making to reduce the risk of unwanted behaviour [2].
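
    As a minimal illustration of the second point, the Python sketch below filters candidate actions through a simple model of their predicted consequences and refuses to act when no candidate is ethically acceptable. The class and function names, the numeric harm scores, and the threshold are illustrative assumptions made for this review, not part of any cited system.

        from dataclasses import dataclass
        from typing import List, Optional

        @dataclass
        class Action:
            name: str
            predicted_harm: float   # estimated harm to humans, 0.0 (none) to 1.0 (severe)
            task_utility: float     # how well the action serves the current task

        def select_action(candidates: List[Action],
                          harm_threshold: float = 0.1) -> Optional[Action]:
            # Discard any candidate whose predicted harm to humans exceeds the threshold.
            safe = [a for a in candidates if a.predicted_harm < harm_threshold]
            if not safe:
                return None  # no ethically acceptable option: stop and defer to a human operator
            # Among the acceptable candidates, pick the most useful one.
            return max(safe, key=lambda a: a.task_utility)

        # Example: the filter prefers the slower but harmless route.
        options = [
            Action("drive_fast_through_crowd", predicted_harm=0.8, task_utility=0.9),
            Action("detour_around_crowd", predicted_harm=0.0, task_utility=0.6),
        ]
        chosen = select_action(options)
        print(chosen.name if chosen else "defer to human")  # -> detour_around_crowd

    The hard part, of course, is the predicted_harm estimate itself, which requires a reliable model of the world and of the people in it.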

  3. ETHICAL SOCIETAL CHALLENGES ARISING WITH ARTIFICIAL INTELLIGENCE AND ROBOTS

    Our society is facing a number of potential challenges from future highly intelligent systems regarding jobs and technology risks:

    • Future jobs: People may become unemployed because of automation. This has been a fear for decades, but experience shows that the introduction of information technology and automation creates far more jobs than it eliminates. Further, many will argue that jobs today are more interesting than the repetitive, routine jobs that were common in earlier manufacturing companies. Artificial intelligence systems and robots help industry provide more cost-efficient production, especially in high-cost countries, which can reduce the need for outsourcing and for replacing employees altogether. Still, recent reports have argued that in the near future we will see an overall loss of jobs [3], [4], although other researchers mistrust these predictions [5]. Fewer jobs and shorter working hours could end up benefiting a small elite rather than all members of our society.

  4. LAWS OF ROBOTICS

    The notion of killer robots is a mainstay of science fiction; but then again, so is the idea of robots with built-in safeguards against that. In his 1942 story Runaround, Isaac Asimov offered his now-famous Three Laws of Robotics: A robot may not injure a human being or, through inaction, allow a human being to come to harm; a robot must obey orders given to it by human beings except where such orders would conflict with the First Law; and a robot must protect its own existence as long as such protection does not conflict with the First or Second Law. Most of Asimov's stories deal with things going awry because these laws don't equip robots to tackle real-world situations. In his 1947 story With Folded Hands, Jack Williamson had robots adhere to an even simpler directive: To serve and obey, and guard men from harm. That, too, had an unwelcome result: a totalitarian society in which robots prohibit humans from participating in almost all activities, lest one of us be injured.
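
    Purely as an illustration of why such laws are easy to state but hard to operationalize, the sketch below encodes Asimov's priority ordering as a lexicographic comparison over candidate actions. Every Boolean flag (whether an action harms a human, disobeys an order, or endangers the robot) is a placeholder assumption; deciding those flags is exactly the real-world judgment that Asimov's stories show going wrong.

        from typing import List, NamedTuple

        class Candidate(NamedTuple):
            name: str
            harms_human: bool           # First Law: action injures a human, or inaction lets one come to harm
            disobeys_human_order: bool  # Second Law: action conflicts with an order given by a human
            endangers_robot: bool       # Third Law: action risks the robot's own existence

        def violations(c: Candidate):
            # Python compares tuples lexicographically, so ordering candidates by this
            # tuple enforces the priority: First Law over Second Law over Third Law.
            return (c.harms_human, c.disobeys_human_order, c.endangers_robot)

        def choose(candidates: List[Candidate]) -> Candidate:
            return min(candidates, key=violations)

        # Example: refusing an order beats harming a human; risking the robot ranks lower still.
        options = [
            Candidate("obey_order_push_person", harms_human=True, disobeys_human_order=False, endangers_robot=False),
            Candidate("refuse_order", harms_human=False, disobeys_human_order=True, endangers_robot=False),
            Candidate("block_fall_with_chassis", harms_human=False, disobeys_human_order=True, endangers_robot=True),
        ]
        print(choose(options).name)  # -> refuse_order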

    Another problem is that robots could be misused for criminal activities such as burglary. A robot in your own home could be reprogrammed by people with criminal intent, or criminals might use their own robots to carry out the theft. Having a home robot connected to the Internet will therefore place great demands on security mechanisms to prevent abuse. Although we must assume that anyone who develops robots and AI for them has good intentions, it is important that the developers also keep possible abuse in mind. These intelligent systems must be designed so that the robots are friendly and kind, while being difficult to abuse for malicious purposes in the future. Part of the robot-ethics discussion concerns military use [6]: applying robots in military activities raises ethical concerns. The discussion is natural for several reasons, including that military applications are an important driving force in technology development. At the same time, military robot technology is not all negative, since it may save lives by replacing human soldiers in danger zones. However, giving robotic military systems too much autonomy increases the risk of misuse, including against civilians.

    The British Standards Institution published the world's first standard on ethical guidelines for the design of robots, BS 8611, in April 2016 (BSI, 2016). It was prepared by a committee of scientists, academics, ethicists, philosophers and users to provide guidance on potential hazards and protective measures for the design of robots and autonomous systems used in everyday life. This was followed by the IEEE Standards Association initiative on AI and autonomous system ethics, which published Ethically Aligned Design, Version 1, subtitled "A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems" [7]. It consists of eight sections, each addressing a specific topic related to AI and autonomous systems discussed by a dedicated committee of the IEEE Global Initiative. The themes of the sections are as follows:

    1. General principles.

    2. Embedding values into autonomous intelligent systems.

    3. Methodologies to guide ethical research and design.

    4. Safety and beneficence of artificial general intelligence and artificial superintelligence.

    5. Personal data and individual access control.

    6. Reframing autonomous weapons systems.

    7. Economics/humanitarian issues.

  5. CONCLUSION AND DISCUSSION

Here are some examples of recommendations made by the project participants for commercial robots:

  • Safety. There must be mechanisms (or opportunities for an operator) to control and limit a robot's autonomy.

  • Security. There must be passwords or other keys to prevent inappropriate and illegal use of a robot.

  • Traceability. As with aircraft, robots should have a black box to record and document their own behaviour [8]; a sketch of such a log follows this list.

  • Identifiability. Robots should have serial numbers and registration numbers similar to cars.

  • Privacy policy. Software and hardware should be used to encrypt and password protect sensitive data that the robot needs to save.
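
  As a concrete illustration of the traceability recommendation, the Python sketch below appends each robot event to a hash-chained log file so that later tampering is detectable. The field names, the JSON-lines storage format, and the file path are assumptions made for this example only.

      import hashlib
      import json
      import time

      class BlackBox:
          # Append-only, tamper-evident event log for a robot (a "black box").
          def __init__(self, path: str = "robot_blackbox.log"):
              self.path = path
              self.prev_hash = "0" * 64  # genesis value for the hash chain

          def record(self, event: str, details: dict) -> None:
              entry = {
                  "timestamp": time.time(),
                  "event": event,
                  "details": details,
                  "prev_hash": self.prev_hash,  # link to the previous entry
              }
              payload = json.dumps(entry, sort_keys=True)
              entry_hash = hashlib.sha256(payload.encode()).hexdigest()
              with open(self.path, "a") as f:
                  f.write(json.dumps({**entry, "hash": entry_hash}) + "\n")
              self.prev_hash = entry_hash  # the next entry will be chained to this one

      # Example usage: every actuation and safety event leaves an auditable trace.
      box = BlackBox()
      box.record("motion_command", {"target": "kitchen", "speed_m_s": 0.4})
      box.record("emergency_stop", {"reason": "obstacle_detected"})

  Verifying the log then amounts to recomputing each hash and checking the chain, much as a flight recorder is cross-checked against other telemetry.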

REFERENCES

    1. Bostrom, N., and Yudkowsky, E. (2014). The ethics of artificial intelligence. In The Cambridge Handbook of Artificial Intelligence (pp. 316-334). Cambridge University Press.

    2. Wallach, W., and Allen, C. (2009). Moral Machines: Teaching Robots Right from Wrong. Oxford University Press, 273 pp. ISBN 978-0-19-537404-9.

    3. Dennis, L. A., Fisher, M., and Winfield, A. F. T. (2015). Towards verifiably ethical robot behaviour. CoRR abs/1504.03592.

    4. Lin, P., Abney, K., and Bekey, G. A. (eds) (2012). Robot Ethics: The Ethical and Social Implications of Robotics. Cambridge, MA; London: The MIT Press.

    5. Manzotti, R., and Tagliasco, V. (2008). Artificial consciousness: a discipline between technological and theoretical obstacles. Artif. Intell. Med. 44, 105-117. doi:10.1016/j.artmed.2008.07.002

    6. Winfield, A. F., Blum, C., and Liu, W. (2014). Towards an ethical robot: internal models, consequences and ethical action selection. In Advances in Autonomous Robotics Systems, eds M. Mistry, A. Leonardis, M. Witkowski, and C. Melhuish (Springer), 85-96.

    7. Haidegger, T., Barreto, M., Gonçalves, P., Habib, M. K., Veera Ragavan, S. K., and Li, H. (2013). Applied ontologies and standards for service robots. Rob. Auton. Syst. 61, 1215-1223. doi:10.1016/j.robot.2013.05.008

    8. BSI (2016). Robots and Robotic Devices: Guide to the Ethical Design and Application of Robots and Robotic Systems, BS 8611. BSI Standards Publications.
