Intelligent Agents and Autonomous Cars: A Case Study

DOI : 10.17577/IJERTV2IS1250


Nilotpal Chakraborty(1), Raghvendra Singh Patel(2)

  1. School of Future Studies & Planning, Devi Ahilya University, Indore, India

  2. School of Future Studies & Planning, Devi Ahilya University, Indore, India

Abstract

Intelligent agent technology is one of the foundational techniques of Artificial Intelligence. The concept of agents has become important both in Artificial Intelligence and in mainstream Computer Science. Artificial Intelligence is defined as the branch of Computer Science that deals with developing intelligent agents; consequently, an intelligent agent is defined as an automated agent that can replicate the functioning of human beings. It perceives its environment, analyses it, and takes actions that maximize the probability of its success. Intelligent agents continuously perform three functions: perception of dynamic conditions in the environment; action to affect conditions in the environment; and reasoning to interpret perceptions, solve problems, draw inferences, and determine actions. In this paper, we begin with the basic terminology related to intelligent agents and then proceed to the various environments that an agent may have to perceive. We discuss multi-agent systems and finally examine one of the most remarkable applications of intelligent agents, the autonomous car, illustrated by Google's driverless car technology.

Keywords: Agent, intelligent agent, agent environment, multi-agent system, autonomous car, driverless car.

  1. Introduction

    The Encarta World English Dictionary says that the word "agent" comes from the Latin word agere, which also gave us the words act, active, agile, and agitate. This etymology gives a useful first idea of what an agent is. In general, we can view an agent as an entity that performs some actions on behalf of others on request. In real life, we come across a number of human agents, such as travel agents, business agents, and police agents. But here our concern is the study and development of computational automated agents that will replicate the functioning of human agents. More generally, in their book "Artificial Intelligence: A Modern Approach", S. Russell and P. Norvig highlighted the importance of the environment, defining an agent as something that perceives its environment through sensors and acts upon it through effectors. A definition close to present-day reality is that of Ted Selker from the IBM Almaden Research Center: "An agent is a software thing that knows how to do things that you could probably do yourself if you had the time."

    Agents come in many different flavors. Depending on their intended use, agents are referred to by an enormous variety of names, e.g., knowbot, softbot, taskbot, userbot, robot, personal (digital) assistant, transport agent, mobile agent, cyber agent, search agent, report agent, presentation agent, navigation agent, role agent, management agent, search and retrieval agent, domain-specific agent, packaging agent. The word agent is an umbrella term that covers a wide range of specific agent types. Most popular names used for different agents are highly non-descriptive. It is therefore preferable to describe and classify agents according to the specific properties they exhibit.

    Figure-1 is a pictorial representation of the functioning of an agent.
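    In program terms, this functioning is a loop: sense, decide, act. The sketch below is a minimal Python rendering of that loop; the Environment interface (current_state, apply) and the method names are our own illustrative assumptions, not part of any standard agent library.

      class Agent:
          """Minimal agent skeleton: perceive, decide, act."""

          def perceive(self, environment):
              # Sensors map the environment state to a percept.
              return environment.current_state()  # hypothetical interface

          def decide(self, percept):
              # Reasoning maps a percept to an action; subclasses override this.
              raise NotImplementedError

          def act(self, environment, action):
              # Effectors apply the chosen action back to the environment.
              environment.apply(action)  # hypothetical interface

          def run(self, environment, steps=10):
              for _ in range(steps):
                  self.act(environment, self.decide(self.perceive(environment)))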

  2. Characteristics of agents

    The idea of intelligent software agents has captured the popular imagination. Let's address the question of what makes an agent intelligent by explaining the characteristics of intelligent agents.

    1. Primary characteristics of agents

      The most important attributes of an agent are referred to as primary attributes; the less important, secondary attributes are described in the next subsection. The primary attributes include the following:

      Autonomy: reflects the ability of agents to operate on their own, without immediate human guidance, although the latter is sometimes invaluable.

      Co-operation: refers to the ability to exchange high-level information with other agents, an attribute which is inherent in multi-agent systems (MAS).

      Learning: refers to the ability of agents to increase performance over time when interacting with the environment in which they are embedded.

      Mobility: refers to the ability of an agent to move to and from various places, for example between hosts in a network.

    2. Secondary characteristics of agents

      Agents can be classified according to a number of other attributes, which could be regarded as being secondary to the ones described above. Rather than a comprehensive list, some examples of secondary attributes that agents may exhibit will be given. Agents may be classified, for example, by their pro-active versatility: the degree to which they pursue a single goal or engage in a variety of tasks. Furthermore, one might attribute social abilities to agents, such as truthfulness, benevolence and emotions, although the last is certainly controversial. One may also consider mental attitudes of agents, such as beliefs, desires, and intentions.

      By combining the primary and secondary properties and characteristics, hybrid agents and heterogeneous agents can be constructed. With hybrid agents, two or more properties and/or attributes are combined in the design of a single agent. This results in the combination of the strengths of different agent-design philosophies in a single agent, while at the same time avoiding their individual weaknesses. It is not possible to separate such an agent into two other agents. Heterogeneous agents combine two or more different categories of agents in such a way that they interact via a particular communication language.

  3. History of intelligent agents

    The notion of intelligent agents has been around for the past 50 years; the concept was first introduced by McCarthy (1956, 1958), and the term was later coined by the prominent MIT Lincoln Laboratory computer scientist Oliver Selfridge. In the 1950s, John McCarthy conceived the Advice Taker (McCarthy 1958), a software robot living and working in a computer network of information utilities (much like today's Internet). When given a task by a human user, the software robot could take the necessary steps or ask advice from the user when it got stuck. The futuristic prototypes of intelligent personal agents, such as Apple Computer's Phil or Microsoft's Bob, perform complicated tasks for their users following the same functions laid out by McCarthy in his Advice Taker.

    Although modern approaches to software agency can trace their roots to these earlier visions, current research started in the mid-1980s and has been influenced by work done in a number of fields including artificial intelligence (e.g., reasoning theory and artificial life), software engineering (e.g., object-oriented programming and distributed processing), and human-computer interaction (e.g., user modeling and cognitive engineering).

  4. Agent environments

    The critical decision an agent faces is determining which action to perform to best satisfy its design objectives. Agent environments are classified based on different properties that can affect the complexity of the agent's decision-making process. They include:

    Accessible vs. inaccessible

    An accessible environment is one in which the agent can obtain complete, timely and accurate information about the state of the environment. The more accessible an environment, the less complicated it is to build agents to operate within it. Most moderately complex environments are inaccessible.

    Deterministic vs. non-deterministic

    Most reasonably complex systems are non-deterministic: the state that will result from an action is not guaranteed, even when the system is in a similar state before the action is applied. This uncertainty presents a greater challenge to the agent designer.

    Episodic vs. non-episodic

    In an episodic environment, the agent's experience is divided into a number of discrete episodes, with no link between the agent's performance in different episodes. Such an environment is simpler to design for, since there is no need to reason about interactions between the current and future episodes; only the current episode needs to be considered.

    Static vs. dynamic

    Static environments remain unchanged except for the results produced by the actions of the agent. A dynamic environment has other processes operating on it thereby changing the environment outside the control of the agent. A dynamic environment obviously requires a more complex agent design.

    Discrete vs. continuous

    If there are a fixed and finite number of actions and percepts, then the environment is discrete. A chess game is a discrete environment while driving a taxi is an example of a continuous one.

    From the above, it is clear that the combination of inaccessible, non-deterministic, non-episodic, dynamic and continuous properties is the toughest to perceive, and an agent capable of operating in such an environment will require the highest level of intelligence.
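    These five dichotomies can be recorded as simple boolean properties. The sketch below is an illustrative Python encoding (the class name and fields are our own, not a standard), using chess and taxi driving as the two contrasting examples from the text.

      from dataclasses import dataclass

      @dataclass
      class EnvironmentProfile:
          """The five environment dichotomies as boolean flags."""
          accessible: bool
          deterministic: bool
          episodic: bool
          static: bool
          discrete: bool

          def difficulty(self):
              # Each "hard" property (a False flag) adds design complexity.
              return sum(not flag for flag in (self.accessible, self.deterministic,
                                               self.episodic, self.static, self.discrete))

      # Chess: accessible, deterministic, non-episodic, static, discrete.
      chess = EnvironmentProfile(True, True, False, True, True)

      # Taxi driving: inaccessible, non-deterministic, non-episodic, dynamic,
      # continuous -- the toughest combination identified above.
      taxi = EnvironmentProfile(False, False, False, False, False)

      print(chess.difficulty(), taxi.difficulty())  # 1 5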

  5. Types of agents

    Based on the way an agent handles a request or takes an action upon perceiving its environment, intelligent agents can be classified into four categories:

    1. Simple reflex agents

    2. Agents keeping track of the World

    3. Goal based agents

    4. Utility based agents

    We shall discuss each of them briefly.

    1. Simple reflex agents

      A simple reflex agent is an agent that performs actions based on certain conditions being fulfilled. It monitors its environment and performs the same action every time the same condition occurs. A simple reflex agent can be implemented with simple conditional clauses. For example, for a car, we can implement the following condition:

      if car-in-front-is-braking then initiate-braking

      The above statement says that if the car running in front brakes and its brake lights come on, then the car behind should initiate braking to avoid a collision.

      The functioning of simple reflex agents has been depicted in figure-2.
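      As a hedged sketch, the rule above can be written as one entry in a table of condition-action pairs; the percept keys and action names below are illustrative, not from the paper.

        def car_in_front_is_braking(percept):
            # Condition: the brake lights of the car ahead are on.
            return percept.get("front_car_brake_lights") is True

        # Condition-action rules: the first matching condition wins.
        RULES = [(car_in_front_is_braking, "initiate-braking")]

        def simple_reflex_agent(percept):
            """Return the action whose condition matches the current percept."""
            for condition, action in RULES:
                if condition(percept):
                    return action
            return "no-op"

        print(simple_reflex_agent({"front_car_brake_lights": True}))   # initiate-braking
        print(simple_reflex_agent({"front_car_brake_lights": False}))  # no-op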

    2. Agents keeping track of the World

      The simple reflex agent described before will work only if the correct decision can be made on the basis of the current percept. But the external environment often changes without notice (in the case of dynamic environments), and then simple reflex agents fail to act rationally. For example, if the brake lights come on, it does not necessarily mean that the car in front is stopping; they may be on because the driver wants to change direction, which is a very common case in real life. In such cases, the agent may need to maintain some internal state information in order to distinguish between world states that generate the same perceptual input but nonetheless are significantly different.

      Updating this internal state information as time goes by requires two kinds of knowledge to be encoded in the agent program. First, we need some information about how the world evolves independently of the agent; for example, that an overtaking car generally will be closer behind than it was a moment ago. Second, we need some information about how the agent's own actions affect the world; for example, that when the agent changes lanes to the right, there is a gap (at least temporarily) in the lane it was in before, or that after driving for five minutes northbound on the freeway one is usually about five miles north of where one was five minutes ago.

      Figure-3 shows the functioning of Agents that keep track of the World.
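      A minimal sketch of such an agent, again with illustrative percept keys and state fields of our own choosing, might keep the turn-signal observation in its internal state so that identical brake-light percepts can be treated differently:

        class WorldTrackingAgent:
            """Keeps internal state to disambiguate identical percepts."""

            def __init__(self):
                self.state = {"front_car_turn_signal": False, "last_action": None}

            def update_state(self, percept, last_action):
                # Fold how the world evolves, and the effect of the agent's
                # own last action, into the internal model.
                self.state["front_car_turn_signal"] = percept.get("turn_signal", False)
                self.state["last_action"] = last_action

            def decide(self, percept):
                if percept.get("brake_lights"):
                    # Same percept, different meaning: brake lights plus a turn
                    # signal suggest a lane change, not an emergency stop.
                    if self.state["front_car_turn_signal"]:
                        return "slow-down"
                    return "initiate-braking"
                return "no-op"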

    3. Goal based agents

      Knowing about the current state of the environment is not always enough to decide what to do. For example, at a road junction, the taxi can turn left, right, or go straight on. The right decision depends on where the taxi is trying to get to. In other words, as well as a current state description, the agent needs some sort of goal information, so that it can act accordingly to fulfill that goal.

      Figure-4 represents the functioning of a Goal based agent.
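      The junction example can be sketched as choosing the action whose predicted next state matches the goal. The road map below is a toy structure of our own invention; a real agent would search or plan over many steps.

        def goal_based_agent(current, goal, road_map):
            """Pick the action at `current` whose successor state is the goal."""
            for action, next_state in road_map[current].items():
                if next_state == goal:
                    return action
            return "continue"  # no single step reaches the goal; plan further

        road_map = {"junction": {"turn-left": "airport",
                                 "turn-right": "station",
                                 "straight": "downtown"}}
        print(goal_based_agent("junction", "station", road_map))  # turn-right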

    4. Utility based agents

      Goals alone are not really enough to generate high-quality behavior. For example, there are many action sequences that will get the taxi to its destination, thereby achieving the goal, but some are quicker, safer, more reliable, or cheaper than others. Goals just provide a crude distinction between "happy" and "unhappy" states, whereas a more general performance measure should allow a comparison of different world states (or sequences of states) according to exactly how happy they would make the agent if they could be achieved. Because "happy" does not sound very scientific, the customary terminology is to say that if one world state is preferred to another, then it has higher utility for the agent.

      Utility is therefore a function that maps a state onto a real number, which describes the associated degree of happiness. A complete specification of the utility function allows rational decisions in two kinds of cases where goals have trouble. First, when there are conflicting goals, only some of which can be achieved (for example, speed and safety), the utility function specifies the appropriate trade-off. Second, when there are several goals that the agent can aim for, none of which can be achieved with certainty, utility provides a way in which the likelihood of success can be weighed up against the importance of the goals.

      A pictorial representation of Utility based agents is shown in figure-5.
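      The speed-versus-safety trade-off can be sketched as a weighted utility function; the weights and route attributes below are invented purely for illustration.

        def utility(state, w_speed=0.4, w_safety=0.6):
            """Map a state onto a real number; the weights encode the
            trade-off between the conflicting goals of speed and safety."""
            return w_speed * state["speed"] + w_safety * state["safety"]

        def utility_based_agent(candidate_states):
            # Choose the reachable state with the highest utility.
            return max(candidate_states, key=utility)

        routes = [{"name": "highway",    "speed": 0.9, "safety": 0.6},
                  {"name": "back-roads", "speed": 0.5, "safety": 0.9}]
        print(utility_based_agent(routes)["name"])  # back-roads (0.74 vs 0.72)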

  6. Intelligence and agents

    By varying the extent of the learning attribute, an agent's intelligence can range from less intelligent to more intelligent. By varying the extent of the attributes autonomy and co-operation, an agent's agency can vary from no inter-activity with the environment to total inter-activity with the environment.

    In this case, intelligence relates to the way an agent interprets the information or knowledge to which it has access or which is presented to it. The most limited form of intelligence is restricted to the specification of preferences. Preferences are statements of desired behavior that describe a style or policy the agent needs to follow. The next higher form of intelligence is described as reasoning capability. With reasoning, preferences are combined with external events and external data in a decision-making process. The highest form of intelligence is called learning. Learning can be described as the modification of behavior as a result of experience.
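    These three levels, preferences, reasoning, and learning, can be sketched in a few lines of Python. The braking-distance example and all names below are our own illustration, not from the paper.

      class LearningDriver:
          """Preference, reasoning, and learning in one tiny agent."""

          def __init__(self):
              self.brake_gap = 20.0  # preference: desired following gap (metres)

          def decide(self, gap, road_is_wet):
              # Reasoning: combine the preference with external events and data.
              threshold = self.brake_gap * (1.5 if road_is_wet else 1.0)
              return "brake" if gap < threshold else "cruise"

          def learn(self, had_near_miss):
              # Learning: modify behaviour as a result of experience.
              if had_near_miss:
                  self.brake_gap *= 1.1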

  7. Examples of intelligent agents

    Agents can be classified according to the properties they exhibit. This section provides some examples of actual implementations of software agents:

    Collaborative agents: Collaborative agents interconnect existing legacy software, such as expert systems and decision support systems, to produce synergy and provide distributed solutions to problems that have an inherent distributed structure.

    Interface agents: Interface agents provide for personalized user interfaces, for sharing information learned from peer observation, and for alleviating the tasks of application developers. Interface agents adapt to user preferences by imitating the user and by following the user's immediate instructions. One has to realize that interface agents can only be effective if the tasks they perform are inherently repetitive (otherwise, agents will not be able to learn) and if the behavior is potentially different for different users.

    Mobile agents: Mobile agents reduce communication costs and overcome limitations of local resources. Decentralization of the selection process prevents unwanted information from being sent over networks, thus economizing on network utilization. As an example, imagine one has to download many images from a remote location just to pick out one. Mobile agents could go to that location and transfer only the selected, compressed image across the network.

    Information agents: Information agents circumvent the problem of "drowning in data, but starving for information."

    Reactive agents: Reactive agents have as primary advantages that they are robust and fault-tolerant; yet, in spite of their simple stimulus-response communication behavior, they allow for complex communication behaviors when combined. Examples include sensors and robotics.

    Role model agents: These are agents that are classified according to the role they play, e.g. World Wide Web (WWW) information-gathering agents.

    Hybrid agents: Hybrid agents combine the strengths of different agent-design philosophies into a single agent, while at the same time avoiding their individual weaknesses. Most examples involve hybrid agents that combine deliberative agents with reactive agents. The reactive agent is used for tasks that are behavior-based and that involve relatively low-level messaging; the deliberative agent is used for tasks that involve local planning or coordinating planning activities with other agents or the user.

    Heterogeneous agents: Heterogeneous agents combine two or more different categories of agents in a single application, which can interact via a particular communication language. These agents provide for interoperability of existing software products in order to produce synergetic effects.

    It is to be noted that the list of examples given above is not exhaustive. Technology improves day by day, and newer types of agents with more power and intelligence are constantly being developed.

  8. Multi-agent system (MAS)

    As the field of AI matured, it broadened its goals to the development and implementation of multi-agent systems (MASs) as it endeavored to attack more complex, realistic and large-scale problems which are beyond the capabilities of an individual agent. The capacity of an intelligent agent is limited by its knowledge, its computing resources, and its perspective. By forming communities of agents, or agencies, a solution based on a modular design can be implemented where each member of the agency specializes in solving a particular aspect of the problem. Thus, the agents must be able to interoperate and coordinate with each other in peer-to-peer interactions. The characteristics of MASs are defined as follows:

    Each agent has incomplete information or capabilities for solving the problem and, thus, has a limited viewpoint.

    There is no global control system.

    Data are decentralized.

    Computation is asynchronous.

    Agency relates to the way an agent can perceive its environment and act on it. Agency begins with asynchrony, where the agent can be given a task which it performs asynchronously with respect to the user's requests. The next phase of agency is user representation, where an agent has a model of the user's goals or agenda. In subsequent phases, the agent is able to perceive, access, act on, communicate and interact with data, applications, services and other agents. These phases are called data inter-activity, application inter-activity, service inter-activity, and agent inter-activity.

    Figure-6 shows the interaction among multiple agents.
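    A toy sketch of decentralized, asynchronous problem solving follows: each agent below sees only its own share of the data, there is no global controller, and the partial results are combined at the end. The problem (summing lists of numbers) is a deliberately trivial stand-in for real problem solving.

      from concurrent.futures import ThreadPoolExecutor

      def specialist(name, subproblem):
          """Each agent has a limited viewpoint: only its own subproblem."""
          return name, sum(subproblem)  # stand-in for real problem solving

      # Data are decentralized: each agent holds only its own share.
      subproblems = {"agent-A": [1, 2, 3], "agent-B": [4, 5], "agent-C": [6]}

      # Computation is asynchronous and there is no global control system.
      with ThreadPoolExecutor() as pool:
          futures = [pool.submit(specialist, n, p) for n, p in subproblems.items()]
          partials = dict(f.result() for f in futures)

      print(partials, "combined:", sum(partials.values()))  # combined: 21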

  9. Autonomous car

    An autonomous car, also known as a robotic car or, informally, as a driverless or self-driving car, is an autonomous vehicle capable of fulfilling the human transportation capabilities of a traditional car. As an autonomous vehicle, it is capable of sensing its environment and navigating on its own. A human may choose a destination, but is not required to perform any mechanical operation of the vehicle.

    Autonomous vehicles sense the world with such techniques as RADAR, LIDAR, GPS and computer vision. Advanced control systems interpret the information to identify appropriate navigation paths, as well as obstacles and relevant signage. Autonomous vehicles typically update their maps based on sensory input, such that they can navigate through uncharted environments.
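    As a purely illustrative sketch, and not any real vehicle's pipeline, combining sensor modalities might look like the following: obstacles confirmed by two independent sensors are kept, and recognized signage is attached to the fused world model.

      def fuse_percepts(lidar_obstacles, radar_obstacles, camera_signs):
          """Toy sensor fusion: keep obstacles two modalities agree on."""
          confirmed = set(lidar_obstacles) & set(radar_obstacles)
          return {"obstacles": sorted(confirmed), "signs": list(camera_signs)}

      world = fuse_percepts(
          lidar_obstacles={"pedestrian", "parked-car"},
          radar_obstacles={"pedestrian", "cyclist"},
          camera_signs=["stop-sign"],
      )
      print(world)  # {'obstacles': ['pedestrian'], 'signs': ['stop-sign']}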

    Several development programs have been pursued around the world. In June 2011, the state of Nevada became the first jurisdiction in the United States to pass a law concerning the operation of autonomous cars. The Nevada law went into effect on March 1, 2012, and the Nevada Department of Motor Vehicles issued the first license for a self-driven car in May 2012. The license was issued to a Toyota Prius modified with Google's experimental driverless technology. As of September 2012, three U.S. states had passed laws permitting driverless cars: Nevada, Florida and California.

  10. Google's driverless car technology

    The Google Driverless Car is a project by Google that involves developing technology for driverless cars. The project is currently being led by Google engineer Sebastian Thrun, director of the Stanford Artificial Intelligence Laboratory and co-inventor of Google Street View. Thrun's team at Stanford created the robotic vehicle Stanley, which won the 2005 DARPA Grand Challenge and its US$2 million prize from the United States Department of Defense. The team developing the system consisted of 15 engineers working for Google, including Chris Urmson, Mike Montemerlo, and Anthony Levandowski, who had worked on the DARPA Grand and Urban Challenges. The system combines information gathered from Google Street View with artificial intelligence software that combines input from video cameras inside the car, a LIDAR sensor on top of the vehicle, radar sensors on the front of the vehicle, and a position sensor attached to one of the rear wheels that helps locate the car's position on the map.

    In 2009, Google obtained 3,500 miles of Street View images from driverless cars with minor human intervention. As of 2010, Google had tested several vehicles equipped with the system, driving 1,609 kilometres (1,000 mi) without any human intervention, in addition to 225,308 kilometres (140,000 mi) with occasional human intervention. Google expects that the increased accuracy of its automated driving system could help reduce the number of traffic-related injuries and deaths, while using energy and space on roadways more efficiently.

  11. Figures

Figure 1: Functioning of an agent

Figure 2: Simple reflex agents

Figure 3: Agents keeping track of the World

Figure 4: Goal based agents

Figure 5: Utility based agents

Figure 6: Multi-agent system

Figure 7: Google's driverless car technology

12. Conclusion

The dream of creating artificial devices that reach or outperform human intelligence is many centuries old. The development of intelligent agents is making that dream come true both for researchers and for industry. A fundamental feature of agent systems is the ability to make decisions, and to manage the consequences of those decisions, in complex dynamic environments. Agent technology is greatly hyped as a panacea for the current ills of system design and development, but the developer is cautioned to be aware of the pitfalls inherent in any new and untested technology. The potential is there, but the full benefit is yet to be realized. Agent technology will achieve its true potential only if users understand its business value. Much work is yet to be done.

Acknowledgement

It is our heartiest pleasure to present our sincere thanks and gratitude to Dr. Priyesh Kanungo, Reader and Senior System Engineer, Devi Ahilya University, Indore, India, for his constant support and encouragement during the course on Artificial Intelligence and Neural Networks. We are also very thankful to Dr. V. B. Gupta, Head, School of Future Studies and Planning, Devi Ahilya University, Indore, India, for his constant support and help. Finally, we would like to thank our friends and family members for their valuable presence in our lives.

References

  1. P. Agre and D. Chapman. PENGI: An implementation of a theory of activity. In Proceedings of the Sixth National Conference on Artificial Intelligence (AAAI-87), Seattle, WA, 1987.

  2. R. A. Brooks. Intelligence without reason. In Proceedings of the Twelfth International Joint Conference on Artificial Intelligence (IJCAI-91), Sydney, Australia, 1991.

  3. L. P. Kaelbling. An architecture for intelligent reactive systems. In M. P. Georgeff and A. L. Lansky, editors, Reasoning About Actions & Plans: Proceedings of the 1986 Workshop, Morgan Kaufmann Publishers: San Mateo, CA, 1986.

  4. J. McCarthy, P. J. Hayes; Some philosophical problems from the standpoint of artificial intelligence, in B. Meltzer and D. Michie, editors, Machine Intelligence, Edinburgh University Press, 1969.

  5. M. Wooldridge and N. R. Jennings; Intelligent agents: Theory and practice. The Knowledge Engineering Review, 10(2):115-152, 1995.

  6. M. Wooldridge; Agent-based software engineering. IEE Proceedings on Software Engineering, 144(1):26-37, February 1997.

  7. Janca P; Intelligent Agents: Technology and Application, GiGa Information Group, 1996.

  8. Etzioni O, Weld DS; Intelligent agents on the Internet: fact, fiction, and forecast. IEEE Expert 10, 1995.

  9. Desouza KC; Intelligent agents for competitive intelligence: survey of applications. Competitive Intelligence Review 12:57-63, 2001.

  10. Feldman, S., and E. Yu. Intelligent Agents: A Primer. Infotoday.com, October 1999.

  11. Gilbert, D. Intelligent Agents: The Right Information at the Right Time. IBM white paper, May 1997.

  12. Stuart Russell, Peter Norvig; Artificial Intelligence: A Modern Approach, Prentice-Hall, Inc, 1995.

  13. Beer, M., et al. Negotiation in Multi-agent Systems. Knowledge Engineering Review, Vol. 14, no. 3 (1999): 285-289.

  14. Decker, K., et al. Continuing Research in Multi-agent Systems. Knowledge Engineering Review, Vol. 14, no. 3, September 1999, Pages: 279-283.

  15. Maes, P., Artificial Intelligence Meets Entertainment: Lifelike Autonomous Agents. Communications of the ACM, Vol. 38, no. 11, November 1995, Pages: 108-114.

  16. O'Toole, Randal (2009). Gridlock: why we're stuck in traffic and what to do about it. Cato Institute. ISBN 978-1-935308-23-2.

  17. Sebastian Thrun (2010-10-09). "What we're driving at". The Official Google Blog. Retrieved 2010-10-11.

  18. Muller, Joann. "With Driverless Cars, Once Again It Is California Leading the Way", Forbes.com, September 26, 2012.

  19. John Markoff (Oct 9, 2010). "Google Cars Drive Themselves, in Traffic". The New York Times. Retrieved August 12, 2012.

  20. IBM Intelligent Agents http://www.networking.ibm.com/iag/iaghome.html

  21. IBM Technology and Research http://www.ibm.com/technology

Authors' Profiles

Nilotpal Chakraborty is pursuing a Master of Technology in Systems Management at Devi Ahilya University, Indore, India. He earned his Bachelor's degree in Information Technology from Assam University, Silchar, India. He is a prolific researcher, and his areas of interest include Computer Networks and Security, Cloud Computing, Intelligent Agent Technology, Design and Analysis of Algorithms, Databases and Web Development.

Raghvendra Singh Patel is the co-founder of Glimpse Solution. He graduated in Computer Science and Engineering from Oriental Institute of Science and Technology, Indore, India. He is currently pursuing a Master of Technology in Future Studies and Planning at Devi Ahilya University, Indore, India. His fields of interest include Operations Management, Knowledge Management, Artificial Intelligence, Databases and Web Programming.
