Analysis of Requirements Using Ontology: Survey

DOI: 10.17577/IJERTV1IS10494

Roshan Bande, Prof. K. N. Hande

Smt. Bhagwati Chaturvedi College of Engineering, Nagpur, Maharashtra, India

Abstract

We propose a software requirements analysis method based on an ontology technique, in which we establish a mapping between a software requirements specification and an ontology that represents its semantic components. Our ontology system consists of a thesaurus and inference rules; the thesaurus part comprises domain-specific concepts and relationships suitable for semantic processing. It allows requirements engineers to analyze a requirements specification with respect to the semantics of the application domain. More concretely, we demonstrate the following three kinds of semantic processing through a case study: detecting incompleteness and inconsistency in a requirements specification, measuring the quality of a specification with respect to its meaning, and predicting requirements changes based on semantic analysis of a change history.

  1. Introduction

    One of the goals of requirements analysis is to develop a requirements specification document of high quality. There are several methods for achieving this goal, and their supporting tools are coming into practical use, e.g. goal-oriented requirements analysis methods, scenario analysis, use case modelling techniques, and so on. One of the most crucial problems in automating requirements analysis is that requirements documents are usually written in natural language, e.g. English or Japanese. Although techniques for natural language processing (NLP) are advancing, it is still hard to handle such requirements documents sufficiently by computer. However, semantic processing of requirements is indispensable for producing requirements specifications of high quality. Several approaches exist to overcome this problem, but each has its inherent difficulties. Some studies introduced a semi-formal notation for representing requirements, e.g. restricted natural language, but it proved difficult for human engineers to write syntactically and semantically correct requirements using such notations. Rigorous formal notations with axioms and inference systems seem suitable, but their use by practitioners is very limited because of the difficulty and complexity of learning and training.

    We use an ontology system to develop software requirements documents of high quality. Ontology technologies are now applied in many application domains, because concepts, relationships, and their categorizations in the real world can be represented in an ontology. An ontology can serve as a resource of domain knowledge, especially in a specific application domain. By using such an ontology, several kinds of semantic processing can be achieved in requirements analysis without rigorous NLP techniques.

    In this paper, we propose a requirements analysis method using an ontology technique, in which we establish a mapping between a requirements specification and ontological elements. This technique opens the possibility of automating semantic analysis with lightweight processing rather than heavyweight NLP techniques. By mapping requirements descriptions in a requirements document onto ontological elements, which represent fragments of meaning in a problem domain, each description can be semantically interpreted. By applying inference rules to the ontological elements, we can perform semantic processing on the requirements document.
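    As a minimal sketch of this idea, the following Prolog fragment encodes a tiny thesaurus and maps requirement identifiers onto the concepts they mention. All concept and predicate names here are invented for illustration, not taken from the paper's actual ontology:

```prolog
% Thesaurus part of the ontology: domain-specific concepts
% and relationships between them (illustrative names).
concept(home_page).
concept(logo).
concept(copyright_information).
concept(license_document).

relationship(home_page, has, logo).
relationship(copyright_information, require, license_document).

% Mapping between requirements items and ontological elements.
maps_to(req1, home_page).
maps_to(req1, logo).
maps_to(req4, copyright_information).

% A concept is covered when some requirements item maps onto it.
covered(Concept) :-
    concept(Concept),
    maps_to(_, Concept).
```

    Here the query `covered(license_document)` fails, hinting at the kind of incompleteness check elaborated in Section 5.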

  2. Literature survey

    "IEEE Recommended Practice for Software Requirements Specifications" states the content and qualities of a good software requirements specification (SRS) are described and several sample SRS outlines are presented. This recommended practice is aimed at specifying requirements of software to be developed but also can be applied to assist in the selection of in- house and commercial software products. Expert systems are computer applications which embody some non-algorithmic expertise for solving certain types of problems. For example, expert systems are used in diagnostic applications servicing both people and machinery. They also play chess, make financial planning decisions, configure computers, monitor real time systems, underwrite insurance policies, and

    perform many other services which previously required human expertise. Ontology is an explicit specification of a conceptualization. The term is borrowed from philosophy, where Ontology is a systematic account of Existence. For AI systems, what "exists" is that which can be represented. When the knowledge of a domain is represented in a declarative formalism, the set of objects that can be represented is called the universe of discourse. This set of objects, and the describable relationships among them, are reflected in the representational vocabulary with which a knowledge- based program represents knowledge. Thus, in the context of AI, Proposed system can describe the ontology of a program by defining a set of representational terms. In such ontology, definitions associate the names of entities in the universe of discourse (e. g., classes, relations, functions, or other objects) with human-readable text describing what the names mean, and formal axioms that constrain the interpretation and well-formed use of these terms. Formally, ontology is the statement of a logical theory. Prolog can be used as the inference engine,

    Prolog is a general-purpose logic programming language associated with artificial intelligence and computational linguistics. Prolog has a built-in backward chaining inference engine that can be used to partially implement some expert systems. Prolog rules are used for the knowledge representation, and the Prolog inference engine is used to derive conclusions. Other portions of the system, such as the user interface, must be coded using Prolog as a programming language. Besides Prolog, Protégé is used to develop the ontology of the domain; Protégé is a free, open-source ontology editor and knowledge-base framework.
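    To illustrate this built-in backward chaining, here is a tiny, self-contained Prolog knowledge base; the facts and rule are invented for illustration and are not part of the proposed system:

```prolog
% Facts: what we know about an individual.
has_feathers(tweety).
lays_eggs(tweety).

% Rule: the head holds if all sub-goals in the body hold.
bird(X) :-
    has_feathers(X),
    lays_eggs(X).

% ?- bird(tweety).
% Prolog chains backwards from the goal bird(tweety) to the
% sub-goals has_feathers(tweety) and lays_eggs(tweety), both of
% which are facts, so the query succeeds: true.
```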

  3. Related work

    "Requirements Analysis and Prototyping using Scenarios and Statecharts approach" uses precise action semantics, supports changing requirements and enables seamless generation of a fully functional prototype for end user requirements validation. The method is currently being implemented in the STAMP tool

    (State Modelling and Prototyping).

    "Real-time fault diagnosis using knowledge- based expert system" shows that fault diagnosis methodology is comprised of three steps (Fig. 1). The first step is acquiring the real-time process information, from critical equipments, such as boilers, compressors, separators or reactors. Temperature, pressure, level, and flow rate are the most important process variables to be monitored and have the capability of representing the state of operation in a variety of equipments. The fault in these variables can affect the stability and safety of the whole process system. The second step is making

    inferences (diagnosis) based on acquired process information. The last step is making actions according to the inference values, such as informing operators, raising alarms, shutting down equipment, activating higher layer protections and trying to bring the system back to normal condition.

    Fig. 1 – Three steps of the methodology.
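    The three steps could be approximated in Prolog roughly as follows; the equipment names, sensor readings, and thresholds below are invented for illustration:

```prolog
% Step 1: acquired real-time process information (invented values).
reading(boiler1, temperature, 510).
reading(boiler1, pressure, 95).

% Step 2: inference (diagnosis) over the acquired information.
fault(Equip, overheating) :-
    reading(Equip, temperature, T),
    T > 500.

% Step 3: actions derived from the inferred faults.
action(Equip, raise_alarm) :-
    fault(Equip, overheating).

% ?- action(E, A).
% E = boiler1, A = raise_alarm.
```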

    "ONTOLOGY FOR MOBILE PHONE

    OPERATING SYSTEMS" is the ongoing study deals with an important part of a line of research that constitutes a challenging burden. It is an initial investigation into the development of a Holistic Framework for Cellular Communication (HFCC). The main purpose is to establish mechanisms by which existing wireless cellular communication components and models can work holistically together. It demonstrates that establishing a mathematical framework that allows existing cellular communication technologies (and tools supporting those technologies) to seamlessly interact is technically feasible. The longer-term future goals are to actually improve the interoperability, the efficiency of mobile communication, calls quality, and reliability by applying the framework to specific development efforts.

  4. Existing work

    "Advanced and Innovative Models And Tools for the development of Semantic-based systems for Handling, Acquiring, and Processing knowledge Embedded in multidimensional digital objects" by Information society technology pursued innovations towards digital representations of shapes capable of modelling not only the visual appearance of objects but also their meaning or functionality in a given knowledge domain. In this setting, shape knowledge has been concerned with the geometry (the spatial extent of the object), the structure (object features and part-whole decomposition), attributes (colours, textures), semantics (meaning, purpose), and has had interaction with time (morphing, animation). The harmonization of shape modelling approaches in

    Computer Graphics and Computer Vision has been pursued via the definition of shared vocabularies and ontologies, not only for the abovementioned specific domains, but also on a higher level as the basis for the project's eScience platform, the Digital Shape Workbench. As the project's main technological innovation, this workbench served the role of an operational, large-scale, distributed and web-based software system serving as common infrastructure. The scientific innovation sought by this project is focused on modelling the semantics of digital shapes at each stage of their lifecycle.

  5. Proposed work

    Designing and developing the ontology will be the major task, as the ontology will serve as the knowledge base of the proposed system. Prolog, or another existing system, will be used as the inference engine.

    Fig. 2 – Working of the proposed system: requirements flow into the inference engine, which consults the ontology.

    Fig. 2 shows the block structure of the proposed system. Requirements are the input to the inference engine; the inference engine then performs a guided operation that analyses the requirements against the ontology and presents the result. More detailed working is shown in Fig. 3.

    Fig. 3 – Mapping from requirements to ontology.

    Fig. 3 illustrates mappings from requirements items (statements) in a requirements document to elements in the ontology. The requirements document may be described in advance, or it may be described incrementally through interaction between a requirements analyst and stakeholders. The requirements document is analyzed by using this kind of mapping. For example, OBSRA may suspect that a requirements document is incomplete when not all elements in an appropriate ontology are related to items in the document. The mapping between the statements and the ontology has to be done by using a frame of natural language. OBSRA checks whether a requirements document is consistent and complete by using an ontology system: each requirements item (statement) is mapped onto a set of elements (concepts and relationships) in the thesaurus of the ontology system. To detect inconsistency in a requirements document, the proposed system tries to find mutually contradicting elements onto which requirements items are mapped. For example, the proposed system decides the document is inconsistent if there is a "contradict" relationship between two concepts onto which the document is mapped. To detect incompleteness of a requirements document, the proposed system follows specific relationships from concepts onto which the document is already mapped. For example, the proposed system follows a "require" relationship and finds a concept that does not appear in the current document; it then adds new requirements items (statements) corresponding to that concept.
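    A minimal Prolog sketch of these two checks, assuming the thesaurus is available as contradict/2 and require/2 facts (the concrete concepts below are invented):

```prolog
% Thesaurus relationships (illustrative).
contradict(single_login, guest_access).
require(copyright_information, license_document).

% Mapping of requirements items onto concepts.
mapped(r2, single_login).
mapped(r5, guest_access).
mapped(r4, copyright_information).

% Inconsistency: two items are mapped onto contradicting concepts.
inconsistent(I, J) :-
    mapped(I, C1),
    mapped(J, C2),
    contradict(C1, C2).

% Incompleteness: a required concept with no item mapped onto it.
incomplete(Missing) :-
    mapped(_, C),
    require(C, Missing),
    \+ mapped(_, Missing).

% ?- inconsistent(I, J).  gives I = r2, J = r5.
% ?- incomplete(M).       gives M = license_document.
```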

    For the sake of example, we assume a requirements document:

    1. Home page should have Logo

    2. Home page should have name of organization

    3. Home page should contain information about organization

    4. Home page should have copyright information

    5. Home page should have image slider showing the work of organization

    6. Home page should have a quick form to get the user's information.

      Fig. 3 shows that the input to the system is a requirements document similar to the one stated above.

      The requirements document is analyzed by our system to find whether each keyword is present in the ontology. If a keyword is present, the classes linked to it are presented to the user as suggestions, so that the user can decide whether to consider them when writing the document. Because these keywords are checked against the ontology of the organization, all the linked aspects can be covered, which leads to a nearly complete requirements document.

      The expected output of the system is the set of classes related to the keywords present in the requirements document, such as the following; a Prolog sketch of this step follows the list.

      1. Name of organization is related to tagline, phone number, and email address.

      2. Copyright information is related to license document.
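      Under the assumption that these relations are stored as related/2 facts (the predicate name is ours), the suggestion step could look like this in Prolog:

```prolog
% Relations between concepts, mirroring the expected output above.
related(name_of_organization, tagline).
related(name_of_organization, phone_number).
related(name_of_organization, email_address).
related(copyright_information, license_document).

% Keywords found in the requirements document.
keyword(name_of_organization).
keyword(copyright_information).

% Suggest every class linked to a keyword present in the ontology.
suggest(Class) :-
    keyword(Keyword),
    related(Keyword, Class).

% ?- suggest(C).
% C = tagline ; C = phone_number ; C = email_address ;
% C = license_document.
```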

      Ontology development requires a thorough study of the domain in which it is being developed; it also requires keeping account of all relationships. The importance of Protégé becomes clear while developing the ontology. Several features distinguish Protégé from other knowledge-base editing tools. To the best of our knowledge, no other tool has all of the following: an intuitive and easy-to-use graphical user interface; scalability, in that Protégé's database back-end loads frames only on demand and uses caching to free up memory when needed, so there is virtually no deterioration in performance as you go from several hundred frames to several thousand; and an extensible plug-in architecture, so we can easily extend Protégé with plug-ins tailored to our domain and task. Some ideas for plug-ins are: small user-interface components that are particularly well suited to displaying and acquiring values in our domain, which could be used on Protégé forms; custom back-end plug-ins that use our own storage mechanisms; and new applications intricately linked with a knowledge base as a Protégé tab.


      One tentative proposal to achieve the structure in Fig. 2 is to develop an ontology using Protégé and load it into Prolog. The most obvious consequence is that the ontology becomes accessible from Prolog; proper programming will then help us achieve what we have proposed.
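      One way this could be realised, sketched here only under stated assumptions, is with SWI-Prolog's semweb libraries; the file name website.owl is an invented placeholder for an RDF/XML export from Protégé:

```prolog
% Sketch: load a Protégé-exported OWL file into SWI-Prolog.
:- use_module(library(semweb/rdf_db)).   % provides rdf_load/1
:- use_module(library(semweb/rdf11)).    % rdf/3 with prefixed names

:- rdf_load('website.owl').              % assumed export file name

% Enumerate every OWL class defined in the loaded ontology.
owl_class(Class) :-
    rdf(Class, rdf:type, owl:'Class').

% ?- owl_class(C).
% lists the classes created in Protégé, ready for further rules.
```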

  6. Conclusion

    In this paper, we propose a requirements analysis method using an ontology. Even though the method does not rely on rigorous natural language processing (NLP) techniques, it enables us to detect incompleteness and inconsistency in a requirements document, to measure the quality of the document, and to predict requirements changes in future versions of the document. After defining the process for using our ontology approach, we will design and implement its supporting tools. There are many studies using NLP for requirements engineering. For example, inconsistencies in natural language requirements are discovered, conceptual models are semi-automatically generated by linguistic analysis, or formal methods and lightweight natural language processing are used together.

    However, it is unclear how such studies handle domain knowledge and the quality of the requirements document itself. Studies that handle ambiguity in use case descriptions written in natural language exist, but they too handle domain knowledge unclearly. How to develop an ontology remains to be studied. Most methods for building ontologies are ambiguous, so the quality and efficiency of ontology building depend on the skills of each engineer. Therefore, we have to explore a systematic procedure for building ontologies. Normally, we focus on the frequency of occurrences of words or phrases in the documents when we build an ontology. In contrast to source code, there is no unified, formal language for requirements documents, so it is hard to analyse them during requirements analysis. In our study, the ontology plays a role in relating different versions of documents and their change histories with each other, so that we can predict changes in requirements documents. In our study, quality characteristics are also represented as concepts in the ontology. However, such characteristics can be represented in a goal model, and one study combines a goal model with an ontology. We also have our own goal-oriented requirements model, so we will explore the possibility of combining a goal model and an ontology. With respect to extending a model for semantic processing, we have to take implementation issues into account. By adding knowledge about implementation to the ontology, tasks in the design and implementation phases could also be supported by the ontology.

  7. References

  1. K. K. Breitman and J. C. S. do Prado Leite. Ontology as a Requirements Engineering Product. In 11th IEEE International Requirements Engineering Conference (RE'03), pages 309-319, Sep. 2003.

  2. D. Zowghi, V. Gervasi, and A. McRae. Using Default Reasoning to Discover Inconsistencies in Natural Language Requirements. In Eighth Asia-Pacific Software Engineering Conference (APSEC'01), pages 113-120, Dec. 2001.

  3. N. Guarino and C. Welty. Evaluating Ontological Decisions with Ontoclean. Communications of the ACM, 45(2):61-65, Feb 2002.

  4. M. Khedr and A. Karmouch. Negotiation Context Information in Context-Aware Systems. IEEE Intelligent Systems, 19(6):21-29, Nov/Dec 2004.

  5. O. Khriyenko and V. Terziyan. Context Description Framework for the Semantic Web. In Context Representation and Reasoning Workshop, Jul 2005.

  6. Jason I. Hong and James A. Landay, "An Infrastructure Approach to Context-Aware Computing", Human- Computer Interaction, Vol. 16, 2001.

  7. Harry Chen and Tim Finin, "An Ontology for a Context Aware Pervasive Computing Environment", IJCAI Workshop on Ontologies and Distributed Systems, Acapulco, MX, August 2003.

  8. Anand Ranganathan and Roy H. Campbell, "A Middleware for Context-Aware Agents in Ubiquitous Computing Environments", In Proceedings of the ACM/IFIP/USENIX International Middleware Conference, Rio de Janeiro, Brazil, June 2003.

  9. M. Smith, C. Welty, and D. McGuinness, Web Ontology Language (OWL) Guide, August 2003.

  10. Andy Harter, Andy Hopper, Pete Steggles, Andy Ward, and Paul Webster, "The Anatomy of a Context-Aware Application", Wireless Networks, 8(2-3):187-197, 2002.

  11. H. Wu, M. Siegel, and S. Ablay, "Sensor Fusion for Context Understanding", In Proceedings of the IEEE Instrumentation and Measurement Technology Conference, Anchorage, USA, May 2002.

  12. K. Henricksen, J. Indulska, and A. Rakotonirainy, "Modeling Context Information in Pervasive Computing Systems", In Proceedings of Pervasive Computing, Zurich, August 2002.

  13. Karen Henricksen, Jadwiga Indulska, and Andry Rakotonirainy, "Generating Context Management Infrastructure from High-level Context Models", In Proceedings of the 4th International Conference on Mobile Data Management, Melbourne, January 2003.

  14. A. Held, S. Buchholz, and A. Schill, "Modeling of Context Information for Pervasive Computing Applications", In Proceedings of the 6th World Multiconference on Systemics, Cybernetics and Informatics (SCI), Orlando, FL, July 2002.

  15. Ian Horrocks, "DAML+OIL: a Reason-able Web Ontology Language", In Proceedings of the 8th International Conference on Extending Database Technology (EDBT), Prague, March 2002.

  16. Dan Brickley and R. V. Guha, RDF Vocabulary Description Language 1.0: RDF Schema, World Wide Web Consortium, January 2003.

  17. Jena 2 – A Semantic Web Framework, http://www.hpl.hp.com/semweb/jena2.htm

  18. Tao Gu, H. C. Qian, J. K. Yao, and H. K. Pung, "An Architecture for Flexible Service Discovery in OCTOPUS", In Proceedings of the 12th International Conference on Computer Communications and Networks (ICCCN), Dallas, Texas, October 2003.

  19. K. Henricksen, J. Indulska, and A. Rakotonirainy, "Infrastructure for Pervasive Computing: Challenges", Workshop on Pervasive Computing, INFORMATIK 01, Vienna, September 2001.

  20. A. Dey and G. Abowd, "Towards a Better Understanding of Context and Context-Awareness", Workshop on the What, Who, Where, When and How of Context-Awareness at CHI 2000, April 2000.

  21. A. K. Dey, D. Salber, and G. D. Abowd, "A Conceptual Framework and a Toolkit for Supporting the Rapid Prototyping of Context-Aware Applications", Human-Computer Interaction (HCI) Journal, Vol. 16(2-4), pp. 97-166, 2001.

  22. T. Kindberg and J. Barton, "A Web-based Nomadic Computing System", Computer Networks, 35(4):443-456, 2001.

  23. Information Society Technology, Advanced and Innovative Models And Tools for the development of Semantic-based systems for Handling, Acquiring, and Processing knowledge Embedded in multidimensional digital objects, http://cordis.europa.eu/ist/kct/aimatshape_synopsis.htm
