Component Based VR Content Generation In A Multi-Server Environment

DOI: 10.17577/IJERTV1IS10546

Dr. E. Kirubakaran, AGM, SSTP, BHEL, Tiruchirappalli, India

D. Ravindran, Associate Professor of Computer Science, St. Joseph's College, Tiruchirappalli, India

Dr. D. I. George Amalarathinam, Director, MCA, Jamal Mohamed College, Tiruchirappalli, India

Abstract

Virtual Reality (VR) applications bring real-world scenes into the virtual world by describing the scenes in three-dimensional space. The user's immersion depends on the quality of the virtual scenes produced. The framework proposed in this paper distributes the VR content generation logic across remote machines situated in geographically different locations. In this scenario, many web servers take part, in unison, in producing the VR content that answers the user's request. The response also depends on the user's preferences, which adds dynamism to the VR content. The preferences are persisted for future use.

  1. Introduction

    Internet-based content generation is achieved with many techniques and languages. Hypertext Markup Language (HTML) plays the prime role in the generation of static content. Dynamism is incorporated within the page with the help of scripting languages and Java-based applets. Server-side presentation tools are also available for dynamic content generation, such as servlets and pages completed at request time like JSP and ASP.

    All these technologies produce web pages that are two-dimensional in nature. Three-dimensional content may be generated with the help of the Virtual Reality Modeling Language (VRML). VRML helps in creating a scene instead of a simple page. The scene can be explored with devices like the mouse and keyboard, without involving costly equipment. VRML is based on body-centered, or user-centered, interaction, which increases the immersion of users in the virtual world. Users can view the scenes differently on different visits, as they can navigate freely to different locations within the world. As browsers do not provide a built-in facility to view VR scenes, plug-ins are installed with the browsers; they supply the additional buttons and features needed to view the scenes.

    The VR content is generated from a single server, and it is as static in nature as an HTML page; because of its body-centeredness, however, it looks like a dynamic scene. VR content is a collection of objects arranged in a hierarchical manner. The objects are placed at the center of the scene, and one needs to relocate them to different locations within it in order to create a perfect scene. Instead of creating such scenes from a single server, this could be achieved by involving multiple servers, such that each one contributes to the scene by providing different objects with different features.
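    The relocation itself amounts to wrapping an object in a VRML Transform node. The following is a minimal sketch, assuming a hypothetical helper class, of how a server could emit such a wrapper around any shape fragment it contributes:

      // Hypothetical helper: wraps a VRML shape fragment in a Transform
      // node so the object moves from the scene origin to (x, y, z).
      public class VRMLTransform {
          public static String relocate(String shapeFragment,
                                        double x, double y, double z) {
              return "Transform {\n"
                   + "  translation " + x + " " + y + " " + z + "\n"
                   + "  children [\n" + shapeFragment + "\n  ]\n"
                   + "}";
          }
      }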

    To create content from multiple servers, the concept of components may be used to gain higher independence and improved reusability. Any server-side tool may be used to achieve this, but the model proposed here uses JSP along with Java Beans and the tag library concept. This leads to a multi-user shared environment where the information about current users is persisted into different database servers. Such a setup can create a seamless environment that achieves enhanced immersion of the users in the virtual world.
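    A component in this model can be an ordinary JavaBean that returns a VRML fragment. The following is a minimal sketch with hypothetical names, not the exact beans used in the implementation:

      // Sketch of a VR component bean (hypothetical names). Each server
      // deploys such beans; each bean contributes one VRML fragment
      // (here, a colored box) to the scene under construction.
      public class BoxBean implements java.io.Serializable {
          private String color = "1 0 1";   // RGB triple as a VRML string

          public String getColor() { return color; }
          public void setColor(String color) { this.color = color; }

          // The VRML fragment this component adds to the scene.
          public String getFragment() {
              return "Shape {\n"
                   + "  appearance Appearance { material Material {\n"
                   + "    diffuseColor " + color + " } }\n"
                   + "  geometry Box { }\n"
                   + "}";
          }
      }

    A JSP can expose such a bean with jsp:useBean, set its color property from a request parameter, and write getFragment() into the response.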

    The rest of the paper is organized as follows: Section 2 discusses related work, Section 3 presents the model for VR content generation, Section 4 discusses the implementation details of the proposed model, and Section 5 concludes by summarizing the work and opening up areas for further study.

  2. Related Work

    In an Internet-based application development scenario, client-server architecture plays an important role in improving interactions, as both client and server share the processing. Due to this, unwanted data transfer is avoided. Any server-side tool can be used to generate the content for the users. As the content is generated after the request is received from the client, dynamism can be incorporated in the pages. Along with content in the form of HTML documents, three-dimensional Virtual Reality scenes may also be generated by server-side tools.

    Probir Ghosh et al. [14] proposed an approach that integrates prediction-based dynamic web page pre-generation with PFC and DBC caching. This approach reduces dynamic web page generation time by combining the eventual benefits of a middle-tier database cache (DBC) and an HTML page fragment cache (PFC) with the immediate benefit of dynamic web page pre-generation. Aye Aye Khaing and Ni Lar Thein [1] propose a framework for the detection of fragments, pieces of information that have an independent meaning and identity. These independent parts can be assembled into compound parts such as whole web pages.

    Virtual Reality is a very effective way to convey educational content, and the Virtual Reality Modeling Language (VRML), a description language for 3D worlds, is appropriate for constructing, distributing, and rendering a shared 3D world over the Internet. Yu Chunyan et al. [16] proposed a web-based Collaborative Virtual Environment (CVE), a new Java-based framework that adopts VRML to construct, distribute, and render a shared 3D world over the Internet, and integrates Java with VRML through the External Authoring Interface (EAI) to manage and control the virtual world in support of multi-user collaboration. Java applets and RMI concepts are used for the implementation. Krzysztof Walczak [9] proposed an approach for the dynamic generation of virtual scenes from arbitrarily selected sets of specifically designed reusable virtual objects, called VR-Beans. A VR-Bean can have a number of media components, which are used for representing the VR-Bean in virtual scenes. Such media components may be 3D models, audio and video sequences, or texts.

    Mixed reality is the combination of real and virtual scene content, and Ingo Schiller et al. [7] discuss the key elements of mixed-reality applications: camera calibration, environment model generation, real-time handling of interaction between virtual and real content, shadowing for virtual content, and dynamic object tracking for content planning. For simulation they combined three different cameras into a complete system: a fisheye camera as pose sensor, a ToF camera as depth sensor, and a perspective camera as target camera for mixing the result. Gennaro Costagliola et al. [4] propose an innovative authoring system that allows teachers to easily define 3D interactive simulations, in which most of the authoring tasks are achieved through the use of wizards, to minimize the need for 3D knowledge. They have also proposed an authoring system architecture. Muhammad Azam Rana et al. [13] present an interactive system to artistically model, animate, and render visually convincing clouds using modern graphics hardware. The system facilitates the design of the shape of clouds by placing cubes (objects) at appropriate places and then selecting a 2D texture for each cube; in the rendering phase these 2D textures are rendered as actual clouds.

    Marcio Cunha et al. [11] propose an environment for collaborative learning and the generation of new educational content: an entirely three-dimensional virtual world built in Second Life, a simulator of real life and relationships. Visitors and students interact with the online campus as if in real life, with a person in the form of an avatar at a virtual reception desk. Hasup Lee et al. [6] proposed a method for VR content generation using real images. Panoramic images of the real environment are captured using a digital camera and a panoramic tripod head, and applied to a CAVE-like system to produce real-image-based background content for the VR world, increasing the user's immersion. Choonsung Shin et al. [3] proposed a framework that supports intelligent guidance and enables users to participate in content generation, with reference to museum guidance. Using context awareness, the framework also enables them to combine augmented content with different information and to change the shape of the content according to their preferences.

    Byounghyun Yoo et al. [2] proposed a framework for motion generation with multisensory VR effects. Motion is generated from the movement of the viewpoint of the visual image, and motion effects, which are prepared in advance, are blended in to realize motion simulation. They also state that experiential effects in a scientific and cultural experience system are effective when heterogeneous sensory organs are stimulated simultaneously. Martha Burkle and Kinshuk [12] stressed the importance of Virtual Reality in education. With respect to learning, the use of virtual simulations extends its possibilities for content access, transforming education into a participatory and immersive experience. Virtual reality provides students with an unprecedented chance to explore, engage with, and visualize complex processes like never before. The technology allows more and more content to be virtual and so improves the possibility of better learner engagement. VR has also started to transform the way students access content, entertainment, and knowledge, making content portable and thereby transcending the physical limits of the classroom.

    Yohnosuke Harada et al. [15] proposed a system that consists of a large screen with high-definition resolution, super graphic processors, PCs for individual students, and VR contents. The central problem is how to provide VR contents to students; interactivity and collaboration are the key factors of the system. They also tested the effectiveness of 3D content and showed that it improved retention. B. Ismail Imen and Moussa Faouzi [8] present a model-based approach for User Requirements (UR). Monitoring the real-time evolution of states, including complicated ones, is the primary objective of their patient monitoring system, built with the help of Petri Nets. This monitoring is made possible through biological sensors that periodically check the patient's glucose level. Lee A. Belfore and Sudheer Battula [10] describe how collaborative capabilities are integrated into the Interactive Land Use VRML Application, which supports highly interactive functionality, live updates, and the dynamic generation of VRML content. The collaborative functions have been added in the context of an Internet chat session, with the session details shared among users. Hanhua Lu et al. [5] introduce the evolution of the Service Delivery Platform (SDP) and analyze the new features and conceptual model of the next-generation SDP. They also discuss the related technologies and predict its development trend: the SDP embraces additional service enablers such as voice, multimedia, location, presence, and charging.

  3. Framework for VR Content Generation

    Any web content, whether an HTML document or VR content, can be generated from a web server with the help of any server-side presentation technique. These contents are meaningful only on the client side; the server merely produces them. Generating static VR content is simple in today's Internet context, but involving many servers in the generation needs extra care in the distribution of processing logic and the integration of all the logical units.

    Figure 1 Multi-server content generation

    This paper proposes a framework for the generation of VR content from multiple servers, depicted in Figure 1. It uses the concept of components. A component is a self-contained unit, complete in itself, that acts as a detachable part of a bigger system. Components (beans) are deployed on different servers, and each bean provides some functionality that can be incorporated into the VR content meant for the clients. These beans are accessed one by one and each adds its own features to the VR content; in other words, all these servers together produce the required content for the client. All of them act as a single entity to satisfy the user's requirement.

    A request from the client is received by the server-side web component, which holds the logic to generate the content; it accesses the bean available on the same machine and forwards the incomplete content it has created to other machines for completion. Any number of servers may take part in the completion of the VR content. For example, one server generates the VR scene and forwards it to the next machine, where the sound object is added; the content is then forwarded to a further server, where the audio clip is added with the specific URL of the audio to be played. A sketch of one such stage follows.
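    The servlet-style sketch below shows one stage under stated assumptions: the partial scene travels in a request attribute (a hypothetical name), and the hand-off uses the request dispatcher that underlies jsp:forward. Note that a dispatcher forwards within one container; a hop to a physically different server would instead issue an HTTP request to it.

      import java.io.IOException;
      import javax.servlet.ServletException;
      import javax.servlet.http.*;

      // One stage in the chain: appends its fragment to the partial
      // scene and hands the still-incomplete content to the next stage.
      public class SoundStageServlet extends HttpServlet {
          protected void doGet(HttpServletRequest req, HttpServletResponse res)
                  throws ServletException, IOException {
              StringBuilder scene = (StringBuilder) req.getAttribute("vrScene");
              if (scene == null) scene = new StringBuilder();

              // This stage's contribution: a Sound node, still without a source.
              scene.append("Sound {\n  minFront 3\n  maxFront 20\n}\n");
              req.setAttribute("vrScene", scene);

              // Forward the incomplete content to the next JSP in the chain.
              req.getRequestDispatcher("/nextStage.jsp").forward(req, res);
          }
      }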

    The properties of the sound source may be retrieved from a database, thereby incorporating dynamism into the VR content. Clients can also give their preferences, such as colors, textures to be wrapped onto 3D objects, and sound intensity, and these parameters may be used in the content generated.

  4. Implementation

    The model proposed here is implemented in a web environment with an application server hosting JSPs: beans created in Java are accessed from the JSPs, further VR objects, in the form of Java code, are made available through the tag library concept, and beans available on other servers are linked with the forwarding option available in JSPs. Clients can forward colors by specifying red, green, and blue components; these are used to generate a VR object with the specified color and are also stored in the database for future use, as in the sketch below.
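    As a minimal sketch of this preference handling, assuming a hypothetical preferences table and column names, the red, green, and blue components can be read from the request, turned into a diffuseColor field, and persisted with JDBC:

      import java.sql.*;

      public class ColorPreference {
          // Builds the VRML material field from the client-supplied values.
          public static String diffuseColor(String r, String g, String b) {
              return "diffuseColor " + r + " " + g + " " + b;
          }

          // Stores the preference for future visits (hypothetical schema).
          public static void persist(Connection con, String user,
                                     String r, String g, String b)
                  throws SQLException {
              String sql = "INSERT INTO preferences (username, r, g, b) "
                         + "VALUES (?, ?, ?, ?)";
              try (PreparedStatement ps = con.prepareStatement(sql)) {
                  ps.setString(1, user);
                  ps.setString(2, r);
                  ps.setString(3, g);
                  ps.setString(4, b);
                  ps.executeUpdate();
              }
          }
      }

    In a JSP, diffuseColor(request.getParameter("r"), ...) would supply the value that appears as diffuseColor 1 0 1 in Figure 2.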

    A portion of the VR content received on the client machine is shown in Figure 2, and the output rendered in the browser, equipped with the Cortona plug-in, is shown in Figure 3.

    The color values, viz. (1 0 1), specified for the diffuseColor field are the values received from the client and are used to create a box with magenta color, while the sound object is substituted by using a Java bean. The source object within the Sound node is substituted in a JSP available on another server, and the two JSPs are linked with the jsp:forward option (a sketch of this substitution follows Figure 2). Instead of one JSP, many JSPs can be linked together in a chain, and all of them contribute some feature to the VR content.

    children [
      Shape {
        appearance Appearance {
          material Material {
            diffuseColor 1 0 1
          }
        }
        geometry Box { }
      }
      Sound {
        source AudioClip {
          url "poovukkul.wav"
          stopTime -1
          loop TRUE
        }
        minBack 1
        minFront 3
        maxBack 3
        maxFront 20
        direction 1 0 0
      }
    ]

    Figure 2 VR code segment
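    The substitution of the source object can likewise be pictured as a small bean on the second server. This is a sketch with hypothetical names; the URL could equally be read from the database mentioned earlier:

      // Bean on the second server that completes the Sound node left
      // open by the previous stage (hypothetical names).
      public class AudioClipBean implements java.io.Serializable {
          private String url = "poovukkul.wav";  // e.g. fetched from a database

          public void setUrl(String url) { this.url = url; }

          // The source field substituted into the Sound node.
          public String getSource() {
              return "source AudioClip {\n"
                   + "  url \"" + url + "\"\n"
                   + "  stopTime -1\n"
                   + "  loop TRUE\n"
                   + "}";
          }
      }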

    The output rendered in the browser, equipped with a plug-in, has two cubes. One of them is given the magenta color, with RGB values (1 0 1), and the other cube is wrapped with a two-dimensional picture. Both cubes are given a sound source, with two different audio clips. The sound intensity increases as one moves towards these objects and decreases as one moves away. The parameters minBack, minFront, maxBack, and maxFront define two oval (ellipsoidal) regions around each object. The sound intensity is constant within the inner oval, decreases from the inner oval to the outer oval, and is zero on and outside the outer oval. The audible range is thus controlled by these four values, as the sketch below illustrates.

    Figure 3 Rendering of VR code
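    The audible-range rule can be illustrated with a simplified one-dimensional sketch along the front axis. This is only an approximation: the actual VRML97 attenuation is defined over two ellipsoids, but the linear falloff between the inner and outer boundaries is the same idea.

      // Simplified front-axis approximation of the Sound node's falloff:
      // full intensity inside minFront, silence beyond maxFront, and a
      // linear decrease in between.
      public class SoundFalloff {
          public static double intensity(double distance,
                                         double minFront, double maxFront) {
              if (distance <= minFront) return 1.0;   // inside the inner oval
              if (distance >= maxFront) return 0.0;   // outside the outer oval
              return (maxFront - distance) / (maxFront - minFront);
          }

          public static void main(String[] args) {
              // With minFront 3 and maxFront 20, as in Figure 2:
              System.out.println(intensity(2.0, 3, 20));    // 1.0
              System.out.println(intensity(11.5, 3, 20));   // 0.5
              System.out.println(intensity(25.0, 3, 20));   // 0.0
          }
      }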

  5. Conclusion and Future Scope

The framework proposed in this paper generates VR content from multiple servers; it is implemented with various concepts such as server-side presentation with JSP, Java beans as components, and Java code associated with tags available in the tag library. Users can pass their preferences to the web component so that the response is tuned to their liking. VR content is produced not from a single server but from multiple servers, each contributing some features. Dynamic content is generated, and the logic to produce the content need not be concentrated on a single machine but can be distributed across different machines. One of the limitations of this implementation is that the content, once generated, becomes static for the users: users can interact with the rendered scene, but they cannot trigger an activity based on some action or event. Sharing of the scene is also not part of the framework presented here, as each user within the rendered scene is independent, and one user's actions have no effect on the other users.

References:

  1. Aye Aye Khaing, Ni Lar Thein, 6th Asia-Pacific Symposium on Information and Telecommunication Technologies (APSITT 2005) Proceedings, pp. 154-159, 2005.

  2. Byounghyun Yoo, Moohyun Cha, Soonhung Han, International Conference on Cyberworlds, pp. 244 (8 pp.), 2005.

  3. Choonsung Shin, Hyejin Kim, Changgu Kang, Youngkyoon Jang, Ahyoung Choi, Woontack Woo, 2010 International Symposium on Ubiquitous Virtual Reality (ISUVR), pp. 52-55, 2010.

  4. Gennaro Costagliola, Sergio Di Martino, Filomena Ferrucci, Michele Risi, 2nd International Conference on Information Technology: Research and Education, pp. 194-198, 2004.

  5. Hanhua Lu, Yong Zheng, Yanfei Sun, Second International Symposium on Intelligent Information Technology Application (IITA '08), pp. 141-145, 2008.

  6. Hasup Lee, Yoshisuke Tateyama, Tetsuro Ogi, 8th International Conference on Information Science and Digital Content Technology (ICIDT), pp. 542-545, 2012.

  7. Ingo Schiller, Bogumil Bartczak, Falko Kellner, Jan Kollmann, Reinhard Koch, 5th European Conference on Visual Media Production (CVMP 2008), pp. 1-10, 2008.

  8. B. Ismail Imen, Moussa Faouzi, 2010 Fourth International Conference on Next Generation Mobile Applications, Services and Technologies (NGMAST), pp. 48-53, 2010.

  9. Krzysztof Walczak, 1st International Conference on Information Technology, pp. 1-4, 2008.

  10. Lee A. Belfore II, Sudheer Battula, Proceedings of the Winter Simulation Conference 2002, vol. 1, pp. 518-524, 2002.

  11. Marcio Cunha, Alberto Raposo, Hugo Fuks, 12th International Conference on Computer Supported Cooperative Work in Design (CSCWD 2008), pp. 716-720, 2008.

  12. Martha Burkle, Kinshuk, International Conference on CyberWorlds (CW '09), pp. 320-327, 2009.

  13. Muhammad Azam Rana, Mohd Shahrizal Sunar, Mohd Norikhwan Nor Hayat, Sarudin Kari, Abdullah Bade, International Conference on Computer Graphics, Imaging and Visualization (CGIV 2004) Proceedings, pp. 56-61, 2004.

  14. Probir Ghosh, Andrew Rau-Chaplin, International Conference on Next Generation Web Services Practices (NWeSP 2006), pp. 56-63, 2006.

  15. Yohnosuke Harada, Kiyoshi Nosu, Naohito Okude, 8th International Workshops on Enabling Technologies: Infrastructure for Collaborative Enterprises (WET ICE '99) Proceedings, pp. 238-244, 1999.

  16. Yu Chunyan, Wu Minghui, Wu Haihong, Networking, Sensing and Control Proceedings, pp. 299-304, 2005.
