Blending Multiple Images into Morphed Image by 3D Gestures

DOI : 10.17577/IJERTCONV6IS15035


Prof. Imran Khan, Meghana Pawar, Nagalaxmi S B, Navya T N.

Department of Information Science, GMIT, Davangere

Abstract: The objective of the system is to provide effective animation for games and cartoons by extending the morphing concept to more than two base shapes and using morphing to produce blends of several objects. It includes design requirements based on future enhancements of the morphing technique. The implementation builds on the concept of image modeling, providing axis coordinates in the form of 3D objects; axis rotation, curve binding and cluster imaging are the major components. Recent advances in 3D digitization techniques have prompted the need for 3D object retrieval. Our method of comparing 3D objects for retrieval is based on 3D morphing. It computes, for each 3D object, two spatial feature maps that describe the geometry and topology of the surface patches on the object while preserving the spatial information of the patches in the maps. The feature maps capture the amount of effort required to morph a 3D object into a canonical sphere, without performing explicit 3D morphing. In this paper we also present a system which automatically generates a 3D face model from a single frontal image with the help of a generic 3D model. The system consists of three components: the first detects features such as the eyes, mouth, eyebrows and contour of the face; the second automatically adapts the generic 3D model into a face-specific 3D model using geometric transformations; the third produces the texture and animation for the adapted model.

  1. INTRODUCTION

    Morphing is the special effect in motion pictures and animations that changes one image into another. Our system allows rotation and zooming of the 3D model and generation of texture. Animation is produced using 3D shape morphing between the corresponding face models and blending the corresponding textures. Recent developments in 3D shape modeling and digitization techniques have led to an increased accumulation of 3D models in databases and on the Internet. The objective of the system is to provide effective animation for games and cartoons; it extends the morphing concept to more than two base shapes and uses morphing to produce blends of objects. Morphing represents such changes over time as the interpolation of two given shapes. In order to represent an animation, we would have to search for a suitable set of base shapes so that all shapes comprising the animation could be produced as a blend of the base shapes. This concept can be extended to more than two base shapes, using morphing to produce blends of several objects. Morphing involves the production of a sequence of intermediate objects that gradually change from one object into another. The amount of effort required to morph one object into another can be used to measure the difference between them.
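    As a minimal sketch of this idea, the following Python fragment linearly interpolates the vertex positions of two corresponding shapes to produce the intermediate objects of a morph. It assumes both shapes share the same vertex count and correspondence (which the morph-map step described later is meant to provide); the function and parameter names are illustrative, not part of the implemented system.

```python
import numpy as np

def morph_sequence(src_vertices, dst_vertices, n_frames=10):
    """Produce intermediate shapes that gradually change src into dst.

    Both inputs are (V, 3) arrays of corresponding vertex positions;
    the mesh connectivity is assumed to stay fixed during the morph.
    """
    src = np.asarray(src_vertices, dtype=float)
    dst = np.asarray(dst_vertices, dtype=float)
    frames = []
    for t in np.linspace(0.0, 1.0, n_frames):
        # Linear interpolation of vertex attributes over time.
        frames.append((1.0 - t) * src + t * dst)
    return frames
```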

    A 3D object can be characterized by two main features: geometry and topology. Intuitively, geometry determines the shape of the object and shape features such as the size, curvature and smoothness of the object surfaces. Topology, on the other hand, determines the structure of the object, such as the number of holes and disconnected components.
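    To make the geometry/topology distinction concrete, the sketch below derives two simple topological quantities from a triangle mesh: the number of connected components and, assuming a closed orientable surface, the total genus (number of handles or holes) from the Euler characteristic V - E + F. This is only an illustrative check and is not part of the retrieval pipeline described in this paper.

```python
import numpy as np

def topology_summary(faces):
    """faces: (F, 3) triangle vertex indices of a closed, orientable mesh."""
    faces = np.asarray(faces)
    verts = np.unique(faces)
    # Collect undirected edges, each stored once.
    edges = set()
    for a, b, c in faces:
        for u, v in ((a, b), (b, c), (c, a)):
            edges.add((min(u, v), max(u, v)))
    V, E, F = len(verts), len(edges), len(faces)
    euler = V - E + F
    # Count connected components with a simple union-find over the edges.
    parent = {int(v): int(v) for v in verts}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v in edges:
        parent[find(int(u))] = find(int(v))
    components = len({find(int(v)) for v in verts})
    # For closed orientable surfaces, V - E + F = 2 * components - 2 * total_genus.
    genus = (2 * components - euler) // 2
    return {"euler_characteristic": euler, "components": components, "genus": genus}
```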

  2. RELATED WORK

    The use of a canonical object for shape comparison has been applied by Hebert et al. [5] to object recognition. Their method deforms the mesh representation of an ellipsoid onto a 3D object and measures the simplex angle at each node of the mesh. The difference between two 3D objects is computed by comparing the node angles in their meshes. This method is applicable only to 3D objects that are topologically equivalent and geometrically similar to a sphere. The method of Hilaga et al. [6] is the only method that uses topological features for 3D object matching.

    The topology of an object is represented in a Reeb graph. The computation of the Reeb graph requires vertex resampling, short-cut edge generation, and computation of the geodesic distance. In contrast, the topological feature used in our method is simpler and far less expensive to compute than the Reeb graph.

    Existing methods that use geometric features for 3D object retrieval can be divided into three broad categories according to the type of shape features used: (1) global features, (2) histograms, and (3) spatial maps.

    Global features refer to shape features such as moments, aspect ratio, volume-to-surface ratio, etc. Since single feature values are used to characterize the overall shape of an object, these features tend not to be very discriminative. They have been used in [4, 21, 22].

    Histograms of local shape features are probably the most widely used feature type for 3D object retrieval. The term histogram has been used by various authors to mean somewhat different things. Here, we use the term to mean a discrete frequency or probability distribution of features, such that each bin of a histogram represents a range of feature values and each bin count is either a frequency or a probability of occurrence of feature values within that range. Thus, histograms capture the distribution of features over the entire object without representing the spatial information of the features. In general, histograms are invariant to rotation, reflection, and uniform scaling of objects. Histograms of various feature types have been used, such as angle, distance, area, volume, and curvature [11, 12, 13, 16, 21, 22]. Special types of histograms such as spin images and shape contexts have also been used to represent the relative positions of points [1, 2, 9, 14].

    Spatial maps are representations that capture the spatial information of an object's features. The map entries correspond to physical locations or sections of an object, and are arranged in a manner that preserves the relative positions of the features in the object. For example, Kriegel et al. [7, 8] and Suzuki et al. [17, 18] divided an object into cells and used the number of points within each cell as the feature. Vranic et al. [19, 20] computed 2D maps of spherical harmonics coefficients, and Novotni and Klein [10] computed 3D maps of distances to features on the objects.
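    As an example of the histogram-type features discussed above, the sketch below computes a D2-style shape distribution in the spirit of Osada et al. [11]: a normalized histogram of distances between randomly sampled surface points. Sampling is simplified to drawing from a precomputed point set, and the bin count and sample size are arbitrary choices.

```python
import numpy as np

def d2_shape_distribution(points, n_pairs=10000, n_bins=64, seed=None):
    """Histogram of pairwise distances between random surface points.

    points: (N, 3) array of points sampled on the object surface.
    Returns a probability distribution over n_bins distance ranges,
    which is invariant to rotation and reflection of the object.
    """
    rng = np.random.default_rng(seed)
    pts = np.asarray(points, dtype=float)
    i = rng.integers(0, len(pts), size=n_pairs)
    j = rng.integers(0, len(pts), size=n_pairs)
    dists = np.linalg.norm(pts[i] - pts[j], axis=1)
    # Normalize by the largest sampled distance to approximate scale invariance.
    dists /= dists.max()
    hist, _ = np.histogram(dists, bins=n_bins, range=(0.0, 1.0))
    return hist / hist.sum()
```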

    Since spatial maps preserve the spatial information of the features in an object, they are generally not invariant to linear transformations, except for specially designed maps (e.g., the rotationally invariant map of [7, 8]). Therefore, a Fourier transform is often applied to convert spatial maps into the frequency domain and obtain invariant features [15, 19, 20]. In some cases, Fourier transforms of the objects are used directly as the shape features [15, 19, 20, 21, 22]. Our method also uses 2D spatial maps to capture shape features. However, the shape features are based on 3D morphing, and they capture both geometric and topological properties.
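    The role of the Fourier transform here can be sketched as follows: taking only the magnitudes of the 2D DFT of a spatial feature map discards phase, so the resulting descriptor is unchanged by cyclic shifts of the map (for example, a rotation about the axis along which a spherical parameterization wraps around). This is a generic illustration, not the specific descriptors of [15, 19, 20].

```python
import numpy as np

def fourier_magnitude_features(spatial_map, keep=8):
    """Return low-frequency DFT magnitudes of a 2D spatial feature map.

    The magnitudes are invariant to cyclic shifts of the map, so features
    that wrap around one map axis (e.g. the azimuth of a spherical
    parameterization) become insensitive to rotation about that axis.
    """
    spectrum = np.fft.fft2(np.asarray(spatial_map, dtype=float))
    magnitudes = np.abs(spectrum)
    # Keep only the lowest keep x keep frequencies as a compact descriptor.
    return magnitudes[:keep, :keep].ravel()
```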

  3. METHODOLOGY

    Input

      1. Interpolation: Interpolation is nothing but filling in between pixels. In this method the model, which is formed by lines, is taken as input. Later, in the morph-map step, two related sets of volume data are mapped to each other; the data sets should have the same grid, dimensions, spacing and number of points. In the morph-map stage the model is mapped to different segments of body parts, so that with morph maps we can store alternative user-defined deformations for any mesh. A morph can apply an existing morph map to a new one or directly onto the mesh. The next step is cluster creation: a cluster is simply a group of elements bunched closely together. We create clusters on our model; at joint positions the model should contain clusters of edges and vertices, which helps while animating the model by dragging the vertices.

      2. Decomposition: To give the model an animation, the object must be decomposed. Each decomposed object is then assigned different events such as sit, stand and walk. The selected events are assigned to the model, and the model is animated with respect to those events. A data-structure sketch of these two steps is given below.
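      The following is a loose data-structure sketch, under assumed names, of the two steps above: a morph map storing named deformations of a mesh, vertex clusters around joints, and events assigned to the decomposed model. It is illustrative only and not tied to any particular modeling package.

```python
import numpy as np

class RiggedModel:
    """Toy container mirroring the interpolation/decomposition pipeline."""

    def __init__(self, vertices):
        self.vertices = np.asarray(vertices, dtype=float)  # (V, 3) rest pose
        self.morph_maps = {}  # name -> (V, 3) offsets (user-defined deformation)
        self.clusters = {}    # joint name -> vertex indices near that joint
        self.events = {}      # event name -> list of (morph name, weight)

    def add_morph_map(self, name, deformed_vertices):
        # Store the deformation as offsets so it can be re-applied to the mesh.
        self.morph_maps[name] = np.asarray(deformed_vertices, dtype=float) - self.vertices

    def add_cluster(self, joint, vertex_indices):
        self.clusters[joint] = list(vertex_indices)

    def assign_event(self, event, keyframes):
        # e.g. model.assign_event("sit", [("knees_bent", 1.0), ("hips_down", 0.8)])
        self.events[event] = keyframes

    def apply_event(self, event):
        # Blend the morph maps referenced by the event onto the rest pose.
        pose = self.vertices.copy()
        for morph_name, weight in self.events[event]:
            pose += weight * self.morph_maps[morph_name]
        return pose
```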

        Animation of three-dimensional shapes involves the change of vertex attributes over time.

        [Flow diagram: Object → Interpolation → Morph map → Cluster creation of objects → Decomposition → Assigning of events]

        Figure 1: Deforming objects by picking and pulling (editing)

        To give an animation, the object must be deformed: first the appropriate vertex is selected, and then that vertex is moved to some other place by pulling, as shown in Figure 1. This is called picking and pulling.
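        Picking and pulling can be sketched as selecting the vertex nearest to a picked point and displacing it, optionally dragging nearby vertices along with a falloff. The radius and names below are assumptions for illustration, not the editing tool used in this work.

```python
import numpy as np

def pick_and_pull(vertices, pick_point, pull_offset, radius=0.2):
    """Move the vertex nearest to pick_point by pull_offset.

    Vertices within `radius` of the picked vertex are dragged along with a
    linear falloff, giving a simple picking-and-pulling deformation.
    """
    verts = np.asarray(vertices, dtype=float).copy()
    picked = int(np.argmin(np.linalg.norm(verts - pick_point, axis=1)))  # picking
    dists = np.linalg.norm(verts - verts[picked], axis=1)
    weights = np.clip(1.0 - dists / radius, 0.0, 1.0)
    verts += weights[:, None] * np.asarray(pull_offset, dtype=float)     # pulling
    return verts
```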

        In this study, the object is a model. The model goes through two different steps. During the interpolation step, the model, either male or female, is taken as input, and this model is then mapped to different segments of body parts; these mapped parts are later clustered. During decomposition, the model is decomposed and each decomposed object is assigned different events. The methodology is a systematic, theoretical analysis of the methods applied to the field of study.

        Morphing represents such changes over time as the interpolation of two given shapes. In order to represent an animation, we would have to search for a suitable set of base shapes so that all shapes comprising the animation could be produced as a blend of the base shapes. We can extend this concept to more than two base shapes and use morphing to produce blends of several objects.
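        Extending the interpolation to several base shapes amounts to a weighted combination of corresponding vertices, with non-negative weights that sum to one. The sketch below assumes that all base shapes share the same vertex correspondence; the names are illustrative.

```python
import numpy as np

def blend_base_shapes(base_shapes, weights):
    """Blend several corresponding base shapes into one morphed shape.

    base_shapes: list of (V, 3) vertex arrays with shared correspondence.
    weights: one non-negative weight per base shape, normalized to sum to 1.
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    stack = np.stack([np.asarray(s, dtype=float) for s in base_shapes])  # (N, V, 3)
    # Weighted sum over the base shapes produces the blended shape.
    return np.tensordot(w, stack, axes=1)
```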

        Figure 3: 3D shape interpolation

        For N × N interpolation, the output pixel is assigned the value of the pixel that the sample point falls in.
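        The statement above describes nearest-neighbor resampling: each output pixel simply copies the input pixel that its sample point falls in, with no blending between pixels. A minimal sketch for scaling an image by an integer factor (illustrative only):

```python
import numpy as np

def nearest_neighbor_resize(image, factor):
    """Scale an image by an integer factor using nearest-neighbor sampling:
    every output pixel takes the value of the input pixel it falls in."""
    img = np.asarray(image)
    rows = np.arange(img.shape[0] * factor) // factor
    cols = np.arange(img.shape[1] * factor) // factor
    return img[np.ix_(rows, cols)]
```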

        Figure 4: Patch parameterizing and re-meshing

        In patch parameterization there is a specific boundary for the parameterized image beyond which the lines cannot be pulled further. If there were no such boundary, the vertices could be pulled as much as we want. Hence, we must search for the suitable base shapes that we want to change.

        In re-meshing, the irregular mesh is converted into a semi-regular mesh. The irregular, or source, image has many lines and vertices. In the figure above, the destination image is the morphed, smooth image, and the middle image combines the regular and irregular images; both are morphed into a single image.

        Figure 5: Example of patch parameterization and re-meshing.

        Models with different numbers of faces are shown in Figure 5. If a model has fewer faces, deformation becomes difficult and the clarity of the model is also reduced. If the number of faces in the model increases, the clarity increases as well, and the deformation procedure, that is, picking and pulling the vertices to give an animation, also becomes easier.

  4. EXPERIMENTAL RESULTS

    Face model adaptation: This is the process in which the generic 3D face model is deformed according to the frontal face. Our proposed generic model [15] is shown in Figure 6 and Figure 7; it is polygon-based (a triangle mesh) and consists of 350 triangles and 215 vertices. We have also used the Candide-3 face model [16]. The model is adapted to the given frontal face with the help of two geometric transformations, scaling and translation. Assuming orthographic projection, the translation vector is determined by calculating the distance between the centers of the 3D face model and the 2D frontal face. After global adaptation of the model, we perform local refinement of the model's eyes, eyebrows, mouth and contour against the corresponding face features. Local refinement is carried out using appropriate translation factors.
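    Global adaptation by scaling and translation can be sketched as below: under the assumed orthographic projection, the generic model's x-y extent is scaled to the extent of the detected 2D face, and the model center is translated onto the face center. Using bounding boxes for the scale factor, and the specific names, are assumptions made for illustration.

```python
import numpy as np

def adapt_generic_model(model_vertices, face_points_2d):
    """Globally fit a generic 3D face model to detected 2D face points.

    Assumes orthographic projection onto the x-y plane: scale the model's
    x-y extent to the face's extent, then translate the model center onto
    the face center. Depth (z) is scaled by the mean of the x/y factors.
    """
    verts = np.asarray(model_vertices, dtype=float).copy()
    face = np.asarray(face_points_2d, dtype=float)

    model_extent = verts[:, :2].max(0) - verts[:, :2].min(0)
    face_extent = face.max(0) - face.min(0)
    scale_xy = face_extent / model_extent                        # scaling

    verts[:, :2] = (verts[:, :2] - verts[:, :2].mean(0)) * scale_xy
    verts[:, 2] = (verts[:, 2] - verts[:, 2].mean()) * scale_xy.mean()
    verts[:, :2] += face.mean(0)                                 # translation
    return verts
```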

    Figure 6: Parts of face model: (A) Eyebrow (B) Eyes (C) Mouth (D) Left cheek (E) Right cheek (F) Nose

    Figure 7: Face model

    Figure 8: Model in (A) stand event and (B) salute event.

    The model was created using edges and vertices. A vertex joining two or more edges is displaced to give an animation. Animation is done not on an image but on the lines, which involves the change of vertices. After the creation of the model, the model can rotate in three dimensions, and it also responds to the corresponding events given to it, such as stand, sit and salute, as shown in Figure 8.

    Figure 9: Face model with vertices and edges.

  5. CONCLUSION

    This concept can be enhanced to apply multiple tasks to a single object, increasing the work done and thereby the task throughput. The opacity used when merging pictures can be reduced, and the clarity of an object can be self-analyzed. A 3D face reconstruction from a single image has been developed and shown to be automatic, robust, fast and accurate. Our proposed algorithm constructs the 3D face model using both the proposed generic model and the Candide-3 face model. The morphing results indirectly indicate the accuracy of our algorithm, because morphing produces smooth results only if the features are properly aligned in the respective models.

  6. REFERENCES

[1] S. Belongie, J. Malik, and J. Puzicha. Matching shapes. In Proc. ICCV, volume 1, pages 454-461, 2001.

[2] S. Belongie, J. Malik, and J. Puzicha. Shape matching and object recognition using shape contexts. IEEE Trans. PAMI, 24(4):509-522, 2002.

[3] D. Cohen-Or, D. Levin, and A. Solomovici. Three-dimensional distance field metamorphosis. ACM Trans. Graphics, 17(1):116-141, 1998.

[4] M. Elad, A. Tal, and S. Ar. Content based retrieval of VRML objects – an iterative and interactive approach. In Proc. 6th Eurographics Workshop on Multimedia, 2001.

[5] M. Hebert, K. Ikeuchi, and H. Delingette. A spherical representation for recognition of free-form surfaces. IEEE Trans. PAMI, 17(7):681-690, 1995.

[6] M. Hilaga, Y. Shinagawa, T. Kohmura, and T. L. Kunii. Topology matching for fully automatic similarity estimation of 3D shapes. In Proc. SIGGRAPH, 2001.

[7] H.-P. Kriegel, T. Schmidt, and T. Seidl. 3D similarity search by shape approximation. In Proc. 5th Int. Symposium on Large Spatial Databases, volume 1262, pages 11-28, 1997.

[8] H.-P. Kriegel and T. Seidl. Approximation-based similarity search for 3-D surface segments. GeoInformatica Journal, 2:113-147, 1998.

[9] G. Mori, S. Belongie, and J. Malik. Shape contexts enable efficient retrieval of similar shapes. In Proc. CVPR, 2001.

[10] M. Novotni and R. Klein. A geometric approach to 3D object comparison. In Proc. Int. Conf. on Shape Modeling and Applications, pages 167-175, 2001.

[11] R. Osada, T. Funkhouser, B. Chazelle, and D. Dobkin. Matching 3D models with shape distribution. In Proc. Shape Modeling International, 2001.

[12] E. Paquet and M. Rioux. Nefertiti: A query by content system for three-dimensional model and image databases management. Image and Vision Computing, 17:157-166, 1999.

[13] E. Paquet, M. Rioux, A. Murching, T. Naveen, and A. Tabatabai. Description of shape information for 2-D and 3-D objects. Signal Processing: Image Communication, 16:103-122, 2000.

[14] S. Ruiz-Correa, L. G. Shapiro, and M. Melia. A new signature-based method for efficient 3D object recognition. In Proc. CVPR, 2001.

[15] D. Saupe and D. V. Vranic. 3D model retrieval with spherical harmonics and moments. In Proc. 23rd DAGM Symposium (LNCS 2191), pages 392-397, 2001.

[16] H.-Y. Shum, M. Hebert, and K. Ikeuchi. On 3D shape similarity. In Proc. CVPR, pages 526-531, 1996.

[17] M. T. Suzuki, T. Kato, and N. Otsu. A similarity retrieval of 3D polygonal models using rotation invariant shape descriptors. In Proc. IEEE Conf. SMC, pages 2946-2952, 2000.

[18] M. T. Suzuki, T. Kato, and H. Tsukune. 3D object retrieval based on subjective measures. In 9th Int. Conf. on Database and Expert Systems Applications, 1998.

[19] D. V. Vranic and D. Saupe. 3D shape descriptor based on 3D Fourier transform. In Proc. of the EURASIP Conf. on Digital Signal Processing for Multimedia Communications and Services, pages 271-274, 2001.

[20] D. V. Vranic, D. Saupe, and J. Richter. Tools for 3D object retrieval: Karhunen-Loeve transform and spherical harmonics. In Proc. of IEEE Workshop on Multimedia Signal Processing, pages 293-298, 2001.

[21] C. Zhang and T. Chen. Efficient feature extraction for 2D/3D objects in mesh representation. In Proc. ICIP, 2001.

[22] C. Zhang and T. Chen. Indexing and retrieval of 3D model aided by active learning. In Proc. ACM Multimedia, 2001.
