Satellite Image Classification using Dense Networks

DOI : 10.17577/IJERTV8IS120099



K. Navya

Master of Technology

Department of Electronics and Communication Engineering, Sri Venkateswara University College of Engineering, Tirupati, India

Dr. G. Umamaheswara Reddy

Professor,

Department of Electronics and Communication Engineering, Sri Venkateswara University College of Engineering, Tirupati, India

Abstract — Image classification is an important field in the remote sensing community. However, remote sensing classification problems must deal with large amounts of unprecedentedly complex data. Forthcoming missions will soon deliver data streams that make land cover and land use classification difficult. Machine-learning classifiers can help here, and various methodologies are presently available. In this paper, we use dense networks for better classification. The input passes through a series of dense blocks, followed by global average pooling (GAP) and, at the output, a fully connected layer. The outputs of the network are then normalized.

Keywords Global average pooling, Machine learning, Dense network, Layers.

I. INTRODUCTION

Nowadays, the monitoring of dynamic processes has become possible, and biophysical parameter estimation and classification problems can be addressed with the use of several data sources. The images undergo a series of spectral and temporal distortions; these changes in acquisition conditions and geometry, as well as differences in the properties of the sensors, produce local changes in the probability distribution function (PDF) of the images, which in turn degrade the performance of a classifier when predicting data from another domain. As a consequence, the tempting direct application of a classifier optimal for one scene to another scene can lead to catastrophic results. The application of a classifier to a series of newly acquired scenes nonetheless remains a priority for image analysts. One approach is the autoencoder, which is trained by reducing the reconstruction error between the input data at the encoding layer and its reconstruction. In order to reduce the dimensionality of the data, the autoencoder reconstructs the feature representation with fewer nodes in the hidden layers, and the activations of the hidden layer are commonly used as the extracted features. Gradient descent with back-propagation is used to train the network.

To train a multilayer stacked autoencoder, the most significant issue is how to initialize the network. If the initial weights are large, the autoencoder tends to converge to a poor local minimum, while small initial weights lead to tiny gradients in the early layers, making it infeasible to train such a multilayer network. Fortunately, a pre-training method was proposed by Hinton, who found that initializing the weights using restricted Boltzmann machines brings the network close to a good solution.

In real applications, the above-mentioned unsupervised feature learning models give improved results for land use classifiers, particularly compared to approaches based on handcrafted features. However, in the absence of the semantic information provided by class labels, the best discrimination capability between classes cannot be guaranteed, since unsupervised feature learning approaches do not make use of class statistics.

In all these settings where the available ground truth is not completely representative of the task at hand, we are forced to address the spectral mismatch between training and test distributions. A possible solution is to transform the radiometry of the images so that corresponding pixel values match. Among the traditional approaches, there are physical models that aim to derive the actual surface reflectance. However, such a process is demanding in terms of the prior information required, particularly for the atmospheric correction step.

  II. EXISTING METHOD

    1. Implementation of existing method using SVM:

      The block diagram for the existing method is as follows:

      Fig. 1. Flow method for SVM

      The image categories are trained using HOG features. All features are extracted and trained over the data set. After that, a new image is classified using the trained model.

      1. HOG (Histogram of oriented gradients):

        The histogram of oriented gradients (HOG) is used to detect objects in computer vision and image processing. The HOG descriptor analyzes the distribution of gradient orientations across the image.

        The following steps describe the computation of the histogram of oriented gradients:

        • First, the image is partitioned into small regions called cells. Each cell yields a histogram of gradient directions or edge orientations.

        • Depending on its gradient orientation, each pixel of a cell is assigned to an angular bin.

        • Each angular bin accumulates the gradient magnitudes as weights.

        • Neighboring cells are grouped into blocks, also called spatial regions. Grouping cells into blocks is the key step for normalization.

        • The block histograms are produced by normalizing the histograms of their cells.
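The steps above can be sketched with a minimal NumPy implementation. This is a simplified descriptor, one unsigned-orientation histogram per cell without the block-normalization step; the cell size and bin count are illustrative choices, not values from the paper:

```python
import numpy as np

def hog_features(image, cell=8, bins=9):
    """Simplified HOG: one orientation histogram per cell,
    weighted by gradient magnitude (no block normalization)."""
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0   # unsigned, [0, 180)
    h, w = image.shape
    hists = []
    for r in range(0, h - cell + 1, cell):
        for c in range(0, w - cell + 1, cell):
            a = ang[r:r + cell, c:c + cell].ravel()
            m = mag[r:r + cell, c:c + cell].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0, 180), weights=m)
            hists.append(hist)
    return np.concatenate(hists)

img = np.random.rand(64, 64)
feat = hog_features(img)
# an 8x8 grid of cells with 9 bins each -> a 576-dimensional descriptor
print(feat.shape)  # (576,)
```

The resulting vector is what would be fed to the SVM classifier in the pipeline of Fig. 1.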

      2. SVM (Support Vector Machine):

      The objective of the support vector machine algorithm is to find a hyperplane in an N-dimensional space (N being the number of features) that distinctly separates the data points. To separate two classes of data points, there are many possible hyperplanes that could be chosen. Our goal is to find the plane with the maximum margin, i.e., the greatest distance between the data points of the two classes. Maximizing the margin provides some reinforcement so that future data points can be classified with more confidence. The data points closest to the hyperplane, which determine its position, are called support vectors.
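A minimal NumPy sketch of the maximum-margin idea, training a linear classifier by sub-gradient descent on the regularized hinge loss (the toy data and hyper-parameters are illustrative, not from the paper):

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """Linear max-margin classifier via sub-gradient descent on the
    hinge loss; labels y must be +1 / -1."""
    w = np.zeros(X.shape[1])
    b = 0.0
    n = len(y)
    for _ in range(epochs):
        margins = y * (X @ w + b)
        viol = margins < 1                       # margin violators
        grad_w = lam * w - (y[viol, None] * X[viol]).sum(axis=0) / n
        grad_b = -y[viol].sum() / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 0.5, (50, 2)),     # class -1 cluster
               rng.normal(+2, 0.5, (50, 2))])    # class +1 cluster
y = np.array([-1] * 50 + [1] * 50)
w, b = train_linear_svm(X, y)
pred = np.sign(X @ w + b)
print((pred == y).mean())
```

Only the violating points contribute to the gradient; after convergence they correspond to the support vectors that fix the hyperplane.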

    2. Implementation of existing method using CNN:

    The block diagram for the existing method is as follows:

    Fig. 2. Flow method for CNN

    1) CNN classifier:

    Here the input data set is processed by the CNN layers. A convolutional layer computes the dot product of the input and the layer weights, and adds a bias value. In a CNN classifier, a filter is defined as a set of weights applied to a region of the image. The filter is moved across the image both vertically and horizontally, and this process is repeated at every position.

    Stride is defined as the step size with which the filter moves, and is specified as a pair of values (vertical and horizontal step). Whether neighboring regions overlap depends on the filter size and the stride values.

    A pooling layer replaces rectangular regions of its input with a summary value (such as the maximum). The pool size argument specifies the extent of the rectangular regions. For example, a pool size of (3, 4) means a height of 3 and a width of 4. The pooling regions do not overlap when the pool size is less than or equal to the stride.
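The filter, stride, and pooling behavior described above can be sketched in NumPy. This is a single-channel toy implementation; the 3×3 averaging kernel and the (3, 4) pool size are illustrative:

```python
import numpy as np

def conv2d(image, kernel, stride=1):
    """Valid 2-D convolution: slide the filter over the image and take
    the dot product of each patch with the filter weights."""
    kh, kw = kernel.shape
    h, w = image.shape
    out_h = (h - kh) // stride + 1
    out_w = (w - kw) // stride + 1
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            patch = image[i*stride:i*stride + kh, j*stride:j*stride + kw]
            out[i, j] = np.sum(patch * kernel)
    return out

def max_pool(image, pool=(3, 4), stride=None):
    """Max pooling; regions do not overlap when stride >= pool size."""
    ph, pw = pool
    sh, sw = stride if stride else pool          # default: non-overlapping
    h, w = image.shape
    return np.array([[image[i:i + ph, j:j + pw].max()
                      for j in range(0, w - pw + 1, sw)]
                     for i in range(0, h - ph + 1, sh)])

x = np.random.rand(64, 64)
feat = conv2d(x, np.ones((3, 3)) / 9)        # 3x3 averaging filter -> 62x62
pooled = max_pool(feat, pool=(3, 4))         # pool height 3, width 4
print(feat.shape, pooled.shape)              # (62, 62) (20, 15)
```

With the stride defaulting to the pool size, the regions tile the feature map without overlap, matching the condition stated above.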

  III. PROPOSED METHOD

    The block diagram for the proposed method is as follows:

    Fig. 3. Block diagram for proposed method

    In this section, the system architecture and the visualization method are discussed in detail. Through CAM, we visualize the activated regions for different inputs. Two functions, classification and visualization, can thus be realized through one optimized network.

    A further refinement of DenseNet-B is compression. Here we reduce the number of output feature maps at the transition layers; the reduction factor is denoted theta, with a value between 0 and 1. Instead of keeping the full number of feature maps produced by a block, we keep only the fraction theta of them. When theta equals 1, the number of feature maps across transition layers is unchanged and the model is a plain DenseNet; when theta is less than 1, we refer to the model as DenseNet-C, or DenseNet-BC when the bottleneck layers are also used.
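The transition-layer compression can be checked with a little arithmetic. The growth rate k = 12 is an assumption (the paper does not state it); the 16 layers per block and 16 initial channels are taken from the architecture description:

```python
import math

def transition_channels(m, theta):
    """Channels after a DenseNet-C/BC transition: the 1x1 convolution
    keeps floor(theta * m) of the m incoming feature maps."""
    assert 0 < theta <= 1
    return math.floor(theta * m)

k = 12                               # assumed growth rate
channels = 16 + 16 * k               # 208 feature maps leaving a 16-layer block
print(transition_channels(channels, 0.5))  # 104: compression with theta = 0.5
print(transition_channels(channels, 1.0))  # 208: theta = 1, plain DenseNet
```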

    To be aligned with the work on ResNets, we detail the configuration used for the ImageNet dataset. The same arrangement applies readily to the Street View House Numbers (SVHN) and CIFAR-10 (Canadian Institute for Advanced Research) datasets.

    1. System Architecture:

      Here we use the DenseNet-BC architecture. It consists of three dense blocks followed by global average pooling (GAP) and a fully connected layer. Gray-scale images are the inputs, and the outputs are normalized class scores.

      A convolution layer with 16 output channels is applied to the 64×64 gray-scale images. Each dense block has 16 layers. In each dense block, 3×3 convolution filters are used, combined with zero-padding of one pixel to keep the feature-map size fixed.

      Batch normalization is applied before the convolution layers in order to alleviate gradient problems. A transition block is used to reduce the spatial size and the number of channels of the feature maps. The transition block is composed of a 1×1 convolution layer followed by 2×2 average pooling. At the end of the network, global average pooling and a fully connected layer produce the output.
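The feature-map bookkeeping implied by this architecture can be traced with a few lines of arithmetic. The growth rate k = 12 and compression theta = 0.5 are assumptions; the paper specifies only the 64×64 input, the initial 16 channels, the 16-layer blocks, and the 2×2 average pooling:

```python
k = 12                 # assumed growth rate
theta = 0.5            # assumed compression factor
size, ch = 64, 16      # 64x64 gray-scale input, initial conv -> 16 channels
for block in range(3):
    ch += 16 * k                   # each of the 16 layers adds k feature maps
    if block < 2:                  # transition after blocks 1 and 2 only
        ch = int(theta * ch)       # 1x1 convolution compresses the channels
        size //= 2                 # 2x2 average pooling halves the size
    print(f"block {block + 1}: {size}x{size}, {ch} channels")
# global average pooling then collapses each map to a scalar -> `ch` features
```

Under these assumptions the fully connected layer receives a 340-dimensional feature vector from GAP.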

    2. ResNets:

      In a traditional convolutional feed-forward network, the output of the (l-1)th layer is passed as input to the lth layer, giving rise to the layer transition xl = Hl(xl-1). ResNets add a skip connection that bypasses the non-linear transformation Hl with an identity function:

      xl = Hl(xl-1) + xl-1 (1)

      An advantage of ResNets is that the gradient can flow directly through the identity function from later layers to earlier layers. However, because the identity function and the output of Hl are combined by summation, the information flow in the network may be impeded.

    3. Dense connectivity:

      We propose a different connectivity pattern: direct connections from every layer to all subsequent layers, in order to improve the information flow between layers. Figure 3 shows the layout of the resulting DenseNet. Consequently, the lth layer receives the feature maps of all preceding layers, x0, ..., xl-1, as input:

      xl = Hl([x0, x1, ..., xl-1]) (2)

      where [x0, x1, ..., xl-1] refers to the concatenation of the feature maps produced in layers 0 to l-1.
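A minimal NumPy sketch of this connectivity, where a random projection stands in for the BN-ReLU-conv composite Hl; the shapes and growth rate are illustrative, not from the paper:

```python
import numpy as np

def dense_layer(inputs, k=12):
    """Hl applied to the concatenation [x0, ..., x_{l-1}]: channel-wise
    concat, then a projection producing k new feature maps."""
    x = np.concatenate(inputs, axis=-1)          # [x0, x1, ..., x_{l-1}]
    h, w, c = x.shape
    w_proj = np.random.randn(c, k) * 0.01        # stand-in for the 3x3 conv
    return (x.reshape(-1, c) @ w_proj).reshape(h, w, k)

maps = [np.random.rand(8, 8, 16)]                # x0: 16 input channels
for _ in range(4):                               # four densely connected layers
    maps.append(dense_layer(maps, k=12))
total = sum(m.shape[-1] for m in maps)
print(total)  # 16 + 4*12 = 64 channels visible to the next layer
```

Each layer adds k feature maps, so the channel count grows linearly while every earlier output remains directly accessible.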

    4. CAM technique:

      To obtain the prediction of the network, we use a weighted sum of the outputs of GAP, which are the spatial averages of the feature maps produced by the last dense block. The CAM technique takes a similar approach: we compute a weighted sum of the feature maps extracted from the output of the last dense block, using the fully connected weights of the predicted class.
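The weighted sum described above can be sketched as follows; the channel count, class count, and spatial size are illustrative assumptions:

```python
import numpy as np

def class_activation_map(feature_maps, fc_weights, class_idx):
    """CAM: weighted sum of the last dense block's feature maps, using
    the fully connected weights of the chosen class.
    feature_maps: (h, w, c); fc_weights: (c, num_classes)."""
    return feature_maps @ fc_weights[:, class_idx]

feats = np.random.rand(16, 16, 340)      # last-block output (illustrative)
w_fc = np.random.randn(340, 21)          # GAP features -> 21 scene classes
cam = class_activation_map(feats, w_fc, class_idx=3)
print(cam.shape)  # (16, 16); upsample to 64x64 to overlay on the input
```

Because GAP is a spatial average, the class score equals the spatial mean of this map, which is why it localizes the regions that drove the prediction.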

  IV. RESULTS

    The results obtained with the proposed technique are presented below to demonstrate that the proposed scheme improves on the prevailing algorithms. In the training phase, we trained images from different datasets.

    Fig. 4. Results for Airplane

    Fig. 5. Results for Beach

    Fig. 6. Results for Agriculture

    Fig. 7. Results for Chaparral

    Fig. 8. Results for Dense residential

    Fig. 9. Results for Intersection

    Fig. 10. Results for Harbor

    TABLE 1. Comparison of existing and proposed methods

    Metrics      | Existing method 1 (SVM) | Existing method 2 (CNN) | Proposed method (Dense network)
    Elapsed time | 1.69 sec                | 1.35 min                | 35 sec
    Accuracy     | 43.75%                  | 66.67%                  | 92.22%

  V. CONCLUSION

Machine-learning classifiers can help with this task, and numerous approaches are presently available. In this paper, we used dense networks for better classification. Three dense blocks follow the input, with global average pooling and a fully connected layer at the output; the outputs of the network are normalized. By using dense networks, execution time decreases and accuracy increases compared with state-of-the-art approaches.

REFERENCES

    1. Gustau Camps-Valls, Jose Bioucas-Dias, and Melba Crawford, A special issue on advances in machine learning for remote sensing and geosciences [from the guest editors], IEEE Geoscience and Remote Sensing Magazine, vol. 4, no. 2, pp. 57, 2016.

    2. Devis Tuia, Claudio Persello, and Lorenzo Bruzzone, Domain adaptation for the classification of remote sensing data: An overview of recent advances, IEEE Geoscience and Remote Sensing Magazine, vol. 4, no. 2, pp. 4157, 2016.

    3. Sinno Jialin Pan and Qiang Yang, A survey on transfer learning, IEEE Transactions on knowledge and data engineering, vol. 22, no. 10, pp. 13451359, 2010.

    4. Mingsheng Long, Han Zhu, Jianmin Wang, and Michael I Jordan, Unsupervised domain adaptation with residual transfer networks, in Advances in Neural Information Processing Systems, 2016, pp. 136144.

    5. Allan Aasbjerg Nielsen, The regularized iteratively reweighted mad method for change detection in multiband hyperspectral data, IEEE Transactions on Image processing, vol. 16, no. 2, pp. 463478, 2007.

    6. Devis Tuia, Michele Volpi, Maxime Trolliet, and Gustau Camps- Valls, Semisupervised manifold alignment of multimodal remote sensing images, IEEE Transactions. on Geoscience and Remote Sensing, vol. 52, no. 12, pp. 77087720, 2014.

    7. Arthur Gretton, Karsten M Borgwardt, Malte J Rasch, Bernhard Sch¨olkopf, and Alexander Smola, A kernel two-sample test, Journal of Machine Learning Research, vol. 13, no. Mar, pp. 723 773, 2012.

    8. Giona Matasci, Michele Volpi, Mikhail Kanevski, Lorenzo Bruzzone, and Devis Tuia, Semisupervised transfer component analysis for domain adaptation in remote sensing image classification, IEEE Transactions on Geoscience and Remote Sensing, vol. 53, no. 7, pp. 35503564, 2015.

    9. Gong Cheng, Junwei Han, and Xiaoqiang Lu, Remote sensing image scene classification: Benchmark and state of the art, Proceedings of the IEEE, 2017.
