International Journal of Innovative Research in Advanced Engineering (IJIRAE)
ISSN: 2349-2163
Volume 1, Issue 8 (September 2014)
www.ijirae.com
© 2014, IJIRAE. All Rights Reserved.
As Image Enhancement Systems Rely on Performance of Their Basic Arithmetical Components

Shahryar Shafei¹, Shahin Shafei²
¹,²Department of Electrical Engineering, Mahabad Branch, Islamic Azad University, Mahabad, Iran
Abstract— In this paper, we have designed a light add-drop filter based on two-dimensional photonic crystals using an annular resonator. The proposed add-drop ring resonator filter has a racetrack shape. The filter has a hexagonal lattice structure of silicon rods with refractive index 3.46, located in a background of air with refractive index 1. The transmission efficiency and quality factor of the proposed filter are 94% and 310, respectively. The two-dimensional finite-difference time-domain (2D FDTD) method is used to obtain the normalized transmission spectra of the photonic crystal ring resonator, and the plane wave expansion (PWE) method is used to calculate the photonic band structure.

Keywords: PLIP, Noisy, MSE, Sensor, Image
I. INTRODUCTION
As image enhancement systems rely on the performance of their basic arithmetical components, we study these most basic building blocks for improved performance. It can be shown that, when linear arithmetic is used, added images are always brighter than the originals, which can result in images that are too bright. When classical LIP arithmetic is used, added images are always darker than the originals, which can result in images that are overall too dark. As addition is a form of fusion, it is natural to want to combine images in a more meaningful fashion. Ideally, added images will be representative of the originals without unnaturally becoming too dark or too bright [1-4]. By optimizing these most basic image transforms, we will have improved enhancement.

Because the LIP model has been successfully used for image processing applications, one solution could be a Parameterized LIP (PLIP) model. These parameters allow for fine tuning of the model, giving the user greater control over the end result. By changing only the parameters, one is able to change the overall brightness and contrast in output images. Also, as the parameters can be problem dependent, one can modify the range depending on the amount of information to be fused, thus avoiding the loss-of-information problem while minimizing operational complexity and allowing it to be realized with cheaper hardware. The primary result of the training of this system for addition has already been seen in [5-7].

The inclusion of parameters alone may not completely solve the image processing arithmetic limitations, though, namely the loss of information and the need for a more meaningful image fusion. To address these limitations, we propose that an extra constraint be added to the PLIP system: a fifth requirement for an image processing framework. The model must not damage either signal. In essence, when a visually "good" image is added to another visually "good" image, the result must also be "good" [2].
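The PLIP addition discussed above can be sketched as follows. This is a minimal illustration, assuming the commonly cited form in which an image f is first mapped to its gray tone g = γ(M) − f, and two gray tones are added as g₁ ⊕ g₂ = g₁ + g₂ − g₁g₂/γ(M); the specific pixel values are illustrative, not taken from the paper's test images.

```python
import numpy as np

# Gray-tone mapping: g = gamma - f, assuming the standard PLIP form.
def gray_tone(f, gamma):
    """Map pixel intensities f to PLIP gray tones."""
    return gamma - f

def plip_add(g1, g2, gamma):
    """PLIP addition of two gray-tone images: g1 + g2 - g1*g2/gamma."""
    return g1 + g2 - (g1 * g2) / gamma

def to_intensity(g, gamma):
    """Map gray tones back to pixel intensities."""
    return gamma - g

# With gamma(M) = 256 (the classical LIP case), the sum of two bright
# pixels stays below gamma instead of saturating, but the output
# intensities come out darker than either input, as noted in the text.
f1 = np.array([100.0, 200.0])
f2 = np.array([120.0, 180.0])
gamma = 256.0
g = plip_add(gray_tone(f1, gamma), gray_tone(f2, gamma), gamma)
out = to_intensity(g, gamma)
```

Raising γ(M) above 256 moves the result toward ordinary linear addition, which is what makes the parameter a useful brightness control.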
This is of particular importance, for example, when receiving information from two sensors which must be fused somehow. For this case, the resulting image should appear to have the second road blocked off by the boulders. This also demonstrates the previously mentioned limitation of LIP arithmetic wherein some output images can be visually damaged; the images are too dark and do not appear natural. While it is consistent that the resulting images should be brighter when linear addition is used and darker when classical LIP is used, practically these results can be improved upon. Although classical addition tends to give results which are characteristically too bright and LIP addition gives results which are characteristically too dark, both cases result in visually pleasing and representative images with appropriate PLIP parameters.

II. METHODS AND MATERIALS
First, the need for a trained PLIP model in image enhancement is demonstrated. PLIP arithmetic is more relevant to the image formation model: the Human Visual System (HVS) is incredibly variable from one person to another, and even under different conditions for the same person. Parameterizing allows for this "personalization" while maintaining the familiar property that, if a visually "fine" image is added to another visually "fine" image, the result should also be "fine." By ensuring a visually pleasing result, this should help to improve image enhancement performance.

Similar training methods have been introduced in the past and used for a number of applications. The work of Ivakhnenko demonstrates the use of a polynomial description of complex systems, and he presents methods for the tuning of parameters to train the system for any number of uses by means of an iterative regression technique based on mean square error [7]. Another work details new methods for the training of Recurrent Neural Networks using multi-objective algorithms and mean square error [4]. These methods, however, have the benefit of full training data; i.e., there are well-defined inputs and the correct answer is known a priori. For the enhancement problem, the input is set but there is no a priori knowledge of the optimal enhanced image. For a training problem such as this, experts' judgmental information may have to be used at certain stages [1]. In general, the best parameters are algorithm dependent.

In this paper, three methods of training the system will be focused on in order to determine the best parameters for an application. The first method is based on Mean-Squared Error (MSE) measurements, which will prove important when one considers systems of differing precision, such as 16 bits, 32 bits, 64 bits, etc. [2]. The second is based on the image enhancement measure, the EMEE,
which is a quantitative evaluation metric. This can be used in place of subjective human evaluations, giving more consistent results [2,3]. The third method is based on visual assessment of enhanced images to determine which are most visually pleasing for a human observer. For an imaging system which would combine many pixel values to arrive at one output value, such as a low-pass filter or an edge detector, values can quickly go to saturation and information can be lost.
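The EMEE score referenced above can be sketched as follows. This assumes the commonly cited block-based form, EMEE(α) = mean over blocks of α·(Imax/Imin)^α·ln(Imax/Imin); the block size, α value, and eps guard against division by zero are illustrative choices, not values specified in the paper.

```python
import numpy as np

def emee(image, block=8, alpha=0.5, eps=1e-6):
    """Average contrast-entropy score over non-overlapping blocks.

    Assumed form: alpha * (Imax/Imin)^alpha * ln(Imax/Imin) per block.
    """
    h, w = image.shape
    scores = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            b = image[i:i + block, j:j + block].astype(float)
            ratio = (b.max() + eps) / (b.min() + eps)
            scores.append(alpha * ratio ** alpha * np.log(ratio))
    return float(np.mean(scores))

# A flat image has no block contrast, so its score is near zero; a
# checkerboard has maximal block contrast and scores much higher.
flat = np.full((32, 32), 100.0)
checker = np.indices((32, 32)).sum(0) % 2 * 255.0
```

Because the score is computed per block rather than globally, it rewards local contrast, which is why it serves as a stand-in for human visual assessment.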
III. RESULTS OF MINIMIZING LOSS OF INFORMATION USING MSE
In this section, the PLIP system is first trained for addition, subtraction, and multiplication to minimize this loss of information. To accomplish this, the best values for γ(M), k(M), and λ(M) are determined for the general case by attempting to maximize the information in the result of the operations, thus minimizing information loss. To measure this, two images are added, subtracted, or multiplied using PLIP arithmetic. This is performed using both standard 64-bit double-precision floating-point arithmetic and a 15-bit floating-point approximation, and the difference is measured using the mean squared error (MSE). As this measures the energy in the lowest-order bits, the MSE is multiplied by a constant to simulate a left shift, so these low-order bits are made to be higher-order. Finally, the MSE is plotted against γ(M), k(M), or λ(M).

As the information in the fractional portion is important in PLIP arithmetic, it is important to maximize this information. The process described quantifies the energy in these lowest-order bits by comparing the result of high-precision methods to the result of a low-precision approximation. By selecting the values corresponding to the maximum MSE, it is possible to find the parameter values for which there is the greatest information in the output image. This is most useful when one considers the expanded range of newer imaging systems, for example medical images using 16 bits, where the increased information would be utilized. This MSE test can also be useful when downgrading to a lesser system, where one would instead want to minimize the MSE. This study was performed for many different combinations of images using the three PLIP arithmetical operations. The data is shown in Figures 1 and 2.
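The sweep described above can be sketched as follows. The 15-bit float is simulated here by truncating the double's mantissa to 10 bits, which is a hypothetical stand-in since the paper does not specify the exact low-precision format; the random test images and the left-shift constant are likewise illustrative.

```python
import numpy as np

def truncate_mantissa(x, bits=10):
    """Round x to `bits` mantissa bits, emulating a low-precision float."""
    m, e = np.frexp(x)
    return np.ldexp(np.round(m * 2 ** bits) / 2 ** bits, e)

def plip_add(g1, g2, gamma):
    """PLIP addition on gray tones, assuming g1 + g2 - g1*g2/gamma."""
    return g1 + g2 - g1 * g2 / gamma

def mse_for_gamma(f1, f2, gamma, shift=2 ** 16):
    """MSE between the double-precision and truncated PLIP sums."""
    g1, g2 = gamma - f1, gamma - f2
    hi = plip_add(g1, g2, gamma)
    lo = truncate_mantissa(
        plip_add(truncate_mantissa(g1), truncate_mantissa(g2), gamma))
    # Scale up the low-order error, as in the text's simulated left shift.
    return shift * float(np.mean((hi - lo) ** 2))

# Sweep gamma(M) over values above the 8-bit pixel range and keep the
# value that maximizes the (shifted) MSE, i.e. the information in the
# lowest-order bits.
rng = np.random.default_rng(0)
f1 = rng.integers(0, 256, (32, 32)).astype(float)
f2 = rng.integers(0, 256, (32, 32)).astype(float)
sweep = {g: mse_for_gamma(f1, f2, float(g)) for g in range(300, 2050, 2)}
best_gamma = max(sweep, key=sweep.get)
```

With real image pairs rather than random noise, the paper reports that this curve peaks sharply at γ(M) = 1026; the sketch only reproduces the procedure, not that specific result.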
Fig. 1: Training the γ(M) value for PLIP addition. (a) Moon image, (b) Clock image, (c) Linear addition, (d) LIP addition, γ(M) = 256, (e) PLIP addition, γ(M) = 1026, (f) MSE vs. γ(M) for PLIP addition using a standard double and a 15-bit floating point, with a peak at γ(M) = 1026. Using the maximum value from the graph in (f) yields the best visual result, as shown in (e).
The result of this experiment for addition, shown in the graph in Figure 1(f), shows several peaks; however, by far the largest peak occurs at γ(M) = 1026. Even though the MSE values are small compared to the pixel intensity values, the goal is not to find statistical differences between the output image and an approximation, but to find a point of interest amongst the possible parameter values. The results for subtraction and multiplication show the same large peak at k(M), λ(M) = 1026. These results, including the same peaks with similar relative sizes, were found in all simulations. After investigating using the values from the peaks and other values for γ(M), k(M), and λ(M), it was determined that this is the best value for all three of these parameters in the general case. Another interesting note is that all of the local minima correspond to values of 2^n, with a large peak directly following. This suggests that a possible better function to minimize information loss for image arithmetic could be γ(M) = 2^k, k = 1, 2, 3, …

To train the parameters γ, k, and λ with the EMEE, two methods are used: the different atomic operations can be tested individually, and PLIP-based enhancement algorithms can be tested as an overall system. These studies will be performed using several different values of the PLIP parameters, comparing the resulting images. First, we examine the improved performance obtained by changing only the k(M) value used to calculate the grey tone function, g(i, j), for each image. For this example, γ(M) = 512 and k(M) is tested as the max of the
two images separately, the max of the two images collectively, k(M) = 255, and k(M) = 300. The major difference is in the background; by using the maximum value of the two images collectively, the background in the Truck image is given higher values, and this helps to hide some of the ripples which can be seen in the image.
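The four k(M) choices compared above can be sketched as follows, assuming the grey tone function g(i, j) = k(M) − f(i, j). The Truck and Clock images are replaced by random stand-ins here, so only the relative effect of each choice is shown, not the paper's actual results.

```python
import numpy as np

# Hypothetical stand-ins for the paper's Truck and Clock test images.
rng = np.random.default_rng(1)
truck = rng.integers(0, 200, (32, 32)).astype(float)
clock = rng.integers(0, 250, (32, 32)).astype(float)

# The four k(M) choices compared in the text, applied to each image.
choices = {
    "separate max":   (truck.max(), clock.max()),
    "collective max": (max(truck.max(), clock.max()),) * 2,
    "k(M) = 255":     (255.0, 255.0),
    "k(M) = 300":     (300.0, 300.0),
}

# Mean background grey tone under each choice: a larger k(M) lifts the
# grey-tone values, which is what hides the ripple artifacts in the
# Truck background.
means = {name: ((k1 - truck).mean(), (k2 - clock).mean())
         for name, (k1, k2) in choices.items()}
```

Using the collective maximum gives the image with the smaller dynamic range (here, the Truck stand-in) higher grey-tone values than its own separate maximum would, matching the background effect described above.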
IV. CONCLUSION
In this paper, two training systems for selecting PLIP parameters have been demonstrated. The first compares the MSE of a high-precision result to that of a low-precision approximation in order to minimize loss of information. The second uses EMEE scores to maximize visual appeal and further reduce information loss. It was shown that, for the general case of basic addition, subtraction, or multiplication of any two images, γ, k, and λ = 1026 and β = 2 are effective parameter values. It was also found that, for more specialized cases, it can be effective to use the training systems outlined here for a more application-specific PLIP. Further, the case where different parameter values are used was shown, demonstrating the potential practical application of data hiding.
REFERENCES
[1] S. Agaian, B. Silver, and K. Panetta, "Transform Coefficient Histogram-Based Image Enhancement Algorithms Using Contrast Entropy," IEEE Trans. Image Processing, 2007; 16(3): 751-758.
[2] M. Heath, S. Sarkar, T. Sanocki, and K. Bowyer, "Comparison of Edge Detectors: A Methodology and Initial Study," Computer Vision and Image Understanding, 1998; 69(1): 38-54.
[3] S. Agaian, K. Panetta, and A. M. Grigoryan, "A New Measure of Image Enhancement," in Proc. IASTED 2000 Int. Conf. Signal Processing & Communication, Marbella, Spain, 2000.
[4] M. K. Kundu and S. K. Pal, "Thresholding for Edge Detection Using Human Psychovisual Phenomena," Pattern Recognition Letters, 1986; 4(6): 433-441.
[5] S. S. Agaian, K. Panetta, and A. M. Grigoryan, "Transform-Based Image Enhancement with Performance Measure," IEEE Trans. Image Processing, 2001; 10(3): 367-381.
[6] H. S. Kim, et al., "An Anisotropic Diffusion Based on Diagonal Edges," in Proc. 9th Int. Conf. Advanced Communication Technology, 2007; 382-388.
[7] Y. Bao and H. Krim, "Smart Nonlinear Diffusion: A Probabilistic Approach," IEEE Trans. Pattern Analysis and Machine Intelligence, 2004; 26(1): 63-72.