Steganography is the art of hiding highly sensitive information in digital images, text, video, and audio. In this paper, the authors propose a frequency-domain steganography method operating in the ridgelet transform domain, exploiting the ability of the ridgelet transform to represent digital images with straight edges. In the embedding phase, the proposed hybrid edge detector acts as a pre-processing step to obtain the edge image from the cover image; the edge image is then partitioned into several blocks so that each block contains approximately straight edges, and the ridgelet transform is applied to each block. The most significant gradient vectors (or significant edges) are then selected to embed the secret data. Because the secret data is hidden in the significant gradient vectors, the imperceptibility of the stego image is increased, and the hybrid edge detector enlarges the set of edge pixels, which increases the embedding capacity. Experimental results demonstrate that the peak signal-to-noise ratio (PSNR) of the stego image generated by this method versus the cover image is guaranteed to be above 49 dB, which is much higher than that of the data-hiding techniques reported in the literature.

Data hiding is of particular importance to information security, since it protects a secret message from unauthorised persons. With the enormous growth of the internet, security concerns such as unauthorised modification are also increasing, so more security is needed while transmitting information from the transmitter to the receiver. To protect information, many data-hiding techniques such as cryptography, watermarking, and steganography have been used. Among these three techniques, steganography plays a major role in data security. Steganography is the art of hiding information in cover images, and provides forward and backward compatibility by hiding the information in cover images1. Steganography can hide data in either the spatial domain or the frequency domain.

Spatial-domain steganography is a simple technique in which the secret information modifies the cover image directly in the spatial domain, typically at the level of the least significant bit (LSB). Chan and Cheng2 proposed a simple LSB substitution method using an optimal pixel adjustment process. To resist statistical steganalysis, Lou and Hu3 proposed a reversible histogram-transformation-function-based LSB steganography technique. Security is the major issue in the spatial domain, so channel selection criteria were proposed by Zhong4 to increase security. In the frequency domain, the information is hidden in the transform coefficients: the spatial-domain cover image is transformed into the frequency domain using the discrete cosine transform (DCT), the ridgelet transform, etc., and the information is hidden in the transform coefficients. Sajedi and Jamzad5 hid the information in non-zero DCT coefficients based on embedding capacity. Information can also be hidden in colour images using the DCT for data security6.
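For concreteness, plain LSB substitution of the kind these spatial-domain methods build on can be sketched as follows. This is a minimal illustration only, not the optimal-pixel-adjustment method of Chan and Cheng2; the function names are ours.

```python
import numpy as np

def lsb_embed(cover, bits):
    """Overwrite the least significant bit of the first len(bits)
    pixels of a grayscale cover image with the secret bits."""
    stego = cover.flatten().copy()
    for k, bit in enumerate(bits):
        stego[k] = (stego[k] & 0xFE) | bit   # clear the LSB, then set it
    return stego.reshape(cover.shape)

def lsb_extract(stego, n_bits):
    """Read the first n_bits least significant bits back out."""
    return [int(p) & 1 for p in stego.flatten()[:n_bits]]

cover = np.array([[120, 121], [122, 123]], dtype=np.uint8)
secret = [1, 0, 1, 1]
stego = lsb_embed(cover, secret)
```

Each pixel changes by at most one grey level, which is why LSB embedding is visually imperceptible yet, as noted above, vulnerable to statistical steganalysis.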

The finite ridgelet transform is an orthonormal version of the ridgelet transform that represents a digital image with line singularities7. Liang8, et al. computed the just-noticeable difference distortion in the ridgelet coefficients to account for contrast sensitivity in the ridgelet transform. In another work, Liang9, et al. proposed a noise visibility function to increase the embedding capacity by controlling the noise in the ridgelet coefficients. Zhang10, et al. embedded the secret data in the middle ridgelet sub-bands of the most significant directions. Kalantari11, et al. proposed a host-distribution-independent decoder to extract the secret data efficiently.

As a novel contribution of this work, the authors propose a hybrid edge detector to obtain the edge image, and hide the secret data in the most significant gradient vectors after scrambling the ridgelet coefficients to avoid the bit-ordering error problem. The ridgelet transform exploits directional sensitivity to represent a digital image with straight edges. The edges of an image can hide more secret data without degrading the quality of the image, as distortion at edges also cannot be detected easily by the human eye. This work uses this advantage of the ridgelet transform as a key point, which increases the embedding capacity and the quality of the stego image when compared to other related works. As a pre-processing step, the cover image is passed through the hybrid edge detector to extract the edge image. The edge image is then partitioned into several blocks and the ridgelet transform is applied to each block. The most significant gradient vectors are selected to hide the secret data, and finally the inverse ridgelet transform is applied to obtain the stego image.

2.1 Continuous Ridgelet Transform

The continuous ridgelet transform was proposed by Candes and Donoho12. Given a 1-D wavelet Ψ(.), the ridgelet element Ψ(a,b,θ)(u, v) is given by,

${\text{ψ}}_{\left(a,b,\text{θ}\right)}\left(u,v\right)={a}^{-1/2}\text{ψ}\left(\frac{u\mathrm{cos}\text{\hspace{0.17em}}\text{θ}+v\mathrm{sin}\text{\hspace{0.17em}}\text{θ}-b}{a}\right)$(1)

where (θ, b) are line parameters and a > 0 is a scale parameter. A ridgelet is constant along the lines u cos θ + v sin θ = b. The continuous ridgelet transform (CRT) of a given function C (the input image) in R2 is defined as,

$CR{T}_{C}\left(a,b,\text{θ}\right)=\underset{{R}^{2}}{\text{∫}}{\text{ψ}}_{\left(a,b,\text{θ}\right)}\left(u,v\right)C\left(u,v\right)dudv$(2)

Its inverse formula is given by Starck13, et al.

$C\left(u,v\right)=\underset{0}{\overset{2\text{Π}}{\text{∫}}}\underset{-\infty }{\overset{\infty }{\text{∫}}}\underset{0}{\overset{\infty }{\text{∫}}}CR{T}_{C}\left(a,b,\text{θ}\right){\text{ψ}}_{\left(a,b,\text{θ}\right)}\left(u,v\right)\frac{da}{{a}^{3}}db\frac{d\text{θ}}{4\text{Π}}$(3)

From Eqns (2) and (3), the CRT can be calculated by applying the wavelet transform in the radon domain. The radon transform is given by

${R}_{C}\left(\text{θ,}t\right)=\underset{{R}^{2}}{\text{∫}}C\left(u,v\right)\text{δ(}u\mathrm{cos}\text{\hspace{0.17em}}\text{θ}+v\mathrm{sin}\text{\hspace{0.17em}}\text{θ}-t\text{)}dudv$(4)

where δ is the Dirac delta function. To obtain the ridgelet transform (RT), the 1-D wavelet transform is applied to the radon transform, which gives

$CR{T}_{C}\left(a,b,\text{θ}\right)=\underset{R}{\text{∫}}{R}_{C}\left(\text{θ},t\right){a}^{-1/2}\text{ψ}\left(\frac{t-b}{a}\right)dt$(5)

where θ is held constant and t varies. To obtain a fast RT, the Fourier domain is introduced: the 2-D FFT is applied to the input image, the 1-D inverse FFT is applied along each radial direction to obtain the radon transform, and finally the 1-D wavelet transform is applied to the radon projections to obtain the ridgelet coefficients.

2.2 Finite Ridgelet Transform

The finite ridgelet transform was developed from the finite radon transform, as shown in Fig. 1. The finite radon transform (FRAT) of a real function C defined on the 2-D grid ${Z}_{p}^{2}$ is given by

${r}_{d}\left(l\right)=FRA{T}_{C}\left(d,l\right)=\frac{1}{\sqrt{p}}\sum _{\left(x,y\right)\in {L}_{d,l}}C\left(x,y\right)$(6)

here, C(x, y) is the pixel value at position (x, y) and p is the Fermat prime number. L_{d,l}, given in Eqn. (7), is the set of points that form a line on the 2-D grid ${Z}_{p}^{2}$, where Zp = {0, 1, 2, 3, 4, ..., p−1}.

${L}_{d,l}=\left\{\begin{array}{l}\left\{\left(x,y\right):y=dx+l\left(\mathrm{mod}\text{\hspace{0.17em}}p\right),x\in {Z}_{p}\right\},\text{\hspace{0.17em}}0\le d\le p-1\\ \left\{\left(l,y\right):y\in {Z}_{p}\right\},\text{\hspace{0.17em}}d=p\end{array}\right.$(7)

Thus, the finite RT is obtained by performing the 1-D wavelet transform along each direction d of the FRAT.
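A direct sketch of the FRAT of Eqn (6), with the two cases of Eqn (7) handled separately, can illustrate the construction (our own minimal implementation; p must be prime):

```python
import numpy as np

def frat(C):
    """Finite radon transform of a p×p image, Eqn (6): each direction
    d < p sums pixels along the modular line y = d*x + l (mod p);
    the extra direction d = p sums the vertical lines x = l."""
    p = C.shape[0]
    R = np.zeros((p + 1, p))
    for d in range(p):
        for l in range(p):
            R[d, l] = sum(C[x, (d * x + l) % p] for x in range(p))
    for l in range(p):
        R[p, l] = C[l, :].sum()          # vertical lines x = l
    return R / np.sqrt(p)

C = np.ones((3, 3))                      # p = 3 toy image
R = frat(C)
```

Every modular line of a constant image sums to the same value, so all p + 1 directions of the toy example agree; the finite RT would then apply a 1-D wavelet transform along each row of R.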

Figure 1. Finite ridgelet transform.

As a pre-processing step, a hybrid edge detector is proposed to extract the edge image. The edges of an image can hide more secret data without degrading the quality of the image, as distortion at edges also cannot be detected easily by the human eye14. The hybrid edge detector is a combination of the Prewitt edge detector and the fuzzy edge detector, and extracts a larger set of edges in which a larger amount of secret data can be hidden. The Prewitt, fuzzy, and hybrid edge detectors are briefly explained in the following sub-sections.

3.1 Prewitt Edge Detector

The Prewitt edge detector is a discrete differentiation operator which derives the gradient vector of the digital image in the horizontal and vertical directions; its computational complexity is low. The Prewitt operator consists of two 3×3 kernels that approximate the derivatives for horizontal and vertical changes. For a cover image C(x, y), the horizontal and vertical derivative approximations Gx and Gy are given by,

${G}_{x}=\left[\begin{array}{ccc}-1& 0& +1\\ -1& 0& +1\\ -1& 0& +1\end{array}\right]*C\left(x,y\right)\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{and}$

${G}_{y}=\left[\begin{array}{ccc}+1& +1& +1\\ 0& 0& 0\\ -1& -1& -1\end{array}\right]*C\left(x,y\right)$(8)

where * denotes the 2-D convolution operation. The x-coordinate increases to the right and the y-coordinate increases downwards. The gradient magnitude and gradient direction at each pixel in the image are given by,

$G=\sqrt{{G}_{x}^{2}+{G}_{y}^{2}}$(9)

$\text{θ}=a\mathrm{tan}\text{\hspace{0.17em}}2\left({G}_{y},{G}_{x}\right)$(10)
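The kernels of Eqn (8) and the magnitude and direction of Eqns (9)-(10) can be sketched directly. This is a pure-NumPy 'valid' correlation sketch of our own; border handling and thresholding are omitted.

```python
import numpy as np

def prewitt(C):
    """Gradient magnitude (Eqn 9) and direction (Eqn 10) via the two
    Prewitt kernels of Eqn (8), computed as a 'valid' correlation."""
    Kx = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], dtype=float)
    Ky = np.array([[1, 1, 1], [0, 0, 0], [-1, -1, -1]], dtype=float)
    h, w = C.shape
    Gx = np.zeros((h - 2, w - 2))
    Gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = C[i:i + 3, j:j + 3]
            Gx[i, j] = (patch * Kx).sum()    # horizontal derivative
            Gy[i, j] = (patch * Ky).sum()    # vertical derivative
    G = np.sqrt(Gx ** 2 + Gy ** 2)           # Eqn (9)
    theta = np.arctan2(Gy, Gx)               # Eqn (10)
    return G, theta

# a vertical step edge: the horizontal gradient fires, the vertical is zero
C = np.array([[0, 0, 1, 1]] * 4, dtype=float)
G, theta = prewitt(C)
```

On the step-edge example every interior pixel has Gx = 3 and Gy = 0, so the gradient direction is 0 everywhere, i.e. the edge normal points along the x-axis.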

3.2 Fuzzy Edge Detector

A typical implementation of the fuzzy edge detector is briefly explained14 as follows. Consider a cover image C of dimension B×L. The first step of the fuzzy edge detector is to obtain the membership grade value μij at position (i, j). The cover image C is mapped into an array F of fuzzy singletons μij ∈ [0,1], with i ∈ [1, B] and j ∈ [1, L]. The array F and μij are given by

$F=\underset{i=1}{\overset{B}{\cup }}\underset{j=1}{\overset{L}{\cup }}{\text{μ}}_{ij}$(11)

${\text{μ}}_{ij}=\frac{{c}_{ij}}{C}$(12)

where μij is the membership grade and cij is the grey value at (i, j) normalised by C, the largest grey-scale value in the image. The next step is to determine the proper membership function $\overline{{\text{μ}}_{ij}}$ for each pixel cij at position (i, j) over a spatial window15 of size b×b. The cover image is partitioned into overlapping b×b blocks. Let B(i, j) be a b×b window; the proper membership function is then given by,

$\overline{{\text{μ}}_{ij}}=\mathrm{min}\left(1,{\left(\frac{\text{τ}}{b}\sum _{u}\sum _{v}\mathrm{min}{\left({\text{μ}}_{uv},1-{\text{μ}}_{uv}\right)}^{p}\right)}^{1/p}\right)$(13)

Next, the membership grade μij is calculated for each b×b spatial window:

${\text{μ}}_{ij}=\frac{\left\{\mathrm{max}\left({C}_{uv}\right)-\mathrm{min}\left({C}_{uv}\right)/u,v\in \left[1,b\right]\right\}}{C}$(14)

Figure 2. The edge images generated by the prewitt, fuzzy and hybrid edge detector.

Here, the values of $\text{τ}$ and b are taken as 9 and 3, respectively. The final step is to extract the edge image. Let F′ be the image which contains all the edges of F; it is also an array of fuzzy singletons $\overline{{\text{μ}}_{ij}}$,

${F}^{\text{'}}=\underset{i=1}{\overset{I}{\cup }}\underset{j=1}{\overset{J}{\cup }}\overline{{\text{μ}}_{ij}}$(15)
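The local-contrast grade of Eqn (14) can be sketched as follows. This is our own simplified reading of the step: each window's grade is its grey-level range normalised by the largest grey value in the image; border pixels are skipped.

```python
import numpy as np

def fuzzy_edge_membership(C, b=3):
    """Eqn (14) sketch: for each b×b window, the membership grade is
    (max - min) / c_max, where c_max is the largest grey level in the
    image. High grades mark high local contrast, i.e. likely edges."""
    c_max = float(C.max())
    h, w = C.shape
    r = b // 2
    mu = np.zeros((h, w))
    for i in range(r, h - r):
        for j in range(r, w - r):
            win = C[i - r:i + r + 1, j - r:j + r + 1]
            mu[i, j] = (win.max() - win.min()) / c_max
    return mu

# a vertical contrast boundary between grey levels 0 and 100
C = np.zeros((5, 5))
C[:, 3:] = 100
mu = fuzzy_edge_membership(C)
```

Windows that straddle the boundary get grade 1 while flat regions get grade 0, so thresholding mu yields the binary fuzzy edge map used in Eqn (15).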

3.3 Hybrid Edge Detector

The hybrid edge detector is constructed as a combination of the Prewitt and fuzzy edge detectors. Let the grey-scale cover image be C(x, y), the edge image extracted by the Prewitt edge detector be Cp(x, y), the edge image extracted by the fuzzy edge detector be Cf(x, y), and the edge image extracted by the hybrid edge detector be C1(x, y). C1(x, y) is generated by performing an OR operation between Cp(x, y) and Cf(x, y). Hybrid edge detection increases the number of edge pixels and clearly identifies the object boundaries in the cover image. Figure 2 compares the performance of the Prewitt, fuzzy, and hybrid edge detectors and shows that the number of edge pixels generated by the hybrid edge detector is larger than that of the individual Prewitt and fuzzy edge detectors. The hybrid edge detector generates the gradient vector used in this work.

As the first step, the cover image C(x, y) is pre-processed by the hybrid edge detector to extract the edge image, as shown in Fig. 3. C1(x, y), the edge image extracted by the hybrid edge detector, is generated by performing an OR operation between Cp(x, y) and Cf(x, y), as given in Eqn. (16).

${C}^{1}\left(x,y\right)={C}^{p}\left(x,y\right)|{C}^{f}\left(x,y\right)$(16)

where | denotes the OR operation.
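Eqn (16) is a plain pixel-wise union of the two binary edge maps, e.g.:

```python
import numpy as np

def hybrid_edges(Cp, Cf):
    """Eqn (16): OR the Prewitt and fuzzy edge maps so the hybrid map
    keeps every pixel flagged by either detector."""
    return (Cp.astype(bool) | Cf.astype(bool)).astype(np.uint8)

Cp = np.array([[1, 0], [0, 0]], dtype=np.uint8)   # Prewitt edge map
Cf = np.array([[0, 1], [0, 0]], dtype=np.uint8)   # fuzzy edge map
C1 = hybrid_edges(Cp, Cf)
```

By construction the hybrid map never has fewer edge pixels than either detector alone, which is exactly the capacity argument made above.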

As shown in Fig. 4, once the edge image C1(x, y) is obtained, it is partitioned into small blocks so that curved edges appear locally as straight edges, and the RT is applied to each block to obtain the ridgelet coefficients. The ridgelet coefficients carry the significant gradient vectors generated by the hybrid edge detector. To avoid the bit-ordering error problem, the positions of the gradient vectors (or ridgelet coefficients) at each scale are scrambled. After scrambling, the most significant gradient vectors are selected to hide the secret data. Finally, the inverse RT is performed to obtain the stego image.

Figure 3. Block diagram of proposed hybrid edge detector.

4.1 Embedding Phase

The embedding phase is explained in the following steps;

Step 1: Pre-process the cover image C(x, y) using hybrid edge detector to extract the edge image C1(x, y).

Step 2: The edge image C1(x, y) is partitioned into several m×m blocks (the number of blocks depends on the number of curved edges) so that the RT operates on straight edges:

$\begin{array}{cc}{C}^{1}\left(x,y\right)=\underset{m=1}{\overset{M}{\cup }}{C}_{m}^{1}\left(x,y\right),& 1\le x,y\le M\end{array}$(17)

Step 3: The RT is applied to each m×m block. The matrix I holds the FRIT coefficients of each block, as given in Eqn. (18). The ridgelet coefficients carry the significant gradient vectors generated by the hybrid edge detector.

$\begin{array}{cc}I=FRITcoef{f}_{{C}_{m}^{1}\left(x,y\right)},& 1\le m\le M\end{array}$(18)

Step 4: The secret data used in this work is a QR code. To hide the secret data, the most significant gradient vectors are selected. A simple selection method is to sort the gradient vectors in descending order and take the largest ones, but sorting introduces the bit-ordering problem, which leads to uneven embedding capacity. To avoid this problem, the positions of the gradient vectors in each block are scrambled so that the gradient vectors are distributed uniformly over the frequency domain. In this work, the affine modular transformation16 scrambling method is used, which is given by

$\left(\begin{array}{c}{x}^{\text{'}}\\ {y}^{\text{'}}\end{array}\right)=\left[\left(\begin{array}{ll}e\hfill & f\hfill \\ g\hfill & h\hfill \end{array}\right)\left(\begin{array}{c}x\\ y\end{array}\right)+\left(\begin{array}{c}i\\ j\end{array}\right)\right]\mathrm{mod}\left(N\right)$(19)
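The scrambling of Eqn (19) is a permutation of grid positions whenever the 2×2 matrix has a determinant coprime with N. The matrix and offset below are illustrative values of our own, not the parameters used in the paper:

```python
def affine_scramble(x, y, N, A=((1, 1), (1, 2)), t=(0, 0)):
    """Eqn (19): (x', y') = (A @ (x, y) + t) mod N. With det(A)
    coprime to N the map permutes the N×N grid, so it is invertible
    and descrambling restores every coefficient's position."""
    (e, f), (g, h) = A
    i, j = t
    return (e * x + f * y + i) % N, (g * x + h * y + j) % N

N = 8
# det(A) = 1*2 - 1*1 = 1, coprime with N: the map hits every position once
targets = {affine_scramble(x, y, N) for x in range(N) for y in range(N)}
```

Because the map is a bijection, applying the inverse affine transformation in Step 10 (descrambling) recovers the original ordering exactly.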

Step 5: After scrambling the gradient vectors, the direction with the highest energy (or maximum variance) in each m×m block is selected as the most significant direction of the gradient vector (i.e., the significant edges) to hide the secret data, where p is the Fermat prime number:

${C}_{m}^{1}\left(x,y\right)=\underset{v}{\mathrm{max}}\left(\mathrm{var}\left(FRITcoef{f}_{{C}_{m}^{1}\left(x,y\right)}\left(1:p+1,v\right)\right)\right)$(20)

Step 6: The selected directions are placed in the new matrix Rm for the mth block.

$\begin{array}{l}{R}_{m}=\left[\begin{array}{l}FRITcoef{f}_{{C}_{m}^{1}\left(x,y\right)}\left[1,{C}_{m}^{1}\left(x,y\right)\right],FRITcoef{f}_{{C}_{m}^{1}\left(x,y\right)}\left[2,{C}_{m}^{1}\left(x,y\right)\right],\hfill \\ .....FRITcoe{f}_{{C}_{m}^{1}\left(x,y\right)}\left[p+1,{C}_{m}^{1}\left(x,y\right)\right]\hfill \end{array}\right]\hfill \\ \text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}\text{\hspace{0.17em}}1\le m\le M\hfill \end{array}$(21)

Step 7: The selected columns are placed in the matrix R.

$\begin{array}{l}R=\left[{R}_{1},{R}_{2},{R}_{3}........{R}_{m}\right]=\left\{{R}_{xy}\right\}\text{\hspace{0.17em}}\text{with}\hfill \\ x=1,2,3...N\text{\hspace{0.17em}}\text{and}\text{\hspace{0.17em}}y=1,2,3...M\hfill \end{array}$(22)

Step 8: The secret data S = {S1, S2, ..., Sm} is also scrambled, using the same scrambling method, to match the ridgelet coefficients.

Step 9: The secret data S = {S1, S2, ..., Sm} is embedded according to the following law,

${R}^{S}\left(x,y\right)={R}_{m}+\text{β}\text{\hspace{0.17em}}S{R}_{m}$(23)

where β is the scaling factor; the value used in this work is β = 0.30.
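The embedding law of Eqn (23) perturbs each selected coefficient in proportion to its own magnitude. A small numeric sketch, assuming the secret bits are encoded in bipolar ±1 form (an assumption on the data encoding, not stated in the paper):

```python
import numpy as np

beta = 0.30                        # scaling factor used in the paper

def embed(R, S, beta=beta):
    """Eqn (23): R^S = R + beta * S * R, element-wise, so each
    coefficient moves by a fraction beta of its own magnitude."""
    return R + beta * S * R

R = np.array([4.0, -2.0, 1.0])     # selected ridgelet coefficients
S = np.array([1, -1, 1])           # secret bits in bipolar form
Rs = embed(R, S)
```

Scaling the perturbation by the coefficient itself is what keeps large (significant) coefficients carrying most of the payload, consistent with the imperceptibility argument above.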

Step 10: Descramble the gradient vectors.

Step 11: Finally, apply the inverse RT to obtain the restored edge image.

An additional post-processing step is included to obtain the stego image from the restored edge image. The post-processing involves two steps. First, the original edge image is subtracted from the cover image to obtain the subtracted image. Second, the restored edge image obtained after applying the inverse RT is scaled by a factor of 0.1 and added to the subtracted image to produce the stego image.

Figure 4. Block diagram of proposed steganography system.

4.2 Extraction Phase

Given a (possibly corrupted) stego image, the edge image is extracted using the hybrid edge detector. The edge image is then partitioned into several blocks and the ridgelet transform is applied to each block. The best direction is selected from the ridgelet coefficients, and the selected coefficients are placed in a matrix R*m. Descrambling is then performed to restore each coefficient to its original position. A correlation detector W, which gives the average correlation between each row of R* and the secret data, is obtained by

$W=\frac{1}{N}\sum _{x=1}^{N}\left(\frac{1}{m}\sum _{y=1}^{m}{R}_{xy}^{{}^{*}}{S}_{y}\right)$(24)

Thus, the secret data are extracted using Eqn. (25). The cover image is restored by applying the inverse RT.

${S}_{y}^{{}^{*}}=W{S}_{y}$(25)
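The detector of Eqns (24)-(25) reduces to an average row correlation. A noise-free sketch, again assuming a bipolar ±1 secret:

```python
import numpy as np

def correlation_detect(R_star, S):
    """Eqn (24): W is the average over the N rows of the mean product
    of the recovered coefficients with the secret data; Eqn (25) then
    scales the secret by W, so sign(W*S) recovers bipolar bits
    whenever W > 0."""
    N, m = R_star.shape
    W = (R_star @ S).mean() / m    # (1/N) sum_x (1/m) sum_y R*_xy S_y
    return W, W * S

S = np.array([1, -1, 1, -1])
R_star = np.outer([2.0, 4.0], S)   # two rows positively correlated with S
W, S_rec = correlation_detect(R_star, S)
```

In this idealised case W is strictly positive, so the sign pattern of W·S matches the embedded secret exactly; distortion of the stego image shrinks W rather than flipping it, which is what gives the detector its robustness.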

Many simulations were performed to evaluate the performance of the proposed scheme. The program, coded in MATLAB R2010, was run on a personal computer with the Microsoft Windows 7 operating system. Three grey-level images are used in this experiment: two commonly used images, 'Barbara' and 'Lena', and one medical image, 'Brain'. The stego images obtained after hiding the secret data are shown in Fig. 5. The performance measures used in this work are the peak signal-to-noise ratio (PSNR) and embedding capacity before extraction, and the tamper assessment factor (TAF) and normalised absolute error (NAE) after extraction. Let C(x, y) be the cover image, RS(x, y) the stego image, S(x, y) the secret data, S′(x, y) the retrieved secret data, and C′(x, y) the restored cover image, where x and y denote the row and column. PSNR is computed between the cover image and the stego image. TAF, which determines the credibility of image authentication, is computed between the secret data and the retrieved secret data. NAE is computed between the cover image and the restored cover image. TAF and NAE are defined in Eqns (26) and (27). The PSNR, embedding capacity, TAF, and NAE values of the experimental results, and their comparison with other works, are shown in Tables 1 and 2. Table 1 shows that the proposed steganography system gives a significant improvement with respect to the imperceptibility of the stego image.

$TAF=\frac{1}{{m}^{*}n}\sum _{i=1}^{m}\sum _{j=1}^{n}\left[S\left(x,y\right)\otimes {S}^{\text{'}}\left(x,y\right)\right]$(26)

Figure 5. Stego images of (a) Lena, (b) Barbara, (c) Brain.

$NAE=\frac{\sum _{i=1}^{m}\sum _{j=1}^{n}|C\left(x,y\right)-{C}^{\text{'}}\left(x,y\right)|}{\sum _{i=1}^{m}\sum _{j=1}^{n}|C\left(x,y\right)|}$(27)
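The three quality measures can be sketched directly from their definitions (PSNR with a 255 grey-level peak; TAF per Eqn (26); NAE per Eqn (27)):

```python
import numpy as np

def psnr(cover, stego, peak=255.0):
    """Peak signal-to-noise ratio between cover and stego image."""
    mse = np.mean((cover.astype(float) - stego.astype(float)) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

def taf(S, S_rec):
    """Tamper assessment factor, Eqn (26): fraction of secret bits
    flipped (XOR) between the embedded and retrieved data."""
    return np.mean(np.bitwise_xor(S, S_rec))

def nae(C, C_rec):
    """Normalised absolute error, Eqn (27): total absolute pixel
    difference normalised by the cover image energy."""
    C = C.astype(float)
    C_rec = C_rec.astype(float)
    return np.abs(C - C_rec).sum() / np.abs(C).sum()

cover = np.full((4, 4), 100, dtype=np.uint8)
```

A uniform one-grey-level change already gives a PSNR of about 48.13 dB, which puts the paper's reported >49 dB figures in context: the average per-pixel distortion introduced is below one grey level.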

The original QR-coded secret data and the recovered QR-coded secret data are shown in Fig. 6.

Table 1. PSNR, Embedding capacity, TAF and NAE values in proposed method

Table 2. Comparison of PSNR values in proposed method and other related works

Figure 6. (a) Original QR code, (b) recovered QR code.

5.1 Robustness Analysis

The proposed method can withstand the Chi-square attack because the secret data is embedded in the most significant gradient vectors. This is demonstrated using the Chi-square steganalysis test program by Guillermito17. The test result for the stego image is shown in Fig. 7.

According to Guillermito, the program output is represented by two curves. The first is the 'red' curve, which is the result of the Chi-square test: if the 'red' curve is near one, the probability that randomly hidden secret data is present is high. The second is the 'green' curve, which reflects the average value of the LSBs. In Fig. 7, the 'green' curve shows that the average LSB value varies considerably, while the 'red' Chi-square curve stays flat at zero all along the image. The test therefore suggests that nothing is hidden in the stego image produced by the proposed algorithm, so an unauthorised person gets no clue to the embedded secret data.

Figure 7. Statistical attack using Chi-square analysis: Chi-square result of stego-Lena image.

The authors have proposed a frequency-domain steganography method operating in the ridgelet transform (RT) domain. To increase the embedding capacity and the quality of the stego image, a hybrid edge detector is proposed to extract a larger number of edge pixels. To operate the RT on straight edges, the edge image is partitioned into several blocks, depending on the number of curved edges, and the ridgelet transform is applied to each block. To solve the bit-ordering error problem, the positions of the significant edges are scrambled, which also avoids uneven embedding capacity. To embed the secret data in the most significant gradient vectors, the direction with the highest energy or maximum variance in each block is selected. Experimental results demonstrate that there is no visible difference between the cover and stego images. The proposed steganography system takes advantage of the hybrid edge detector, the RT characteristics, and the scrambling method, through which the embedding capacity and the quality of the stego image are increased. To further show the advantages of the proposed method, the authors compared this work with other related works. As a future enhancement, intelligent optimisation techniques could be used to enhance the embedding strategy.

1. Marvel, L.M.; Boncelet, C.G. & Retter, C.T. Spread spectrum image steganography. IEEE Trans. Image Proc., 1999, 8(8), 1075-1083. http://dx.doi.org/10.1109/83.777088

2. Chan, C.K. & Cheng, L.M. Hiding data in images by simple LSB substitution. Pattern Recognition, 2004, 37(3), 469-474. http://dx.doi.org/10.1016/j.patcog.2003.08.007

3. Lou, D.C. & Hu, C.H. LSB steganographic method based on reversible histogram transformation function for resisting statistical steganalysis. Info. Sci., 2012, 188, 346-358. http://dx.doi.org/10.1016/j.ins.2011.06.003

4. Zhong, Y.; Huang, F. & Zhang, D. New channel selection criterion for spatial domain steganography. Digital Forensics and Watermarking, 2013, 7809, 1-7. http://dx.doi.org/10.1007/978-3-642-40099-5_1

5. Sajedi, H. & Jamzad, M. Secure steganography based on embedding capacity. Int. J. Inf. Secur., 2009, 8(6), 433-445. http://dx.doi.org/10.1007/s10207-009-0089-y

6. Goswami, A.; Pal, D. & Ghoshal, N. Two stage color image steganography using DCT (TSCIS-DCT). In Proceedings of the International Conference on (FICTA), 2013, 755-763. http://dx.doi.org/10.1007/978-3-642-35314-7_86

7. Do, M.N. & Vetterli, M. The finite ridgelet transform for image representation. IEEE Trans. Image Proc., 2003, 12, 16-28. http://dx.doi.org/10.1109/TIP.2002.806252

8. Liang, X.; Zhihui, W. & Huizhong, W. Embedding image watermarks into local linear singularity coefficients in ridgelet domain. Lecture Notes in Computer Science. Berlin/Heidelberg: Springer, 2006, 119-127.

9. Liang, X.; Zhihui, W. & Huizhong, W. Ridgelet-based robust and perceptual watermarking for images. Int. J. Comput. Sci. Network Secur., 2006, 6, 194-201.

10. Zhang, Z.; Yu, H.; Zhang, J. & Zhang, X. Digital image watermark embedding and blind extracting in the ridgelet domain. J. Commun. Comput. USA, 2006, 3, 75-81.

11. Kalantari, N.K.; Ahadi, S.M. & Vafadoost, M. A robust image watermarking in the ridgelet domain using universally optimum decoder. IEEE Trans. Circuits Syst. Video Technol., 2010, 20, 396-406. http://dx.doi.org/10.1109/TCSVT.2009.2035842

12. Candes, E.J. & Donoho, D.L. Ridgelets: A key to higher dimensional intermittency. Phil. Trans. Royal Soc. London, 1999, 2495-2509. http://dx.doi.org/10.1098/rsta.1999.0444

13. Starck, J.L.; Candes, E.J. & Donoho, D.L. The curvelet transform for image denoising. IEEE Trans. Image Proc., 2002, 11, 670-684. http://dx.doi.org/10.1109/TIP.2002.1014998

14. Chen, W.J.; Chang, C.C. & Le, H.N.T. High payload steganography mechanism using hybrid edge detector. Expert Syst. Appl., 2010, 37(4), 3292-3301. http://dx.doi.org/10.1016/j.eswa.2009.09.050

15. Amarunnishad, T.M.; Govindan, V.K. & Mathew, A.T. Improving BTC image compression using a fuzzy complement edge operator. Signal Processing, 2008, 88(12), 2989-2997. http://dx.doi.org/10.1016/j.sigpro.2008.07.004

16. Zou, J.; Tie, X.; Ward, R.K. & Qi, D. Some novel image scrambling methods based on affine modular matrix transformation. J. Inf. Comput. Sci., 2005, 2, 223-227.

17. Guillermito. Chi-square Steganography Test program. http://www.guillermito2.net/stegano/tools/index.html

S. Uma Maheswari did the programming of GA and PSO with transforms using MATLAB, collected papers for the literature survey, prepared the rough draft of the manuscript, and carried out revisions as per the reviewer comments. D. Jude Hemanth framed the workflow of the entire paper, provided the theoretical explanation of the concepts of GA and PSO, debugged the errors in the code written by the co-author, refined the rough draft by editing (deleting/adding) some content, and carried out revisions as per the reviewer comments.