
Efficient payload communications for IoT-enabled ViSAR vehicles using discrete cosine transform-based quasi-sparse bit injection

Abstract

High-performance remote sensing payload communication is a vital problem in airborne and spaceborne surveillance systems. Among remote sensing imaging systems, video synthetic aperture radar (ViSAR) is a new technology that produces a large volume of imagery together with principal and managerial data, all of which must be compressed, aggregated, and communicated from a radar platform (or a network of radars) to a ground station over wireless links. In this paper, a new data aggregation technique is proposed for efficient payload transmission in a network of aerial ViSAR vehicles. The proposed method combines a recent interpolation-based data hiding (IBDH) technique with a visual data transformation based on the discrete cosine transform (DCT), and it outperforms the reference method in terms of data aggregation capability.

1 Introduction

Video synthetic aperture radar (ViSAR) is a new SAR imaging mode that generates video sequences [1, 2]. ViSAR has recently been used for aerial remote sensing with airborne radar platforms. Unlike conventional SAR sensors that capture still images, ViSAR sensors require a much higher communication data rate, so most currently implemented systems do not send their acquired data over wireless links. In fact, they have to store the data in memory and, after landing, the data is transferred physically to remote sensing surveillance centers for analysis. This shortfall has two causes. First, frame formation (like SAR image formation) is a relatively complicated and time-consuming procedure; since an imaging system in ViSAR mode has to generate many frames, for example 16–24 frames per second, this is a major challenge. Researchers working on ViSAR imaging techniques therefore focus on reducing the computational complexity while improving the frame acquisition quality; powerful computers, high-performance hardware implementations, and parallel programming can also speed up the formation process. Second, the video frames (including frames processed from raw data and related control and managerial information) are large and must be compressed and aggregated to become transferable over a low-bandwidth wireless link. Otherwise, ViSAR technology is restricted to non-real-time applications, whereas the main idea behind ViSAR is real-time monitoring and surveillance in remote sensing, smart cities, and civil applications at all times and in all weather (for instance, natural hazards and traffic control even in dark environments without any light source). Here, we do not work on efficient image/frame formation, which is a signal processing problem over raw radar data. Instead, we aggregate relevant managerial and control data and embed it into the video frames, exploiting specific features of SAR videos. This reduces the data size significantly and is in effect a step towards data compression.

Therefore, in order for ViSAR data to be communicated between two aerial radar platforms, or between an airborne imaging radar and a ground control station (which together can be viewed as a ViSAR sensor network), compression or aggregation techniques are needed to reduce the remote sensing data size. In detail, remote sensing data always includes payload information about geographic systems, control data, and so on, in addition to the main images and videos. Because of the low bandwidth of radar communication systems, there is no option but to apply lossy/lossless data aggregation techniques that integrate the payloads with the radar data (raw data or processed videos). On the other hand, sending compressed raw data is difficult and not effective enough for real-time systems, so our preference is to convert the raw data into formed video frames and then to compress and transmit the frames along with payloads aggregated into them. Consequently, the main aim of this research is ViSAR payload communication through data hiding-based aggregation. For integrating a general bit stream with video frames, a recently proposed watermarking scheme is selected as the reference method for embedding the bit stream into ViSAR frames. Although the selected reference method is powerful for quasi-sparse image data such as ViSAR frames, we wish to improve its embedding capacity while preserving the final imaging quality as much as possible. The reference method, described in [3], is an interpolation-based data hiding (IBDH) watermarking scheme built on an interpolator and error histogram computation [4, 5]. The core interpolator in our research is the same as in [3], but other interpolators can also be used; for example, the authors of [6] provide an efficient optimization algorithm for tree-based classification trees that could be adopted as a fast interpolator (see more about spatial interpolation of ViSAR frames and its pre-processing in [7, 8]). Many similar works on interpolation-based data embedding and histogram processing exist [9,10,11,12,13,14,15,16,17], and more information about interpolators can be found in [4, 18,19,20]. Our focus in this research is on histogram transformation using a decomposition transform; an additional histogram processing step like [5] is also utilized. We combine the reference method with the DCT to change the error histogram so that more suitable places in the frames are found for the hidden bits. The proposed technique can be used for lossless payload aggregation in IoT-enabled ViSAR sensor networks (Fig. 1), since radar networks are nowadays a hot research topic [21,22,23]. In addition, our findings may be useful in other visual data and sensory systems [24,25,26]. The rest of this paper is organized as follows. Section 2 presents the proposed approach, Section 3 contains the simulation results, and Section 4 concludes.

Fig. 1

A typical ViSAR sensor network with air-borne radar platforms towards Internet of ViSAR vehicles

2 Proposed method

In order to extend the reference IBDH algorithm [3], we use the DCT as a decomposition transform to change the error image histogram relative to the basic algorithm. In fact, we want to create a quasi-sparse frame [27] with fewer zero pixels (fully black pixels) and many more non-zero pixels whose gray levels are very close to zero. One of the most popular ways to improve interpolation-based data hiding techniques is to use a better interpolator or to modify the histogram through shifting and adjustment. Since the IBDH method in [3] is the most recent version of IBDH techniques, using a novel interpolator alongside a histogram modification process, we combine it with another process based on the discrete cosine transform (DCT) to improve its aggregation performance. In this regard, we use the DCT with different patch sizes to form a combinational approach entitled interpolation-based data hiding using discrete cosine transform (IBDH-DCT). Our experiments show that medium-sized patches are the most effective. If a transform is able to create a quasi-sparse image with fewer zero pixels, it is likely to improve IBDH on ViSAR frames. The DCT is, in general, invertible; however, to produce transformed frames we have to scale and quantize the coefficient matrix, so after re-scaling a small loss appears because of the quantization. This loss does not affect the watermark/embedded data, but it may make the overall data hiding approach non-reversible with respect to the host frame. In the next sub-sections, basic concepts of the DCT are reviewed first, and then the proposed method is presented.
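To make the quasi-sparsity goal concrete, the following minimal sketch (Python/NumPy; an illustrative assumption, since the original experiments were run in Matlab) counts exactly-zero pixels and near-zero non-zero pixels of an 8-bit frame before and after a block-DCT transformation of the kind used here. The "near zero" threshold of 8 gray levels and the synthetic low-energy test frame are hypothetical choices made only for illustration.

```python
import numpy as np
from scipy.fft import dctn

def quasi_sparsity_profile(frame_u8, near_zero_thresh=8):
    """Count exactly-zero and near-zero (but non-zero) pixels of an 8-bit frame."""
    zeros = int(np.sum(frame_u8 == 0))
    near_zeros = int(np.sum((frame_u8 > 0) & (frame_u8 <= near_zero_thresh)))
    return zeros, near_zeros

def blockwise_dct_frame(frame_u8, patch=32):
    """Apply an orthonormal 2D DCT patch by patch and rescale the result to [0, 255]."""
    f = frame_u8.astype(np.float64)
    out = np.zeros_like(f)
    for r in range(0, f.shape[0], patch):
        for c in range(0, f.shape[1], patch):
            out[r:r + patch, c:c + patch] = dctn(f[r:r + patch, c:c + patch], norm='ortho')
    out = (out - out.min()) / (out.max() - out.min()) * 255.0   # global scaling to [0, 255]
    return np.round(out).astype(np.uint8)

# Synthetic low-energy frame standing in for a ViSAR frame
rng = np.random.default_rng(0)
frame = rng.poisson(3, size=(256, 256)).clip(0, 255).astype(np.uint8)
print("original (zeros, near-zeros):", quasi_sparsity_profile(frame))
print("DCT 32x32 (zeros, near-zeros):", quasi_sparsity_profile(blockwise_dct_frame(frame, 32)))
```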

2.1 2D DCT for frame transformation

The DCT is one of the most important decomposition transforms in signal and image processing; for example, JPEG compression is built around a core DCT. The transform uses cosine basis functions, which form an orthonormal basis. An important property of the DCT is that its coefficients are real, in contrast to the discrete Fourier transform (DFT) or fast Fourier transform (FFT). Another property is its relatively low computational complexity, which makes it appropriate for real-time multimedia coding. Furthermore, with respect to energy compaction for high-performance image coding (i.e., maximum information at the lowest file size), the DCT is nearly as powerful as the Karhunen-Loeve transform (KLT) but with lower complexity. Equation (1) shows the 2D DCT for two-dimensional data such as gray-scale frames, and Eq. (2) gives the inverse DCT (IDCT). The DCT coefficients X(k, l) are real, and the transformed version of an N-by-N image/patch is again N-by-N (below, x(m, n) denotes the image pixels of an N × N source image, i.e., 0 ≤ m, n ≤ N − 1). The basis functions for N = 8 are shown in Fig. 2; N is the patch size, and for an N-by-N patch there are N² basis functions.

Fig. 2

Sixty-four virtually colored DCT basis functions for 8 × 8 patch size, where M = N = 8

$$ X(k,l)=\alpha(k)\,\alpha(l)\sum_{m=0}^{N-1}\sum_{n=0}^{N-1} x(m,n)\,\cos\!\left(\frac{k\pi}{N}\left(m+\frac{1}{2}\right)\right)\cos\!\left(\frac{l\pi}{N}\left(n+\frac{1}{2}\right)\right), \qquad \alpha(s)=\begin{cases}\sqrt{\dfrac{1}{N}} & s=0\\[4pt] \sqrt{\dfrac{2}{N}} & \text{otherwise}\end{cases} $$
(1)
$$ x(m,n)=\sum_{k=0}^{N-1}\sum_{l=0}^{N-1}\alpha(k)\,\alpha(l)\,X(k,l)\,\cos\!\left(\frac{k\pi}{N}\left(m+\frac{1}{2}\right)\right)\cos\!\left(\frac{l\pi}{N}\left(n+\frac{1}{2}\right)\right) $$
(2)
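As a concrete check of Eqs. (1) and (2), the sketch below (Python/NumPy, not part of the original paper) implements the 2D DCT and its inverse directly from the definitions and verifies that the round trip reproduces a small test patch exactly (up to floating-point precision).

```python
import numpy as np

def alpha(s, N):
    # Normalization factor from Eq. (1)
    return np.sqrt(1.0 / N) if s == 0 else np.sqrt(2.0 / N)

def dct2_direct(x):
    """2D DCT of an N-by-N patch, computed directly from Eq. (1)."""
    N = x.shape[0]
    X = np.zeros((N, N))
    for k in range(N):
        for l in range(N):
            s = 0.0
            for m in range(N):
                for n in range(N):
                    s += x[m, n] * np.cos(k * (m + 0.5) * np.pi / N) \
                                 * np.cos(l * (n + 0.5) * np.pi / N)
            X[k, l] = alpha(k, N) * alpha(l, N) * s
    return X

def idct2_direct(X):
    """Inverse 2D DCT of Eq. (2)."""
    N = X.shape[0]
    x = np.zeros((N, N))
    for m in range(N):
        for n in range(N):
            s = 0.0
            for k in range(N):
                for l in range(N):
                    s += alpha(k, N) * alpha(l, N) * X[k, l] \
                         * np.cos(k * (m + 0.5) * np.pi / N) \
                         * np.cos(l * (n + 0.5) * np.pi / N)
            x[m, n] = s
    return x

patch = np.random.default_rng(1).integers(0, 256, size=(8, 8)).astype(float)
assert np.allclose(idct2_direct(dct2_direct(patch)), patch)   # perfect reconstruction
```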

The DCT can also be computed in matrix form, where \( \underline{x} \) is the image matrix, \( \underline{C} \) is the DCT matrix, and \( \underline{X} \) is the transformed image matrix, as in Eq. (3). The DCT basis functions are defined in general by Eq. (4). Figure 3 shows virtually texturized results for different patch sizes on a sample ViSAR frame. It is obvious that each patch size differs in its ability to create a quasi-sparse illustration.

Fig. 3

DCT decomposed frames with different patch sizes. This figure shows virtually texturized results for 2-by-2 to 256-by-256 patch sizes in a sample frame (256 × 256)

$$ \underline{X}=\underline{C}\,\underline{x}\,\underline{C}^{t}, \qquad C(i,j)=\begin{cases}\dfrac{1}{\sqrt{N}} & i=0\\[4pt] \sqrt{\dfrac{2}{N}}\cos\!\left(\dfrac{i\pi}{N}\left(j+\dfrac{1}{2}\right)\right) & i>0\end{cases}, \qquad \text{and similarly } \underline{x}=\underline{C}^{t}\,\underline{X}\,\underline{C} $$
(3)
$$ F(k,l,m,n,M,N)=\alpha(k)\,\alpha(l)\,\cos\!\left(\frac{k\pi}{N}\left(m+\frac{1}{2}\right)\right)\cos\!\left(\frac{l\pi}{M}\left(n+\frac{1}{2}\right)\right); \qquad 0\le k,m\le N-1,\; 0\le l,n\le M-1 $$
(4)
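The matrix form of Eq. (3) can be sketched as follows (again Python/NumPy, assumed here only for illustration): the DCT matrix C is built from Eq. (3), the forward transform X = C x Cᵗ agrees with a library DCT-II, and x = Cᵗ X C restores the patch.

```python
import numpy as np
from scipy.fft import dctn

def dct_matrix(N):
    """DCT matrix C from Eq. (3)."""
    C = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            if i == 0:
                C[i, j] = 1.0 / np.sqrt(N)
            else:
                C[i, j] = np.sqrt(2.0 / N) * np.cos(i * (j + 0.5) * np.pi / N)
    return C

N = 8
C = dct_matrix(N)
x = np.random.default_rng(2).integers(0, 256, size=(N, N)).astype(float)

X = C @ x @ C.T                                # forward transform, Eq. (3)
assert np.allclose(C.T @ X @ C, x)             # inverse, x = C^t X C
assert np.allclose(X, dctn(x, norm='ortho'))   # agrees with an orthonormal library DCT-II
```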

2.2 Quasi-sparse bit injection using IBDH and DCT

The reference IBDH method is described in [3]. It is applied to ordinary ViSAR frames, and the only histogram processing is performed with modification or shifting techniques like [5], which slightly help IBDH find more suitable places for injecting payload bits. Our experiments show that a transform that fundamentally changes the histogram of the ViSAR frames towards a quasi-sparse condition is more effective than the usual histogram processing techniques, which do not drive the frames towards quasi-sparsity. However, histogram modification and histogram transformation can be used concurrently. To do so, we use the basic IBDH theory of [3], a histogram modification technique as per [5], and a DCT-based decomposition process for histogram transformation. The proposed method is given in Algorithms 1 and 2 for the sender side and the receiver side, respectively (a code sketch of the sender-side steps follows Algorithm 1). All DCT patches are treated as a single image, because the original frame and its transformed version (towards quasi-sparsity) must have the same size; therefore, a plotted histogram corresponds to a transformed image, not to a specific patch.

Algorithm 1: The embedding process in IBDH-DCT at the sender side.

Input: An original host frame and hidden data.

Procedure

1) Compute the DCT coefficients of the original host frame.

2) Scale the DCT coefficient matrix into the interval [0, 255].

3) Quantize the scaled DCT coefficient matrix to integer gray levels (as a digital image) and consider the result as a new host frame with a quasi-sparse spatial distribution.

4) Down-sample the quasi-sparse host frame (standard down-sampling is used).

5) Calculate a reconstructed version (up-scaled, interpolated frame) of the quasi-sparse host frame using the interpolation technique.

6) Calculate an error image by subtracting the interpolated version from the original quasi-sparse host frame, considering histogram modification.

7) Calculate the four key parameters of the reference IBDH technique based on the histogram of the error image.

8) Inject the bits of the hidden data into the quasi-sparse host frame according to the key parameters from the prior step and create a watermarked frame.

9) Transfer the watermarked frame to the receiver along with all key parameters computed at the sender side.

Output: The watermarked frame and key parameters related to the error image.
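A minimal sketch of the sender-side frame-transformation steps (1–7) is given below in Python/NumPy. The actual interpolator, the histogram modification, the four key parameters, and the bit-injection rule belong to the reference IBDH method [3] and are not reproduced here; factor-of-two down-sampling and a simple bilinear up-scaling are used only as illustrative stand-ins.

```python
import numpy as np
from scipy.fft import dctn
from scipy.ndimage import zoom

def to_quasi_sparse_host(frame_u8, patch=32):
    """Steps 1-3: block DCT, scale to [0, 255], quantize to an 8-bit quasi-sparse host frame."""
    f = frame_u8.astype(np.float64)
    coeffs = np.zeros_like(f)
    for r in range(0, f.shape[0], patch):
        for c in range(0, f.shape[1], patch):
            coeffs[r:r + patch, c:c + patch] = dctn(f[r:r + patch, c:c + patch], norm='ortho')
    lo, hi = coeffs.min(), coeffs.max()
    host = np.round((coeffs - lo) / (hi - lo) * 255.0).astype(np.uint8)
    return host, (lo, hi)                     # (lo, hi) is needed later to undo the scaling

def error_image_and_histogram(host_u8):
    """Steps 4-7: down-sample, interpolate back, form the error image and its histogram."""
    down = host_u8[::2, ::2].astype(np.float64)                    # standard down-sampling (stand-in)
    interp = zoom(down, 2, order=1)[:host_u8.shape[0], :host_u8.shape[1]]  # bilinear stand-in
    error = host_u8.astype(np.int16) - np.round(interp).astype(np.int16)
    hist, _ = np.histogram(error, bins=np.arange(error.min(), error.max() + 2))
    return error, hist    # the four IBDH key parameters are derived from `hist` as in [3]

rng = np.random.default_rng(3)
frame = rng.poisson(3, size=(256, 256)).clip(0, 255).astype(np.uint8)
host, scale_keys = to_quasi_sparse_host(frame)
error, hist = error_image_and_histogram(host)
```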

Algorithm 2: The extraction process in IBDH-DCT at the receiver side.

Input: The received watermarked frame and the key parameters from Algorithm 1.

Procedure

1) Extract the hidden bits and the error image through the inverse function of the IBDH theory (see the main source [3] for IBDH details).

2) Down-sample the watermarked frame (standard down-sampling is used so that the result exactly equals the down-sampled version of the original frame in Algorithm 1).

3) Reconstruct the down-sampled frame of the prior step with the interpolator to generate the interpolated frame.

4) Restore the quasi-sparse host frame by adding the error image to the interpolated frame.

5) Rescale the quasi-sparse host frame to generate approximate DCT coefficients.

6) Apply the inverse DCT to the rescaled coefficients to obtain an approximate version of the original host frame.

Output: The original host frame and injected bits.

Algorithm 1 includes all steps of the data embedding process at the sender, and Algorithm 2 contains the steps of the reverse process, named extraction, at the receiver side. The proposed method is not fully reversible with respect to the host image, because the frame transformation process is lossy; however, this transformation is near-lossless, with a loss that can be ignored. Since we use a real decomposition transform, the process remains near-lossless (with a complex-basis transform such as the FFT, a much larger loss would occur). Therefore, the whole process can be considered near-lossless. On the other hand, because the hidden data is fully reversible, we can compute quality metrics on the transformed samples.
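To illustrate the near-lossless claim, the sketch below (an illustrative Python/NumPy assumption, not the authors' code) performs only the frame-transformation round trip of Algorithms 1 and 2, i.e., block DCT, scaling, 8-bit quantization, re-scaling, and inverse DCT, and reports the PSNR of the recovered host frame; the IBDH embedding/extraction itself is lossless and is omitted.

```python
import numpy as np
from scipy.fft import dctn, idctn

def block_transform(f, patch, inverse=False):
    """Apply the (inverse) orthonormal 2D DCT patch by patch."""
    op = idctn if inverse else dctn
    out = np.zeros_like(f, dtype=np.float64)
    for r in range(0, f.shape[0], patch):
        for c in range(0, f.shape[1], patch):
            out[r:r + patch, c:c + patch] = op(f[r:r + patch, c:c + patch], norm='ortho')
    return out

def round_trip_psnr(frame_u8, patch=32):
    coeffs = block_transform(frame_u8.astype(np.float64), patch)
    lo, hi = coeffs.min(), coeffs.max()
    host = np.round((coeffs - lo) / (hi - lo) * 255.0)     # sender: scale + quantize (Alg. 1, steps 2-3)
    approx_coeffs = host / 255.0 * (hi - lo) + lo          # receiver: re-scale (Alg. 2, step 5)
    recovered = block_transform(approx_coeffs, patch, inverse=True)  # Alg. 2, step 6
    mse = np.mean((frame_u8.astype(np.float64) - recovered) ** 2)
    return 20 * np.log10(255.0 / np.sqrt(mse))

rng = np.random.default_rng(4)
frame = rng.poisson(3, size=(256, 256)).clip(0, 255).astype(np.uint8)
print("round-trip PSNR of the host frame (dB):", round_trip_psnr(frame, patch=32))
```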

3 Results and discussion

As the dataset, ViSAR frames of size 256 × 256 are selected; a sample input frame is shown in Fig. 6 (part: main frame). For simulating the proposed algorithm, we used Matlab R2013a on a device with a 2.53 GHz CPU (Intel CFI i3 350M Core i5) and 4.00 GB of RAM. Clearly, the DCT-decomposed version of any image under a 1-by-1 patch equals the image itself; this specific patch size therefore means "no decomposition." The ViSAR frames have very low energy, with a histogram concentrated near zero, so these frames behave differently from ordinary images. They may follow a Markov random field (MRF) neighborhood system and exhibit some textural features. We compare the proposed method with the reference method in [3]; all results are given in Table 1 and Fig. 4 for aggregation and quality performance, and in Table 2 and Fig. 5 for execution times (towards complexity). The quality assessment metrics are PSNR, SSIM, and EPI [2, 3, 6, 7], where the first two evaluate similarity and the third shows the ability of each method to preserve edges. Capacity is the main factor we aim to increase. All running times are presented in Table 2 to give more insight into the computational complexity of both methods. Equations (5), (6), and (7) describe our quality metrics, and Eq. (8) gives the average capacity index (ACI), which is computed directly from the aggregation performance (embedding capacity). In these equations, x and y are the host frame and the watermarked frame. ACI is a metric for video communications. A similar metric is bits per pixel (BPP), which is usually computed for still images rather than videos. BPP can, of course, be computed for a single frame, but its result is not as reliable as ACI, which averages over the frames of a video sequence. ACI is computed explicitly for video data as per Eq. (8). ACI is more reliable than BPP because it includes an intrinsic scaling through two adjustable parameters (α and β). In addition, the relationship between BPP and ACI is somewhat similar to that between MSE and PSNR, where MSE provides no additional information beyond PSNR (they are interpretations of each other). In our experiments, we set α = \( \frac{\sqrt{5}}{2} \) and β = 1000.

$$ \mathrm{PSNR}=20\,\log_{10}\frac{255}{\sqrt{\frac{1}{256^{2}}\sum_{i=1}^{256}\sum_{j=1}^{256}\left(x_{ij}-y_{ij}\right)^{2}}} $$
(5)
Table 1 Quality and aggregation performance in the reference method [3] and the proposed approaches entitled IBDH-DCT (best results are shown in italicized form)
Fig. 4

An average of different measures for the reference method (IBDH) and the proposed IBDH-DCT. Only the best patch sizes (32-by-32, 64-by-64, and 128-by-128), in terms of both quality metrics and complexity, are shown

Table 2 Complexity analysis through execution times (best results are shown in italicized form)
Fig. 5

Complexity for all patch sizes. 1-by-1 corresponds to the reference method (IBDH), and all other patch sizes correspond to the proposed IBDH-DCT

$$ \mathrm{SSIM}=\frac{2{u}_x{u}_y}{u_x^2+{u}_y^2}\times \frac{2{\sigma}_x{\sigma}_y}{\sigma_x^2+{\sigma}_y^2}\times \frac{\sigma_{xy}}{\sigma_x{\sigma}_y} $$
(6)
$$ \mathrm{EPI}=\frac{\sum \limits_i\sum \limits_j\mid {y}_{i-1,j-1}-{y}_{i+1,j+1}\mid }{\sum \limits_i\sum \limits_j\mid {x}_{i-1,j-1}-{x}_{i+1,j+1}\mid } $$
(7)
$$ \mathrm{Average\;Capacity\;Index\;(ACI)}={\alpha}^{\left(\frac{\sum_{\mathrm{all\;frames}}\mathrm{Capacity\;(bit)}\,/\,\mathrm{Number\;of\;frames}}{\beta}\right)} $$
(8)
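As a worked example of Eqs. (5) and (8), the sketch below (Python/NumPy, with made-up per-frame capacity numbers purely for illustration) computes the ACI with the settings α = √5/2 and β = 1000 used here, alongside the PSNR of Eq. (5).

```python
import numpy as np

def aci(capacities_bits, alpha=np.sqrt(5) / 2, beta=1000.0):
    """Average capacity index (ACI) from Eq. (8)."""
    mean_capacity = np.mean(capacities_bits)      # average embedding capacity per frame (bits)
    return alpha ** (mean_capacity / beta)

def psnr(x, y):
    """PSNR from Eq. (5) for 8-bit host frame x and watermarked frame y."""
    mse = np.mean((x.astype(np.float64) - y.astype(np.float64)) ** 2)
    return 20 * np.log10(255.0 / np.sqrt(mse))

# Hypothetical per-frame embedding capacities (bits) of a short video sequence
print("ACI:", aci([5200, 4800, 5100, 4950]))
```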

The simulation results clearly show that the DCT-based approach is effective on the sample frames compared to the reference method. Notably, all combinational forms based on DCT decomposition are more complex than the reference method, because two image transformation steps (forward + inverse) have to be performed and because more time is needed to find suitable places for injecting bits, since their histograms are more complicated. However, this extra execution time is the cost of better aggregation performance. Another cost is a small loss affecting only the host frame in the combinational approaches, which is acceptable in most real-world applications.

Table 1 shows that some smaller patches cannot outperform the reference method; however, the 32-by-32, 64-by-64, 128-by-128, and 256-by-256 patches record the best performance in terms of the similarity measures, the edge handling indicator, and the aggregation capacity (italicized values).

According to Table 2, among the winning proposed approaches, the 32-by-32, 64-by-64, and 128-by-128 patches record the minimum execution times. Figures 6, 7, and 8 illustrate decomposed frames from a sample frame and their corresponding histograms: Fig. 6 covers small-sized patches (2-by-2 and 4-by-4), Fig. 7 medium-sized patches (8-by-8, 16-by-16, and 32-by-32), and Fig. 8 large-sized patches (64-by-64, 128-by-128, and 256-by-256).

Fig. 6

Main frame alongside two DCT-decomposed frames using small-sized patches

Fig. 7

Three DCT-decomposed frames using medium-sized patches

Fig. 8

Three DCT-decomposed frames using large-sized patches

4 Conclusions

In this research, a new data aggregation method based on the discrete cosine transform and quasi-sparse bit injection was proposed for IoT-enabled ViSAR sensor networks, aiming to enhance the embedding capacity (or aggregation performance). The method outperforms a recent data hiding approach used as the reference method in our work. We used four different metrics to evaluate the efficiency of the proposed method in terms of general frame quality (similarity and edge handling) and aggregation performance, and all of them confirmed its suitability. One finding of our research is the importance of checking different patch sizes; in our experiments, medium-sized and larger patches were the best choices. Moreover, a complexity study based on execution times was performed, which helps identify the best DCT patch sizes. As future work, more suitable decomposition transforms for creating a quasi-sparse space can be investigated to further improve the aggregation performance in SAR/ViSAR systems. In addition, finding a high-performance, fully lossless decomposition transform would make the aggregation mechanism reversible, which may be important in some specific applications.

There are many decomposition techniques, such as the KLT, that could be used for this application, but the main focus of our research was on how to combine a state-of-the-art data hiding method with a powerful decomposition technique towards quasi-sparse bit injection. Investigating other transforms (instead of the DCT) is left as future work. Specifically, the KLT is not suitable for real-time processing because of its inherently high computational complexity compared to the DCT, and the FFT is a complex-valued transform and therefore not suitable for this frame transformation towards quasi-sparsity. One good candidate is thus the wavelet transform. In the current version, only the process of extracting the injected bits is fully reversible (lossless).

Availability of data and materials

All the data and computer programs are available.

Abbreviations

ViSAR:

Video synthetic aperture radar

IoT:

Internet of things

IBDH:

Interpolation-based data hiding

DCT:

Discrete cosine transform

IBDH-DCT:

Interpolation-based data hiding using discrete cosine transform

IDCT:

Inverse DCT

KLT:

Karhunen Loeve transform

ACI:

Average capacity index

PSNR:

Peak signal to noise ratio

SSIM:

Structural similarity

EPI:

Edge preservation index

MSE:

Mean square error

BPP:

Bit per pixel

References

1. B. Bahri-Aliabadi, M.R. Khosravi, S. Samadi, Frame rate computing in video SAR using geometrical analysis, in Proceedings of the 24th International Conference on Parallel and Distributed Processing Techniques and Applications (PDPTA'18), Las Vegas, USA, 2018, pp. 165-167

2. M.R. Khosravi, S. Samadi, R. Mohseni, Spatial interpolators for intra-frame resampling of SAR videos: a comparative study using real-time HD medical and radar data. Current Signal Transduction Therapy (2019)

3. M.R. Khosravi, M. Yazdi, A lossless data hiding scheme for medical images using a hybrid solution based on IBRW error histogram computation and quartered interpolation with greedy weights. Neural Computing and Applications 30, 2017-2028 (2018)

4. L. Zhang, X. Wu, An edge-guided image interpolation algorithm via directional filtering and data fusion. IEEE Transactions on Image Processing 15(8), 2226-2238 (2006)

5. M. Arabzadeh, H. Danyali, M.S. Helfroush, Reversible watermarking based on interpolation error histogram shifting, in International Symposium on Telecommunications (IST'2010), 2010, pp. 840-845

6. M.A. Carreira-Perpinan et al., Alternating optimization of decision trees, with application to learning sparse oblique trees, in 32nd Conference on Neural Information Processing Systems, Montréal, Canada, 2018

7. M.R. Khosravi, H. Rostami, S. Samadi, Enhancing the binary watermark-based data hiding scheme using an interpolation-based approach for optical remote sensing images. International Journal of Agricultural and Environmental Information Systems 9(2), 53-71 (2018). https://doi.org/10.4018/IJAEIS.2018040104

8. M.R. Khosravi et al., A tutorial and performance analysis on ENVI tools for SAR image despeckling. Current Signal Transduction Therapy (2019)

9. L. Luo, Z. Chen, M. Chen, X. Zeng, Z. Xiong, Reversible image watermarking using interpolation technique. IEEE Transactions on Information Forensics and Security 5(1), 187-193 (2010)

10. C.-C. Lin, W.-L. Tai, C.-C. Chang, Multilevel reversible data hiding based on histogram modification of difference images. Pattern Recognition 41, 3582-3591 (2008)

11. S. Zhang, T. Gao, L. Yang, A reversible data hiding scheme based on histogram modification in integer DWT domain for BTC compressed images. International Journal of Network Security 18(4), 718-727 (2016)

12. J. Tian, Reversible data embedding using a difference expansion. IEEE Transactions on Circuits and Systems for Video Technology 13(8), 890-896 (2003)

13. A. Malik, G. Sikka, H. Verma, An image interpolation based reversible data hiding scheme using pixel value adjusting feature. Multimedia Tools and Applications (2016)

14. T.-C. Lu, C.-C. Chang, Y.-H. Huang, High capacity reversible hiding scheme based on interpolation, difference expansion, and histogram shifting. Multimedia Tools and Applications 72, 417-435 (2014)

15. X. Zhang, Z. Sun, Z. Tang, C. Yu, X. Wan, High capacity data hiding based on interpolated image. Multimedia Tools and Applications 76(7), 9195-9218 (2017)

16. A. Shaik, T. V., High capacity reversible data hiding using 2D parabolic interpolation. Multimedia Tools and Applications 78(8), 9717-9735 (2019)

17. M.A. Wahed, H. Nyeem, High capacity reversible data hiding with interpolation and adaptive embedding. PLoS ONE 14(3), e0212093 (2019). https://doi.org/10.1371/journal.pone.0212093

18. R.C. Gonzalez, R.E. Woods, Digital Image Processing, 3rd edn. (Prentice Hall, NJ, 2008)

19. L. Zhang, X. Wu, Color demosaicking via directional linear minimum mean square-error estimation. IEEE Transactions on Image Processing 14(12), 2167-2178 (2005)

20. P. Getreuer, Zhang-Wu directional LMMSE image demosaicking. Image Processing On Line (IPOL) (2011)

21. V. Karimi, R. Mohseni, Intelligent target spectrum estimation based on OFDM signals for cognitive radar applications. Journal of Intelligent & Fuzzy Systems 36, 2557-2569 (2019)

22. V. Karimi, OFDM waveform design based on mutual information for cognitive radar applications. The Journal of Supercomputing (2019)

23. S. Kafshgari, High-performance GLR detector for moving target detection in OFDM radar-based vehicular networks. Wireless Personal Communications 108, 751-768 (2019)

24. M. Yazdi, An efficient training procedure for Viola-Jones face detector, in International Conference on Computational Science and Computational Intelligence (ICCSCI), Las Vegas, USA, 2017

25. M. Yazdi, Robust cascaded skin detector based on AdaBoost. Multimedia Tools and Applications 78(2), 2599-2620 (2019)

26. M. Singhal, Optimization of hierarchical regression model with application to optimizing multi-response regression k-ary trees, in Association for the Advancement of Artificial Intelligence (AAAI), Honolulu, Hawaii, USA, 2019

27. M.R. Khosravi, S. Samadi, Modified data aggregation for aerial ViSAR sensor networks in transform domain, in Proceedings of the 25th International Conference on Parallel and Distributed Processing Techniques and Applications (PDPTA'19), 2019, pp. 87-90


Acknowledgements

We would like to thank Sandia National Laboratory for the ViSAR data used as the dataset in this research.

Funding

Not applicable.

Author information


Contributions

MK participated in the mathematical design of the proposed method and its computer implementation. SS coordinated the industrial application and raw data preparation and helped with the study. MK and SS completed the first draft of this paper. All authors have read and approved the final manuscript.

Authors’ information

Not applicable.

Corresponding author

Correspondence to Mohammad R. Khosravi.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article


Cite this article

Khosravi, M.R., Samadi, S. Efficient payload communications for IoT-enabled ViSAR vehicles using discrete cosine transform-based quasi-sparse bit injection. J Wireless Com Network 2019, 262 (2019). https://doi.org/10.1186/s13638-019-1572-4


Keywords