Open Access

A novel unequal error protection scheme for 3-D video transmission over cooperative MIMO-OFDM systems

EURASIP Journal on Wireless Communications and Networking 2012, 2012:269

Received: 29 February 2012

Accepted: 28 July 2012

Published: 23 August 2012


Currently, there is intensive research to bring three-dimensional (3-D) video technology to mobile devices. Most recently, multi-input multi-output (MIMO) with orthogonal frequency division multiplexing (OFDM) and cooperative diversity have been major candidates for fourth-generation mobile TV systems. This article presents a novel unequal error protection (UEP) scheme for 3-D video transmission over cooperative MIMO-OFDM systems. Several 3-D video coding techniques are investigated to find the best method for 3-D video transmission over error-prone wireless channels. View plus depth (VpD) is found to outperform other techniques such as simulcast coding (SC) and mixed-resolution stereo coding (MRSC). Various UEP schemes are proposed to protect the VpD signals according to their different importance levels. Seven video transmission schemes for VpD are proposed, based either on partitioning the video packets or on sending them directly with different levels of protection. An adaptive technique that classifies the packets of a group of pictures (GoP) according to their protection priority is adopted in the proposed UEP schemes. The adaptive method divides each GoP into several packet groups (PGs), and each PG is classified into high-priority (HP) and low-priority (LP) packets. This classification depends on the current signal-to-noise ratio (SNR) in the wireless channel. A concatenation of rate-variable low-density parity-check (LDPC) codes and a MIMO system based on space-time block code (STBC) diversity is employed to protect the prioritized video packets unequally with different channel code rates. For channel adaptation, switching operations between the proposed schemes are employed to achieve a trade-off between complexity and performance of the proposed system. Finally, three protocols for 3-D video transmission are proposed to achieve high video quality at different SNRs with the lowest possible bandwidth.


Three-dimensional (3-D) video applications have recently emerged to offer immersive video content compared to two-dimensional (2-D) services. Currently, there is intensive research to bring 3-D video technology to mobile devices, mirroring its applications in 3-D cinema and television [1]. This strong motivation stems from the 3-D video environment, which makes observers unable to distinguish between real media and an optical illusion [2]. The main challenges in realizing this ambition are to design efficient 3-D video representations, coding methods, and transmission methods that overcome the effects of error-prone wireless channels [1]. This article aims to transmit 3-D video signals over wireless communication systems by adopting state-of-the-art communication and signal processing techniques.

Generally, high data rates are required for video transmission, and even more so for 3-D video services. Spatial multiplexing techniques such as multi-input multi-output (MIMO) have been developed to address this issue. Furthermore, due to the size and power constraints imposed by an increased number of antennas in MIMO mobile devices, cooperative diversity has been proposed to harness spatial diversity without deploying multiple antennas. In addition, the combination of MIMO with one to three antennas and cooperative communications improves video system performance [3].

Orthogonal frequency division multiplexing (OFDM) is a powerful multicarrier technique that increases transmission bandwidth efficiency. Furthermore, the subcarriers’ orthogonality is implemented efficiently using the inverse discrete Fourier transform (IDFT) and discrete Fourier transform (DFT) at the transmitter and receiver, respectively. In addition, inter-symbol interference (ISI), caused by multipath propagation, is overcome with the aid of the cyclic prefix (CP). The CP is an extension of the OFDM symbol in the time domain. Meanwhile, in the frequency domain, OFDM turns a frequency-selective channel into multiple frequency-flat subchannels. Consequently, the detrimental effect of the frequency-selective fading channel is mitigated [4].
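The CP mechanism described above can be sketched numerically. The following minimal example (parameter values are illustrative assumptions, not the paper's system configuration) shows how the CP turns linear convolution with a multipath channel into circular convolution, so the DFT at the receiver yields one flat-fading equation per subcarrier:

```python
import numpy as np

# An OFDM symbol is built with an IDFT plus a cyclic prefix; after a multipath
# channel, removing the CP and applying the DFT yields per-subcarrier
# flat-fading relations r[k] = H[k] * d[k], so a one-tap equalizer suffices.
N, cp_len = 64, 16                        # subcarriers and CP length (assumed)
rng = np.random.default_rng(0)
d = rng.choice([1 + 0j, -1 + 0j], N)      # BPSK symbols on the subcarriers

x = np.fft.ifft(d) * np.sqrt(N)           # IDFT at the transmitter
x_cp = np.concatenate([x[-cp_len:], x])   # cyclic prefix extension in time

h = np.array([0.8, 0.5, 0.3])             # example 3-tap multipath channel
y = np.convolve(x_cp, h)[:cp_len + N]     # linear convolution with the channel

y_no_cp = y[cp_len:]                      # CP removal -> circular convolution
r = np.fft.fft(y_no_cp) / np.sqrt(N)      # DFT at the receiver
H = np.fft.fft(h, N)                      # channel frequency response
d_hat = r / H                             # one-tap equalizer per subchannel
assert np.allclose(d_hat, d)              # symbols recovered (noiseless case)
```

Because the CP (16 samples) is longer than the channel memory (2 samples), the equalized symbols match the transmitted ones exactly in this noiseless sketch.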

According to MIMO and OFDM principles, the combination of MIMO and OFDM is crucial in reducing the effect of frequency selectivity, improving spectral efficiency, and providing high data rates. Therefore, MIMO-OFDM has become the chosen air interface technology for next-generation wireless networks such as the WiMAX IEEE 802.16e standard [5].

Video transmission systems generally use compression techniques such as H.264/AVC, based on variable-length codes (VLCs), to overcome channel bandwidth limitations. The resulting bitstream is usually very sensitive to bit errors: a single bit error can propagate through many subsequent VLCs. Moreover, error propagation causes a loss of synchronization between the encoder and decoder; in the worst case, this can lead to complete decoding failure. Therefore, video communication systems should use error-resilient video coding and powerful channel coding techniques to provide reliable video communication over error-prone wireless channels [6].

Many types of error-resilient video and channel coding techniques have been proposed to improve video transmission over wireless communication systems. These schemes are mainly: unequal error protection (UEP) assisted by forward error correction (FEC) methods, and joint source-channel coding (JSCC). UEP involves partitioning the video data into fractions of different visual importance. The most important part is called the high-priority (HP) stream, while the less important stream is termed the low-priority (LP) stream. In addition, UEP is usually combined with FEC schemes such as turbo codes [7] or low-density parity-check (LDPC) codes [8] to achieve a more robust video bitstream. Furthermore, JSCC algorithms control the source and channel encoders to make the video system adaptive to wireless channel changes [9, 10].

The advantages of exploiting the diversity and multiplexing gains of multi-antenna systems promote the application of MIMO technology in wireless video communication systems. Wu et al. [11] investigated the performance of an MPEG coding scheme with joint convolutional coding and MIMO-based space-time block codes (STBC) over Rayleigh fading channels. Feedback from the performance control unit (PCU) was employed to control the rates assigned to the MPEG source coding and convolutional coding stages. Although this study demonstrated that the bit error rate (BER) can be improved using STBC and convolutional coding, it did not propose any technique to mitigate error propagation at the video decoder. Song and Chen [12] proposed a MIMO system based on the adaptive channel selection (ACS) method, in which the more important video layers are loaded onto the MIMO sub-channel with a high signal-to-noise ratio (SNR). Song and Chen [13] also proposed another method to increase the transmission throughput by reallocating the excess power of certain sub-channels to other sub-channels. Zheng et al. [14] proposed a hybrid space-time coding structure to achieve UEP for multiple description coding (MDC) over a MIMO-OFDM system. In addition, several hybrid MIMO systems were proposed in [15, 16]. Although these works suggested different methods to improve video transmission over wireless channels, they are unable to achieve spatial diversity gains and are therefore ineffective in fading channel environments. Furthermore, they need to adapt to the channel’s characteristics.

Currently, many existing works on 3-D video delivery over wireless channels concentrate on fixed designs, such as the one proposed by Hewage et al. [17], which was based on view plus depth (VpD). In that article, a UEP method based on unequal power allocation (UPA) was proposed to transmit 3-D video signals over WiMAX channels, and the VpD map was coded with backward compatibility using the scalable video coding (SVC) architecture. Akar et al. [18] utilized the same method to transmit 3-D video signals over the Internet. Furthermore, Hewage et al. [19, 20] demonstrated that the depth map information is less important than the colour data in terms of perceived video quality; for this reason, their scheme, also based on the UPA method, allocates more protection to the colour image than to the depth map. Aksay et al. [2] studied the digital video broadcasting-handheld (DVB-H) system at different coding rates for transmitting the left and right views, and recommended giving more protection to the left view than to the right. Tech et al. [21] implemented and integrated the slice interleaving method into JMVC 5.0.5. Micallef and Debono [22] applied the same slice interleaving idea with different slice sizes to JMVC 8.0. Most recently, Hellge et al. [23] proposed a layer-aware FEC method to improve MVC video performance over the DVB-H system. Moreover, the UEP method in [24] used repetition codes and depended on partitioning the data in the video block based on VLC priority, whereas the UEP scheme in [25] was restricted in that the HP and LP streams represented the I-frame and P-frame packets, respectively. It can be concluded that, although the slice interleaving method is useful to minimize and isolate the effects of error propagation, it is suitable only when the noise level is low. Moreover, increasing the number of slices per frame reduces video compression efficiency. In addition, the FEC scheme of repetition codes in [24] is much simpler than the LDPC codes in this article, whereas the LDPC encoding method in [25] is more complex than the encoding method adopted here.

This article proposes a new JSCC technique for 3-D video transmission over cooperative MIMO-OFDM systems. The proposed scheme is designed to adapt to changes of the wireless MIMO-OFDM channels to address the issues raised above. The main contributions of this work are summarized as follows.
  1.

    A new video encoder and transmitter structure is proposed that adopts UEP and EEP schemes for 3-D video transmission. The proposed UEP schemes are implemented by isolating HP and LP streams depending on the current SNR in the wireless channel and the packet type.

  2.

    A new classification method for the video packets of a GoP is proposed for the left and right views as well as the colour and depth sequences. The packet categorization classifies the GoP packets into distinct groups, each of which is then classified further according to its importance and protection priority.

  3.

    Switching operations between the schemes are proposed to achieve an elegant trade-off between 3-D video compression efficiency and perceptual robustness against error propagation.

  4.

    An efficient LDPC encoding algorithm, the approximate lower triangular form (ALTF) [26], with different coding rates is adopted and integrated into the 3-D video system. The adopted LDPC code adapts to the channel state according to the proposed JSCC algorithm.


The rest of the article is organized as follows. The cooperative MIMO-OFDM system for 3-D video transmission is described in Section ‘Cooperative MIMO-OFDM design for 3-D video transmission’. The rate-distortion analysis is illustrated in Section ‘Rate-distortion analysis for 3-D video compression’. The performance analysis of the LDPC codes 3-D system is explained in Section ‘Performance analysis of the LDPC codes 3-D system’. The simulation results of the 3-D video transmission over the cooperative MIMO-OFDM system are presented in Section ‘Simulation and results of the 3-D video transmission over cooperative MIMO-OFDM systems’. Finally, Section ‘Conclusion’ concludes the article.

Cooperative MIMO-OFDM design for 3-D video transmission

In this section, the design of the proposed cooperative MIMO-OFDM system for 3-D video transmission is described in detail.

3-D video encoding with UEP

Several video representations and coding methods for 3-D video signals have been proposed [27]. The use of these methods is basically determined by underlying 3-D video applications and display techniques. The 3-D video input is generally captured by two cameras representing the left and right views.

Various source coding approaches have been considered in the literature to process the 3-D video signal. In this article, simulcast coding (SC), mixed-resolution stereo coding (MRSC) and view plus depth (VpD) representations are considered due to their suitability for low-rate applications such as mobile services [1, 28]. The MRSC method encodes the left and right views separately using the H.264/AVC standard; it is implemented by down-sampling one of the views and up-sampling it back to the original resolution at the decoder. This operation yields views with unequal resolution, while the overall 3-D video quality is almost retained. This method is similar to SC, which encodes the left and right views separately without down-sampling. The VpD method encodes one of the views, such as the right view, together with auxiliary depth information. At the decoder, the left view can be reconstructed using the depth-image-based rendering (DIBR) technique [29]. It can be concluded that the SC and MRSC methods decode the left and right views independently, whereas the DIBR technique reconstructs the left view from the relationship between the view and depth. Furthermore, this relationship is beneficial in improving the compression efficiency of the 3-D video signal. VpD is also less affected by noise than the other 3-D video coding techniques, as will be demonstrated in the subsequent sections. This is because the depth sequence consists of gray-scale values ranging from 0 to 255.

The block diagram of the proposed 3-D video encoder using UEP is shown in Figure 1. The proposed encoder was designed with the following in mind. (1) Three main schemes for transmission (explained in more detail later) are proposed to enable the 3-D video encoder to adapt to SNR variation in the wireless channel. The first and second schemes, termed partitioning-view plus depth (P-VpD) and partitioning-view (P-V), respectively, are based on packet partitioning, while the third scheme transmits the view and depth data directly and is referred to as direct-view plus depth (D-VpD). (2) Switching operations between the proposed schemes are proposed; the selection between these schemes is controlled by two signals, SW1 and SW2, for Switch-1 and Switch-2, respectively. (3) The circuits of Switch-3 and Switch-4 enable the encoder to switch from the P-VpD to the P-V scheme or vice versa. (4) The partitioner blocks (Partitioner-1 and Partitioner-2) are controlled by a control signal CS. (5) The switch circuits have input terminals IN1, IN2, …, IN6, while PP1, DP1, …, DP3 denote the output terminals. (6) The control signals SW1, …, SW4 and CS are generated by a control unit at the transmitter side, which will be explained in more detail later. (7) In H.264/AVC, a number of coding profiles are defined according to the codec capabilities; in this article, the baseline profile is chosen due to its suitability for low-rate video applications [30]. (8) In the P-VpD and P-V schemes, the colour and depth video sequences are grouped into a number of GoPs, and their packets are split into more important and less important packets. Figure 2 illustrates the video packets and their types after H.264/AVC encoding. As shown in this figure, P1 and P2 represent the sequence parameter set (SPS) and picture parameter set (PPS) packets.
These packets contain control parameters common to the entire video sequence, which the decoder uses to identify it. The packets P1 and P2 are followed by I-frame packets (PI3, …, PIn) and P-frame packets (PP3, …, PPm), where n and m depend on the dimensions of the video sequence and the GoP size. For example, n and m are 14 and 126, respectively, for a video sequence of 432×240 pixels with a GoP of 10.
Figure 1

Block diagram for the proposed UEP 3-D video encoder.

Figure 2

Produced video packets and their types after H.264/AVC encoding.

As shown in Figure 2, it is possible to enhance the video transmission by dividing the P-frame packets of each GoP into a number of packet groups (g1, g2, …, gNg). These groups can be classified according to their relative perceptual importance. For instance, each GoP in a video sequence such as the ‘Car’ sequence [31], with 30 frames of 432×240 pixels and a GoP of 10, is divided into three groups (Ng=3): g1, g2, and g3. Noise is then added to each group individually. This means that in the first test, noise is applied to g1, while g2 and g3 are reconstructed perfectly. In the second and third tests, the same procedure is performed on g2 and g3, respectively. Table 1 reveals the video system performance in terms of the peak signal-to-noise ratio (PSNR) and video distortion (D). As shown in the table, the worst PSNR occurs when the noise is added to g1, while the distortion is lowest when error propagation takes place within the packets of g3. Therefore, the order of packet-group priority from high to low is g1, g2, …, gNg.
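The grouping step described above can be sketched as follows. This is a hypothetical illustration (the function name and the equal-size split are assumptions; the paper does not specify the exact partitioning rule) of splitting the ordered P-frame packets of one GoP into Ng groups, with g1 holding the earliest, perceptually most important packets:

```python
# Hypothetical sketch: split the ordered P-frame packet list of a GoP into
# ng groups g1..gNg of (nearly) equal size; earlier groups have higher
# perceptual importance because errors in them propagate furthest.
def split_packet_groups(p_frame_packets, ng=3):
    n = len(p_frame_packets)
    size, rem = divmod(n, ng)
    groups, start = [], 0
    for i in range(ng):
        end = start + size + (1 if i < rem else 0)  # spread the remainder
        groups.append(p_frame_packets[start:end])
        start = end
    return groups

# Example: 126 P-frame packets per GoP, as for the 432x240 'Car' sequence
packets = list(range(126))
g1, g2, g3 = split_packet_groups(packets, ng=3)
assert len(g1) == len(g2) == len(g3) == 42
```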
Table 1

PSNR and video distortion with different packet groups







Based on the above discussion, seven video transmission schemes are proposed for varying channel SNRs. The first scheme is called P-VpD. This scheme employs packet partitioning, where the SPS, PPS and I-frame packets in the colour and depth sequences are classified as HP packets, while the P-frame packets are considered LP packets. The second scheme, called P-V, also applies packet partitioning, but to the colour sequence only. The HP packets are protected with high priority in both schemes, since any error in the SPS and PPS packets may lead to complete decoding failure, and any error in the I-frame packets will propagate to the P-frame packets. The third scheme is a direct UEP scheme, called D-VpD. This method sends the right and depth sequences directly, without packet partitioning. In this scheme, the right view has higher protection than the depth, because the left view is reconstructed from the relationship between the right view and the depth; therefore, any error in the right view will spread to the reconstructed left view. While the previous UEP schemes are static, the other four proposed UEP methods are adaptive in classifying the P-frame packets.

In the adaptive UEP schemes, the P-frame packets are classified into four groups (Ng=4). The partitioner blocks (Partitioner-1 and Partitioner-2) in Figure 1 follow four methods to classify the P-frame packets. The fourth scheme, called P-VpD-1/4, treats g1 in the right and depth sequences as HP packets, while the fifth method, called P-VpD-1/2, considers g1 and g2 as HP packets. To evaluate the P-VpD schemes (P-VpD, P-VpD-1/4, and P-VpD-1/2) against other possible packet partitionings of the view (colour) packets, the P-V schemes are proposed. The HP packets for the P-V schemes are classified as follows: the SPS, PPS and I-frame packets of the colour sequence are considered HP packets in the P-V scheme, while the HP packets additionally include g1 in the P-V-1/4 scheme, and g1 and g2 in the P-V-1/2 scheme. The SPS, PPS and I-frame packets are classified as HP packets in all of these schemes.
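The classification rule above can be sketched in code. This is an illustrative sketch under stated assumptions (the function name and dictionary encoding are mine; only the mapping from scheme to HP packet groups follows the text): each adaptive scheme marks a different prefix of the Ng = 4 P-frame groups as high priority, while SPS, PPS, and I-frame packets are always HP:

```python
# Illustrative classification: given the packet-group list [g1..g4] of a GoP,
# each scheme promotes a prefix of the P-frame groups to the HP stream
# ("-1/4" promotes g1, "-1/2" promotes g1 and g2, the base schemes none).
def classify_hp_groups(groups, scheme):
    n_hp_groups = {"P-VpD": 0, "P-VpD-1/4": 1, "P-VpD-1/2": 2,
                   "P-V": 0, "P-V-1/4": 1, "P-V-1/2": 2}[scheme]
    hp = [p for g in groups[:n_hp_groups] for p in g]   # high-priority packets
    lp = [p for g in groups[n_hp_groups:] for p in g]   # low-priority packets
    return hp, lp

groups = [list(range(i * 10, (i + 1) * 10)) for i in range(4)]  # Ng = 4
hp, lp = classify_hp_groups(groups, "P-VpD-1/2")
assert len(hp) == 20 and len(lp) == 20   # g1, g2 -> HP; g3, g4 -> LP
```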

Based on the above discussion, the control unit in the 3-D video transmitter (as shown in Figure 1) performs the following tasks. (1) It switches the switch circuits to the partitioning path (PP) or the direct path (DP) according to the scheme adopted for transmission. Table 2 shows the states of the switch circuits for each transmission scheme according to the control signals (SW1, SW2, SW3, and SW4). For example, if the D-VpD scheme is adopted, Switch-1 connects IN1 with DP1, while Switch-2 and Switch-4 connect IN2 with DP2 and IN6 with DP3, respectively; meanwhile, Switch-3 is turned off. (2) The CS signal controls the Partitioner-1 and Partitioner-2 blocks to select one of the six partitioning schemes (i.e., P-VpD, P-VpD-1/4, P-VpD-1/2, P-V, P-V-1/4, and P-V-1/2) on a GoP basis, depending on the scheme complexity and the required video quality. (3) It counts the transmitted frames and checks the CSI per video frame; when the number of transmitted video frames reaches the allocated GoP, the control unit selects the best of the proposed schemes and changes SW1, …, SW4 as well as CS accordingly. (4) It decides the code rates for the LDPC encoders on a GoP basis.
Table 2

States of switch circuits for each video transmission scheme







D-VpD: connecting IN1 with DP1, IN2 with DP2, and IN6 with DP3

P-VpD schemes: connecting IN1 with PP1, IN2 with PP2, and IN3 with PP3

P-V schemes: connecting IN1 with PP1, IN2 with DP2, and IN4 with PP3




The difference between the direct and packet-partitioning schemes is the method of isolating the HP and LP packets. The D-VpD scheme is more reliable at low SNRs because it gives more protection to the important (colour) information. In addition, it is simpler than the P-VpD schemes. On the other hand, the D-VpD scheme requires more bandwidth than the other schemes. Therefore, the best method strikes a trade-off between the complexity of the P-VpD schemes and the simplicity of the D-VpD scheme, as will be explained in more detail later.

It is worth mentioning that SC is generally considered more resilient to error propagation than the other 3-D video techniques. It also provides good video quality at low SNRs, because both views are decoded separately. However, this error resilience comes at the expense of high data rates. In light of this, VpD is more suitable.

VpD has two main features. Firstly, VpD provides better compression efficiency; for example, the total data rates are 4.027 and 2.987 Mbps for transmitting SC and VpD signals, respectively, for a video sequence of 432×240 pixels at 30 frames per second (fps). Secondly, VpD is more sensitive to error propagation, since bit errors in the colour information propagate to the reconstructed left view. However, noise effects are not substantially noticeable on the reconstructed 3-D video sequence when the right view is perfectly reconstructed, as illustrated in Figure 3, where the right view is assumed to be reconstructed perfectly and noise affects only the depth sequence. As shown in this figure, VpD outperforms the other techniques because the effect of noise on the depth is smaller than for SC and MRSC. The performance degradation of SC and MRSC is due to error propagation in the left view, which reduces the overall 3-D video quality. Therefore, 3-D video systems face a trade-off between the required data rate and the quality of the reconstructed 3-D video signal.
Figure 3

Performance of 3-D video systems at different 3-D video coding techniques.

Signal and channel models for cooperative MIMO-OFDM systems

The general architecture of a cooperative MIMO-OFDM system for 3-D video transmission is shown in Figure 4 [32, 33]. In addition, Figure 5 shows the block diagram of the proposed 3-D video transmitter using the MIMO-OFDM technique. The input LDPC code sequences of the HP and LP data are mapped into a sequence of symbols belonging to a constant-modulus constellation such as M-ary phase shift keying. In the first hop, the symbols are encoded by the space-time block encoder and sent simultaneously over the channel in multiple consecutive OFDM symbol intervals to the destination and relay. Let $\mathbf{d} = [d_0, d_1, \ldots, d_{N-1}]^T \in \mathbb{C}^{N \times 1}$ denote the symbol vector after M-ary phase shift keying modulation, and let $\{\mathbf{d}_i\}_{i=1}^{N_{TX}}$ denote the symbol vector from the $i$th transmit antenna (with $i = 1, 2, \ldots, N_{TX}$). In this article, the MIMO encoder adopts the Alamouti scheme [34] with $N_{TX} = 2$. Therefore, the transmission matrix of the Alamouti scheme is
$$\mathbf{d} = \begin{bmatrix} d_1 & -d_2^{*} \\ d_2 & d_1^{*} \end{bmatrix},$$
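The Alamouti encoding and its linear combining at the receiver can be illustrated with a short numerical sketch. This follows the standard Alamouti scheme, simplified (as an assumption for brevity) to a single receive antenna, a flat channel, and no noise:

```python
import numpy as np

# Alamouti STBC with NTX = 2: antenna 1 sends d1 then -conj(d2), antenna 2
# sends d2 then conj(d1), over two symbol intervals on channels h1, h2.
rng = np.random.default_rng(1)
d1, d2 = rng.choice([1 + 0j, -1 + 0j], 2)                  # two BPSK symbols
h1, h2 = (rng.normal(size=2) + 1j * rng.normal(size=2)) / np.sqrt(2)

r1 = h1 * d1 + h2 * d2                                     # symbol interval 1
r2 = -h1 * np.conj(d2) + h2 * np.conj(d1)                  # symbol interval 2

# Linear Alamouti combining recovers each symbol with full diversity gain
d1_hat = np.conj(h1) * r1 + h2 * np.conj(r2)
d2_hat = np.conj(h2) * r1 - h1 * np.conj(r2)
gain = abs(h1) ** 2 + abs(h2) ** 2                         # |h1|^2 + |h2|^2
assert np.allclose(d1_hat / gain, d1)
assert np.allclose(d2_hat / gain, d2)
```

The key property is that the combined estimates scale each symbol by |h1|² + |h2|², the two-branch diversity gain, with the cross-terms cancelling exactly.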
Figure 4

Cooperative MIMO-OFDM system.

Figure 5

Proposed UEP 3-D video transmitter.

For the direct link between the source and destination, the received signal at the $j$th receive antenna is modeled as
$$\mathbf{r}_j^{sd} = \mathbf{F}^H \mathbf{H}_j^{sd} \mathbf{d} + \mathbf{n}_j^{sd}$$

where $\{\mathbf{H}_j^{sd}\}_{j=1}^{N_{RX}}$ is the channel frequency response between the source and destination for an independent Rayleigh fading channel with quasi-static fading coefficients, $\mathbf{F} \in \mathbb{C}^{N_T N \times N_T N}$ is the DFT matrix with its $(l,m)$th element given by $F_{l,m} = (1/\sqrt{N})\, e^{-j 2\pi m l / N}$ for $m, l = 0, 1, \ldots, N_T N - 1$, and $\{\mathbf{n}_j^{sd}\}_{j=1}^{N_{RX}} \sim \mathcal{CN}(0, \sigma_{sd}^2)$.

For the relay link between the source and relay, the received signal at the $k$th receive antenna is modeled as
$$\mathbf{r}_k^{sr} = \mathbf{F}^H \mathbf{H}_k^{sr} \mathbf{d} + \mathbf{n}_k^{sr}$$

where $\{\mathbf{H}_k^{sr}\}_{k=1}^{N_R}$ is the channel frequency response between the source and relay for an independent Rayleigh fading channel with quasi-static fading coefficients, and $\{\mathbf{n}_k^{sr}\}_{k=1}^{N_R} \sim \mathcal{CN}(0, \sigma_{sr}^2)$.

In the second hop, the relay performs the amplify-and-forward (AF) protocol on the received signals. This protocol is adopted in this article because it has lower complexity than the decode-and-forward (DF) protocol [3]. In the AF protocol, the relay simply multiplies the received signal $\mathbf{r}_k^{sr}$ by the gain factor in (4) and forwards the resultant signal to the destination.
$$G = \left( E\!\left[ |\mathbf{r}_k^{sr}|^2 \right] \right)^{-\frac{1}{2}} = \left( |\mathbf{H}_k^{sr}|^2 + \sigma_{sr}^2 \right)^{-\frac{1}{2}}$$
where $E[\cdot]$ denotes expectation. The received signal $\mathbf{r}_j^{rd}$ at the destination is given by
$$\mathbf{r}_j^{rd} = \mathbf{H}_j^{rd} G \, \mathbf{r}_k^{sr} + \mathbf{n}_j^{rd} = \mathbf{H}_j^{rd} \left( |\mathbf{H}_k^{sr}|^2 + \sigma_{sr}^2 \right)^{-\frac{1}{2}} \left( \mathbf{F}^H \mathbf{H}_k^{sr} \mathbf{d} + \mathbf{n}_k^{sr} \right) + \mathbf{n}_j^{rd}$$

The received signal vectors $\{\mathbf{r}_j^{sd}\}_{j=1}^{N_{RX}}$ in (2) and $\{\mathbf{r}_j^{rd}\}_{j=1}^{N_{RX}}$ in (5) undergo the DFT operation. Maximal ratio combining (MRC) is utilized at the destination to obtain cooperative diversity gains by coherently adding the decoded samples of the direct and relay links.
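The MRC step above can be sketched with a minimal scalar example. This is an illustrative sketch under assumptions (a single per-subcarrier symbol after the DFT, assumed channel values and noise variances, noiseless observations for clarity), not the paper's full vector model:

```python
import numpy as np

# MRC over the direct and relayed branches: weight each observation by the
# conjugate of its effective channel divided by its noise variance, then
# normalize. With correct weights the estimate is unbiased.
d = 1 + 0j                               # transmitted symbol
h_sd, h_eq = 0.9 - 0.2j, 0.4 + 0.7j      # direct / effective relay channels
var_sd, var_rd = 0.1, 0.3                # per-branch noise variances (assumed)

r_sd = h_sd * d                          # direct-link observation (noiseless)
r_rd = h_eq * d                          # relay-link observation (noiseless)

num = np.conj(h_sd) / var_sd * r_sd + np.conj(h_eq) / var_rd * r_rd
den = abs(h_sd) ** 2 / var_sd + abs(h_eq) ** 2 / var_rd
d_hat = num / den                        # coherently combined estimate
assert np.allclose(d_hat, d)
```

Dividing each branch by its own noise variance is what makes the combiner "maximal ratio": noisier branches contribute less, maximizing the post-combining SNR.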

Rate-distortion analysis for 3-D video compression

The distortion of a video signal generally consists of source distortion ($D_s$) and channel distortion ($D_c$). $D_s$ is due to the compression process in the video encoder, while $D_c$ is caused by video packet losses introduced by the wireless channel. Hence, the total distortion of the left ($D_L$) and right ($D_R$) views can be expressed as
$$D_L = D_s^L + D_c^L$$
$$D_R = D_s^R + D_c^R$$
where $D_s^L$ and $D_s^R$ represent the mean squared errors (MSE) at the source encoder output for the left and right views, respectively. Meanwhile, $D_c^L$ and $D_c^R$ are the distortions of the left and right sequences, respectively, induced by the wireless channel. According to (6) and (7), the average distortion of the 3-D video signal ($D_T$) can be described as
$$D_T = \frac{D_L + D_R}{2}$$

To minimize $D_T$, two methods are followed. The first uses a rate-distortion (R-D) model to estimate the source encoding rates that minimize $D_s^L$ and $D_s^R$. The second reduces $D_c^L$ and $D_c^R$ by choosing suitable code rates for the LDPC encoder.

In the first method, $D_s^L$ and $D_s^R$ can be modeled as [35]
$$D_s^L = \frac{\theta_L}{R_L - R_0^L} + D_0^L$$
$$D_s^R = \frac{\theta_R}{R_R - R_0^R} + D_0^R$$
where $R_L$ and $R_R$ are the source encoding rates in bits per second (bps) of the left and right views, respectively. In addition, $\theta_L$, $R_0^L$, and $D_0^L$ are the sequence-dependent parameters of the R-D model of the left-view encoder, and $\theta_R$, $R_0^R$, and $D_0^R$ those of the right view [35]. The source distortion of the depth, $D_s^D$, can be calculated similarly:
$$D_s^D = \frac{\theta_D}{R_D - R_0^D} + D_0^D$$

where $R_D$ in bps is the encoding rate of the depth encoder.

Using non-linear curve-fitting tools, the relevant R-D curves of the left, right and depth sequences for the ‘Car’, ‘Hands’, ‘Horse’, ‘Bullinger’, ‘Alt Moabit’, and ‘Book arrival’ videos in [31] are plotted in Figures 6, 7, and 8, respectively. Hence, the distortion parameters in (9), (10), and (11) for the ‘Car’ video adopted in this article can be obtained as shown in Table 3. As can be seen from Figure 6, the variation in MSE becomes very small when $R_L$ is greater than 1.2 Mbps. Therefore, the encoding rate $R_L = R_R = 1.206$ Mbps is used to encode the left and colour sequences. Similarly, as can be seen in Figures 7 and 8, the variation in MSE becomes very small when $R_R$ and $R_D$ are greater than 350 kbps. In addition, the distortion effect on the depth sequence is smaller than on the colour sequence, as shown in Figure 3. Therefore, $R_R = R_D = 0.378$ Mbps is utilized for encoding in MRSC and VpD. These selected rates achieve a good balance between video quality and bandwidth.
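The knee behaviour described above can be reproduced numerically from the R-D model in (9). The parameter values below are hypothetical stand-ins (the paper's fitted values for the 'Car' sequence are in Table 3), chosen only to show the diminishing return of extra rate:

```python
# R-D model D_s = theta / (R - R0) + D0 with assumed (not the paper's)
# parameters: raising the rate lowers the source distortion, but the gain
# per extra bit shrinks past the knee of the curve.
def source_distortion(rate_bps, theta, r0, d0):
    return theta / (rate_bps - r0) + d0

theta, r0, d0 = 2.0e7, 1.0e5, 15.0        # hypothetical model parameters
d_low = source_distortion(0.6e6, theta, r0, d0)    # MSE at 0.6 Mbps
d_high = source_distortion(1.2e6, theta, r0, d0)   # MSE at 1.2 Mbps
assert d_high < d_low                      # more rate -> less distortion
# Doubling the rate again buys much less improvement than the first doubling:
assert (d_low - d_high) > (d_high - source_distortion(2.4e6, theta, r0, d0))
```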
Figure 6

Rate-distortion curve for the left view.

Figure 7

Rate-distortion curve for the right view.

Figure 8

Rate-distortion curve for the depth sequence.

Table 3

Distortion factors of left, right and depth encoders

Left view: θ_L, R_0^L, D_0^L

Right view: θ_R, R_0^R, D_0^R

Depth sequence: θ_D, R_0^D, D_0^D

Performance analysis of the LDPC codes 3-D system

The LDPC code, which has variable coding rates, is employed to protect the HP and LP streams. The values of $D_c^L$, $D_c^R$, and $D_c^D$ can be minimized with an appropriate design of the LDPC codec. The LDPC encoding and decoding operations must be efficient and simple; hence, the approximate lower triangular form (ALTF) encoding algorithm and the sum-product algorithm (SPA) decoding method are utilized to achieve this goal [26, 36].

The ALTF algorithm is based on row and column permutations only. It performs as many transformations as possible in order to reduce the gap (g) of the ALTF matrix, since the encoding complexity is proportional to the gap size.

SPA is a soft-decision algorithm that computes a posteriori probabilities of the code bits from the a priori probabilities of the received bits and uses them in the decoding operation. These probabilities are expressed as log-likelihood ratios.

Figure 9 shows the LDPC performance for a code length of 2048 and a maximum of fifty iterations with variable coding rates R = 8/16, 9/16, …, 13/16 under BPSK modulation. The gap values determined for each coding rate are shown in Table 4.
Figure 9

Performance of LDPC codes at different code rates.

Table 4

Gap values at various code rates

Coding rates | Column weight (j) | Row weight (k) | Gap (g)
As can be observed from Figure 9, decreasing the code rate improves the BER. However, it also increases the gap value, as shown in Table 4, which increases the computational complexity of channel encoding and decoding. Therefore, a suitable rate is best selected by striking a trade-off between channel codec complexity and video quality, both of which are determined according to the channel state.

From the channel-protection point of view, the coding rates of the LP and HP streams are allocated as follows. Two LDPC codes are utilized to protect the 3-D video signal with different or equal coding rates, as seen in Figure 5. The first is allocated to protect the HP stream with a coding rate of RLDPC-HP, while the second has a coding rate of RLDPC-LP. According to the proposed 3-D video encoder in Figure 1, the total bit rate ($R_T$) in bps is
$$R_T = (1 + r_1) R_{HP} + (1 + r_2) R_{LP}$$
where $r_1$ and $r_2$ are the parity-bit ratios of the first and second LDPC encoders, respectively. In addition, $R_{HP}$ and $R_{LP}$ are the bit rates of the HP and LP streams, as shown in Figure 1. $R_{HP}$ and $R_{LP}$ are determined according to the available SNR in the wireless channel and can be calculated as
R_HP = { R_H1 + R_H2,  SNR < SNR_th;  R_R,  SNR ≥ SNR_th }
R_LP = { R_L1 + R_L2,  SNR < SNR_th;  R_D,  SNR ≥ SNR_th }
where R_H1 and R_L1 are the bit rates of the HP and LP packets after Partitioner-1, respectively, and R_H2 and R_L2 are the bit rates of the HP and LP packets after Partitioner-2, respectively. SNR_th represents the SNR value at which the 3-D video source node switches from the D-VpD to the P-VpD schemes or vice versa. More details on SNR_th are presented in Section ‘Simulation and results of the 3-D video transmission over cooperative MIMO-OFDM systems’. The data rates of the right (R_R) and depth (R_D) sequences are:
R_R = R_H1 + R_L1
R_D = R_H2 + R_L2

R_H1, R_L1, R_H2, and R_L2 (after Partitioner-1 and Partitioner-2) can be calculated by counting the HP and LP packets. For example, if a video packet has a fixed length of 150 bytes, and there are 42 HP packets and 378 LP packets, then R_H1 is 50.4 kbps and R_L1 is 453.6 kbps.
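The rate bookkeeping above can be sketched as follows; the helper names are hypothetical, and a one-second GoP duration is assumed so that packet counts map directly to kbps as in the worked example.

```python
PACKET_BYTES = 150  # fixed packet length used in the worked example

def stream_rate_kbps(n_packets, gop_duration_s=1.0):
    """Bit rate of a packet stream: packets * 150 bytes * 8 bits / duration."""
    return n_packets * PACKET_BYTES * 8 / 1000.0 / gop_duration_s

def total_rate(r_hp, r_lp, r1, r2):
    """Total transmitted rate R_T = (1 + r1) R_HP + (1 + r2) R_LP, where
    r1, r2 are the parity-bit ratios of the two LDPC encoders
    (e.g. code rate 8/16 gives r = (16 - 8)/8 = 1; 13/16 gives r = 3/13)."""
    return (1 + r1) * r_hp + (1 + r2) * r_lp

r_h1 = stream_rate_kbps(42)    # 42 HP packets  -> 50.4 kbps
r_l1 = stream_rate_kbps(378)   # 378 LP packets -> 453.6 kbps
```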

According to the variables above, two factors must be considered to minimize the end-to-end 3-D video distortion (D_T). Firstly, the required data rate for the 3-D video signal must be less than or equal to R_T. Secondly, D_T should be less than or equal to 650.25, which represents the maximum tolerable distortion D_max [23].

The total source and channel distortion can be measured by calculating the objective joint peak signal to noise ratio (PSNR j ) at the output of video decoder as follows.
PSNR_j = 10 log10( 255^2 / D_T )
D_T = (MSE_l + MSE_r) / 2

In these equations, MSE_l and MSE_r denote the mean square errors between the original and reconstructed left and right sequences, respectively [37].
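A minimal sketch of the joint-PSNR calculation, assuming 8-bit video frames held as NumPy arrays; function and variable names are illustrative, not from the reference software.

```python
import numpy as np

D_MAX = 650.25  # maximum tolerable distortion from the text

def joint_psnr(left_orig, left_rec, right_orig, right_rec):
    """Joint PSNR for 8-bit stereo video: D_T = (MSE_l + MSE_r)/2 and
    PSNR_j = 10 log10(255^2 / D_T). Returns (PSNR_j, D_T)."""
    mse_l = np.mean((np.asarray(left_orig, float) - np.asarray(left_rec, float)) ** 2)
    mse_r = np.mean((np.asarray(right_orig, float) - np.asarray(right_rec, float)) ** 2)
    d_t = (mse_l + mse_r) / 2.0
    return 10.0 * np.log10(255.0 ** 2 / d_t), d_t
```

A frame pair with unit MSE in both views gives D_T = 1, which can then be checked against the D_MAX constraint.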

From the above discussion, it can be concluded that the encoding rates and the available SNR in the wireless channel determine the total 3-D video distortion. Moreover, the channel coding rates and the bandwidth are the main factors in minimizing the channel distortion.

Simulation and results of the 3-D video transmission over cooperative MIMO-OFDM systems

The proposed 3-D video encoder is implemented using Matlab. The H.264 reference software JM version 13.2 [38] is used for encoding the right and left views; it is also utilized to encode the right (color) and depth sequences. The cooperative MIMO-OFDM system is designed according to the models in Figures 1 and 5. Table 5 shows the simulation configurations.
Table 5

Simulation configurations

System parameters                  Value
Source coding                      H.264/AVC (JM reference software, version 13.2)
Tested sequence                    ‘Car’
Video sequence dimensions          (432x240) pixels
Tested video frames
Down sampling factor
Fading channel                     Quasi-static Rayleigh fading
Noise channel
Relay protocol
No. of antennas for source
No. of antennas for relay
No. of antennas for destination
Code rates                         4/16, 8/16, and 13/16 for UEP; 13/16 for EEP
Diversity technique                Alamouti scheme
Guard period ratio
OFDM sub-channels
To simulate the cooperative MIMO-OFDM system with LDPC coding, the following steps are adopted. Firstly, the model of the cooperative MIMO system in Section ‘Signal and channel models for cooperative MIMO-OFDM systems’ is implemented without OFDM. Then, the simulation model is validated against the model in ([39], Equation (33)) in terms of BER, as shown in Figure 10. Finally, the LDPC codes and the OFDM technique are added to the simulation model.
Figure 10

Comparison between the simulation model and the model in [39].

For noisy channels, most variable-length codes (VLCs) cannot be reconstructed, and in some cases the video decoder reconstructs wrong coefficients because it loses synchronization with the video encoder. To overcome this problem, this article proposes two error-resilient video methods. The first resynchronizes the video decoder using resynchronization patterns; this method was adopted in [25] and is extended to the SC and VpD applications in this article. The second makes the 3-D video transmitter adaptive to the channel state.

In the first method, special information in the video packet header is exploited by the video decoder to isolate the effect of error propagation. The header information is around 20 bytes long and contains the hexadecimal pattern 00 00 FF FF FF FF 80, which exists in most packets (e.g., SPS, PPS, intra- and even inter-frame packets). This pattern is used to maintain synchronization with the video encoder by restarting the decoding operation when an error occurs in a video packet. Error propagation can easily be detected by a cyclic redundancy check (CRC) at the decoder side: the decoder relies on the CRC to identify corrupt packets and discard them. Restarting the video decoder then minimizes the effect of errors and isolates error propagation between video packets. This method is suitable when the noise level is low.
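The decoder-side procedure can be sketched as below. The packet layout and the CRC-32 choice are assumptions for illustration (the paper does not specify the CRC polynomial); the resynchronization pattern is the one quoted above.

```python
import zlib

# Resynchronization pattern quoted in the text.
RESYNC_PATTERN = bytes.fromhex("0000FFFFFFFF80")

def packet_is_intact(payload, crc_trailer):
    """Decoder-side integrity check: recompute CRC-32 over the payload
    and compare it with the received trailer."""
    return zlib.crc32(payload) == crc_trailer

def next_resync_point(bitstream, start=0):
    """Scan forward for the resync pattern so decoding can restart after
    a corrupt packet is discarded; returns -1 if no pattern remains."""
    return bitstream.find(RESYNC_PATTERN, start)
```

On a CRC failure, the decoder discards the packet and jumps to the next resync point to resume decoding, confining the error to a single packet.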

The second method (which will be explained in more detail later) exploits the CSI signal to achieve adaptive video transmission. In this method, the 3-D video transmitter allocates the coding rates of the LDPC encoders according to one of several UEP schemes or a fixed EEP scheme.

The first test measures the data rate required to transmit SC, MRSC, and VpD under different UEP and EEP schemes. Table 6 shows the required data rate for each 3-D coding method, where the code rates 8/16 and 13/16 are adopted.
Table 6

Required data rates (R_T) for the 3-D coding methods
As outlined in Table 6, the D-VpD scheme offers better data rates than the D-SC and D-MRSC schemes. In addition, the packet-partitioning schemes for VpD have lower data rates than D-VpD. Moreover, compared to the MRSC and SC schemes, the packet-partitioning schemes of VpD provide the same 3-D video quality with the lowest bandwidth.

Figure 11 plots the average decoded 3-D video quality in terms of PSNR for the direct transmission schemes. The results lead to the following observations: (1) The D-VpD-UEP scheme performs better than the D-SC and D-MRSC schemes because the depth sequence is not deeply affected by noise. Hence, it can be concluded that if the right (color) view is reconstructed perfectly, the left view can be reconstructed acceptably even if noise effects have spread through the depth sequence. This is clearly observed when the color receives more error protection than the depth under the UEP technique. However, the D-VpD-UEP scheme requires a higher data rate (5.3915 Mbps) compared to the D-SC and D-VpD schemes when the channel code rate of the HP stream (R_LDPC-HP) is reduced to 4/16. (2) Decreasing the data rate reduces the video signal protection, which makes the video signal more sensitive to error propagation. This can be clearly seen in the D-VpD-UEP (R_LDPC-HP = 8/16) and D-VpD-EEP schemes. (3) At low SNRs (−9 to −6 dB), the D-VpD-UEP (R_LDPC-HP = 4/16) scheme performs better than the other video schemes, while its performance is close to D-VpD-UEP (R_LDPC-HP = 8/16) and D-VpD-EEP at moderate SNRs (−6 to −3 dB) and high SNRs (−3 to −2 dB), respectively. (4) UEP is able to enhance the quality of the reconstructed 3-D video signal at low-to-moderate SNRs. However, it makes the channel encoding and decoding operations more complicated, as shown in Table 4. Therefore, with a suitable allocation of the channel code rates based on the channel’s SNR, high system performance can be achieved with lower computational encoding and decoding complexity. This is obtained by choosing the code rate R_LDPC-HP = 4/16 at low SNRs, the medium code rate R_LDPC-HP = 8/16 at moderate SNRs, and the high code rate R_LDPC-HP = 13/16 at high SNRs.
Figure 11

PSNR at different direct transmission schemes.

According to the above discussion, the 3-D video system can adopt the following 3-D video protocol to achieve high video quality at different channel states with the lowest possible bandwidth.

The 3-D video scheme is:
  D-VpD-UEP (R_LDPC-HP = 4/16),   SNR < SNR_th1
  D-VpD-UEP (R_LDPC-HP = 8/16),   SNR_th1 ≤ SNR < SNR_th2
  D-VpD-EEP,                      SNR > SNR_th2

where the values of SNR_th1 and SNR_th2 are −6 and −3 dB, respectively. These SNR values are chosen to achieve high video quality with the lowest possible bandwidth at moderate-to-high SNRs. Hence, when SNR < SNR_th1, the D-VpD-UEP (R_LDPC-HP = 4/16) scheme is adopted to achieve a PSNR between 31.5 and 41.36 dB at a data rate of 5.3915 Mbps, while the D-VpD-UEP (R_LDPC-HP = 8/16) and D-VpD-EEP schemes are selected at SNR_th1 ≤ SNR < SNR_th2 and SNR > SNR_th2 to achieve a PSNR of 41.36 dB at data rates of 2.987 and 2.062 Mbps, respectively.
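A minimal sketch of this protocol as a rate-selection function, using the thresholds from the text; the function name is illustrative, not from the authors' implementation.

```python
SNR_TH1, SNR_TH2 = -6.0, -3.0  # protocol thresholds in dB

def select_scheme(snr_db):
    """First transmission protocol: pick the VpD scheme from the channel SNR."""
    if snr_db < SNR_TH1:
        return "D-VpD-UEP (R_LDPC-HP = 4/16)"
    if snr_db < SNR_TH2:
        return "D-VpD-UEP (R_LDPC-HP = 8/16)"
    return "D-VpD-EEP"
```

In the adaptive system, this decision would be re-evaluated whenever fresh CSI arrives, driving the switch circuits of Figure 1.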

As mentioned earlier, the P-VpD schemes are proposed to reduce the data rate for transmission. The performance of these schemes compared to the D-VpD schemes is shown in Figure 12. The results lead to the following observations. Although the 3-D video streams are protected by LDPC codes, recovery of the video signal is almost impossible at low SNRs (−9 to −6 dB). This is mainly due to excessive errors in the VLC bitstreams that cause severe error propagation. In this case, overcoming the error propagation effect requires high protection at a high data rate, which increases the complexity of the channel encoding and decoding operations. Therefore, Switch-1 and Switch-2 in Figure 1 switch to the D-VpD-UEP scheme at low SNRs, while they switch to P-VpD-UEP and D-VpD-EEP at moderate and high SNRs, respectively. This adaptive technique enables the video system to display the 3-D video signal at low SNRs. Furthermore, it reduces the complexity of the encoding and decoding operations as well as the required data rates at moderate-to-high SNRs. In addition, the proposed technique adopts different VpD schemes, which makes it more flexible in achieving high 3-D video quality at low data rates. The selection among these schemes depends on the required video quality and the complexity of the video system. Thus, the D-VpD schemes represent a restricted design between the right (color) and depth sequences for the UEP scheme, while the P-VpD schemes give the 3-D video system the flexibility to determine the more important and less important information inside the color and depth sequences.
Figure 12

PSNR at different P-VpD schemes compared to D-VpD schemes.

Based on the results in Figure 12, the transmission protocol for the 3-D video transmission over cooperative MIMO-OFDM systems can be proposed as follows.
The 3-D video scheme is:
  D-VpD-UEP (R_LDPC-HP = 4/16),   SNR < SNR_th1
  P-VpD-1/2,                      SNR_th1 ≤ SNR < SNR_th2
  P-VpD,                          SNR_th2 ≤ SNR < SNR_th3
  D-VpD-EEP,                      SNR > SNR_th3

where SNR_th1, SNR_th2, and SNR_th3 are −6, −4, and −3 dB, respectively. Furthermore, SNR_th1 represents SNR_th in (13) and (14). These SNR values are chosen to achieve high video quality with the lowest possible bandwidth at moderate-to-high SNRs. Hence, when SNR < SNR_th1, the D-VpD-UEP (R_LDPC-HP = 4/16) scheme is adopted to achieve a PSNR between 31.5 and 41.36 dB at a data rate of 5.3915 Mbps, while the P-VpD-1/2, P-VpD, and D-VpD-EEP schemes are selected at SNR_th1 ≤ SNR < SNR_th2, SNR_th2 ≤ SNR < SNR_th3, and SNR > SNR_th3 to achieve a PSNR of 41.36 dB at data rates of 2.876, 2.364, and 2.062 Mbps, respectively.
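This second protocol can likewise be sketched as a table-driven selector, which makes adding or removing schemes straightforward; the thresholds and scheme names are from the text, while the helper names are illustrative.

```python
# Each entry: (upper SNR threshold in dB, scheme); None marks the last branch.
PROTOCOL_2 = [
    (-6.0, "D-VpD-UEP (R_LDPC-HP = 4/16)"),  # SNR < SNR_th1
    (-4.0, "P-VpD-1/2"),                     # SNR_th1 <= SNR < SNR_th2
    (-3.0, "P-VpD"),                         # SNR_th2 <= SNR < SNR_th3
    (None, "D-VpD-EEP"),                     # SNR > SNR_th3
]

def scheme_for_snr(snr_db, protocol=PROTOCOL_2):
    """Return the first scheme whose SNR interval contains snr_db."""
    for threshold, scheme in protocol:
        if threshold is None or snr_db < threshold:
            return scheme
```

The third protocol (with P-V-1/2 and P-V branches) would be expressed by extending the same table with its additional thresholds.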

To realize the above protocol, the switch circuits follow the different states shown in Table 2. In addition, Partitioner-1 and Partitioner-2 change their behavior according to the control unit. It can be concluded that the proposed 3-D video system exhibits a high level of flexibility, changing its behavior for any channel state to achieve reliable video transmission.

According to Table 6, the P-V schemes (P-V, P-V-1/4, and P-V-1/2) require lower data rates than the P-VpD, P-VpD-1/4, and P-VpD-1/2 schemes. The performance comparison between the D-VpD, P-VpD, and P-V schemes is shown in Figure 13. As observed in this figure, although the P-VpD schemes perform better than the P-V schemes at low SNRs, the performance of the two (P-VpD and P-V) is close at moderate-to-high SNRs. Therefore, it is possible to propose another protocol for 3-D video transmission, similar to the previous one:
The 3-D video scheme is:
  D-VpD-UEP (R_LDPC-HP = 4/16),   SNR < SNR_th1
  P-VpD-1/2,                      SNR_th1 ≤ SNR < SNR_th2
  P-V-1/2,                        SNR_th2 ≤ SNR < SNR_th3
  P-V,                            SNR_th3 ≤ SNR < SNR_th4
  D-VpD-EEP,                      SNR > SNR_th4
Figure 13

PSNR at different P-V schemes compared to P-VpD schemes.

where SNR_th1, SNR_th2, SNR_th3, and SNR_th4 are −6, −5, −4, and −2.5 dB, respectively. These values are chosen to achieve high video quality with the lowest possible bandwidth at moderate-to-high SNRs. Hence, when SNR < SNR_th1, the D-VpD-UEP (R_LDPC-HP = 4/16) scheme is adopted to achieve a PSNR between 31.5 and 41.36 dB at a data rate of 5.3915 Mbps, while the P-VpD-1/2, P-V-1/2, P-V, and D-VpD-EEP schemes are selected at SNR_th1 ≤ SNR < SNR_th2, SNR_th2 ≤ SNR < SNR_th3, SNR_th3 ≤ SNR < SNR_th4, and SNR > SNR_th4 to achieve a PSNR of 41.36 dB at data rates of 2.876, 2.653, 2.301, and 2.062 Mbps, respectively.

To realize this protocol, the control unit switches the switch circuits according to Table 2 and changes the packet-partitioning method in Partitioner-1 and Partitioner-2 according to the CSI.

For comparative purposes, Figure 14 shows the reconstructed left and right pictures of frame 19 of the ‘Car’ video sequence under the different transmission schemes of the protocols at different SNRs. According to the proposed protocols and Figure 14, the proposed system is highly flexible in adapting to the quality of the underlying wireless channel.
Figure 14

The reconstructed left and right pictures of frame 19 of the ‘Car’ video sequence under the different transmission schemes of the protocols at different SNRs; (a,b) include D-VpD-UEP (R_LDPC-HP = 4/16) at SNR = −9 dB; (c,d) include D-VpD-UEP (R_LDPC-HP = 8/16) and P-VpD-1/2 at SNR = −6 dB; (e,f) include P-VpD-1/2, P-VpD, P-V-1/2, P-V, and D-VpD-EEP when SNR is greater than −6 dB.


Conclusions

In this article, a novel UEP scheme is proposed to transmit 3-D video signals over cooperative MIMO-OFDM systems. Within this framework, a new video encoder and a cooperative MIMO-OFDM architecture for 3-D video transmission are proposed. Specifically, the 3-D video encoder adopts various UEP schemes with two error-resilient methods to overcome the effects of error propagation in the 3-D video streams. The first method uses a resynchronization technique, which is useful when SNRs are high. The second method adopts seven UEP schemes based on packet partitioning and direct transmission of video packets. Based on the performance of the proposed schemes, three protocols for 3-D video transmission are proposed to enhance the system performance at different states of the wireless channel. They achieve high video quality at different channel states with the lowest possible bandwidth. Switching operations are proposed to realize these protocols, which adapt to the variation of the wireless channel. The simulation results have demonstrated the effectiveness of the proposed 3-D video protocols.

Joint source and channel rate optimization of the cooperative MIMO-OFDM system using hybrid AF and DF for 3-D video applications will be considered in the future work of this research.


Authors’ Affiliations

Faculty of Engineering and Surveying, University of Southern Queensland


References

  1. Gotchev A, Akar G, Capin T, Strohmeier D, Boev A: Three-dimensional media for mobile devices. Proc. IEEE 2011, 99(4):708-741.
  2. Aksay A, Bici MO, Bugdayci D, Tikanmaki A, Gotchev A, Aksar GB: A study on the effect of MPE-FEC for 3D video broadcasting over DVB-H. In Proc. International Mobile Multimedia Communications Conference. London, UK; 2009.
  3. Uysal M: Cooperative Communications for Improved Wireless Network Transmission: Framework for Virtual Antenna Array Applications. IGI Global, Hershey; 2010.
  4. Nuaymi L: WiMAX: Technology for Broadband Wireless Access. John Wiley & Sons, New York, USA; 2007.
  5. Zhang Y, Chen HH: Mobile WiMAX: Toward Broadband Wireless Metropolitan Area Networks. Taylor & Francis Group, New York, USA; 2008.
  6. Wang H, Kondi L, Luthra A, Ci S: 4G Wireless Video Communications. John Wiley and Sons, New York, USA; 2009.
  7. Thomos N, Boulgouris N, Strintzis M: Wireless image transmission using turbo codes and optimal unequal error protection. IEEE Trans. Image Process 2005, 14(11):1890-1901.
  8. Rahnavard N, Fekri F: Unequal error protection using low-density parity-check codes. In Proc. IEEE Int. Symp. Inf. Theory. Geneva, Switzerland; 2004.
  9. Yang GH, Shen D, Li V: UEP for video transmission in space-time coded OFDM systems. In Twenty-third Annual Joint Conference of the IEEE Computer and Communications Societies, INFOCOM, vol. 2. Hong Kong, China; 2004.
  10. Dogan S, Sadka AH, Kondoz AM: Error-resilient techniques for video transmission over wireless channels. Arabian J. Sci. Eng 1999, 24:101-114.
  11. Wu JC, Li CM, Chen KH: Adaptive digital video transmission in MIMO systems. In Second International Symposium on Intelligent Information Technology Application, IITA’08. Shanghai, China; 2008.
  12. Song D, Chen CW: Scalable H.264/AVC video transmission over MIMO wireless systems with adaptive channel selection based on partial channel information. IEEE Trans. Circ. Syst. Video Technol 2007, 17(9):1218-1226.
  13. Song D, Chen CW: Maximum-throughput delivery of SVC-based video over MIMO systems with time-varying channel capacity. J. Visual Commun. Image Represent 2008, 19(8):520-528. doi:10.1016/j.jvcir.2008.06.008
  14. Zheng H, Ru C, Chen W, Yu L: Video transmission over MIMO-OFDM system: MDC and space-time coding-based approaches. Adv. Multimedia 2007, 2007(1):8.
  15. Kuo CH, Kim CS, Kuo CC: Robust video transmission over wideband wireless channel using space-time coded OFDM systems. In IEEE Wireless Communications and Networking Conference, WCNC 2002. Orlando, FL, USA; 2002.
  16. Lin S, Stefanov A, Wang Y: On the performance of space-time block-coded MIMO video communications. IEEE Trans. Veh. Technol 2007, 56(3):1223-1229.
  17. Hewage C, Ahmad Z, Worrall S, Dogan S, Fernando W, Kondoz A: Unequal error protection for backward compatible 3-D video transmission over WiMAX. In IEEE International Symposium on Circuits and Systems, ISCAS. Taipei, Taiwan; 2009.
  18. Akar G, Tekalp A, Fehn C, Civanlar M: Transport methods in 3DTV—a survey. IEEE Trans. Circ. Syst. Video Technol 2007, 17(11):1622-1630.
  19. Hewage C, Worrall S, Dogan S, Kodikaraarachchi H, Kondoz A: Stereoscopic TV over IP. In 4th European Conference on Visual Media Production, IETCVMP. London, UK; 2007.
  20. Hewage C, Worrall S, Dogan S, Villette S, Kondoz A: Quality evaluation of color plus depth map-based stereoscopic video. IEEE J. Sel. Top. Signal Process 2009, 3(2):304-318.
  21. Tampere University of Technology: Development and optimization of coding algorithms for mobile 3DTV. Germany; 2009.
  22. Micallef B, Debono C: An analysis on the effect of transmission errors in real-time H.264-MVC bit-streams. In 15th IEEE Mediterranean Electrotechnical Conference, MELECON. Valletta, Malta; 2010.
  23. Hellge C, Gomez-Barquero D, Schierl T, Wiegand T: Layer-aware forward error correction for mobile broadcast of layered media. IEEE Trans. Multimedia 2011, 13(3):551-562.
  24. Salim OH, Xiang W, Leis J: Error-resilient video transmission for 3-D signal over cooperative-MIMO system. In European Signal Processing Conference (EUSIPCO). Barcelona, Spain; 2011.
  25. Salim OH, Xiang W: Prioritized 3-D video transmission over cooperative MIMO-OFDM systems. In International Conference on Digital Image Computing Techniques and Applications (DICTA). Noosa, QLD, Australia; 2011.
  26. Richardson T, Urbanke R: Efficient encoding of low-density parity-check codes. IEEE Trans. Inf. Theory 2001, 47(2):638-656. doi:10.1109/18.910579
  27. Smolic A, Mueller K, Merkle P, Kauff P, Wiegand T: An overview of available and emerging 3D video formats and depth enhanced stereo as efficient generic solution. In Picture Coding Symposium, PCS. Chicago, Illinois, USA; 2009.
  28. Brust H, Smolic A, Mueller K, Tech G, Wiegand T: Mixed resolution coding of stereoscopic video for mobile devices. In 3DTV Conference: The True Vision—Capture, Transmission and Display of 3D Video. Potsdam, Germany; 2009.
  29. Fehn C: Depth-image-based rendering (DIBR), compression and transmission for a new approach on 3D-TV. In Proc. Stereoscopic Displays and Virtual Reality Systems (SPIE). San Jose, CA; 2004.
  30. Richardson IE: The H.264 Advanced Video Compression Standard. John Wiley and Sons, New York, USA; 2010.
  31. Mobile 3DTV
  32. Xiao H, Dai Q, Ji X, Zhu W: A novel JSCC framework with diversity-multiplexing-coding gain tradeoff for scalable video transmission over cooperative MIMO. IEEE Trans. Circ. Syst. Video Technol 2010, 20(7):994-1006.
  33. Hong YWP, Huang WJ, Kuo CCJ: Cooperative Communications and Networking: Technologies and System Design. Springer, New York; 2010.
  34. Alamouti S: A simple transmit diversity technique for wireless communications. IEEE J. Sel. Areas Commun 1998, 16(8):1451-1458. doi:10.1109/49.730453
  35. Stuhlmuller K, Farber N, Link M, Girod B: Analysis of video transmission over lossy channels. IEEE J. Sel. Areas Commun 2000, 18(6):1012-1032.
  36. Introducing low-density parity-check codes
  37. Bici MO, Bugdayci D, Akar GB, Gotchev A: Mobile 3D video broadcast. In Proc. International Conference on Image Processing. Hong Kong; 2010.
  38. H.264/AVC Reference Software, JM Version 13.2
  39. Abdaoui A, Ikki S, Ahmed M, Chaatelet E: On the performance analysis of a MIMO-relaying scheme with space-time block codes. IEEE Trans. Veh. Technol 2010, 59(7):3604-3609.


© Salim and Xiang; licensee Springer. 2012

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.