Optimization for image transmission over varying channel with MCMC
EURASIP Journal on Wireless Communications and Networking volume 2012, Article number: 275 (2012)
Abstract
Existing work on media transmission generally assumes that the channel condition is stationary. In practice, however, communication channels often vary with time. Adaptive designs require frequent feedback for channel updates, which is often impractical due to complexity and delay. In this article, we design unequal error protection for image transmission over noisy varying channels based on their distribution functions. Since the channel effect must be marginalized in order to find the appropriate rate allocation, the optimization problem is very complex. We propose to solve this problem using the Markov chain Monte Carlo (MCMC) method. The cost function is first mapped into a multivariable probability distribution. Then, relying on “detailed balance”, MCMC is designed to generate samples from the mapped stationary probability distribution, so that the optimal solution is the one that gives the lowest data distortion. We also show that a rate allocation designed from the channel probability function works better than one designed from the mean value of the channel.
Introduction
Progressive image compression, such as SPIHT [1], exploits the inherent similarities across the subbands in a wavelet decomposition of an image: the algorithm codes the most important wavelet coefficients first and transmits the bits so that an increasingly refined copy of the original image is obtained progressively. Progressive compression is widely used in many applications because the media can be restored with the best quality from a sequence of contiguous error-free data. However, in the coded data stream, any bit error due to channel noise causes a loss of synchronization between the sender and receiver, which means that all data after that bit error must be discarded. An important issue in image transmission is therefore to design a protection strategy for the source data, i.e., to allocate channel code rates to different data packets based on the channel condition and the rate-distortion feature of the source, in order to optimize the overall recovery quality of the image over the noisy channel.
In [2], cyclic redundancy check codes and rate-compatible punctured convolutional codes (CRC/RCPC) were employed to protect SPIHT-coded data and achieved better performance than previous results on binary symmetric channels (BSCs). This work used equal error protection and was later extended to product code protection [3] for the Gilbert–Elliott channel (GEC) model. Since then, many error-control solutions for progressive image transmission [3–11] have been proposed. These methods use different codes, including CRC/RCPC [5, 7, 8], CRC/RCPT codes [6, 9–11], Reed–Solomon (RS) codes [4, 11], and their product codes. Different channel conditions are also considered, including BSC, GEC, and packet loss channels.
All these methods consider only a fixed channel condition. In practice, however, the channel condition changes with time, mainly due to mobility and multipath effects in the communication. This feature has also been considered in video transmission, e.g., in [12], where the scalability of precoded JPEG 2000 video is exploited and a fast source rate allocation is designed through the steepest descent algorithm. In this type of work, the varying nature of the channel is often represented as a varying transmission bandwidth; the physical additive white Gaussian noise (AWGN) and channel coding are generally not considered. In this article, we consider unequal error protection (UEP) design through channel codes for varying channels. Intuitively, a UEP system would have to be updated with frequent feedback of the channel condition. However, this is impractical because the multidimensional optimization process for the rate allocation design is often complicated and the channel variation can be fast. A more realistic solution is therefore to design the protection method considering all the channel conditions. The channel variation, captured by the probability density function (PDF) of the signal-to-noise ratio (SNR), can often be estimated from a long period of operation of the communication network. With the channel PDF taken into account, the optimization problem is even more complicated than in previous designs because channel effects must also be marginalized.
In this article, we apply the Markov chain Monte Carlo (MCMC) technique to the optimal UEP system design for varying channels. MCMC is a method of sampling from a probability density function by constructing a Markov chain; with the “detailed balance” mechanism, the equilibrium state of the Markov chain matches the targeted probability distribution [13]. Since the purpose is to find the channel code rate allocation for which the transmitted image has the lowest distortion over the varying channel, we first use the exponential function of the simulated annealing method to map the cost function into a probability distribution. Then we use the slice sampling variant of MCMC to generate samples. Since these samples approach the mapped stationary probability distribution, the MCMC method not only suggests the optimal design but also provides more probabilistic information than other heuristic optimization methods. We show that MCMC enables a low-complexity design with a solution approaching the optimal one. Finally, we show that a system design based on the channel distribution outperforms a design based on a specific channel statistic such as the mean value.
The rest of this article is organized as follows. In Section “Problem description”, the image transmission problem is described and formulated. In Section “MCMC method with slice sampling”, the MCMC method and its application to the image transmission problem are discussed. Simulation results are presented in Section “Simulation and results” and the conclusion is drawn in Section “Conclusion”.
Problem description
We consider a joint source-channel coding system with N coded packets of fixed packet length, as shown in Figure 1. This data structure was considered in [6] and is justified in many applications. Rate-compatible punctured turbo (RCPT) codes are used to provide UEP. Assume that the set of available channel code rates is $\mathcal{C}=\{c_1,c_2,c_3,\dots,c_M\}$ with $c_1 > c_2 > \cdots > c_M$, where $c_i$, $i=1,\dots,M$, is the ratio of the source data length to the packet length. Since each packet has the fixed length L, the corresponding number of source bits for code rate $c_i$ is $s_i = c_i \cdot L$. Therefore the overall number of source bits is $\sum_{i=1}^{N} c_i \cdot L$ and the overall number of channel parity bits is $\sum_{i=1}^{N}(1-c_i)\cdot L$. After transmission over a specific channel with SNR = x dB, the packet error probabilities after decoding are denoted $\mathcal{E}=\{e_1(x), e_2(x), \dots, e_M(x)\}$, where $e_1(x) > e_2(x) > \cdots > e_M(x)$ [14].
In this article, the turbo code $(15,17)_{\mathrm{oct}}$ with mother code rate 1/3 is considered. A set of code rates $\{4/4, 4/5, 4/6, 4/7, 4/8, 4/9, 4/10, 4/11, 4/12\}$ is obtained through puncturing [15]. The coded packet size is L = 4096. Figure 2 shows the residual packet error rate (PER) under different AWGN channel conditions. Note that the PER also depends on the packet size and would differ for a different packet length; a larger data length generally increases the coding gain in turbo decoding.
Suppose that a message contains N coded packets, each protected with channel code rate ${r}_{i}\in \mathcal{C}$, $i=1,\dots,N$, where N is determined by the total transmission rate and the packet length. For channel x, the corresponding PER is ${p}_{i}(x)\in \mathcal{E}$. Denote by $P_i(x)$ the probability that the first i packets are decoded without errors but the (i+1)th packet is not correctly decoded. Then

$$P_i(x) = p_{i+1}(x)\prod_{k=1}^{i}\bigl(1-p_k(x)\bigr),\quad i=0,1,\dots,N-1, \qquad P_N(x)=\prod_{k=1}^{N}\bigl(1-p_k(x)\bigr). \tag{1}$$
Distortion between the recovered image and the original image is measured by the mean square error (MSE). Let $D_i$ be the distortion of the image restored from the first i correctly decoded packets. Then, with an allocation ${\mathcal{R}}_{N}=\{r_1,r_2,\dots,r_N\}$, $r_i\in\mathcal{C}$, of channel code rates for the data packets, the expected distortion is

$$E[D(\mathcal{R}_N, x)] = \sum_{i=0}^{N} P_i(x)\, D_i. \tag{2}$$
The distortion in Equation (2) has a recursive representation [5]:

$$E[D(\mathcal{R}_N, x)] = p_1(x)\,D_0 + \bigl(1-p_1(x)\bigr)\,E[D(\mathcal{R}_{N-1}, x)], \tag{3}$$

where $D_0$ is the distortion when no correct packet is received, $\Delta D_i$ is the distortion reduction between $D_{i-1}$ and $D_i$, and ${\mathcal{R}}_{N-1}=\{r_2,\dots,r_N\}$ is the rate allocation for the last N - 1 packets. $E[D(\mathcal{R}_{N-1}, x)]$ is the analogous expected distortion over packets 2, …, N, whose base distortion is $D_1 = D_0 - \Delta D_1$.
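The expected distortion $\sum_{i=0}^{N} P_i(x)\,D_i$ follows directly from the definitions of $P_i(x)$ and $D_i$. A minimal sketch of its computation (the PER and distortion values below are invented for illustration, not taken from the paper's tables):

```python
def expected_distortion(per, D):
    """E[D] = sum_i P_i * D_i, where P_i is the probability that the first
    i packets decode correctly and packet i+1 fails.
    per: per-packet residual packet error rates [p_1, ..., p_N]
    D:   distortions [D_0, ..., D_N]; D_i after decoding the first i packets."""
    N = len(per)
    assert len(D) == N + 1
    total, prefix_ok = 0.0, 1.0   # prefix_ok = prod_{k<=i} (1 - p_k)
    for i in range(N):
        total += prefix_ok * per[i] * D[i]   # first i packets ok, packet i+1 fails
        prefix_ok *= 1.0 - per[i]
    total += prefix_ok * D[N]                # all N packets decoded correctly
    return total
```

With two packets that each fail half the time and distortions D = [4, 2, 1], the four decoding outcomes weight the distortions as 0.5·4 + 0.25·2 + 0.25·1 = 2.75.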
When the channel SNR is given, a dynamic programming method was suggested in [5] for this distortion-based optimization problem. The method is a backward-updating process: the reduced distortion of a later packet is used first to determine the optimal allocation. However, the reduced distortion of a later packet actually depends on the channel code rates of earlier packets, as is evident in Equation (3). Therefore, although a forward-updating process along the trellis is more practical [6, 11], the resulting solution is suboptimal, as indicated in [6].
For a varying channel, if the channel SNR density function $f_X(x)$ is known, then the joint source-channel coding problem becomes finding a channel code rate set ${\mathcal{R}}_{N}$ that minimizes, subject to a given overall transmission rate,

$$E[D(\mathcal{R}_N)] = \int f_X(x)\, E[D(\mathcal{R}_N, x)]\, dx. \tag{4}$$
Clearly, this differs from finding the optimal rate allocation based on the mean channel condition, because the mean SNR is a fixed value and the distortion is

$$E[D(\mathcal{R}_N, \bar{x})] = \sum_{i=0}^{N} P_i(\bar{x})\, D_i, \tag{5}$$

where $\bar{x}$ is the mean channel SNR. The varying channel case is more complicated than considering only a specific channel instance, because the channel variable x must be marginalized in the optimization process. In the next section, we propose to solve this optimization problem with the MCMC method.
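The marginalization over the channel PDF can be approximated numerically on an SNR grid. A sketch under stated assumptions: `pers_at` is a hypothetical model mapping SNR x to the per-packet residual PERs, and the trapezoidal rule stands in for the integral:

```python
def trapz(ys, xs):
    """Trapezoidal rule on a grid."""
    return sum(0.5 * (ys[k] + ys[k + 1]) * (xs[k + 1] - xs[k])
               for k in range(len(xs) - 1))

def marginal_distortion(pers_at, D, fx, xs):
    """Approximate the channel-marginalized distortion: integrate E[D | x]
    against the channel SNR density f_X(x) over the grid xs (in dB).
    pers_at(x) -> [p_1(x), ..., p_N(x)]  (hypothetical decoder model)
    D = [D_0, ..., D_N]                   (progressive distortions)
    fx(x)                                 (unnormalized channel PDF)"""
    def e_dist(per):                      # E[D | x], as in Eq. (2)
        total, ok = 0.0, 1.0
        for i, p in enumerate(per):
            total += ok * p * D[i]        # first i packets ok, packet i+1 fails
            ok *= 1.0 - p
        return total + ok * D[-1]
    dens = [fx(x) for x in xs]
    z = trapz(dens, xs)                   # normalize the PDF on the grid
    vals = [e_dist(pers_at(x)) * d / z for x, d in zip(xs, dens)]
    return trapz(vals, xs)

# Example: uniform channel PDF on [0, 4] dB with a constant (illustrative) PER
xs = [i * 0.04 for i in range(101)]
avg = marginal_distortion(lambda x: [0.5, 0.5], [4.0, 2.0, 1.0],
                          lambda x: 1.0, xs)   # → 2.75 for this constant-PER toy
```

In a real design the PER model would come from decoder simulations such as those behind Figure 2, and this average would be the objective evaluated for each candidate rate allocation.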
MCMC method with slice sampling
In this section, we address the mapping between the cost function and the stationary probability used in MCMC, the slice sampling technique, and their application to the image transmission problem.
Mapping the cost function to probability
The cost function takes the channel code rate of each packet as input and returns the distortion value. When the image message is transported in N packets, there are N inputs, ${\mathcal{R}}_{N}=\{r_1,\dots,r_N\}$, to consider in generating the output. The optimization task is to find the ${\mathcal{R}}_{N}$ that gives the minimum distortion value. The first issue in using MCMC is to map the cost function J(·) into a likelihood function that can serve as the stationary probability distribution of the Markov chain. We use the mapping from simulated annealing for this purpose, i.e.,

$$P(\mathcal{R}_N) = \frac{1}{Z_\tau}\exp\!\left\{-\frac{J(\mathcal{R}_N)}{\tau}\right\}, \tag{6}$$
where $Z_\tau$ is a normalization constant that keeps every probability value between 0 and 1 and makes the probabilities sum to 1.
If the current state of the Markov chain is x(t) = i, suppose a neighbor j of i is visited with probability $q_{ij}$, where $q_{ij} = q_{ji}$ is required. Then j is selected as the new sample with probability

$$\alpha(i \to j) = \min\left\{1,\ \exp\!\left(-\frac{J(j)-J(i)}{\tau(t)}\right)\right\}. \tag{7}$$
This is termed the Metropolis process, and it can be proved that with this process the stationary probability of the Markov chain is (6). Note that:

(1) If $J(j)\to\infty$, then $\exp\{-[J(j)-J(i)]/\tau(t)\}\to 0$, so the probability of accepting j tends to 0. That is, a state j associated with a high cost has little chance of being accepted.

(2) On the other hand, $J(j)\to 0$ means the new cost is negligible. Then $-[J(j)-J(i)]/\tau(t)$ can be positive, so the acceptance probability equals 1 and the new state j is always selected due to its low cost.
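The two cases above are exactly the behavior of the Metropolis acceptance rule. A minimal sketch (the cost values are placeholders for the distortion cost J):

```python
import math
import random

def metropolis_accept(J_i, J_j, tau, rng=random):
    """Metropolis rule for the cost-mapped distribution: accept the proposed
    state j with probability min(1, exp(-(J(j) - J(i)) / tau))."""
    # Clamp the exponent at 0 so alpha <= 1 and exp() cannot overflow.
    alpha = math.exp(min(0.0, -(J_j - J_i) / tau))
    return rng.random() < alpha
```

A move to a lower cost (J_j < J_i) gives alpha = 1 and is always accepted; a much higher cost at low temperature gives alpha ≈ 0 and is essentially always rejected.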
The parameter τ(t), termed the temperature in simulated annealing, also needs careful consideration. To make the algorithm approach the optimal solution, the temperature should change with time according to a non-increasing function, representing a cooling-down process. τ(0) is usually set to a high value at the beginning of the simulation, which ensures that new states are accepted with high probability; in this stage, the system can reach diverse states according to their probability contribution. When τ(t) approaches 0, the acceptance probability becomes very small and the system is reluctant to accept a new state j, which is often described as the “frozen” stage. The temperature is usually decreased slowly, to give the system enough time to be “trapped” in the low-cost region. In our simulation, the temperature is reduced after each new sample is generated.
Slice sampling
Simulated annealing uses the normal Metropolis-Hastings method [17] for sampling from the probability distribution. Slice sampling [13] is based on the observation that to sample a random variable one can sample uniformly from the region under the graph of its density function. Compared with slice sampling, the normal Metropolis method is inefficient for two reasons. First, with a fixed step size, the simple Metropolis method may not characterize the local properties of the probability function well. Second, the simple Metropolis method takes a random walk to find the next step, and can therefore take a large number of steps to reach a state that is only a few steps away. Slice sampling can be applied wherever the Metropolis method can. Its advantage over the simple Metropolis method is that it adapts parameters such as the step size more flexibly; the only requirement is that the target density function P(x) can be evaluated at any state x(t) [18]. A brief description of slice sampling in one dimension is as follows.

(1) evaluate P[x(t) = i]

(2) draw a vertical coordinate u ∼ Uniform(0, P[x(t) = i])

(3) create a neighbor space enclosing the current state i, i ∈ (I_left, I_right)

(4) start loop {

(5) draw j from the neighbor space, j ∈ (I_left, I_right)

(6) evaluate P[x(t + 1) = j]

(7) if P[x(t + 1) = j] ≥ u, break out of the loop and accept the next state x(t + 1) = j

(8) else modify the neighbor space and repeat the loop

(9) }
P[x(t) = i] is the probability function; x(t) = i indicates that at time t the state of the system is i. Time is indexed as t = {1, 2, 3, …}.
An important improvement over the simple Metropolis method is the “create and modify the neighbor space” operation in steps 3 and 8. Through these two steps, slice sampling can change the region of the neighbor space to locate the next possible state, accelerating the search for a new sample. Steps 3 and 8 in the one-dimensional case are described as follows.
Step 3: creating the neighbor space for the one-dimensional problem

(1) set a value of w as the “width”

(2) generate rand ∼ Uniform(0,1)

(3) I_left = i − rand ∗ w

(4) I_right = i + rand ∗ w

(5) while P[I_right] > u, set I_right = I_right + w

(6) while P[I_left] > u, set I_left = I_left − w
where w is a constant set by the user at the beginning, representing the distance the state can move each time. (I_left, I_right) is the neighbor space of state i. In the “creating” process, slice sampling uses two loops to find an adaptive region for the neighbor space: as step 3 shows, it is a process of extending the region of possible neighbors of the current state.
Step 3 can also be understood through Figure 3 and its explanation. Too large a region would cause high computational complexity, so a strategy is also needed to shrink the region again, which is implemented in step 8. Whenever a new state x(t + 1) = j is rejected, the method “modifies” the neighbor space.
Step 8: modifying the neighbor space for the one-dimensional problem:

(1) if j > i, then I_right = j

(2) else I_left = j
Note that the comparison j > i determines the relative position of the states so that the neighbor space can be shrunk. In one dimension, i and j correspond to values on the x-axis; in multiple dimensions, the values along each axis are compared separately. In summary, through the “creating and modifying” strategy, slice sampling adjusts the neighbor space; in addition, by choosing the value of w, we can control the speed of stepping out and shrinking the space.
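Putting steps 1–9 together with the stepping-out (step 3) and shrinking (step 8) rules gives a complete one-dimensional slice sampler. The sketch below works in log space for numerical stability; its interval placement (width w positioned randomly around the current point, following Neal's formulation) differs slightly from the symmetric 2·rand·w interval in the pseudocode above, but serves the same purpose:

```python
import math
import random

def slice_sample_step(logp, x, w, rng=random):
    """One slice-sampling transition: draw a level under logp(x), step out
    an interval of width w until both ends fall below the level, then
    sample uniformly from the interval, shrinking it on each rejection."""
    log_u = logp(x) + math.log(rng.random())     # vertical level (log scale)
    r = rng.random()
    left, right = x - r * w, x + (1.0 - r) * w   # randomly placed interval
    while logp(left) > log_u:                    # step out (create space)
        left -= w
    while logp(right) > log_u:
        right += w
    while True:                                  # sample, shrink on rejection
        x_new = rng.uniform(left, right)
        if logp(x_new) >= log_u:
            return x_new
        if x_new < x:
            left = x_new                         # modify the neighbor space
        else:
            right = x_new

# Example: sample a standard normal (log-density known up to a constant)
random.seed(1)
logp = lambda x: -0.5 * x * x
xs, x = [], 0.0
for _ in range(5000):
    x = slice_sample_step(logp, x, w=2.0)
    xs.append(x)
mean = sum(xs) / len(xs)
var = sum((v - mean) ** 2 for v in xs) / len(xs)
```

The empirical mean and variance of the chain should approach 0 and 1, which is a quick sanity check that the stepping-out and shrinking logic is correct.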
Since the problem considered here is multidimensional, the one-dimensional slice sampling above must be adjusted for the multidimensional case. Consider the input at time t as a set of parameters:

$$x(t) = \{x_1(t), x_2(t), \dots, x_N(t)\}.$$

One way is to change only one variable at a time in a given order. Let $x_1(t)$ be selected at time t; the method finds the next state based on $x_1(t)$ with all other variables unchanged. Then $x_1(t)$ is fixed and the next state is found based on the variation of $x_2(t)$. This process continues until enough samples are drawn. Alternatively, we may change all variables simultaneously at each time instant. An intuitive view of this method is to consider each input variable as a direction the system can head in; combining all directions determines the state the system should move to. In this article, the second, parallel method is used, which, compared with the first method, converges faster in time when generating the same number of samples.
MCMC for UEP in image transmission
The distortion function is the cost function. The goal is to find the appropriate channel code rates for the different packets so that the lowest distortion is obtained for image transmission. We assume that the channel SNR is in the range [0,4] dB, as shown in Figure 2. The turbo code rate is one of the nine values in the set

$$\mathcal{C} = \{4/4, 4/5, 4/6, 4/7, 4/8, 4/9, 4/10, 4/11, 4/12\}.$$
${\mathcal{R}}_{N}$ is the input of the cost function: a vector with N elements, each representing the code rate of one packet, i.e., ${\mathcal{R}}_{N}=\{r_1,r_2,\dots,r_N\}$ with $r_i\in\mathcal{C}$. The Markov chain of our simulation therefore has the stationary probability distribution

$$P(\mathcal{R}_N) = \frac{1}{Z_\tau}\exp\!\left\{-\frac{J(\mathcal{R}_N)}{\tau(t)}\right\},$$

where $Z_\tau$ is a normalizing constant. Because probability values are only compared in slice sampling, we can simply set $Z_\tau = 1$. The method used is in fact a combination of simulated annealing and slice sampling: we use the probability representation of simulated annealing and reduce the temperature with time, while using slice sampling to visit new states and generate new samples.
The MCMC process follows the steps described in the previous section. Since there are multiple inputs corresponding to the different packets, the following modifications are needed.
Step 1: the state x(t) = i becomes $x(t)={\mathcal{R}}_{N}$ in the MCMC process; $P[x(t)={\mathcal{R}}_{N}]$ is a function of the input ${\mathcal{R}}_{N}$.
Step 3: for ${\mathcal{R}}_{N}=\{r_1,r_2,\dots,r_N\}$, $r_i\in\mathcal{C}$, construct an interval for each variable $r_i$, so that the neighbor space looks like

$$\{(r_1^{\text{left}}, r_1^{\text{right}}), (r_2^{\text{left}}, r_2^{\text{right}}), \dots, (r_N^{\text{left}}, r_N^{\text{right}})\}.$$
Our simulation finds the neighbor space in parallel; that is, the neighbors of all $r_i$, $i=1,\dots,N$, are updated before the next state is drawn and checked.

(1) set a value for $w_i$

(2) loop ($r_1 \to r_N$) {

(3) generate rand ∼ Uniform(0,1)

(4) $r_i^{\text{left}} = r_i - \mathrm{rand}\ast w_i$

(5) $r_i^{\text{right}} = r_i + \mathrm{rand}\ast w_i$

(6) check $r_i^{\text{left}}, r_i^{\text{right}} \in \mathcal{C}$

(7) while $P[\{r_1,\dots,r_i^{\text{left}},\dots,r_N\}] > u$, set $r_i^{\text{left}} = r_i^{\text{left}} - w_i$

(8) while $P[\{r_1,\dots,r_i^{\text{right}},\dots,r_N\}] > u$, set $r_i^{\text{right}} = r_i^{\text{right}} + w_i$

(9) check $r_i^{\text{left}}, r_i^{\text{right}} \in \mathcal{C}$

(10) }
The process is the same as in the one-dimensional case, except that a loop is needed because the neighbor space must be checked for each ${r}_{i}\in {\mathcal{R}}_{N}$. In addition, different $w_i$ may be used if necessary, for example when the inputs have different ranges; a common practice is to set $w_i$ to a certain percentage of the entire range of the parameter space.
Step 8: let ${\mathcal{R}}_{N}^{\prime}=\{r_1^{\prime},\dots,r_N^{\prime}\}$ be the new state; the neighbor space is then modified as

(1) loop ($r_1 \to r_N$) {

(2) if $r_i^{\prime} > r_i$, then $r_i^{\text{right}} = r_i^{\prime}$

(3) else $r_i^{\text{left}} = r_i^{\prime}$

(4) }
Since the code rate space is $\{4/4, 4/5, 4/6, \dots, 4/12\}$, which contains nine possible values, instead of multiplying by w, a random integer rand ∼ Rand(0, w) can be applied as $r_i^{\text{right}} = r_i + \mathrm{rand}$ in the simulation, i.e., stepping over the indices of the discrete rate set.
Another issue is the choice of the temperature function τ(t). In any MCMC simulation, an adaptive τ(t) is an important factor for performance: with a large τ(t) the system takes a very long time to converge, while with a very small τ(t) it may be trapped in a local minimum, so a reasonable τ(t) is necessary. In [19], Hajek gave a popular temperature function:

$$\tau(t) = \frac{d}{\log(1+t)},$$

where d is a constant that measures how difficult it is for the system x(t) to jump out of the current local minimum and travel to the optimal solution set. In other words, MCMC needs a large enough d to start off, but this value decreases as the chain runs.
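The overall procedure, mapping the cost to exp{−J/τ(t)}, cooling τ(t) with a Hajek-style schedule, and drawing new states by per-coordinate slice sampling over the discrete rate indices, can be sketched end to end. The toy separable cost below stands in for the image-distortion cost J(R_N); the width w, the constant d, and the target vector are illustrative choices, not values from the paper:

```python
import math
import random

def slice_step_discrete(logp, state, i, w, lo, hi, rng):
    """Slice-sampling update of coordinate i of an integer state vector:
    draw a level under the current log-probability, step the interval out
    in integer steps, then sample and shrink until a value is accepted."""
    cur = state[i]
    log_u = logp(state) + math.log(rng.random())

    def at(v):                    # evaluate logp with coordinate i set to v
        state[i] = v
        return logp(state)

    left = max(lo, cur - rng.randint(1, w))
    right = min(hi, cur + rng.randint(1, w))
    while left > lo and at(left) > log_u:        # step out
        left -= 1
    while right < hi and at(right) > log_u:
        right += 1
    while True:                                  # sample and shrink
        v = rng.randint(left, right)
        if at(v) >= log_u:
            return                               # state[i] is now v
        if v < cur:
            left = v + 1                         # shrink toward current point
        else:
            right = v - 1

def anneal_rates(cost, N, M, sweeps, d=5.0, seed=0):
    """Simulated-annealing/slice-sampling hybrid over rate-index vectors in
    {0, ..., M-1}^N, with Hajek-style cooling tau(t) = d / log(1 + t)."""
    rng = random.Random(seed)
    state = [rng.randrange(M) for _ in range(N)]
    best, best_cost = list(state), cost(state)
    for t in range(1, sweeps + 1):
        tau = d / math.log(1.0 + t)
        logp = lambda s: -cost(s) / tau          # cost-to-probability map, Z = 1
        for i in range(N):                       # update every coordinate
            slice_step_discrete(logp, state, i, w=3, lo=0, hi=M - 1, rng=rng)
        c = cost(state)
        if c < best_cost:
            best, best_cost = list(state), c
    return best, best_cost

# Toy separable cost with its minimum at [1, 3, 5, 7] (a stand-in for J)
target = [1, 3, 5, 7]
cost = lambda s: float(sum((a - b) ** 2 for a, b in zip(s, target)))
best, best_cost = anneal_rates(cost, N=4, M=9, sweeps=300)
```

In the actual system the cost would be the channel-marginalized expected distortion and the indices would map to the nine RCPT rates; the annealing and sampling mechanics are unchanged.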
Simulation and results
In this section, the MCMC method is applied to design the UEP system for progressive image transmission. The 512×512 Lena image at 8 bits per pixel (bpp) was passed through the SPIHT algorithm to obtain the source data. RCPT codes are then used for channel protection, with a set of numbers representing the set of channel coding rates, as shown in Table 1. The distortion value is the MSE between the reconstructed image and the original image.
We first consider a case with a small number of packets, to check whether the MCMC method performs as expected. Then we consider cases with larger numbers of packets and different channel probability distributions to study the performance of the MCMC-based design. We also compare the design results based on the channel distribution with those based on the mean value of the channel.
MCMC samples in the case of ten packets
The channel SNR is uniformly distributed, i.e., $f_X(x)\sim$ Uniform[0,4] dB. MCMC is used to generate samples of ${\mathcal{R}}_{N}$ according to the probability mapped from the cost function. Different trials are conducted, i.e., the MCMC process is repeated to find independent UEP designs. The MCMC almost always gives the optimal ${\mathcal{R}}_{N}$, and it closely approaches the optimal design on the occasions when the optimum is not reached.
To validate the MCMC results, we exhaustively enumerate all possible ${\mathcal{R}}_{N}$ in this case. The brute-force method must evaluate all $9^{10}$ different rate allocations and takes a significant amount of time on a PC. A few of the smallest distortion values, and the number of allocations ${\mathcal{R}}_{N}$ achieving each, are listed in Table 2. For example, MSE distortion 221.5 has a frequency of 68; in other words, 68 different inputs ${\mathcal{R}}_{N}$ result in a distortion of 221.5. The global minimum MSE is 220.7.
To show that the MCMC method really draws samples from the probability function that reflects the distortion, we draw 10,000 samples and plot the histogram together with the results of the brute-force search; the results are shown in Figure 4. In this simulation, the temperature τ(t) was set to a large constant (1,000) and was not decreased after each sample, so the simulation reduces to a normal slice sampling method. To make the curves clear over a large x-axis scale, an averaging function was also applied: each x- and y-axis value in the figure is the average of the point and a few of its neighbors, i.e., $x_i^{\mathrm{new}} = (x_{i-4} + x_{i-3} + \cdots + x_{i+4})/9$. From Figure 4, the MCMC and brute-force results have very similar shapes, which verifies that MCMC indeed generates samples according to the mapped distribution. This also illustrates that the MCMC has visited all regions of the parameter space according to their contribution to the mapped probability.
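The nine-point averaging used to smooth the plotted curves is a simple centered moving average. A sketch (the edge handling here, averaging over the part of the window that exists, is one reasonable choice; the paper does not specify its edge behavior):

```python
def smooth9(ys):
    """Nine-point centered moving average: y_i_new = (y_{i-4} + ... + y_{i+4}) / 9.
    Near the edges, the average is taken over the available part of the window."""
    n = len(ys)
    out = []
    for i in range(n):
        window = ys[max(0, i - 4):min(n, i + 5)]
        out.append(sum(window) / len(window))
    return out
```

Applied to both the MCMC histogram and the brute-force counts, this smoothing makes the two curves easier to compare by eye without changing their overall shape.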
When the number of packets is 32, the brute-force method no longer works, as the computational complexity is too high. A few of the best samples drawn by MCMC are shown in Table 3. A uniform distribution $f_X(x)$ is again assumed for the channel.
Because an early packet loss causes all packets from the first error onward to be discarded, a packet loss at the beginning is crucial to the final performance, and the beginning packets should be considered more important than the following packets. One feature observed over hundreds of trials is that the rates within an allocation ${\mathcal{R}}_{N}$ are non-decreasing. This result has been theoretically proved in [14, 16] for a given channel condition, but not yet for the varying channel case.
Comparison with the case of using channel mean value
Unequal error protection designs based on the channel distribution over SNR = [0,4] dB and based on the mean channel SNR are also compared; the results are shown in Tables 4, 5, 6 and 7. Three different probability density functions are assumed: the truncated Gaussian distributions $\mathcal{N}(2,0.5)$ and $\mathcal{N}(2,1)$, and the uniform distribution used previously. For all these conditions, the mean value of the channel is 2 dB. Using the mean SNR is a popular choice in existing methods, where the design is often based on a specific channel condition.
Table 4 shows the resource allocation between the source code and the channel code in terms of bpp for different overall bpp budgets. The channel distribution is the truncated Gaussian $\mathcal{N}(2,0.5)$. The table also contains the optimal result generated by MCMC, where the notation a(b) denotes b consecutive packets coded with rate a.
Tables 5, 6 and 7 show the distortion comparison for the UEP designs. In each table, the 2nd and 3rd columns give the performance in MSE and peak signal-to-noise ratio (PSNR) of the UEP designs based on the channel distribution, and the last two columns give the results of using the mean SNR for the UEP design. These results clearly show that considering the whole channel distribution produces better performance than considering a single channel statistic.
In Table 8 we also compare the complexity of MCMC with that of exhaustive search. The complexity of each case is mainly determined by how many points (i.e., how many different ${\mathcal{R}}_{N}$) in the rate allocation space have been tested. The last row shows the complexity ratio between MCMC and exhaustive search. For ${\mathcal{R}}_{N}=\{r_1,r_2,\dots,r_N\}$ there are $9^N$ different allocations, since each $r_i$ has nine possible values. The table is generated by counting the number of cost evaluations in MCMC and comparing it with the number of possible inputs for each bpp. Against the exponentially increasing number of possible allocations as more packets are considered, the cost of running MCMC increases only linearly, yielding an ever greater complexity reduction.
Figure 6 shows the performance of the optimal ${\mathcal{R}}_{N}$ generated by the MCMC method for the different channel distributions, as well as the allocation generated from the mean channel value. In the figure, the design based on the mean SNR has lower PSNR below approximately 1.75 dB but higher PSNR above 1.75 dB; the optimal solution ${\mathcal{R}}_{N}$, however, has the better average PSNR overall.
Conclusion
In this article, image transmission over time-varying channels is considered. The varying channel, characterized by its probability distribution, makes the optimization problem of finding the best rate allocation very complicated. An MCMC method is proposed to solve this problem. It has been shown that the method generates near-optimal solutions with low complexity, and it also provides an overall picture of the distribution of distortion versus rate allocation. In addition, the design considering the channel distribution has been shown to perform better than the design considering only the mean value of the channel.
References
 1.
A Said, WA Pearlman: A new, fast, and efficient image codec based on set partitioning in hierarchical trees. IEEE Trans. Circ. Syst. Video Technol 1996, 6: 243-250. 10.1109/76.499834
 2.
PG Sherwood, K Zeger: Progressive image coding for noisy channels. IEEE Signal Process. Lett 1997, 4: 189-191.
 3.
PG Sherwood, K Zeger: Error protection for progressive image transmission over memoryless and fading channels. IEEE Trans. Commun 1998, 46: 1555-1559.
 4.
AE Mohr, EA Riskin, RE Ladner: Unequal loss protection: graceful degradation of image quality over packet erasure channels through forward error correction. IEEE J. Sel. Areas Commun 2000, 18: 819-828.
 5.
V Chande, N Farvardin: Progressive transmission of images over memoryless noisy channels. IEEE J. Sel. Areas Commun 2000, 18: 850-860.
 6.
BA Banister, B Belzer, TR Fischer: Robust image transmission using JPEG2000 and turbo codes. IEEE Signal Process. Lett 2002, 9(6):117-119.
 7.
V Stankovic, R Hamzaoui, D Saupe: Fast algorithm for rate-based optimal error protection of embedded codes. IEEE Trans. Commun 2003, 51: 1788-1795. 10.1109/TCOMM.2003.819235
 8.
A Nosratinia, J Liu, B Aazhang: Source-channel rate allocation for progressive transmission of images. IEEE Trans. Commun 2003, 51: 186-196. 10.1109/TCOMM.2003.809256
 9.
V Stankovic, R Hamzaoui, Z Xiong: Efficient channel code rate selection algorithms for forward error correction of packetized multimedia bitstreams in varying channels. IEEE Trans. Multimed 2004, 6: 240-248. 10.1109/TMM.2003.822789
 10.
V Stankovic, R Hamzaoui, Z Xiong: Fast algorithm for distortion-based error protection of embedded image codes. IEEE Trans. Image Process 2005, 14: 1417-1421.
 11.
N Thomos, NV Boulgouris, MG Strintzis: Wireless image transmission using turbo codes and optimal unequal error protection. IEEE Trans. Image Process 2005, 14: 1890-1901.
 12.
F Aulí-Llinàs, MW Marcellin, L Jiménez-Rodríguez: FAST rate allocation for JPEG2000 video transmission over time-varying channels. IEEE Trans. Multimed 2012.
 13.
RM Neal: Slice sampling. Ann. Stat 2003, 31(3):705-767. 10.1214/aos/1056562461
 14.
L Cao: On the unequal error protection for progressive image transmission. IEEE Trans. Image Process 2007, 16: 2384-2388.
 15.
DN Rowitch, LB Milstein: On the performance of hybrid FEC/ARQ systems using rate compatible punctured turbo (RCPT) codes. IEEE Trans. Commun 2000, 48: 948-959. 10.1109/26.848555
 16.
L Cao: Joint source and channel coding for image transmission over time varying channels. In Proc. IEEE International Conference on Communications (ICC '07), 2007.
 17.
N Metropolis, AW Rosenbluth, MN Rosenbluth, AH Teller, E Teller: Equation of state calculations by fast computing machines. J. Chem. Phys 1953, 21(6):1087-1092. 10.1063/1.1699114
 18.
DJ MacKay: Information Theory, Inference, and Learning Algorithms. Cambridge, Cambridge University Press; 2003.
 19.
B Hajek: Cooling schedules for optimal annealing. Math. Operat. Res 1988, 13(2):311-329.
Additional information
Competing interests
The authors declare that they have no competing interests.
Cite this article
Wu, X., Cao, L. & Goggans, P. Optimization for image transmission over varying channel with MCMC. J Wireless Com Network 2012, 275 (2012). https://doi.org/10.1186/1687-1499-2012-275
Keywords
 Progressive image transmission
 Optimal rate allocation
 Markov chain Monte Carlo