
Channel-optimized scalar quantizers with erasure correcting codes

Abstract

This paper investigates the design of channel-optimized scalar quantizers with erasure correcting codes over binary symmetric channels (BSC). A new scalar quantizer with a uniform decoder and a channel-optimized encoder aided by an erasure correcting code is proposed. A lower bound on the performance of the new quantizer with the complemented natural code (CNC) index assignment is given. Then, in order to approach it, a single parity check code and the corresponding decoding algorithm are added to the new quantizer encoder and decoder, respectively. Analytical results show that the performance of the new quantizers with CNC is better than that of the original quantizers with CNC and the natural binary code (NBC) when the crossover probability lies within a certain range.

1 Introduction

For a uniform scalar source, it is well known from [1] that over a noiseless channel the uniform scalar quantizer is optimal among all quantizers in terms of the mean squared distortion (MSD). Over noisy channels, however, the uniform quantizer is no longer optimal. Joint source and channel coding has therefore attracted much attention [2, 3] and has been seen as a promising scheme for effective data transmission over wireless channels due to its ability to cope with varying channel conditions [4–6]. Generally, there are two approaches to improving the performance of a quantizer over a noisy channel.

The first is to make the encoding cells of the quantizer encoder depend on the transmission channel characteristics, as in [7], or to make the placement of the reconstruction points in the decoder depend on the channel characteristics, as in [8]; such designs are called channel-optimized quantizers. Some necessary optimality conditions are given in [9–12]. Alternatively, an error detecting code can be cascaded with the quantizer at the expense of added transmission rate, as in [13].

The second is index assignment, a mapping of source code symbols to channel code symbols, studied in [14, 15]. The usual goal in designing an index assignment for noisy channels is to minimize the end-to-end MSD over all possible index assignments. Several well-known index assignments, such as the natural binary code (NBC), the Gray code, and randomly chosen index assignments, are studied over a binary symmetric channel (BSC) in [16–21]. In [16], it is proved that the NBC is optimal for uniform scalar quantizers and a uniform source. In [17], McLaughlin et al. extended this result to uniform vector quantizers. Farber and Zeger [8] also proved the optimality of the NBC for a uniform source and quantizers with uniform encoders and channel-optimized decoders. In [7], they not only studied the NBC but also proposed a new affine index assignment, named the complemented natural code (CNC).

Interestingly, it is known from [7] that for a quantizer with a uniform decoder and a channel-optimized encoder under the CNC index assignment, some of the encoding cells become empty once the crossover probability exceeds a certain value, a phenomenon also observed in [22, 23]. These empty cells can be regarded as redundancy, i.e., a form of implicit channel coding. Moreover, [7] implicitly assumes that the transmitter knows the transition error probabilities. Under this assumption, the decoder knows the encoding cells of the encoder exactly and can judge whether a received index corresponds to an empty cell; however, [7] does not apply any method to exploit the reception of an empty-cell index.

In this paper, a genie-aided erasure code is applied to an idealized quantizer decoder that can correct every received empty-cell index. A lower bound on the performance of the quantizer with uniform decoder, channel-optimized encoder, and CNC index assignment is derived. Then, a single parity check (SPC) block code is utilized to approach this lower bound. The main scheme is as follows: at the transmitter, several indices are grouped into a block and parity check bits are appended to every block; at the receiver, if a received index is detected to belong to the empty cells, it is marked as an erasure, and the SPC code is used to correct the erasures column by column within each block. In terms of structure, the SPC block code is somewhat similar to the SPC product code in [24], but their decoders are very different. The decoding scheme of the SPC block code is very simple, so only a little complexity is added.

The rest of this paper is organized as follows. Section 2 gives definitions and notation. In Section 3, the lower bound for distortion of quantizers with uniform decoders and channel-optimized encoders is analyzed. Section 4 introduces the SPC block code and gives the distortion of the quantizer appending it. In Section 5, analytical results are shown. Finally, the conclusion is presented in Section 6.

2 Background

Throughout this paper, a continuous real-valued source random variable X uniformly distributed on [0,1] is considered. Some of the mathematical notation and definitions in this section follow [7]. A rate-n (n-bit) quantizer is a mapping from the source to one of the real-valued codepoints (quantization levels) $y_n(i)$,

$$Q : \{X \mid X \in [0,1]\} \to \{y_n(i) \mid i = 0, 1, \ldots, 2^n - 1\}.$$

A rate 2 quantizer is plotted in Figure 1 as an example. The quantizer includes a quantizer encoder that is a mapping from source X to a certain index i

$$Q_e : \{X \mid X \in [0,1]\} \to \{i \mid i = 0, 1, \ldots, 2^n - 1\},$$

and a quantizer decoder that is a mapping from the index i to the codepoint $y_n(i)$,

$$Q_d : \{i \mid i = 0, 1, \ldots, 2^n - 1\} \to \{y_n(i) \mid i = 0, 1, \ldots, 2^n - 1\}.$$
Figure 1. The structure of a rate 2 quantizer.

In the encoder, the i th encoding cell can be denoted by the set

$$R_n(i) = Q_e^{-1}(i).$$

If $R_n(i) = \phi$, we say $R_n(i)$ is an empty cell. For most quantizers there are no empty encoding cells, but the class of quantizers considered in [7] may contain empty encoding cells. The centroid of the i-th cell of the quantizer is given by the conditional expectation

$$c_n(i) = E[X \mid X \in R_n(i)].$$

For a noisy channel, an index assignment $\pi_n$, a permutation of the set $\{0, 1, \ldots, 2^n-1\}$, is often used to combat noise. If index J is received by a quantizer decoder with index assignment, a random variable $X \in [0,1]$ is quantized to the quantization level

$$Q_d\!\left(\pi_n^{-1}(J)\right) = y_n\!\left(\pi_n^{-1}(J)\right).$$

Then, the end-to-end mean squared error (MSE) can be written as

$$D_{\pi_n} = E\!\left[\left(X - Q_d\!\left(\pi_n^{-1}(J)\right)\right)^2\right].$$
(1)

In this paper, we focus on quantizers with uniform decoders and channel-optimized encoders [7]. A quantizer decoder is said to be uniform if, for each i, the i-th codepoint satisfies

$$y_n(i) = \left(i + \tfrac{1}{2}\right) 2^{-n}.$$
(2)

A quantizer encoder is said to be a channel-optimized encoder if it satisfies the weighted nearest neighbor condition, that is,

$$W_i \subset R_n(i) \subset \overline{W_i},$$

where

$$W_i = \left\{x : \sum_{j=0}^{2^n-1} (x - y_n(j))^2\, p_n(\pi_n(j) \mid \pi_n(i)) < \sum_{j=0}^{2^n-1} (x - y_n(j))^2\, p_n(\pi_n(j) \mid \pi_n(k)),\ \forall k \neq i\right\}.$$
(3)

Here, $\overline{W_i}$ denotes the closure of $W_i$, and $p_n(j \mid i)$ denotes the probability that index j is received given that index i was sent. For a binary symmetric channel with crossover probability ε, $p_n(j \mid i)$ is given by

$$p_n(j \mid i) = \varepsilon^{H_n(i,j)}\,(1-\varepsilon)^{\,n - H_n(i,j)}$$
(4)

for $0 \leq \varepsilon \leq 1/2$, where $H_n(i,j)$ is the Hamming distance between the n-bit binary words i and j. Then, according to [7], a quantizer with a uniform decoder and a channel-optimized encoder satisfies, for all i,

$$\overline{R}_n(i) = \left\{x \in [0,1] : \alpha_n(i,k)\, x \geq \beta_n(i,k),\ \forall k \neq i\right\},$$
(5)

where

$$\alpha_n(i,k) = \sum_{j=0}^{2^n-1} j \left[p_n(\pi_n(j) \mid \pi_n(i)) - p_n(\pi_n(j) \mid \pi_n(k))\right]$$
(6)
$$\beta_n(i,k) = 2^{-n-1}\left[\alpha_n(i,k) + \sum_{j=0}^{2^n-1} j^2 \left(p_n(\pi_n(j) \mid \pi_n(i)) - p_n(\pi_n(j) \mid \pi_n(k))\right)\right].$$
(7)

Let an encoder-optimized uniform quantizer (EOUQ) denote a rate n quantizer with a uniform decoder and a channel-optimized encoder, along with a uniform source on [0,1], and a binary symmetric channel with crossover probability ε. For each n, the CNC index assignment [7] is defined by

$$\pi_n^{\mathrm{CNC}}(i) = \begin{cases} i, & 0 \leq i \leq 2^{n-1}-1 \\ i+1, & 2^{n-1} \leq i \leq 2^n - 2 \text{ and } i \text{ even} \\ i-1, & 2^{n-1}+1 \leq i \leq 2^n - 1 \text{ and } i \text{ odd}. \end{cases}$$
(8)
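To make the preceding definitions concrete, the following minimal sketch (ours, not from the paper) evaluates the BSC transition probability of (4) and the CNC index assignment of (8); the function names are illustrative only and are reused in the later sketches.

```python
# Sketch (ours): BSC transition probability of Eq. (4) and the CNC
# index assignment of Eq. (8) for a rate-n quantizer.

def hamming(i: int, j: int, n: int) -> int:
    """Hamming distance H_n(i, j) between the n-bit binary words i and j."""
    return bin((i ^ j) & ((1 << n) - 1)).count("1")

def p_bsc(j: int, i: int, n: int, eps: float) -> float:
    """p_n(j | i) = eps^H * (1 - eps)^(n - H), Eq. (4)."""
    h = hamming(i, j, n)
    return eps ** h * (1.0 - eps) ** (n - h)

def pi_cnc(i: int, n: int) -> int:
    """Complemented natural code index assignment, Eq. (8)."""
    if i <= 2 ** (n - 1) - 1:
        return i
    return i + 1 if i % 2 == 0 else i - 1

n = 3
print([pi_cnc(i, n) for i in range(2 ** n)])          # [0, 1, 2, 3, 5, 4, 7, 6]
print(all(abs(sum(p_bsc(j, i, n, 0.1) for j in range(2 ** n)) - 1) < 1e-12
          for i in range(2 ** n)))                    # each row of p_n sums to 1
```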

Denote

ε n ∗ = 2 n + 4 12 · 3 sin arctan τ / σ + π 3 − cos arctan τ / σ + π 3 − 1 + 1 2
σ = 2 n − 5 − 1 27 2 n − 2 + 1 3
τ = 2 n − 4 2 n − 6 − σ .

The encoding cells of an EOUQ with the CNC index assignment are given in [7] as follows.

If n≥3 and ε∈[0,ε∗), then

R ¯ n i = 0 , δ 2 − n − ε 2 δ , for i = 0 iδ 2 − n − ε 2 δ , i + 1 δ 2 − n + ε 2 + ε 1 + 2 ε , for 1 ≤ i ≤ 2 n − 1 − 3 , i odd iδ 2 − n + ε 2 + ε 1 + 2 ε , i + 1 δ 2 − n − ε 2 δ , for 2 ≤ i ≤ 2 n − 1 − 2 , i even 2 − 1 − 2 − n δ − ε 2 δ , 1 / 2 , for i = 2 n − 1 − 1 1 / 2 , 2 − 1 + 2 − n δ + ε 2 − 3 ε δ , for i = 2 n − 1 iδ 2 − n + ε 2 − 3 ε δ , i + 1 δ 2 − n + 3 ε 2 1 + 2 ε , for 2 n − 1 + 1 ≤ i ≤ 2 n − 3 , i odd iδ 2 − n + 3 ε 2 1 + 2 ε , i + 1 δ 2 − n + ε 2 − 3 ε δ , for 2 n − 1 + 2 ≤ i ≤ 2 n − 2 , i even 1 − 2 − n δ + ε 2 − 3 ε δ , 1 , for i = 2 n − 1
(9)

where δ=1−2ε.

If n≥3 and ε∈[ ε∗,1/2), then

R ¯ n i = 0 , δ 2 − n − ε 2 δ , for i = 0 and ε < 1 / 2 n / 2 + 2 δ 2 − n − ε 2 δ , 4 δ + δ 2 2 − n − 1 + ε ∩ 0 , 1 , for i = 1 2 δ i − 1 + δ 2 2 n + 1 + ε , 2 δ i + 1 + δ 2 2 n + 1 + ε , for 3 ≤ i ≤ 2 n − 1 − 3 , i odd 2 n − 4 δ + δ 2 + 2 n + 1 ε 2 − n − 1 , 1 / 2 , for i = 2 n − 1 − 1 1 / 2 , 2 n + 2 δ + 1 − 4 ε 2 + 2 n + 1 ε 2 − n − 1 , for i = 2 n − 1 2 δ i − 1 + 1 − 4 ε 2 2 n + 1 + ε , 2 δ i + 1 + 1 − 4 ε 2 2 n + 1 + ε , for 2 n − 1 + 2 ≤ i ≤ 2 n − 4 , i even 2 n + 1 − 6 δ + 1 − 4 ε 2 2 n + 1 + ε , 1 − 2 − n δ + ε 2 − 3 ε δ ∩ 0 , 1 , for i = 2 n − 2 1 − 2 − n δ + ε 2 − 3 ε δ , 1 ∩ 0 , 1 , for i = 2 n − 1 and ε < 1 / 2 n / 2 + 2 ϕ , else .
(10)

As above, when $n \geq 3$ and $\varepsilon \in [\varepsilon^*, 1/2)$, there exists an empty cell set E with $2^{n-1}-2$ elements, consisting of all even numbers from 2 to $2^{n-1}-2$ and all odd numbers from $2^{n-1}+1$ to $2^n-3$, given by

$$E \triangleq \{i : R_n(i) = \phi\} = \{i : 2 \leq i \leq 2^{n-1}-2,\ i \text{ even}\} \cup \{i : 2^{n-1}+1 \leq i \leq 2^n-3,\ i \text{ odd}\}.$$
(11)
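The empty-cell set E of (11) is easy to enumerate; a small sketch (ours), valid for $n \geq 3$:

```python
# Sketch (ours): the empty-cell set E of Eq. (11).

def empty_cells(n: int) -> set:
    half = 2 ** (n - 1)
    evens = set(range(2, half - 1, 2))           # 2, 4, ..., 2^(n-1) - 2
    odds = set(range(half + 1, 2 ** n - 2, 2))   # 2^(n-1)+1, ..., 2^n - 3
    return evens | odds

print(sorted(empty_cells(3)))   # [2, 5]
print(sorted(empty_cells(4)))   # [2, 4, 6, 9, 11, 13], i.e. 2^(n-1) - 2 = 6 cells
```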

Let $D_{\pi_n}$ denote the end-to-end MSD of an EOUQ with index assignment $\pi_n$. The MSE of an EOUQ with the CNC index assignment [7] is

$$D_{\mathrm{CNC}} = \begin{cases} D_1(n,\varepsilon), & 0 \leq \varepsilon \leq \varepsilon^* \\[1ex] D_2(n,\varepsilon), & \varepsilon^* \leq \varepsilon \leq \dfrac{1}{2^{n/2+2}} \\[1.5ex] D_3(n,\varepsilon), & \dfrac{1}{2^{n/2+2}} \leq \varepsilon \leq 1/2, \end{cases}$$
(12)

where

D 1 n , ε = 2 − 2 n 3 1 + 2 ε 1 4 + 2 2 n + 5 2 ε − 2 2 n + 1 − 15 · 2 n + 4 ε 2 + 6 2 2 n − 2 n + 2 − 4 ε 3 + 2 n − 4 2 n − 2 ε 4 − 12 2 n − 4 ε 5 D 2 n , ε = 2 − 3 n 3 2 n − 3 + 2 n − 3 2 2 n + 10 − 2 n + 1 + 48 ε − 2 n − 6 2 n − 5 2 n − 4 − 3 · 2 2 n ε 2 + 2 2 n − 6 2 n − 5 2 n − 4 ε 3 + 12 2 n − 5 2 n − 4 ε 4 + 24 2 n − 4 ε 5 D 3 n , ε = 2 − 3 n 3 2 n + 3 + 2 n − 3 2 2 n + 10 − 2 n − 1 ε − 2 n − 6 2 n − 5 2 n − 4 − 3 · 2 2 n ε 2 + 2 2 n − 6 2 n − 5 2 n − 4 ε 3 + 12 2 n − 5 2 n − 4 ε 4 + 24 2 n − 4 ε 5 .

3 Channel-optimized quantizers with erasure correcting codes

For channel-optimized encoders, the key implicit assumption in [7] is that channel state information (CSI) is known to the transmitter. In a time division duplexing (TDD) system, it is easy for the transmitter to obtain CSI because the uplink and downlink share the same channel. In a frequency division duplexing (FDD) system, obtaining CSI at the transmitter is much harder: after the CSI is estimated at the receiver, it must be fed back to the transmitter over an extra reliable channel. In this paper, for a binary symmetric channel, the CSI consists only of the crossover probability ε. In other words, if the above assumption holds, then for a given channel-optimized encoder in [7], the receiver knows the current ε exactly and, because the encoding cells are determined by ε, it can judge whether empty encoding cells exist and whether a received index belongs to an empty cell. Once an empty-cell index is received, the receiver should recognize that the index has been detected erroneously. In [7], however, an index that is known to be in error is still passed to the quantizer decoder.

In this section, it is assumed that the decoder can correct all received empty-cell indices using a genie-aided erasure correcting code, which will be discussed in the next section. In other words, all encoding cells belonging to the set E are fixed to be empty in the encoder, and a received empty-cell index can be viewed as an erasure that is marked and then recovered in the decoder. Under this assumption, the probability that index j is received given that index i was sent becomes

$$\forall i \in E^c:\quad p_n'(\pi_n(j) \mid \pi_n(i)) = \begin{cases} p_n(\pi_n(i) \mid \pi_n(i)) + \displaystyle\sum_{k \in E} p_n(\pi_n(k) \mid \pi_n(i)), & j = i \\[1.5ex] 0, & j \in E \\[1ex] p_n(\pi_n(j) \mid \pi_n(i)), & j \in E^c \text{ and } j \neq i. \end{cases}$$
(13)
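A direct transcription of (13) into code (a sketch under our naming, reusing p_bsc, pi_cnc, and empty_cells from the earlier sketches) is:

```python
# Sketch (ours): the genie-aided transition probabilities p'_n of Eq. (13).

def p_prime(j: int, i: int, n: int, eps: float, pi, E: set) -> float:
    """p'_n(pi(j) | pi(i)) for i not in E: a received empty-cell index is
    assumed to be corrected back to the sent index i."""
    assert i not in E
    if j in E:
        return 0.0
    if j == i:
        return (p_bsc(pi(i, n), pi(i, n), n, eps)
                + sum(p_bsc(pi(k, n), pi(i, n), n, eps) for k in E))
    return p_bsc(pi(j, n), pi(i, n), n, eps)

n, eps = 3, 0.2
E = empty_cells(n)
print(abs(sum(p_prime(j, 0, n, eps, pi_cnc, E) for j in range(2 ** n)) - 1) < 1e-12)
```

The check confirms that each row of $p_n'$ still sums to one, since the probability mass of the empty-cell indices is folded back onto the transmitted index.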

Similarly, if the proposed quantizer encoder satisfies the weighted nearest neighbor condition, then the encoding cells should satisfy

$$W_i' \subset R_n(i) \subset \overline{W_i'}, \qquad \forall i \in E^c,$$

where

$$W_i' = \left\{x : \sum_{j \in E^c} (x - y_n(j))^2\, p_n(\pi_n(j) \mid \pi_n(i)) + \sum_{j \in E} (x - y_n(i))^2\, p_n(\pi_n(j) \mid \pi_n(i)) < \sum_{j \in E^c} (x - y_n(j))^2\, p_n(\pi_n(j) \mid \pi_n(k)) + \sum_{j \in E} (x - y_n(k))^2\, p_n(\pi_n(j) \mid \pi_n(k)),\ \forall k \neq i\right\}.$$
(14)

It is worth noting that the second term on each side of the inequality in $W_i'$ differs from $W_i$ in (3). This term reflects the fact that, if an empty-cell index is received, the proposed quantizer decoder is able to correct it. To make (14) easier to solve, it is rewritten as follows.

Lemma 1

For all i, the encoding cells of our proposed EOUQ satisfy,

$$\overline{R_n'}(i) = \left\{x \in [0,1] : \alpha_n'(i,k)\, x \geq \beta_n'(i,k),\ \forall k \neq i\right\},$$
(15)

where

$$\alpha_n'(i,k) = \sum_{j \in E^c} j \left[p_n(\pi_n(j) \mid \pi_n(i)) - p_n(\pi_n(j) \mid \pi_n(k))\right] + \sum_{j \in E} \left[i\, p_n(\pi_n(j) \mid \pi_n(i)) - k\, p_n(\pi_n(j) \mid \pi_n(k))\right]$$
(16)
$$\beta_n'(i,k) = 2^{-n-1}\left[\alpha_n'(i,k) + \sum_{j \in E^c} j^2 \left(p_n(\pi_n(j) \mid \pi_n(i)) - p_n(\pi_n(j) \mid \pi_n(k))\right) + \sum_{j \in E} \left(i^2 p_n(\pi_n(j) \mid \pi_n(i)) - k^2 p_n(\pi_n(j) \mid \pi_n(k))\right)\right].$$
(17)
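Before the closed-form simplifications below, $\alpha_n'(i,k)$ and $\beta_n'(i,k)$ can also be evaluated numerically for a given ε. A minimal sketch (ours), reusing p_bsc from the earlier sketch:

```python
# Sketch (ours): numerical evaluation of alpha'_n(i,k) and beta'_n(i,k)
# from Eqs. (16) and (17).

def alpha_beta_prime(i, k, n, eps, pi, E):
    Ec = [j for j in range(2 ** n) if j not in E]
    diff = lambda j: (p_bsc(pi(j, n), pi(i, n), n, eps)
                      - p_bsc(pi(j, n), pi(k, n), n, eps))
    alpha = (sum(j * diff(j) for j in Ec)
             + sum(i * p_bsc(pi(j, n), pi(i, n), n, eps)
                   - k * p_bsc(pi(j, n), pi(k, n), n, eps) for j in E))
    beta = 2.0 ** (-n - 1) * (alpha
           + sum(j * j * diff(j) for j in Ec)
           + sum(i * i * p_bsc(pi(j, n), pi(i, n), n, eps)
                 - k * k * p_bsc(pi(j, n), pi(k, n), n, eps) for j in E))
    return alpha, beta
```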

After substituting (8), $\alpha_n'(i,k)$ and $\beta_n'(i,k)$ can be simplified as in the following two corollaries.

Corollary 1

$\alpha_n'(i,k)$ can be simplified as follows.

  1. 1.

    i=0

  2. a.

    1≤k≤2n−1 − 1, k odd; 2n−1≤k≤2n − 2, k even

    α n ′ 0 , k = − k + − 2 n + 2 k + 2 ε + 2 n + 1 − 2 k − 4 ε 2 + 2 n − 1 ε n − 1 − 2 n − 1 ε n − 2 n − k − 1 p n 2 n − 2 | π n k + k · p n 0 | π n k
  3. b.

    k=2n − 1

    α n ′ 0 , 2 n − 1 = − 2 n + 1 + 2 ε + 2 n + 1 − 6 ε 2 + 2 n + 1 − 2 ε n − 1 − 2 n + 1 − 2 ε n
  4. 2.

    1≤i≤2n − 1, i odd; 2n−1≤i≤2n − 2, i even

  5. a.

    k=0

    α n ′ i , 0 = i + 2 n − 2 i − 2 ε + − 2 n + 1 + 2 i + 4 ε 2 + − 2 n + 1 ε n − 1 + 2 n − 1 ε n + 2 n − i − 1 p n 2 n − 2 | π n i − i p n 0 | π n i
  6. b.

    1≤k≤2n − 1, k odd; 2n−1≤k≤2n − 2, k even; k≠i

    α n ′ i , k = i − k − 2 i − k ε + 2 i − k ε 2 + 2 n − i − 1 p n 2 n − 2 | π n i − i · p n 0 | π n i − 2 n − k − 1 p n 2 n − 2 | π n k + k · p n 0 | π n k
  7. c.

    k=2n−1

    α n ′ i , 2 n − 1 = − 2 n − i − 1 + 2 n − 2 i ε + 2 i − 2 ε 2 + 2 n − 1 ε n − 1 + − 2 n + 1 ε n + 2 n − i − 1 p n 2 n − 2 | π n i − i p n 0 | π n i
  8. 3.

    i=2n−1

  9. a.

    k=0

    α n ′ 2 n − 1 , 0 = 2 n − 1 − 2 ε − 2 n + 1 − 6 ε 2 − 2 n + 1 − 2 ε n − 1 + 2 n + 1 − 2 ε n
  10. b.

    1≤k≤2n − 1, k odd; 2n−1≤k≤2n − 2, k even

    α n ′ 2 n − 1 , k = 2 n − k − 1 − 2 n − 2 k ε − 2 k − 2 ε 2 − 2 n − 1 ε n − 1 + 2 n − 1 ε n − 2 n − k − 1 p n 2 n − 2 | π n k + k p n 0 | π n k

Corollary 2

$\beta_n'(i,k)$ can be simplified as shown in the Appendix ('The simplification of $\beta_n'(i,k)$').

It follows from the above two corollaries that, for given i and k, $\alpha_n'(i,k)$ and $\beta_n'(i,k)$ depend only on the single variable ε, so the symbolic toolbox in Matlab can be used to solve the inequality set. The encoding cells $R_n'(i)$ are solved for the 3-bit and 4-bit quantizers, as shown in the following two theorems; a numerical cross-check is sketched after them.

Theorem 1

For 0≤ε≤0.5, the encoding cells for the 3-bit quantizer are

$$\overline{R_3'}(i) = \begin{cases}
\left[0,\ \dfrac{1 + 42\varepsilon^3 - 48\varepsilon^2 + 12\varepsilon}{8(1 + 12\varepsilon^3 - 15\varepsilon^2 + 3\varepsilon)}\right], & i = 0 \\[2ex]
\left[\dfrac{1 + 42\varepsilon^3 - 48\varepsilon^2 + 12\varepsilon}{8(1 + 12\varepsilon^3 - 15\varepsilon^2 + 3\varepsilon)},\ \dfrac{5 - 30\varepsilon^3 + 6\varepsilon^2 - 3\varepsilon}{8(2 - 6\varepsilon^3 + 3\varepsilon^2 - 3\varepsilon)}\right], & i = 1 \\[2ex]
\phi, & i = 2 \\[1ex]
\left[\dfrac{5 - 30\varepsilon^3 + 6\varepsilon^2 - 3\varepsilon}{8(2 - 6\varepsilon^3 + 3\varepsilon^2 - 3\varepsilon)},\ \dfrac{1 + 2\varepsilon^3 - 2\varepsilon}{2(2\varepsilon^3 - 2\varepsilon + 1)}\right], & i = 3 \\[2ex]
\left[\dfrac{1 + 2\varepsilon^3 - 2\varepsilon}{2(2\varepsilon^3 - 2\varepsilon + 1)},\ \dfrac{11 - 18\varepsilon^3 + 18\varepsilon^2 - 21\varepsilon}{8(2 - 6\varepsilon^3 + 3\varepsilon^2 - 3\varepsilon)}\right], & i = 4 \\[2ex]
\phi, & i = 5 \\[1ex]
\left[\dfrac{11 - 18\varepsilon^3 + 18\varepsilon^2 - 21\varepsilon}{8(2 - 6\varepsilon^3 + 3\varepsilon^2 - 3\varepsilon)},\ \dfrac{7 + 54\varepsilon^3 - 72\varepsilon^2 + 12\varepsilon}{8(1 + 12\varepsilon^3 - 15\varepsilon^2 + 3\varepsilon)}\right], & i = 6 \\[2ex]
\left[\dfrac{7 + 54\varepsilon^3 - 72\varepsilon^2 + 12\varepsilon}{8(1 + 12\varepsilon^3 - 15\varepsilon^2 + 3\varepsilon)},\ 1\right], & i = 7
\end{cases}$$
(18)

Theorem 2

For 0≤ε≤0.5, the encoding cells for the 4-bit quantizer are

$$\overline{R_4'}(i) = \begin{cases}
\left[0,\ \dfrac{1 + 240\varepsilon^4 - 223\varepsilon^3 - 55\varepsilon^2 + 52\varepsilon}{16(1 + 30\varepsilon^4 - 18\varepsilon^3 - 23\varepsilon^2 + 11\varepsilon)}\right], & i = 0 \\[2ex]
\left[\dfrac{1 + 240\varepsilon^4 - 223\varepsilon^3 - 55\varepsilon^2 + 52\varepsilon}{16(1 + 30\varepsilon^4 - 18\varepsilon^3 - 23\varepsilon^2 + 11\varepsilon)},\ \dfrac{-5 + 240\varepsilon^4 - 173\varepsilon^3 + 65\varepsilon^2 - 13\varepsilon}{16(-2 + 30\varepsilon^4 - 21\varepsilon^3 + 2\varepsilon^2 + 3\varepsilon)}\right], & i = 1 \\[2ex]
\phi, & i = 2 \\[1ex]
\left[\dfrac{-5 + 240\varepsilon^4 - 173\varepsilon^3 + 65\varepsilon^2 - 13\varepsilon}{16(-2 + 30\varepsilon^4 - 21\varepsilon^3 + 2\varepsilon^2 + 3\varepsilon)},\ \dfrac{9 + 37\varepsilon^3 - 33\varepsilon^2 - 4\varepsilon}{32(1 + \varepsilon^3 + \varepsilon^2 - 2\varepsilon)}\right], & i = 3 \\[2ex]
\phi, & i = 4 \\[1ex]
\left[\dfrac{9 + 37\varepsilon^3 - 33\varepsilon^2 - 4\varepsilon}{32(1 + \varepsilon^3 + \varepsilon^2 - 2\varepsilon)},\ \dfrac{13 + 240\varepsilon^4 - 335\varepsilon^3 + 115\varepsilon^2 - 20\varepsilon}{16(2 + 30\varepsilon^4 - 43\varepsilon^3 + 17\varepsilon^2 - 4\varepsilon)}\right], & i = 5 \\[2ex]
\phi, & i = 6 \\[1ex]
\left[\dfrac{13 + 240\varepsilon^4 - 335\varepsilon^3 + 115\varepsilon^2 - 20\varepsilon}{16(2 + 30\varepsilon^4 - 43\varepsilon^3 + 17\varepsilon^2 - 4\varepsilon)},\ \dfrac{1}{2}\right], & i = 7 \\[2ex]
\left[\dfrac{1}{2},\ \dfrac{19 + 240\varepsilon^4 - 353\varepsilon^3 + 157\varepsilon^2 - 44\varepsilon}{16(2 + 30\varepsilon^4 - 43\varepsilon^3 + 17\varepsilon^2 - 4\varepsilon)}\right], & i = 8 \\[2ex]
\phi, & i = 9 \\[1ex]
\left[\dfrac{19 + 240\varepsilon^4 - 353\varepsilon^3 + 157\varepsilon^2 - 44\varepsilon}{16(2 + 30\varepsilon^4 - 43\varepsilon^3 + 17\varepsilon^2 - 4\varepsilon)},\ \dfrac{23 - 5\varepsilon^3 + 65\varepsilon^2 - 60\varepsilon}{32(1 + \varepsilon^3 + \varepsilon^2 - 2\varepsilon)}\right], & i = 10 \\[2ex]
\phi, & i = 11 \\[1ex]
\left[\dfrac{23 - 5\varepsilon^3 + 65\varepsilon^2 - 60\varepsilon}{32(1 + \varepsilon^3 + \varepsilon^2 - 2\varepsilon)},\ \dfrac{-27 + 240\varepsilon^4 - 163\varepsilon^3 - 33\varepsilon^2 + 61\varepsilon}{16(-2 + 30\varepsilon^4 - 21\varepsilon^3 + 2\varepsilon^2 + 3\varepsilon)}\right], & i = 12 \\[2ex]
\phi, & i = 13 \\[1ex]
\left[\dfrac{-27 + 240\varepsilon^4 - 163\varepsilon^3 - 33\varepsilon^2 + 61\varepsilon}{16(-2 + 30\varepsilon^4 - 21\varepsilon^3 + 2\varepsilon^2 + 3\varepsilon)},\ \dfrac{15 + 240\varepsilon^4 - 65\varepsilon^3 - 313\varepsilon^2 + 124\varepsilon}{16(1 + 30\varepsilon^4 - 18\varepsilon^3 - 23\varepsilon^2 + 11\varepsilon)}\right], & i = 14 \\[2ex]
\left[\dfrac{15 + 240\varepsilon^4 - 65\varepsilon^3 - 313\varepsilon^2 + 124\varepsilon}{16(1 + 30\varepsilon^4 - 18\varepsilon^3 - 23\varepsilon^2 + 11\varepsilon)},\ 1\right], & i = 15
\end{cases}$$
(19)
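The closed forms in Theorems 1 and 2 were obtained symbolically; a purely numerical cross-check (a sketch under our naming, reusing alpha_beta_prime, pi_cnc, and empty_cells from the earlier sketches) intersects the half-lines of Lemma 1 for a fixed ε:

```python
# Sketch (ours): numerical computation of a closed encoding cell by
# intersecting alpha'(i,k) x >= beta'(i,k) over the non-empty k != i.

def cell(i, n, eps, pi, E):
    lo, hi = 0.0, 1.0
    for k in range(2 ** n):
        if k == i or k in E:
            continue
        a, b = alpha_beta_prime(i, k, n, eps, pi, E)
        if a > 0:
            lo = max(lo, b / a)      # constraint x >= b/a
        elif a < 0:
            hi = min(hi, b / a)      # constraint x <= b/a
        elif b > 0:
            return None              # 0 >= b is infeasible: empty cell
    return (lo, hi) if lo <= hi else None

n, eps = 3, 0.1
E = empty_cells(n)
for i in range(2 ** n):
    print(i, None if i in E else cell(i, n, eps, pi_cnc, E))
```

For ε = 0.1, the printed intervals can be compared term by term with (18); the cells in E are skipped because they are fixed empty.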

For i∈E, define the quantities

$$I_r(i) = \mathop{\arg\min}_{j \in E^c,\ c_n(j) > c_n(i)} c_n(j), \qquad I_l(i) = \mathop{\arg\max}_{j \in E^c,\ c_n(j) < c_n(i)} c_n(j),$$
$$z_n(i) = \sup R_n(i), \qquad V = \{i : 1 \notin \overline{R}_n(i)\} \cap E^c, \qquad I_1 = V^c \cap E^c.$$

Let $D_{\pi_n}^{\text{lower bound}}$ denote the end-to-end MSE of the modified EOUQ with index assignment $\pi_n$ when a genie-aided erasure correcting code is appended; this is the lower bound on the MSE of the modified EOUQ.

Theorem 3

The lower bound for MSE of a modified EOUQ with the CNC index assignment is

D CNC lower bound = 1 3 − 2 − n − 1 + 2 − 2 n − 2 + 2 − n ∑ i ∈ V z n 2 i · α n i , I r i − ∑ i ∈ E c j p n π n CNC j | π n CNC I 1 + ∑ i ∈ E i p n π n CNC j | π n CNC I 1 + 2 − 2 n ∑ i ∈ E c j + j 2 p n π n CNC j | π n CNC I 1 + ∑ i ∈ E i + i 2 p n π n CNC j | π n CNC I 1 .
(20)

Proof

According to (1) and the assumption we make,

D CNC lower bound = ∑ i ∈ E c ∑ j ∈ E c p n π n CNC j | π n CNC i × ∫ R n i x − y n j 2 dx + ∑ i ∈ E c ∑ j ∈ E p n π n CNC j | π n CNC i × ∫ R n i x − y n i 2 dx = ∑ i ∈ E c ∑ j ∈ E c p n π n CNC j | π n CNC i × ∫ R n i x 2 − 2 x y n j + y n 2 j dx + ∑ i ∈ E c ∑ j ∈ E p n π n CNC j | π n CNC i × ∫ R n i x 2 − 2 x y n i + y n 2 i dx = ∑ i ∈ E c ∑ j ∈ E c p n π n CNC j | π n CNC i × ∫ R n i x 2 − 2 x j + 0.5 2 n + j + 0.5 2 n 2 dx + ∑ i ∈ E c ∑ j ∈ E p n π n CNC j | π n CNC i × ∫ R n i x 2 − 2 x i + 0.5 2 n + i + 0.5 2 n 2 dx = 1 3 − 2 − n − 1 + 2 − 2 n − 2 − 2 − n ∑ i ∈ E c z n 2 i − z n 2 I l i × ∑ i ∈ E c j p n π n CNC j | π n CNC i + ∑ i ∈ E i p n π n CNC j | π n CNC i + 2 − 2 n ∑ i ∈ E c z n i − z n I l i × ∑ i ∈ E c j + j 2 p n π n CNC j | π n CNC i + ∑ i ∈ E i + i 2 p n π n CNC j | π n CNC i .

After re-expressing $I_l(i)$ in the above formula in terms of $I_r(i)$ and merging like terms according to the definitions of $\alpha_n'(i,k)$ and $\beta_n'(i,k)$,

D CNC lower bound = 1 3 − 2 − n − 1 + 2 − 2 n − 2 + 2 − n ∑ i ∈ V z n 2 · α n i , I r i − ∑ j ∈ E c j p n π n CNC j | π n CNC I 1 + ∑ j ∈ E i p n π n CNC j | π n CNC I 1 + 2 − 2 n ∑ j ∈ E c j + j 2 p n π n CNC j | π n CNC I 1 + ∑ j ∈ E i + i 2 p n π n CNC j | π n CNC I 1 .

After substituting (18) and (19), respectively, Theorem 3 gives the following results.

Theorem 4

The lower bound for MSE of a 3-bit modified EOUQ with the CNC index assignment is

$$D_{\mathrm{CNC}}^{\text{lower bound}} = \frac{5 + 243\varepsilon + 135\varepsilon^2 + 579\varepsilon^3 - 13{,}932\varepsilon^4 + 39{,}330\varepsilon^5 - 66{,}276\varepsilon^6 + 84{,}888\varepsilon^7 - 59{,}184\varepsilon^8 + 13{,}176\varepsilon^9}{768\,(1 + 12\varepsilon^3 - 15\varepsilon^2 + 3\varepsilon)(2 - 6\varepsilon^3 + 3\varepsilon^2 - 3\varepsilon)}$$
(21)

Theorem 5

The lower bound for MSE of a 4-bit modified EOUQ with the CNC index assignment is

D CNC lower bound = 11 , 740 ε + 5 , 884 ε 2 − 476 , 023 ε 3 + 3 , 328 , 344 ε 4 − 1 , 195 , 090 , 122 ε 12 − 5 , 134 , 292 , 766 ε 15 + 4 , 009 , 702 , 089 ε 14 − 801 , 350 , 784 ε 13 + 156 , 129 , 623 ε 8 − 107 , 373 , 749 ε 7 + 47 , 124 , 939 ε 6 − 14 , 855 , 907 ε 5 + 1 , 183 , 613 , 619 ε 11 − 441 , 562 , 375 ε 10 − 43 , 375 , 642 ε 9 + 2 , 844 , 849 , 870 ε 16 + 155 , 520 , 000 ε 19 − 518 , 967 , 000 ε 18 − 143 , 067 , 600 ε 17 + 52 / 12 , 288 1 + 30 ε 4 − 18 ε 3 − 23 ε 2 + 11 ε × 2 − 30 ε 4 + 21 ε 3 − 2 ε 2 − 3 ε × 1 + ε 3 + ε 2 − 2 ε 2 + 30 ε 4 − 43 ε 3 + 17 ε 2 − 4 ε
(22)

4 EOUQ with CNC aided by single parity check block code

A good erasure code for the proposed quantizers should have a strong ability to correct erasures but no error correcting capability. This is because a code with error correcting capability would improve the performance of the quantizers in [7] just as much as that of the proposed quantizers, so the benefit would be equal for both. Thus, in this section, we focus on the SPC code to correct the empty-cell indices that appear at the receiver, in order to approach the lower bound for the EOUQ with the CNC index assignment.

4.1 SPC Code

The SPC code is one of the most popular error detection codes because it is easy to implement; it is also able to correct a single erasure. However, if multiple erasures exist, the typical decoding method for the single-erasure case fails to recover them. In this paper, we present a modified decoding method that handles the multiple-erasure case for the SPC code. The main idea is that, while one of the erasures is being recovered, the other erasures are restored to the values they held before being marked. The multiple-erasure case is thus converted into several single-erasure cases, so the typical single-erasure decoding method remains effective. Figure 2 illustrates the modified decoding algorithm for the SPC code. Since multiple erasures are recovered independently, the erasure recovery probability $\tilde{P}_c$ of the modified decoding method in the single- or multiple-erasure case equals the erasure recovery probability $P_c$ of the typical decoding method in the single-erasure case, i.e.,

$$\tilde{P}_c = P_c.$$
(23)
Figure 2. An example of the modified decoding algorithm for the SPC code.
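A minimal sketch (ours) of the modified column decoder just described, assuming even column parity; each erased position is recovered independently while the other positions keep the bit values received before marking:

```python
# Sketch (ours): modified SPC decoding of one column with possibly
# several erased positions (even parity assumed).

from typing import List

def recover_column(received: List[int], erased: List[bool]) -> List[int]:
    out = list(received)
    for pos, is_erased in enumerate(erased):
        if is_erased:
            # the erased bit is the XOR (mod-2 sum) of all other received bits
            out[pos] = (sum(received) - received[pos]) % 2
    return out

# one erased bit whose received value was flipped by the channel
print(recover_column([1, 1, 1, 1, 1], [False, False, True, False, False]))
# -> [1, 1, 0, 1, 1]
```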

The probability $P_c$ that a single erasure can be recovered by the typical SPC decoder is given in the following theorem.

Theorem 6

If a single erasure is detected, the probability that the erasure can be recovered by SPC code is

$$P_c = \sum_{\substack{j=0 \\ j \text{ even}}}^{\bar{k}} C_{k-1}^{\,j}\,(1-\varepsilon)^{k-j-1}\varepsilon^{\,j},$$
(24)

where $C_n^k$ denotes the binomial coefficient ('n choose k'), $\bar{k} = 2\lfloor (k-1)/2 \rfloor$, and $\lfloor x \rfloor$ denotes the largest integer less than or equal to x.

Proof.

If a single erasure appears in the received SPC codeword, the error events can be classified into the following cases.

Case 1. No errors: when there are no errors in the column, the erasure can certainly be recovered. The recovery probability in this case is $(1-\varepsilon)^k$.

Case 2. One error: when there is one error in the column, the erasure can be recovered only if that single error is located at the erased position. The recovery probability in this case is $(1-\varepsilon)^{k-1}\varepsilon$.

Case 3. Two errors: when there are two errors in the column, the erasure can be recovered only if neither error is located at the erased position. The recovery probability in this case is $C_{k-1}^{2}(1-\varepsilon)^{k-2}\varepsilon^2$.

Case 4. Odd number of errors: when there are $i$ (odd) errors in the column, similarly to the one-error case, the erasure can be recovered only if one of the errors is located at the erased position. The recovery probability in this case is $C_{k-1}^{\,i-1}(1-\varepsilon)^{k-i}\varepsilon^{\,i}$.

Case 5. Even number of errors: when there are $j$ (even) errors in the column, similarly to the two-error case, the erasure can be recovered only if none of the errors is located at the erased position. The recovery probability in this case is $C_{k-1}^{\,j}(1-\varepsilon)^{k-j}\varepsilon^{\,j}$.

Summing over all cases, the probability that the single erasure can be recovered by the SPC code can be written as

$$P_c = (1-\varepsilon)^k + C_{k-1}^{2}(1-\varepsilon)^{k-2}\varepsilon^2 + \cdots + C_{k-1}^{\,j}(1-\varepsilon)^{k-j}\varepsilon^{\,j} + \cdots + C_{k-1}^{1-1}(1-\varepsilon)^{k-1}\varepsilon + C_{k-1}^{3-1}(1-\varepsilon)^{k-3}\varepsilon^3 + \cdots + C_{k-1}^{\,i-1}(1-\varepsilon)^{k-i}\varepsilon^{\,i} + \cdots,$$

where i is an odd number and j is an even number. First assume k is odd (the even case is handled below) and let i = j+1, so that

P c = 1 − ε k + C k − 1 2 1 − ε k − 2 ε 2 + ⋯ + C k − 1 j 1 − ε k − j ε j + ⋯ + C k − 1 1 − 1 1 − ε k − 1 ε + C k − 1 3 − 1 1 − ε k − 3 ε 3 + ⋯ + C k − 1 j + 1 − 1 1 − ε k − j − 1 ε j + 1 + ⋯ = C k − 1 0 1 − ε k + 1 − ε k − 1 ε + C k − 1 2 1 − ε k − 2 ε 2 + 1 − ε k − 3 ε 3 + ⋯ + C k − 1 j 1 − ε k − j ε j + 1 − ε k − j − 1 ε j + 1 + ⋯ + C k − 1 k − 1 1 − ε k − k − 1 ε k − 1 + 1 − ε k − k − 1 − 1 ε k − 1 + 1 = C k − 1 0 1 − ε k − 1 + C k − 1 2 1 − ε k − 3 ε 2 + ⋯ + C k − 1 j 1 − ε k − j − 1 ε j + C k − 1 k − 1 ε k − 1 = ∑ j is even j = 0 k − 1 C k − 1 j 1 − ε k − j − 1 ε j .

Supposing k is even, also let i=j+1, so that

P c = 1 − ε k + C k − 1 2 1 − ε k − 2 ε 2 + ⋯ + C k − 1 j 1 − ε k − j ε j + ⋯ + C k − 1 1 − 1 1 − ε k − 1 ε + C k − 1 3 − 1 1 − ε k − 3 ε 3 + ⋯ + C k − 1 j + 1 − 1 1 − ε k − j − 1 ε j + 1 + ⋯ = C k − 1 0 1 − ε k + 1 − ε k − 1 ε + C k − 1 2 1 − ε k − 2 ε 2 + 1 − ε k − 3 ε 3 + ⋯ + C k − 1 j 1 − ε k − j ε j + 1 − ε k − j − 1 ε j + 1 + ⋯ + C k − 1 k − 2 1 − ε k − k − 2 ε k − 2 + 1 − ε k − k − 2 − 1 ε k − 2 + 1 = C k − 1 0 1 − ε k − 1 + C k − 1 2 1 − ε k − 3 ε 2 + ⋯ + C k − 1 j 1 − ε k − j − 1 ε j + C k − 1 k − 2 1 − ε ε k − 2 = ∑ j is even j = 0 k − 2 C k − 1 j 1 − ε k − j − 1 ε j .

Therefore, for any k≥2, the probability that the single erasure can be recovered by SPC code is

$$P_c = \sum_{\substack{j=0 \\ j \text{ even}}}^{2\lfloor (k-1)/2 \rfloor} C_{k-1}^{\,j}\,(1-\varepsilon)^{k-j-1}\varepsilon^{\,j}.$$
(25)
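For reference, (24) is straightforward to evaluate numerically; a small sketch (ours):

```python
# Sketch (ours): recovery probability P_c of Eq. (24) for column length k.

from math import comb

def p_recover(k: int, eps: float) -> float:
    kbar = 2 * ((k - 1) // 2)
    return sum(comb(k - 1, j) * (1 - eps) ** (k - j - 1) * eps ** j
               for j in range(0, kbar + 1, 2))

print(p_recover(3, 0.1))   # (1 - 0.1)^2 + 0.1^2 = 0.82
```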

4.2 SPC block code

In this paper, k−1 transmitted indices together with parity bits are grouped into a k×n SPC block code, as shown in Figure 3. In an SPC block code, every index is converted to a binary word and placed row by row, and the bits in each column form an SPC code. If an index in a row is detected to belong to the empty cell set, all entries of that row are marked as an erasure word; here, an erasure word denotes the whole set of erased bits in one row, and a bit of the erasure word is called an erasure bit. The modified SPC decoding method shown in Figure 2 is then used to recover every erasure bit, column by column. Figure 4 illustrates the decoding algorithm for a 3-bit quantizer aided by a 5×3 SPC block code; a code sketch of the construction is given after the figures. As shown in Figure 4, if multiple rows are marked as erasure words, then while the erasure word of one row is being recovered, the erasure words of the other rows are restored to the values received before they were marked.

Figure 3. The structure of the SPC block code.

Figure 4. An example of the decoding algorithm for a 3-bit quantizer aided by a 5×3 SPC block code.
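The sketch below (ours, assuming even column parity and most-significant-bit-first rows) builds the k×n block and performs the row-erasure decoding described above; it reuses recover_column from the earlier sketch.

```python
# Sketch (ours): k x n SPC block encoding and row-erasure decoding.

import numpy as np

def spc_block_encode(indices, n):
    """Stack k-1 n-bit indices row by row and append the parity row that
    makes every column sum even."""
    rows = np.array([[(idx >> (n - 1 - b)) & 1 for b in range(n)]
                     for idx in indices], dtype=int)
    parity = rows.sum(axis=0) % 2
    return np.vstack([rows, parity])

def spc_block_decode(block, empty_set, n):
    """Mark rows whose index lies in the empty-cell set as erasure words,
    recover them column by column, and return the corrected data indices."""
    idx = [int("".join(map(str, row)), 2) for row in block]
    erased = [i in empty_set for i in idx[:-1]] + [False]   # parity row is never an index
    out = block.copy()
    for col in range(n):
        out[:, col] = recover_column(list(block[:, col]), erased)
    return [int("".join(map(str, row)), 2) for row in out[:-1]]

E = {2, 5}                                  # empty cells of the 3-bit quantizer
blk = spc_block_encode([0, 1, 3, 4], 3)     # k = 5 rows including the parity row
blk[1] = [0, 1, 0]                          # index 1 received as 2, an empty-cell index
print(spc_block_decode(blk, E, 3))          # -> [0, 1, 3, 4]
```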

To avoid confusion, let I denote the transmitted index, K the input of the SPC block decoder, and J the output of the SPC block decoder. The SPC block code-aided transition probability $\tilde{p}_n(j \mid i)$ can then be written as follows.

Theorem 7

When aided by the SPC block code, the transition probability $\tilde{p}_n(j \mid i)$ that index j is output by the SPC block decoder given that index i is transmitted can be written as

$$\tilde{p}_n(j \mid i) = \begin{cases} p_n(j \mid i) + \displaystyle\sum_{k \in E} p_n(k \mid i)\,(1-P_c)^{H_n(i,j)} P_c^{\,n - H_n(i,j)}, & j \notin E \\[2ex] \displaystyle\sum_{k \in E} p_n(k \mid i)\,(1-P_c)^{H_n(i,j)} P_c^{\,n - H_n(i,j)}, & j \in E. \end{cases}$$
(26)

Proof.

$$\tilde{p}_n(j \mid i) \triangleq \tilde{p}_n(J{=}j \mid I{=}i) = \frac{\tilde{p}_n(J{=}j, I{=}i)}{\tilde{p}_n(I{=}i)} = \frac{\sum_k \tilde{p}_n(J{=}j, K{=}k, I{=}i)}{\tilde{p}_n(I{=}i)} = \sum_k \tilde{p}_n(J{=}j \mid K{=}k, I{=}i)\,\frac{\tilde{p}_n(K{=}k, I{=}i)}{\tilde{p}_n(I{=}i)} = \sum_k \tilde{p}_n(J{=}j \mid K{=}k, I{=}i)\,\tilde{p}_n(K{=}k \mid I{=}i) = \sum_k \tilde{p}_n(J{=}j \mid K{=}k, I{=}i)\,p_n(k \mid i),$$
(27)

where $p_n(k \mid i)$ is defined in (4). Obviously, $\tilde{p}_n(K{=}k \mid I{=}i) = p_n(k \mid i)$.

Case 1: $k \notin E$ (the index input to the decoder does not belong to the empty cells)

$$\tilde{p}_n(J{=}j \mid K{=}k, I{=}i) = \begin{cases} 1, & k = j \text{ and } j \notin E \\ 0, & k \neq j. \end{cases}$$
(28)

This is because, if the input index does not belong to the empty cells, it is not marked as an erasure and is not changed by the SPC block decoder.

Case 2: $k \in E$ (the index input to the decoder belongs to the empty cells)

In this case, $\tilde{p}_n(J{=}j \mid K{=}k, I{=}i)$ denotes the probability that index j is output by the SPC block decoder, given that index i was sent and index k, which belongs to the empty cells, is received and input to the SPC block decoder. All bits of the row in which index k lies are marked as an erasure word. According to the proposed SPC block decoding algorithm shown in Figure 4, every erasure bit is recovered by the modified SPC decoder (Figure 2) column by column. If multiple rows are marked as erasure words, then while the erasure word of one row is being recovered, the erasure words of the other rows are restored to the values received before they were marked. Recalling (23), the recovery probability for one erasure bit, $\tilde{P}_c$, can be obtained from Theorem 6. Letting N denote the number of bits of an erasure word that fail to be recovered, the number of bits that are successfully recovered equals n−N. Then,

$$\tilde{p}_n(J{=}j \mid K{=}k, I{=}i) = (1-\tilde{P}_c)^N\,\tilde{P}_c^{\,n-N} = (1-P_c)^N\,P_c^{\,n-N}$$
(29)

Obviously, $N = H_n(i,j)$, where $H_n(i,j)$ is the Hamming distance between the n-bit binary words i and j.

Therefore,

$$\tilde{p}_n(J{=}j \mid K{=}k, I{=}i) = (1-P_c)^{H_n(i,j)}\,P_c^{\,n - H_n(i,j)}.$$
(30)

After substituting (28) and (30), (27) can be rewritten as

$$\tilde{p}_n(j \mid i) = \begin{cases} p_n(j \mid i) + \displaystyle\sum_{k \in E} p_n(k \mid i)\,(1-P_c)^{H_n(i,j)} P_c^{\,n - H_n(i,j)}, & j \notin E \\[2ex] \displaystyle\sum_{k \in E} p_n(k \mid i)\,(1-P_c)^{H_n(i,j)} P_c^{\,n - H_n(i,j)}, & j \in E. \end{cases}$$
(31)
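A literal transcription of (26)/(31) into code (a sketch, ours, reusing p_bsc, empty_cells, and p_recover from the earlier sketches):

```python
# Sketch (ours): SPC-aided transition probability of Eq. (26);
# the E-membership tests follow the equation as written.

def p_tilde(j: int, i: int, n: int, eps: float, k: int, E: set) -> float:
    pc = p_recover(k, eps)
    h = bin(i ^ j).count("1")                        # H_n(i, j)
    erased = sum(p_bsc(m, i, n, eps) for m in E)     # an empty-cell index was received
    term = erased * (1 - pc) ** h * pc ** (n - h)
    return term if j in E else p_bsc(j, i, n, eps) + term

n, eps, k = 3, 0.1, 5
E = empty_cells(n)
print(abs(sum(p_tilde(j, 0, n, eps, k, E) for j in range(2 ** n)) - 1) < 1e-12)
```

Each row of $\tilde{p}_n$ sums to one, since the erased probability mass is redistributed according to the per-bit recovery probability.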

Then, MSD can be written as

$$D_{\mathrm{CNC}}^{\mathrm{SPC}} = \sum_i \sum_j \tilde{p}_n\!\left(\pi_n^{\mathrm{CNC}}(j) \mid \pi_n^{\mathrm{CNC}}(i)\right) \int_{R_n(i)} (x - y_n(j))^2\, dx = \sum_i \sum_j \tilde{p}_n\!\left(\pi_n^{\mathrm{CNC}}(j) \mid \pi_n^{\mathrm{CNC}}(i)\right) \left[\frac{x^3}{3} - y_n(j)\,x^2 + y_n^2(j)\,x\right]_{R_n(i)}.$$

Now, $D_{\mathrm{CNC}}^{\mathrm{SPC}}$ is a function of ε and k. Thus, the symbolic toolbox in Matlab can be used to obtain the exact expression for each case, as follows.

For 3-bit quantizer, K=3

D CNC K = 3 = 7 , 962 , 624 ε 21 − 66 , 686 , 976 ε 20 + 264 , 508 , 416 ε 19 − 669 , 171 , 456 ε 18 + 1 , 226 , 192 , 256 ε 17 − 1 , 745 , 986 , 752 ε 16 + 2 , 013 , 793 , 056 ε 15 − 1 , 922 , 961 , 168 ε 14 + 1 , 526 , 564 , 664 ε 13 − 992 , 715 , 912 ε 12 + 509 , 652 , 360 ε 11 − 192 , 400 , 668 ε 10 + 44 , 669 , 958 ε 9 − 1 , 219 , 602 ε 8 − 2 , 647 , 899 ε 7 + 71 , 307 ε 6 + 553 , 449 ε 5 − 180 , 846 ε 4 + 4 , 059 ε 3 + 2 , 763 ε 2 + 501 ε + 10 / 768 − 2 + 6 ε 3 − 3 ε 2 + 3 ε 2 × 1 + 12 ε 3 − 15 ε 2 + 3 ε 2
(32)

For 3-bit quantizer, K=4

D CNC K = 4 = 63 , 700 , 992 ε 24 − 629 , 047 , 296 ε 23 + 2 , 944 , 180 , 224 ε 22 − 8 , 785 , 760 , 256 ε 21 + 18 , 977 , 504 , 256 ε 20 − 31 , 903 , 206 , 912 ε 19 + 43 , 683 , 065 , 856 ε 18 − 50 , 100 , 968 , 448 ε 17 + 48 , 911 , 316 , 480 ε 16 − 40 , 974 , 579 , 648 ε 15 + 29 , 499 , 005 , 376 ε 14 − 18 , 128 , 322 , 720 ε 13 + 9 , 340 , 103 , 424 ε 12 − 3 , 904 , 109 , 052 ε 11 + 1 , 250 , 869 , 332 ε 10 − 273 , 128 , 574 ε 9 + 26 , 813 , 814 ε 8 + 3 , 148 , 215 ε 7 + 92 , 601 ε 6 − 947 , 925 ε 5 + 277 , 830 ε 4 − 7 , 539 ε 3 − 3 , 735 ε 2 − 501 ε − 10 / − 768 1 + 12 ε 3 − 15 ε 2 + 3 ε 2 × − 2 + 6 ε 3 − 3 ε 2 + 3 ε 2
(33)

For 3-bit quantizer, K=5

D CNC K = 5 = 509 , 607 , 936 ε 27 − 5 , 796 , 790 , 272 ε 26 + 31 , 484 , 215 , 296 ε 25 − 109 , 374 , 603 , 264 ε 24 + 274 , 864 , 472 , 064 ε 23 − 535 , 778 , 758 , 656 ε 22 + 847 , 229 , 552 , 640 ε 21 − 1 , 120 , 016 , 581 , 632 ε 20 + 1 , 262 , 654 , 369 , 280 ε 19 − 1 , 229 , 166 , 254 , 592 ε 18 + 1 , 040 , 713 , 894 , 656 ε 17 − 768 , 801 , 597 , 696 ε 16 + 495 , 151 , 866 , 816 ε 15 − 276 , 513 , 260 , 880 ε 14 + 132 , 315 , 403 , 416 ε 13 − 53 , 156 , 640 , 288 ε 12 + 17 , 336 , 334 , 384 ε 11 − 4 , 320 , 019 , 992 ε 10 + 713 , 651 , 970 ε 9 − 39 , 095 , 634 ε 8 − 11 , 051 , 139 ε 7 + 193 , 347 ε 6 + 1 , 467 , 897 ε 5 − 386 , 442 ε 4 + 9 , 711 ε 3 + 4 , 707 ε 2 + 501 ε + 10 / 768 − 2 + 6 ε 3 − 3 ε 2 + 3 ε 2 1 + 12 ε 3 − 15 ε 2 + 3 ε 2
(34)

For 4-bit quantizer, K=3

D CNC K = 3 = 14 , 276 , 736 , 000 , 000 ε 39 107 , 794 , 022 , 400 , 000 ε 38 + 290 , 417 , 114 , 880 , 000 ε 37 33 , 167 , 086 , 848 , 000 ε 36 − 2 , 096 , 081 , 266 , 694 , 400 ε 35 + 7 , 146 , 138 , 215 , 312 , 640 ε 34 11 , 787 , 493 , 179 , 759 , 552 ε 33 + 4 , 294 , 650 , 642 , 726 , 528 ε 32 + 30 , 745 , 837 , 163 , 362 , 608 ε 31 93 , 407 , 604 , 210 , 871 , 728 ε 30 + 152 , 012 , 048 , 832 , 668 , 196 ε 29 160 , 041 , 493 , 676 , 832 , 384 ε 28 + 101 , 864 , 889 , 812 , 630 , 040 ε 27 − 14 , 665 , 606 , 004 , 624 , 670 ε 26 45 , 599 , 458 , 954 , 787 , 901 ε 25 + 54 , 961 , 029 , 779 , 720 , 829 ε 24 − 31 , 880 , 510 , 555 , 625 , 807 ε 23 + 7 , 071 , 644 , 674 , 463 , 064 ε 22 + 4 , 425 , 365 , 815 , 868 , 583 ε 21 − 5 , 028 , 119 , 456 , 259 , 998 ε 20 + 2 , 223 , 302 , 804 , 647 , 541 ε 19 234 , 761 , 920 , 605 , 312 ε 18 − 379 , 032 , 669 , 202 , 821 ε 17 + 329 , 357 , 542 , 510 , 982 ε 16 163 , 086 , 464 , 543 , 222 ε 15 + 57 , 945 , 094 , 589 , 773 ε 14 15 , 146 , 844 , 338 , 710 ε 13 + 2 , 665 , 837 , 646 , 460 ε 12 154 , 116 , 255 , 422 ε 11 − 97 , 607 , 466 , 956 ε 10 + 50 , 967 , 593 , 269 ε 9 16 , 469 , 653 , 107 ε 8 + 4 , 300 , 904 , 858 ε 7 − 1 , 004 , 660 , 075 ε 6 + 202 , 215 , 844 ε 5 22 , 094 , 182 ε 4 1 , 646 , 856 ε 3 + 400 , 968 ε 2 + 48 , 104 ε + 208 / 12 , 288 2 + 30 ε 4 43 ε 3 + 17 ε 2 4 ε 2 1 + 30 ε 4 18 ε 3 23 ε 2 + 11 ε 2 − 2 + 30 ε 4 21 ε 3 + 2 ε 2 + 3 ε 2 1 + ε 3 + ε 2 − 2 ε 2
(35)

For 4-bit quantizer, K=4

D CNC K = 4 = 114 , 213 , 888 , 000 , 000 ε 42 − 1 , 033 , 673 , 011 , 200 , 000 ε 41 + 3 , 285 , 980 , 835 , 840 , 000 ε 40 − 427 , 035 , 193 , 344 , 000 ε 39 − 29 , 615 , 491 , 894 , 579 , 200 ε 38 + 103 , 980 , 817 , 090 , 897 , 920 ε 37 − 164 , 725 , 822 , 832 , 887 , 296 ε 36 + 34 , 515 , 810 , 866 , 297 , 088 ε 35 + 472 , 361 , 930 , 923 , 112 , 064 ε 34 − 1 , 315 , 712 , 396 , 208 , 910 , 656 ε 33 + 2 , 125 , 947 , 487 , 263 , 270 , 816 ε 32 − 2 , 411 , 219 , 122 , 612 , 440 , 912 ε 31 + 1 , 921 , 993 , 430 , 825 , 213 , 592 ε 30 − 851 , 450 , 424 , 470 , 925 , 900 ε 29 − 285 , 553 , 775 , 320 , 155 , 900 ε 28 + 988 , 348 , 010 , 466 , 804 , 588 ε 27 − 1 , 071 , 182 , 372 , 882 , 329 , 488 ε 26 + 716 , 211 , 751 , 000 , 262 , 427 ε 25 − 268 , 380 , 717 , 446 , 800 , 437 ε 24 − 24 , 693 , 601 , 109 , 397 , 063 ε 23 + 117 , 058 , 813 , 839 , 308 , 184 ε 22 − 91 , 679 , 254 , 830 , 762 , 285 ε 21 + 40 , 953 , 822 , 541 , 585 , 406 ε 20 − 8 , 513 , 469 , 892 , 437 , 467 ε 19 − 2 , 817 , 431 , 401 , 552 , 608 ε 18 + 3 , 612 , 067 , 778 , 674 , 335 ε 17 − 1 , 903 , 032 , 054 , 185 , 672 ε 16 + 647 , 855 , 202 , 371 , 522 ε 15 − 132 , 028 , 998 , 320 , 989 ε 14 − 1 , 029 , 960 , 719 , 786 ε 13 + 13 , 574 , 222 , 503 , 848 ε 12 − 6 , 451 , 548 , 073 , 534 ε 11 + 1 , 919 , 139 , 169 , 130 ε 10 − 412 , 300 , 519 , 087 ε 9 + 64 , 933 , 542 , 813 ε 8 − 7 , 683 , 962 , 162 ε 7 + 1 , 043 , 876 , 171 ε 6 − 249 , 828 , 052 ε 5 + 40 , 140 , 886 ε 4 + 201 , 576 ε 3 − 467 , 304 ε 2 − 48 , 104 ε − 208 / − 12 , 288 1 + ε 3 + ε 2 − 2 ε 2 − 2 + 30 ε 4 − 21 ε 3 + 2 ε 2 + 3 ε 2 2 + 30 ε 4 − 43 ε 3 + 17 ε 2 − 4 ε 2 1 + 30 ε 4 − 18 ε 3 − 23 ε 2 + 11 ε 2
(36)

For 4-bit quantizer, K=5

D CNC K = 5 = 208 + 48 , 104 ε + 533 , 640 ε 2 + 1 , 165 , 656 ε 3 − 58 , 458 , 982 ε 4 + 281 , 191 , 396 ε 5 − 1 , 006 , 697 , 867 ε 6 + 15 , 450 , 699 , 362 ε 7 − 19 , 1847 , 469 , 599 ε 8 + 1 , 488 , 428 , 462 , 173 ε 9 − 8 , 189 , 718 , 621 , 524 ε 10 + 33 , 774 , 417 , 209 , 110 ε 11 − 103 , 453 , 615 , 005 , 204 ε 12 + 207 , 719 , 199 , 032 , 498 ε 13 − 66 , 906 , 562 , 649 , 267 ε 14 − 1 , 513 , 988 , 103 , 500 , 822 ε 15 + 7 , 587 , 789 , 874 , 788 , 170 ε 16 − 22 , 537 , 124 , 796 , 414 , 813 ε 17 + 44 , 795 , 080 , 053 , 420 , 036 ε 18 − 45 , 411 , 147 , 473 , 877 , 727 ε 19 − 64 , 724 , 051 , 540 , 787 , 962 ε 20 + 441 , 781 , 898 , 032 , 166 , 091 ε 21 − 1 , 209 , 986 , 072 , 300 , 463 , 732 ε 22 + 2 , 194 , 513 , 539 , 257 , 525 , 709 ε 23 − 2 , 581 , 081 , 879 , 055 , 831 , 871 ε 24 + 852 , 406 , 151 , 910 , 028 , 791 ε 25 + 4 , 532 , 585 , 756 , 743 , 770 , 546 ε 26 − 13 , 587 , 290 , 869 , 775 , 967 , 948 ε 27 + 23 , 129 , 883 , 155 , 269 , 680 , 864 ε 28 − 26 , 732 , 149 , 422 , 180 , 934 , 236 ε 29 + 17 , 882 , 765 , 153 , 323 , 206 , 432 ε 30 + 4 , 450 , 563 , 097 , 100 , 665 , 344 ε 31 − 32 , 275 , 487 , 629 , 836 , 383 , 808 ε 32 + 51 , 938 , 010 , 511 , 486 , 921 , 344 ε 33 − 53 , 591 , 598 , 103 , 908 , 331 , 776 ε 34 + 38 , 728 , 483 , 984 , 293 , 107 , 712 ε 35 − 18 , 259 , 673 , 092 , 737 , 874 , 944 ε 36 + 3 , 150 , 966 , 289 , 987 , 906 , 560 ε 37 + 2 , 903 , 688 , 409 , 553 , 842 , 176 ε 38 − 2 , 899 , 835 , 033 , 082 , 052 , 608 ε 39 + 1 , 258 , 694 , 025 , 775 , 349 , 760 ε 40 − 228 , 653 , 524 , 883 , 865 , 600 ε 41 − 47 , 498 , 124 , 460 , 032 , 000 ε 42 + 39 , 377 , 206 , 149 , 120 , 000 ε 43 − 9 , 639 , 950 , 745 , 600 , 000 ε 44 + 913 , 711 , 104 , 000 , 000 ε 45 / 12 , 288 − 2 + 30 ε 4 − 21 ε 3 + 2 ε 2 + 3 ε 2 1 + ε 3 + ε 2 − 2 ε 2 2 + 30 ε 4 − 43 ε 3 + 17 ε 2 − 4 ε 2 1 + 30 ε 4 − 18 ε 3 − 23 ε 2 + 11 ε 2
(37)
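Instead of the symbolic expressions (32) to (37), the distortion can also be evaluated numerically for a particular (ε, K) pair. The sketch below (ours) follows the displayed MSD formula literally, reusing cell, p_tilde, pi_cnc, and empty_cells from the earlier sketches:

```python
# Sketch (ours): numerical evaluation of D_CNC^SPC for one (eps, K) pair.

def d_spc(n: int, eps: float, k: int) -> float:
    E = empty_cells(n)
    y = lambda j: (j + 0.5) * 2 ** (-n)
    total = 0.0
    for i in range(2 ** n):
        if i in E:
            continue
        interval = cell(i, n, eps, pi_cnc, E)
        if interval is None:
            continue
        lo, hi = interval
        for j in range(2 ** n):
            pj = p_tilde(pi_cnc(j, n), pi_cnc(i, n), n, eps, k, E)
            # integral of (x - y_n(j))^2 over the cell [lo, hi]
            total += pj * ((hi - y(j)) ** 3 - (lo - y(j)) ** 3) / 3.0
    return total

print(d_spc(3, 0.05, 3))
```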

5 Distortion analysis

In this section, several figures are plotted to analyze the performance of the standard EOUQ [7] and the proposed modified EOUQ. The distortion of the standard EOUQ and the lower bound on the distortion of the modified EOUQ with the CNC index assignment are shown for 3-bit and 4-bit quantizers in Figures 5 and 6, respectively; they are plotted in the style of Figure 6 in [7] in order to clearly display the difference between the two. According to Figures 5 and 6, when the crossover probability ε is greater than 0.01, the proposed EOUQ outperforms the standard EOUQ with the CNC index assignment, which shows the benefit obtained from a good erasure correcting code. Conversely, when ε < 0.01, the proposed quantizer is worse than the standard one, because the modified EOUQ increases the quantization error due to the cells that are fixed empty for all ε in (18) and (19). This problem can be solved by switching the working mode of the encoder and decoder. As mentioned before, the difference between the standard and the proposed EOUQ is that an erasure correcting code is appended to the encoder and the corresponding decoding scheme to the decoder. So, in a practical system, we can initially set a threshold $\hat{\varepsilon}_n$ for ε, for example, $\hat{\varepsilon}_n = 0.01$ for the 3-bit and 4-bit quantizers. When $\varepsilon < \hat{\varepsilon}_n$, the standard quantizer is adopted; when the encoder and decoder find that $\varepsilon \geq \hat{\varepsilon}_n$, the erasure correcting code is appended.
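The suggested mode switch amounts to a one-line rule; a trivial sketch (ours), with the threshold value 0.01 taken from the example above:

```python
# Sketch (ours): switch between the standard EOUQ and the proposed
# EOUQ with the SPC erasure code, based on the estimated epsilon.

def choose_mode(eps: float, eps_hat: float = 0.01) -> str:
    return "standard EOUQ" if eps < eps_hat else "EOUQ + SPC erasure code"

print(choose_mode(0.002))   # standard EOUQ
print(choose_mode(0.05))    # EOUQ + SPC erasure code
```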

Figure 5. The difference in MSE between the EOUQ and the proposed modified EOUQ achieved by the CNC index assignment for rate n = 3.

Figure 6. The difference in MSE between the EOUQ and the proposed modified EOUQ achieved by the CNC index assignment for rate n = 4.

The x-axis of Figure 5 (the crossover probability) runs from 0 to 0.5. However, in present-day communication systems, the crossover probability rarely approaches 0.5, so a logarithmic x-axis running from $10^{-3}$ to $10^{-1}$ displays the performance more clearly. The MSE of the proposed modified EOUQ appending an SPC block with K = 3, 4, 5, achieved by the CNC index assignment for rates n = 3 and n = 4, is shown in Figures 7 and 8, respectively. As shown in Figures 7 and 8, when the crossover probability is less than $10^{-2}$, all curves almost overlap, which means that most of the empty-cell indices are recovered by the SPC block decoder. When the crossover probability is greater than $10^{-2}$, however, the curve of the proposed quantizer using the SPC block code gradually deviates from the lower bound, because the SPC block decoder cannot correct all erasures, so a few empty-cell indices cannot be recovered, in accordance with Theorem 6. This is the performance penalty incurred by the sub-optimal erasure correcting code used in this paper. If a better-designed SPC code that increases the recovery probability of Theorem 6, for example the SPC product code, or a better erasure correcting code is adopted, the performance will approach the lower bound. Importantly, when the crossover probability lies between $10^{-2}$ and $10^{-1}$, the performance of the proposed quantizer with the CNC index assignment using the SPC block code is still better than that of the standard quantizer with the CNC index assignment.

Figure 7. The MSE difference between the original and modified EOUQ appending an SPC block with K = 3, 4, 5 for rate n = 3.

Figure 8. The MSE difference between the original and modified EOUQ appending an SPC block with K = 3, 4, 5 for rate n = 4.

6 Conclusion

In this paper, a lower bound for a modified quantizer with uniform decoder, channel-optimized encoder, and CNC index assignment is derived. After appending the SPC block code to the encoder and its decoding algorithm to the decoder, the performance of the modified quantizer approaches the lower bound and is better than that of the standard quantizer when the crossover probability ε is greater than a threshold $\hat{\varepsilon}_n$.

Appendix

Proof of Lemma 1

Proof.

For any integers $i, k \in [0, 2^n-1]$ with $i \neq k$, the inequality in (14) can be rewritten as

∑ j ∈ E c x 2 − 2 x y n j + y n 2 j p n π n j | π n i + ∑ j ∈ E x 2 − 2 x y n i + y n 2 i p n π n j | π n i ≤ ∑ j ∈ E c x 2 − 2 x y n j + y n 2 j p n π n j | π n k + ∑ j ∈ E x 2 − 2 x y n k + y n 2 k p n π n j | π n k .

Merging like terms gives

x 2 ∑ j p n π n j | π n i + ∑ j ∈ E c − 2 x y n j + y n 2 j p n π n j | π n i + ∑ j ∈ E − 2 x y n i + y n 2 i p n π n j | π n i ≤ x 2 ∑ j p n π n j | π n k + ∑ j ∈ E c − 2 x y n j + y n 2 j p n π n j | π n k + ∑ j ∈ E − 2 x y n k + y n 2 k p n π n j | π n k .

Since $\pi_n$ is bijective and $\sum_j p_n(j \mid i) = 1$ for all $i$, cancellation of terms gives

∑ j ∈ E c 2 x y n j p n π n j | π n i − p n π n j | π n k + ∑ j ∈ E 2 x y n i p n π n j | π n i − y n k p n π n j | π n k ≥ ∑ j ∈ E c y n 2 j p n π n j | π n i − p n π n j | π n k + ∑ j ∈ E y n 2 i p n π n j | π n i − y n 2 k p n π n j | π n k .

After canceling terms and substituting (2), the left side of the inequality gives

2 − n x ∑ j ∈ E c 2 j + 1 p n π n j | π n i − p n π n j | π n k + 2 − n x ∑ j ∈ E 2 i + 1 p n π n j | π n i − 2 k + 1 p n π n j | π n k = 2 − n + 1 x ∑ j ∈ E c j p n π n j | π n i − p n π n j | π n k + ∑ j ∈ E i · p n π n j | π n i − k · p n π n j | π n k + 2 − n x ∑ j p n π n j | π n i − p n π n j | π n k = 2 − n + 1 α n ′ i , k x

and similarly, the right side of the inequality gives

2 − 2 n x ∑ j ∈ E c j 2 + j + 1 4 p n π n j | π n i − p n π n j | π n k + 2 − n x ∑ j ∈ E i 2 + i + 1 4 p n π n j | π n i − k 2 + k + 1 4 p n π n j | π n k = 2 − n + 1 β n ′ i , k + 2 − 2 n − 2 ∑ j p n π n j | π n i − p n π n j | π n k = 2 − n + 1 β n ′ i , k .

Therefore, the inequality in (14) can be rewritten as

$$\alpha_n'(i,k)\, x \geq \beta_n'(i,k).$$

The simplification of $\beta_n'(i,k)$

$$\beta_n'(i,k) = 2^{-n-1}\left[\alpha_n'(i,k) + \sum_{j \in E^c} j^2 \left(p_n(\pi_n(j) \mid \pi_n(i)) - p_n(\pi_n(j) \mid \pi_n(k))\right) + \sum_{j \in E} \left(i^2 p_n(\pi_n(j) \mid \pi_n(i)) - k^2 p_n(\pi_n(j) \mid \pi_n(k))\right)\right].$$

Case 1. i=0

  1. a.

    1≤k≤2n − 1, k odd; 2n−1≤k≤2n − 2, k even

    β n ′ 0 , k = 2 − n − 1 α n ′ 0 , k + 2 2 n + 2 3 − 6 · 2 n + 32 3 + 4 k 2 − 2 n + 2 − 4 k ε 3 + 2 n − 8 − 8 k 2 + 3 · 2 n + 1 − 6 k ε 2 + − 2 2 n + 4 3 + 2 n + 4 k 2 + − 2 n + 1 + 2 k ε − k 2 + 2 n − 1 2 ε n − 1 − 2 n − 1 2 ε n + k 2 p n 0 | π n k − 2 n − 1 2 − k 2 × p n 2 n − 2 | π n k
  2. b.

    k=2n−1

    β n ′ 0 , 2 n − 1 = 2 − n − 1 α n ′ 0 , 2 n − 1 − 2 2 n − 1 2 ε n + 2 2 n − 1 2 ε n − 1 + 2 · 2 2 n − 8 · 2 n + 6 ε 2 + 2 n + 1 − 2 ε − 2 n − 1 2

Case 2. 1≤i≤2n − 1, i odd; 2n−1≤i≤2n − 2, i even

  1. a.

    k=0

    β n ′ i , 0 = 2 − n − 1 α n ′ i , 0 − 2 2 n + 2 3 − 6 · 2 n + 32 3 + 4 i 2 − 2 n + 2 − 4 i ε 3 − 2 n − 8 − 8 i 2 + 3 · 2 n + 1 − 6 i ε 2 − − 2 2 n + 4 3 + 2 n + 4 i 2 + − 2 n + 1 + 2 i ε + i 2 − 2 n − 1 2 × ε n − 1 + 2 n − 1 2 ε n − i 2 p n 0 | π n i + 2 n − 1 2 − i 2 p n 2 n − 2 | π n i
  2. b.

    1≤k≤2n − 1, k odd; 2n−1≤k≤2n − 2, k even; k≠i

    β n ′ i , k = 2 − n − 1 α n ′ i , k + − 4 i 2 + 4 k 2 + 2 n + 2 − 4 × i − k ε 3 + 8 i 2 − 8 k 2 + − 3 · 2 n + 1 + 6 i − k ε 2 + − 4 i 2 + 4 k 2 + 2 n + 1 − 2 i − k × ε + i 2 − k 2 − i 2 p n 0 | π n i + 2 n − 1 2 − i 2 p n 2 n − 2 | π n i + k 2 p n 0 | π n k − 2 n − 1 2 − k 2 × p n 2 n − 2 | π n k
  3. c.

    k=2n−1

    β n ′ i , 2 n − 1 = 2 − n − 1 α n ′ i , 2 n − 1 − 2 2 n + 2 3 − 6 · 2 n + 32 3 + 4 i 2 − 2 n + 2 − 4 i ε 3 + 2 2 n + 1 − 9 · 2 n + 14 + 8 i 2 + − 3 · 2 n + 1 + 6 i ε 2 + 2 2 n − 10 3 + 2 n − 4 i 2 + 2 n + 1 − 2 i × ε + i 2 − 2 n − 1 2 + 2 n − 1 2 ε n − 1 − 2 n − 1 2 × ε n − i 2 p n 0 | π n i + 2 n − 1 2 − i 2 p n 2 n − 2 | π n i

Case 3. i=2n−1

  1. a.

    k=0

    β n ′ 2 n − 1 , 0 = 2 − n − 1 α n ′ 2 n − 1 , 0 + 2 2 n − 1 2 ε n − 2 2 n − 1 2 ε n − 1 − 2 · 2 2 n − 8 · 2 n + 6 ε 2 − 2 n + 1 − 2 ε + 2 n − 1 2
  2. b.

    1≤k≤2n − 1, k odd; 2n−1≤k≤2n − 2, k even

    β n ′ 2 n − 1 , k = 2 − n − 1 α n ′ 2 n − 1 , k + 2 2 n + 2 3 − 6 · 2 n + 32 3 + 4 k 2 − 2 n + 2 − 4 k ε 3 − 2 2 n + 1 − 9 · 2 n + 14 + 8 k 2 + − 3 · 2 n + 1 + 6 k ε 2 − 2 2 n − 10 3 + 2 n − 4 k 2 + 2 n + 1 − 2 k × ε − k 2 + 2 n − 1 2 − 2 n − 1 2 ε n − 1 + 2 n − 1 2 ε n + k 2 p n 0 | π n k − 2 n − 1 2 − k 2 p n 2 n − 2 | π n k

Proof of Corollary 1

To prove Corollary 1, the following corollaries are first established.

Corollary 3

If $0 \leq i \leq 2^n - 1$, then

$$\sum_{j=0}^{2^n-1} j\, p_n(j \mid i) = (2^n-1)\varepsilon + i(1-2\varepsilon),$$
(38)

and

$$\sum_{j=0}^{2^n-1} j\, p_n\!\left(\pi_n^{\mathrm{CNC}}(j) \mid \pi_n^{\mathrm{CNC}}(i)\right) = \begin{cases} (2^n-1)\varepsilon + (1-2\varepsilon)(i+\varepsilon), & i \text{ even} \\ (2^n-1)\varepsilon + (1-2\varepsilon)(i-\varepsilon), & i \text{ odd}. \end{cases}$$
(39)

Corollary 3 was proved in [7].

Corollary 4

∑ j = 2 j even 2 n − 4 j p n j | i = 2 n − 2 ε n − 2 n − 2 ε n − 1 − 2 n − 2 ε 2 + 2 n − 2 ε i = 0 2 n − 2 ε 2 + i − 1 1 − 2 ε ε − 2 n − 2 p n 2 n − 2 | i 1 ≤ i ≤ 2 n − 1 and i is odd 2 n − 2 1 − ε 2 − 2 n − 2 1 − ε n i = 2 n − 2
(40)

Proof.

∑ j = 2 j even 2 n − 4 j p n j | 0 = ∑ j = 2 j even 2 n − 2 j p n j | 0 − 2 n − 2 p n 2 n − 2 | 0 = 2 1 − ε ∑ j = 0 2 n − 1 − 1 j p n − 1 j | 0 − 2 n − 2 1 − ε ε n − 1 = 2 n − 2 ε n − 2 n − 2 ε n − 1 − 2 n − 2 ε 2 + 2 n − 2 ε
∑ j = 2 j even 2 n − 4 j p n j | 2 n − 2 = ∑ j = 2 j even 2 n − 2 j p n j | 2 n − 2 − 2 n − 2 p n 2 n − 2 | 2 n − 2 = 2 1 − ε ∑ j = 0 2 n − 1 − 1 j p n − 1 j | 2 n − 1 − 1 − 2 n − 2 1 − ε n = 2 n − 2 1 − ε 2 − 2 n − 2 1 − ε n

If 1≤i≤2n−1 and i is odd, then

∑ j = 2 j even 2 n − 4 j p n j | i = ∑ j = 2 j even 2 n − 2 j p n j | i − 2 n − 2 p n 2 n − 2 | i = 2 ε ∑ j = 0 2 n − 1 − 1 j p n − 1 j i − 1 2 − 2 n − 2 p n 2 n − 2 | i = 2 n − 2 ε 2 + i − 1 1 − 2 ε ε − 2 n − 2 p n 2 n − 2 | i .

Denote $a_n(i) \triangleq \sum_{j \in E^c} j\, p_n(\pi_n^{\mathrm{CNC}}(j) \mid \pi_n^{\mathrm{CNC}}(i)) + i \sum_{j \in E} p_n(\pi_n^{\mathrm{CNC}}(j) \mid \pi_n^{\mathrm{CNC}}(i))$.

Corollary 5

a n i = ε + 2 n − 3 ε 2 + 2 n − 1 ε n − 1 − 2 n − 1 ε n i = 0 − 2 n + 2 i + 1 ε 2 + 2 n − 2 i − 1 ε + i + 2 n − i − 1 p n 2 n − 2 | i − i p n 0 | i 1 ≤ i ≤ 2 n − 1 − 1 and i is odd − 2 n + 2 i + 1 ε 2 + 2 n − 2 i − 1 ε + i + 2 n − i − 1 p n 2 n − 2 | i + 1 − i p n 0 | i + 1 2 n − 1 ≤ i ≤ 2 n − 2 and i is even 2 n − 1 ε n − 2 n − 1 ε n − 1 + − 2 n + 3 ε 2 − ε + 2 n − 1 i = 2 n − 1
(41)

Proof.

∑ j ∈ E c j p n π n CNC j | π n CNC i + i ∑ j ∈ E p n π n CNC j | π n CNC i = ∑ j = 0 2 n − 1 j p n π n CNC j | π n CNC i − ∑ j = 2 j even 2 n − 1 − 2 j p n j | π n CNC i − ∑ j = 2 n − 1 + 1 j odd 2 n − 3 j p n j − 1 | π n CNC i + i ∑ j = 2 j even 2 n − 1 − 2 p n j | π n CNC i + ∑ j = 2 n − 1 + 1 j odd 2 n − 3 p n j − 1 | π n CNC i = ∑ j = 0 2 n − 1 j p n π n CNC j | π n CNC i − ∑ j = 2 j even 2 n − 1 − 2 j p n j | π n CNC i − ∑ j = 2 n − 1 j even 2 n − 4 j + 1 p n j | π n CNC i + i ∑ j = 2 j even 2 n − 1 − 2 p n j | π n CNC i + ∑ j = 2 n − 1 j even 2 n − 4 p n j | π n CNC i = ∑ j = 0 2 n − 1 j p n π n CNC j | π n CNC i − ∑ j = 2 j even 2 n − 4 j p n j | π n CNC i − ∑ j = 2 n − 1 j even 2 n − 4 p n j | π n CNC i + i ∑ j = 2 j even 2 n − 4 p n j | π n CNC i

If i=0, then

∑ j ∈ E c j p n π n CNC j | π n CNC i + i ∑ j ∈ E p n π n CNC j | π n CNC i = ∑ j = 0 2 n − 1 j p n π n CNC j | 0 − ∑ j = 2 j even 2 n − 4 j p n j | 0 − ∑ j = 2 n − 1 j even 2 n − 4 p n j | 0 = ∑ j = 0 2 n − 1 j p n π n CNC j | 0 − ∑ j = 2 j even 2 n − 4 j p n j | 0 − ε 1 − ε ∑ j = 0 2 n − 2 − 1 p n − 2 j | 0 − p n 2 n − 2 | 0 = ε + 2 n − 3 ε 2 + 2 n − 1 ε n − 1 − 2 n − 1 ε n .

The proof of the case that i=2n−1 is very similar to the above.

If 1≤i≤2n−1−1 and i is odd, then

∑ j ∈ E c j p n π n CNC j | π n CNC i + i ∑ j ∈ E p n π n CNC j | π n CNC i = ∑ j = 0 2 n − 1 j p n π n CNC j | π n CNC i − ∑ j = 2 j even 2 n − 4 j p n j | i − ∑ j = 2 n − 1 j even 2 n − 4 p n j | i + i ∑ j = 2 j even 2 n − 4 p n j | i = ∑ j = 0 2 n − 1 j p n π n CNC j | π n CNC i − ∑ j = 2 j even 2 n − 4 j p n j | i − ∑ j = 2 n − 1 j even 2 n − 4 p n j | i + i ∑ j = 0 j even 2 n − 2 p n j | i − p n 0 | i − p n 2 n − 2 | i = ∑ j = 0 2 n − 1 j p n π n CNC j | π n CNC i − ∑ j = 2 j even 2 n − 4 j p n j | i − ∑ j = 2 n − 1 j even 2 n − 4 p n j | i + i ε ∑ j = 0 2 n − 1 − 1 p n − 1 j i − 1 2 − p n 0 | i − p n 2 n − 2 | i = − 2 n + 2 i + 1 ε 2 + 2 n − 2 i − 1 ε + i + 2 n − i − 1 p n 2 n − 2 | i − i p n 0 | i .

The proof of the case that 2n−1≤i≤2n−2 and i is even is very similar to the above.

Clearly,

$$\alpha_n'(i,k) = a_n(i) - a_n(k).$$
(42)

After substituting (41) for a n (i) and a n (k) in (42) and merging like terms, α n ′ i , k is simplified.

Proof of Corollary 2

In order to prove Corollary 2, the following corollaries will be firstly evidenced.

Corollary 6

If 0≤i≤2n−1, then

$$\sum_{j=0}^{2^n-1} j^2\, p_n(j \mid i) = \frac{\varepsilon}{3}(4^n-1) + \frac{2\varepsilon^2}{3}(2^n-1)(2^n-2) + 2i\varepsilon(1-2\varepsilon)(2^n-1) + i^2(1-2\varepsilon)^2.$$
(43)

Corollary 6 was proved in [7].

Corollary 7

∑ j = 2 j even 2 n − 4 j 2 p n j π n CNC i = 2 n − 2 2 ε n − 2 n − 2 2 ε n − 1 − 8 ε 3 3 2 n − 1 − 1 2 n − 1 − 2 + ε 2 3 2 2 n + 1 − 2 2 n − 3 · 2 n + 2 + 20 + 4 3 4 n − 1 − 1 ε i = 0 8 3 2 n − 1 − 1 2 n − 1 − 2 − i − 1 2 n + 2 − 8 + 4 i − 1 2 ε 3 + 4 3 4 n − 1 − 1 + i − 1 2 n + 1 − 4 − 4 i − 1 2 ε 2 + i − 1 2 ε − 2 n − 2 2 p n 2 n − 2 | i 1 ≤ i ≤ 2 n − 1 and i odd 4 1 − ε ε 3 4 n − 1 − 1 + 2 ε 2 3 2 n − 1 − 1 2 n − 1 − 2 + 2 n − 1 − 1 2 1 − 2 ε − 2 n − 2 2 1 − ε n i = 2 n − 2
(44)

Proof.

If i=0, then

∑ j = 2 j even j 2 p n j | 0 = 4 1 − ε ∑ j = 0 2 n − 1 − 1 j 2 p n − 1 j | 0 − 2 n − 1 − 1 2 p n − 1 2 n − 1 − 1 | 0 = 2 n − 2 2 ε n − 2 n − 2 2 ε n − 1 − 8 ε 3 3 2 n − 1 − 1 2 n − 1 − 2 + ε 2 3 2 2 n + 1 − 2 2 n − 3 · 2 n + 2 + 20 + 4 3 4 n − 1 − 1 ε.

The proof of i=2n−2 case is similar to the above.

∑ j = 2 j even 2 n − 4 j 2 p n j | i = 4 ε ∑ j = 0 2 n − 1 − 1 j 2 p n − 1 j i − 1 2 − 2 n − 2 2 p n 2 n − 2 | i = 8 3 2 n − 1 − 1 2 n − 1 − 2 − i − 1 2 n + 2 − 8 + 4 i − 1 2 ε 3 + 4 3 4 n − 1 − 1 + i − 1 2 n + 1 − 4 − 4 i − 1 2 ε 2 + i − 1 2 ε − 2 n − 2 2 p n 2 n − 2 | i

Corollary 8

∑ j = 2 n − 1 j even 2 n − 4 j p n j π n CNC i = 2 n − 2 ε n − 2 n − 2 ε n − 1 + − 2 n − 1 + 2 ε 3 − 2 ε 2 + 2 n − 1 ε i = 0 2 n − 1 − 2 i ε 3 + 2 n − 1 + i − 1 ε 2 − 2 n − 2 p n 2 n − 2 | i 1 ≤ i ≤ 2 n − 1 − 1 and i odd 2 i + 2 − 3 · 2 n − 1 ε 3 + − 3 i − 2 + 3 · 2 n − 1 ε 2 + iε − 2 n − 2 p n 2 n − 2 | i + 1 2 n − 1 ≤ i ≤ 2 n − 2 and i even − 2 n − 2 1 − ε n + − 2 n − 1 + 2 ε 3 + 2 n + 1 − 6 ε 2 + − 5 · 2 n − 1 + 6 ε + 2 n − 2 i = 2 n − 1
(45)

Proof.

If i=0, then

∑ j = 2 n − 1 j even 2 n − 4 j p n j π n CNC i = ∑ j = 2 n − 1 j even 2 n − 4 j p n j | 0 = ∑ j = 0 j even 2 n − 1 − 2 j + 2 n − 1 p n j + 2 n − 1 | 0 − 2 n − 2 p n 2 n − 2 | 0 = ε ∑ j = 0 j even 2 n − 1 − 2 j + 2 n − 1 p n − 1 j | 0 − 2 n − 2 1 − ε ε n − 1 = 2 ε 1 − ε ∑ j = 0 2 n − 2 − 1 j p n − 2 j | 0 + 2 n − 1 ε 1 − ε × ∑ j = 0 2 n − 2 − 1 p n − 2 j | 0 + 2 n − 2 ε n − ε n − 1 = 2 n − 2 ε n − 2 n − 2 ε n − 1 + − 2 n − 1 + 2 ε 3 − 2 ε 2 + 2 n − 1 ε.

If 1≤i≤2n−1−1 and i odd, then

∑ j = 2 n − 1 j even 2 n − 4 j p n j π n CNC i = ∑ j = 2 n − 1 j even 2 n − 4 j p n j | i = ε ∑ j = 0 j even 2 n − 1 − 2 j p n − 1 j | i + 2 n − 1 ε ∑ j = 0 j even 2 n − 1 − 2 p n − 1 j | i − 2 n − 2 p n 2 n − 2 | i = 2 ε 2 ∑ j = 0 2 n − 2 − 1 j p n − 2 j i − 1 2 + 2 n − 1 ε 2 ∑ j = 0 2 n − 2 − 1 p n − 2 j i − 1 2 − 2 n − 2 p n 2 n − 2 | i = 2 n − 1 − 2 i ε 3 + 2 n − 1 + i − 1 ε 2 − 2 n − 2 p n 2 n − 2 | i .

The proof of the other two cases is similar to the above.

Denote $b_n(i) \triangleq \sum_{j \in E^c} j^2\, p_n(\pi_n^{\mathrm{CNC}}(j) \mid \pi_n^{\mathrm{CNC}}(i)) + i^2 \sum_{j \in E} p_n(\pi_n^{\mathrm{CNC}}(j) \mid \pi_n^{\mathrm{CNC}}(i))$.

Corollary 9

b n i = − 2 n − 1 2 ε n + 2 n − 1 2 ε n − 1 + 2 2 n + 1 3 − 5 · 2 n + 28 3 ε 3 + 2 2 n 3 + 2 n − 19 3 ε 2 + ε i = 0 − 2 2 n + 1 3 + 2 n − 4 3 − 4 i 2 + 2 n + 2 − 4 i ε 3 + 4 n + 5 3 + 8 i 2 + − 3 · 2 n + 1 + 6 i ε 2 + 2 2 n − 1 3 − 2 n − 4 i 2 + 2 n + 1 − 2 i ε + i 2 − i 2 p n 0 | i + 2 n − 1 2 − i 2 p 2 n − 2 | i 1 ≤ i ≤ 2 n − 1 − 1 and i odd − 2 2 n + 1 3 + 2 n − 4 3 − 4 i 2 + 2 n + 2 − 4 i ε 3 + 4 n + 5 3 + 8 i 2 + − 3 · 2 n + 1 + 6 i ε 2 + 2 2 n − 1 3 − 2 n − 4 i 2 + 2 n + 1 − 2 i ε + i 2 − i 2 p n 0 | i + 1 + 2 n − 1 2 − i 2 p 2 n − 2 | i + 1 2 n − 1 ≤ i ≤ 2 n − 2 and i even 2 n − 1 2 ε n − 2 n − 1 2 ε n − 1 + 2 2 n + 1 + 28 3 − 5 · 2 n ε 3 + − 5 · 4 n + 37 3 + 9 · 2 n ε + − 2 n + 1 + 3 ε + 2 n − 1 2 i = 2 n − 1
(46)

Proof.

∑ j ∈ E c j 2 p n π n CNC j | π n CNC i + i 2 ∑ j ∈ E p n π n CNC j | π n CNC i = ∑ j = 0 2 n − 1 j 2 p n π n CNC j | π n CNC i − ∑ j = 2 j even 2 n − 1 − 2 j 2 p n π n CNC j | π n CNC i − ∑ j = 2 n − 1 + 1 j odd 2 n − 3 j 2 p n π n CNC j | π n CNC i + i 2 ∑ j = 2 j even 2 n − 1 − 2 p n π n CNC j | π n CNC i + ∑ j = 2 n − 1 + 1 j odd 2 n − 3 p n π n CNC j | π n CNC i = ∑ j = 0 2 n − 1 j 2 p n π n CNC j | π n CNC i − ∑ j = 2 j even 2 n − 1 − 2 j 2 p n j | π n CNC i − ∑ j = 2 n − 1 j even 2 n − 4 j + 1 2 p n j | π n CNC i + i 2 ∑ j = 2 j even 2 n − 1 − 2 p n j | π n CNC i + ∑ j = 2 n − 1 j even 2 n − 4 p n j | π n CNC i = ∑ j = 0 2 n − 1 j 2 p n π n CNC j | π n CNC i − ∑ j = 2 j even 2 n − 4 j 2 p n j | π n CNC i − ∑ j = 2 n − 1 j even 2 n − 4 2 j + 1 p n j | π n CNC i + i 2 ∑ j = 2 j even 2 n − 4 p n j | π n CNC i = ∑ j = 0 2 n − 1 j 2 p n π n CNC j | π n CNC i − ∑ j = 2 j even 2 n − 4 j 2 p n j | π n CNC i − ∑ j = 2 n − 1 j even 2 n − 4 2 j + 1 p n j | π n CNC i + i 2 ∑ j = 2 j even 2 n − 4 p n j | π n CNC i

If i=0, then

b n 0 = ∑ j = 0 2 n − 1 j 2 p n π n CNC j | 0 − ∑ j = 2 j even 2 n − 4 j 2 p n j | 0 − ∑ j = 2 n − 1 j even 2 n − 4 2 j + 1 p n j | 0 = ∑ j = 0 2 n − 1 j 2 p n π n CNC j | 0 − ∑ j = 2 j even 2 n − 4 j 2 p n j | 0 − 2 ∑ j = 2 n − 1 j even 2 n − 4 j p n j | 0 − ε 1 − ε ∑ j = 0 2 n − 2 − 1 p n − 2 j | 0 − p n 2 n − 2 | 0 = − 2 n − 1 2 ε n + 2 n − 1 2 ε n − 1 + 2 2 n + 1 3 − 5 · 2 n + 28 3 ε 3 + 2 2 n 3 + 2 n − 19 3 ε 2 + ε.

If 1≤i≤2n−1−1 and i odd, then

b n i = ∑ j = 0 2 n − 1 j 2 p n π n CNC j | i − ∑ j = 2 j even 2 n − 4 j 2 p n j | i − ∑ j = 2 n − 1 j even 2 n − 4 2 j + 1 p n j | i + i 2 ∑ j = 2 j even 2 n − 4 p n j | i = ∑ j = 0 2 n − 1 j 2 p n π n CNC j | i − ∑ j = 2 j even 2 n − 4 j 2 p n j | i − 2 ∑ j = 2 n − 1 j even 2 n − 4 j p n j | i − ε 2 ∑ j = 0 2 n − 2 − 1 p n − 2 j i − 1 2 − p n 2 n − 2 | i + i 2 ∑ j = 2 j even 2 n − 4 p n j | i .

The proof of the other two cases is similar to the above.

Clearly,

β n ′ i , k = b n i − b n k .
(47)

After substituting (46) for b n (i) and b n (k) in (47) and merging like terms, β n ′ i , k is simplified.

References

  1. Gersho A, Gray R: Vector Quantization and Signal Compression. Kluwer, Norwell, MA; 1991.


  2. Hochwald B, Zeger K: Tradeoff between source and channel coding. IEEE Trans. Inform. Theory 1997, 43(5):1412-1424. 10.1109/18.623141


  3. Matloub S, Weissman T: Universal zero-delay joint source channel coding. IEEE Trans. Inform. Theory 2006, 52(12):5240-5250.


  4. Wang H, Tsaftaris S, Katsaggelos A: Joint source-channel coding for wireless object-based video communications utilizing data hiding. IEEE Trans. Image Process 2006, 15(8):2158-2169.


  5. Qiao D, Li Y, Zhang Y: Energy efficient video transmission over fast fading channels. EURASIP J. Wireless Commun. Netw 2010, 2010: 1-12.


  6. Ye Li, Reisslein M, Chakrabarti C: Energy-efficient video transmission over a wireless link. IEEE Trans. on Vehicular Technol. 2009, 58(3):1229-1244.


  7. Farber B, Zeger K: Quantizers with uniform decoders and channel-optimized encoders. IEEE Trans. Inform. Theory 2006, 52(2):640-661.


  8. Farber B, Zeger K: Quantizers with uniform encoders and channel optimized decoders. IEEE Trans. Inform. Theory 2004, 50(1):62-77. 10.1109/TIT.2003.821996


  9. Kumazawa H, Kasahara M, Namekawa T: A construction of vector quantizers for noisy channels. Electron. Eng. Jpn 1984, 67-B(4):39-47.


  10. Kurtenbach A, Wintz P: Quantizing for noisy channels. IEEE Trans. Commun. Technol 1969, COM-17(4):291-302.


  11. Zheng J, Rao D: Analysis of vector quantizers using transformed codebooks with application to feedback-based multiple antenna systems. EURASIP J. Wireless Commun. Netw 2008, 2008: 1-13.


  12. Dunham J, Gray RM: Joint source and noisy channel trellis encoding. IEEE Trans. Inform. Theory 1981, IT-27(4):516-519.


  13. Ho J, Yang E-H: Designing optimal multiresolution quantizers with error detecting codes. IEEE Trans. Wireless Com 2013, 12(7):3588-3599.


  14. Hagen R, Hedelin P: Robust vector quantization by a linear mapping of a block code. IEEE Trans. Inform. Theory 1999, 45(1):200-218. 10.1109/18.746788


  15. Skoglund M: On channel-constrained vector quantization and index assignment for discrete memoryless channels. IEEE Trans. Inform. Theory 1999, 45(6):2615-2622.


  16. Crimmins TR, Horwitz HM, Palermo CJ, Palermo RV: Minimization of mean-square error for data transmitted via group codes. IEEE Trans. Inform. Theory 1969, IT-15(1):72-78.


  17. McLaughlin SW, Neuhoff DL, Ashley JJ: Optimal binary index assignments for a class of equiprobable scalar and vector quantizers. IEEE Trans. Inform. Theory 1995, 41(6):2031-2037. 10.1109/18.476331


  18. Knagenhjelm P, Agrell E: The Hadamard transform - a tool for index assignment. IEEE Trans. Inform. Theory 1996, 42(4):1139-1151. 10.1109/18.508837


  19. Mehes A, Zeger K: Binary lattice vector quantization with linear block codes and affine index assignments. IEEE Trans. Inform. Theory 1998, 44(1):79-94. 10.1109/18.650990


  20. Mehes A, Zeger K: Randomly chosen index assignments are asymptotically bad for uniform sources. IEEE Trans. Inform. Theory 1999, 45(2):788-794. 10.1109/18.749030


  21. Yu X, Wang H, Yang E-H: Design and analysis of optimal noisy channel quantization with random index assignment. IEEE Trans. Inform. Theory 2010, 56(11):5796-5804.


  22. Farvardin N, Vaishampayan VA: Optimal quantizer design for noisy channel: an approach to combined source-channel coding. IEEE Trans. Inform. Theory 1987, IT-22(6):827-838.


  23. Farvardin N, Vaishampayan VA: On the performance and complexity of channel-optimized vector quantizers. IEEE Trans. Inform. Theory 1991, 37(1):155-160. 10.1109/18.61130


  24. Rankin DM, Gulliver TA: Single parity check product codes. IEEE Trans. Commun 2001, 49(8):1354-1362. 10.1109/26.939851



Acknowledgements

This work was supported in part by the National Natural Science Foundation of China under Grant 81101127 and in part by the Shenzhen Basic Research Funds under Grant JCYJ20120615140419045.


Corresponding author

Correspondence to Ye Li.

Additional information

Competing interests

The authors declare that they have no competing interests.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.



Cite this article

Qiao, D., Li, Y. Channel-optimized scalar quantizers with erasure correcting codes. J Wireless Com Network 2014, 99 (2014). https://doi.org/10.1186/1687-1499-2014-99

