
Channel-optimized scalar quantizers with erasure correcting codes

Abstract

This paper investigates the design of channel-optimized scalar quantizers with erasure correcting codes over binary symmetric channels (BSC). A new scalar quantizer with a uniform decoder and a channel-optimized encoder, aided by an erasure correcting code, is proposed. A lower bound on the performance of the new quantizer with the complemented natural code (CNC) index assignment is given. Then, in order to approach it, a single parity check code and the corresponding decoding algorithm are added to the encoder and decoder of the new quantizer, respectively. Analytical results show that the performance of the new quantizers with CNC is better than that of the original quantizers with CNC and with the natural binary code (NBC) when the crossover probability lies within a certain range.

1 Introduction

For a uniform scalar source transmitted over a noiseless channel, it is well known from [1] that the uniform scalar quantizer is optimal among all quantizers in terms of mean squared distortion (MSD). Over noisy channels, however, the uniform quantizer is no longer optimal. Joint source and channel coding has therefore attracted much attention [2, 3] and is seen as a promising scheme for effective data transmission over wireless channels thanks to its ability to cope with varying channel conditions [4–6]. Generally, there are two approaches to improving the performance of a quantizer over a noisy channel.

The first is to let the encoding cells of the quantizer encoder depend on the characteristics of the transmission channel, as in [7], or to let the placement of the reconstruction levels in the decoder depend on the channel characteristics, as in [8]; such designs are called channel-optimized quantizers. Necessary optimality conditions are given in [9–12]. Alternatively, an error detecting code can be cascaded with the quantizer at the expense of added transmission rate, as in [13].

The second is index assignment, a mapping of source code symbols to channel code symbols, studied in [14, 15]. The usual goal when designing an index assignment for a noisy channel is to minimize the end-to-end MSD over all possible index assignments. Well-known index assignments such as the natural binary code (NBC), the Gray code, and randomly chosen index assignments are studied on a binary symmetric channel (BSC) in [16–21]. In [16], the NBC is proved to be optimal for uniform scalar quantizers and a uniform source. In [17], McLaughlin et al. extended this result to uniform vector quantizers. Farber and Zeger [8] also proved the optimality of the NBC for a uniform source and quantizers with uniform encoders and channel-optimized decoders. In [7], they not only studied the NBC but also proposed a new affine index assignment, the complemented natural code (CNC).

Interestingly, it is known from [7] that for a quantizer with a uniform decoder, a channel-optimized encoder, and the CNC index assignment, some of the encoding cells become empty once the crossover probability exceeds a certain value; the same phenomenon is observed in [22, 23]. These empty cells can be viewed as redundancy, a form of implicit channel coding. Moreover, [7] implicitly assumes that the transmitter knows the transition error probabilities. If this assumption holds, the decoder knows the encoding cells of the encoder exactly and can judge whether a received index corresponds to an empty cell; however, [7] does not exploit this to correct received indices of empty cells.

In this paper, a genie-aided erasure code is applied to an idealized quantizer decoder that can correct every received index of an empty cell. A lower bound on the performance of the quantizer with a uniform decoder, a channel-optimized encoder, and the CNC index assignment is derived. The single parity check (SPC) block code is then used to approach this lower bound. The main scheme is as follows: at the transmitter, several indices are grouped into a block and parity check bits are appended to every block; at the receiver, a received index detected to belong to an empty cell is marked as an erasure, and the SPC code is used to correct erasures column by column within every block. In structure, the SPC block code resembles the SPC product code in [24], but their decoders are very different. The decoding scheme of the SPC block code is very simple, so only a little complexity is added.

The rest of this paper is organized as follows. Section 2 gives definitions and notation. In Section 3, the lower bound for distortion of quantizers with uniform decoders and channel-optimized encoders is analyzed. Section 4 introduces the SPC block code and gives the distortion of the quantizer appending it. In Section 5, analytical results are shown. Finally, the conclusion is presented in Section 6.

2 Background

Throughout this paper, a continuous real-valued source random variable X uniformly distributed on [0,1] is considered. Some of the mathematical notation and definitions in this section follow [7]. A rate-n (n-bit) quantizer is a mapping from the source to one of the real-valued codepoints (quantization levels) $y_n(i)$,

$$Q : \{X \mid X \in [0,1]\} \to \{y_n(i) \mid i = 0, 1, \ldots, 2^n - 1\}.$$

A rate 2 quantizer is plotted in Figure 1 as an example. The quantizer includes a quantizer encoder that is a mapping from source X to a certain index i

$$Q_e : \{X \mid X \in [0,1]\} \to \{i \mid i = 0, 1, \ldots, 2^n - 1\},$$

and a quantizer decoder that is a mapping from the index i to the codepoint $y_n(i)$,

$$Q_d : \{i \mid i = 0, 1, \ldots, 2^n - 1\} \to \{y_n(i) \mid i = 0, 1, \ldots, 2^n - 1\}.$$
Figure 1. The structure of a rate 2 quantizer.

In the encoder, the i th encoding cell can be denoted by the set

$$R_n(i) = Q_e^{-1}(i).$$

If $R_n(i) = \phi$, we say $R_n(i)$ is an empty cell. For most quantizers there are no empty encoding cells, but the kind of quantizer considered in [7], whose encoder may contain empty encoding cells, is studied here. The centroid of the i-th cell of the quantizer is given by the conditional expectation

$$c_n(i) = E\left[X \mid X \in R_n(i)\right].$$

For a noisy channel, an index assignment $\pi_n$, a permutation of the set $\{0, 1, \ldots, 2^n - 1\}$, is often used to combat noise. Then, if index J is received by a quantizer decoder with index assignment, a random variable $X \in [0,1]$ is quantized to the quantization level

$$Q_d\!\left(\pi_n^{-1}(J)\right) = y_n\!\left(\pi_n^{-1}(J)\right).$$

Then, the end-to-end mean squared error (MSE) can be written as

$$D(\pi_n) = E\left[\left(X - Q_d\!\left(\pi_n^{-1}(J)\right)\right)^2\right]. \qquad (1)$$

In this paper, we focus on quantizers with uniform decoders and channel-optimized encoders [7]. A quantizer decoder is said to be uniform if, for each i, the i-th codepoint satisfies

$$y_n(i) = \frac{i + \tfrac{1}{2}}{2^n}. \qquad (2)$$
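As a quick illustration (this snippet and its function name are ours, not part of [7]), the uniform decoder of (2) is simply a table of midpoints:

```python
# Minimal sketch: codepoints y_n(i) = (i + 1/2) / 2^n of a uniform n-bit decoder, eq. (2).
def uniform_codepoints(n):
    return [(i + 0.5) / 2**n for i in range(2**n)]

print(uniform_codepoints(2))  # [0.125, 0.375, 0.625, 0.875]
```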

A quantizer encoder is said to be a channel-optimized encoder if it satisfies the weighted nearest neighbor condition, that is,

$$W_i \subseteq R_n(i) \subseteq \overline{W_i},$$

where

$$W_i = \left\{ x : \sum_{j=0}^{2^n-1} \left(x - y_n(j)\right)^2 p_n\!\left(\pi_n(j) \mid \pi_n(i)\right) < \sum_{j=0}^{2^n-1} \left(x - y_n(j)\right)^2 p_n\!\left(\pi_n(j) \mid \pi_n(k)\right),\ \forall k \neq i \right\}. \qquad (3)$$

Here, $\overline{W}_i$ denotes the closure of $W_i$, and $p_n(j \mid i)$ denotes the probability that index j is received given that index i was sent. Assuming a binary symmetric channel with crossover probability ε, $p_n(j \mid i)$ can be defined as

$$p_n(j \mid i) = \varepsilon^{H_n(i,j)} \left(1 - \varepsilon\right)^{n - H_n(i,j)} \qquad (4)$$

for 0 ≤ ε ≤ 1/2, where $H_n(i,j)$ is the Hamming distance between the n-bit binary words i and j. Then, according to [7], a quantizer with a uniform decoder and a channel-optimized encoder satisfies, for all i,

$$\overline{R_n}(i) = \left\{ x \in [0,1] : \alpha_n(i,k)\, x \ge \beta_n(i,k),\ \forall k \neq i \right\}, \qquad (5)$$

where

$$\alpha_n(i,k) = \sum_{j=0}^{2^n-1} j \left[ p_n\!\left(\pi_n(j) \mid \pi_n(i)\right) - p_n\!\left(\pi_n(j) \mid \pi_n(k)\right) \right] \qquad (6)$$
$$\beta_n(i,k) = 2^{-(n+1)} \left\{ \alpha_n(i,k) + \sum_{j=0}^{2^n-1} j^2 \left[ p_n\!\left(\pi_n(j) \mid \pi_n(i)\right) - p_n\!\left(\pi_n(j) \mid \pi_n(k)\right) \right] \right\}. \qquad (7)$$
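For concreteness, the quantities in (4), (6), and (7) are easy to evaluate numerically. The sketch below is our own illustration (function names are ours, and it assumes the reconstructed forms of (6) and (7) above); the index assignment is passed in as a permutation list:

```python
# Sketch: BSC transition probability (4) and cell parameters (6)-(7).
def hamming(i, j):
    """Hamming distance H_n(i, j) between the binary words i and j."""
    return bin(i ^ j).count("1")

def p_bsc(j, i, n, eps):
    """p_n(j|i) = eps^H(i,j) * (1-eps)^(n-H(i,j)), eq. (4)."""
    h = hamming(i, j)
    return eps**h * (1 - eps)**(n - h)

def alpha(i, k, n, eps, pi):
    """alpha_n(i, k) of (6); pi is the index assignment as a permutation list."""
    return sum(j * (p_bsc(pi[j], pi[i], n, eps) - p_bsc(pi[j], pi[k], n, eps))
               for j in range(2**n))

def beta(i, k, n, eps, pi):
    """beta_n(i, k) of (7)."""
    s = sum(j**2 * (p_bsc(pi[j], pi[i], n, eps) - p_bsc(pi[j], pi[k], n, eps))
            for j in range(2**n))
    return (alpha(i, k, n, eps, pi) + s) / 2**(n + 1)

# Example with the identity (NBC) index assignment, n = 3, eps = 0.05.
nbc = list(range(8))
print(alpha(3, 4, 3, 0.05, nbc), beta(3, 4, 3, 0.05, nbc))
```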

Let an encoder-optimized uniform quantizer (EOUQ) denote a rate n quantizer with a uniform decoder and a channel-optimized encoder, along with a uniform source on [0,1], and a binary symmetric channel with crossover probability ε. For each n, the CNC index assignment [7] is defined by

$$\pi_n^{\mathrm{CNC}}(i) = \begin{cases} i, & 0 \le i \le 2^{n-1} - 1 \\ i + 1, & 2^{n-1} \le i \le 2^n - 2 \text{ and } i \text{ even} \\ i - 1, & 2^{n-1} + 1 \le i \le 2^n - 1 \text{ and } i \text{ odd.} \end{cases} \qquad (8)$$
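A minimal sketch of (8) (our own code): on the upper half of the index range, the CNC simply complements the least significant bit of the natural binary index, which is where its name comes from:

```python
# Sketch of the CNC index assignment (8).
def cnc(i, n):
    """pi_n^CNC(i): identity on the lower half, LSB complemented on the upper half."""
    return i if i < 2**(n - 1) else i ^ 1

print([cnc(i, 3) for i in range(8)])  # [0, 1, 2, 3, 5, 4, 7, 6]
```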

Denote

ε n = 2 n + 4 12 · 3 sin arctan τ / σ + π 3 cos arctan τ / σ + π 3 1 + 1 2
σ = 2 n 5 1 27 2 n 2 + 1 3
τ = 2 n 4 2 n 6 σ .

The encoding cells of an EOUQ with the CNC index assignment are given in [7] as follows.

If n ≥ 3 and ε ∈ [0, ε_n), then

R ¯ n i = 0 , δ 2 n ε 2 δ , for i = 0 2 n ε 2 δ , i + 1 δ 2 n + ε 2 + ε 1 + 2 ε , for 1 i 2 n 1 3 , i odd 2 n + ε 2 + ε 1 + 2 ε , i + 1 δ 2 n ε 2 δ , for 2 i 2 n 1 2 , i even 2 1 2 n δ ε 2 δ , 1 / 2 , for i = 2 n 1 1 1 / 2 , 2 1 + 2 n δ + ε 2 3 ε δ , for i = 2 n 1 2 n + ε 2 3 ε δ , i + 1 δ 2 n + 3 ε 2 1 + 2 ε , for 2 n 1 + 1 i 2 n 3 , i odd 2 n + 3 ε 2 1 + 2 ε , i + 1 δ 2 n + ε 2 3 ε δ , for 2 n 1 + 2 i 2 n 2 , i even 1 2 n δ + ε 2 3 ε δ , 1 , for i = 2 n 1
(9)

where δ=1−2ε.

If n ≥ 3 and ε ∈ [ε_n, 1/2), then

R ¯ n i = 0 , δ 2 n ε 2 δ , for i = 0 and ε < 1 / 2 n / 2 + 2 δ 2 n ε 2 δ , 4 δ + δ 2 2 n 1 + ε 0 , 1 , for i = 1 2 δ i 1 + δ 2 2 n + 1 + ε , 2 δ i + 1 + δ 2 2 n + 1 + ε , for 3 i 2 n 1 3 , i odd 2 n 4 δ + δ 2 + 2 n + 1 ε 2 n 1 , 1 / 2 , for i = 2 n 1 1 1 / 2 , 2 n + 2 δ + 1 4 ε 2 + 2 n + 1 ε 2 n 1 , for i = 2 n 1 2 δ i 1 + 1 4 ε 2 2 n + 1 + ε , 2 δ i + 1 + 1 4 ε 2 2 n + 1 + ε , for 2 n 1 + 2 i 2 n 4 , i even 2 n + 1 6 δ + 1 4 ε 2 2 n + 1 + ε , 1 2 n δ + ε 2 3 ε δ 0 , 1 , for i = 2 n 2 1 2 n δ + ε 2 3 ε δ , 1 0 , 1 , for i = 2 n 1 and ε < 1 / 2 n / 2 + 2 ϕ , else .
(10)

As above, when n ≥ 3 and ε ∈ [ε_n, 1/2), there exists an empty-cell set E with 2^{n-1} − 2 elements, consisting of all even numbers from 2 to 2^{n-1} − 2 and all odd numbers from 2^{n-1} + 1 to 2^n − 3, given by

$$E \triangleq \{ i : R_n(i) = \phi \} = \left\{ i : 2 \le i \le 2^{n-1} - 2,\ i \text{ even} \right\} \cup \left\{ i : 2^{n-1} + 1 \le i \le 2^n - 3,\ i \text{ odd} \right\}. \qquad (11)$$
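The set E is easy to enumerate; the small sketch below (our own code) reproduces (11) and matches the empty cells that appear later in Theorems 1 and 2:

```python
# Sketch: the empty-cell set E of (11) for a rate-n EOUQ with CNC, n >= 3.
def empty_cells(n):
    lower = [i for i in range(2, 2**(n - 1) - 1, 2)]         # even, 2 .. 2^(n-1) - 2
    upper = [i for i in range(2**(n - 1) + 1, 2**n - 2, 2)]  # odd, 2^(n-1)+1 .. 2^n - 3
    return lower + upper

print(empty_cells(3))  # [2, 5]
print(empty_cells(4))  # [2, 4, 6, 9, 11, 13]
```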

Let $D(\pi_n)$ denote the end-to-end MSD of an EOUQ with index assignment $\pi_n$. The MSE of an EOUQ with the CNC index assignment [7] is

$$D_{\mathrm{CNC}} = \begin{cases} D_1(n, \varepsilon), & 0 \le \varepsilon \le \varepsilon_n \\ D_2(n, \varepsilon), & \varepsilon_n \le \varepsilon \le 1/2^{\,n/2+2} \\ D_3(n, \varepsilon), & 1/2^{\,n/2+2} \le \varepsilon \le 1/2, \end{cases} \qquad (12)$$

where

D 1 n , ε = 2 2 n 3 1 + 2 ε 1 4 + 2 2 n + 5 2 ε 2 2 n + 1 15 · 2 n + 4 ε 2 + 6 2 2 n 2 n + 2 4 ε 3 + 2 n 4 2 n 2 ε 4 12 2 n 4 ε 5 D 2 n , ε = 2 3 n 3 2 n 3 + 2 n 3 2 2 n + 10 2 n + 1 + 48 ε 2 n 6 2 n 5 2 n 4 3 · 2 2 n ε 2 + 2 2 n 6 2 n 5 2 n 4 ε 3 + 12 2 n 5 2 n 4 ε 4 + 24 2 n 4 ε 5 D 3 n , ε = 2 3 n 3 2 n + 3 + 2 n 3 2 2 n + 10 2 n 1 ε 2 n 6 2 n 5 2 n 4 3 · 2 2 n ε 2 + 2 2 n 6 2 n 5 2 n 4 ε 3 + 12 2 n 5 2 n 4 ε 4 + 24 2 n 4 ε 5 .

3 Channel-optimized quantizers with erasure correcting codes

For channel-optimized encoders, the key implicit assumption in [7] is that channel state information (CSI) is known to the transmitter. In a time division duplexing (TDD) system this is easy, because uplink and downlink share the same channel. In a frequency division duplexing (FDD) system it is much harder: generally, after the CSI is estimated at the receiver, it must be fed back to the transmitter through an extra reliable channel. In this paper, for a binary symmetric channel, the CSI consists only of the crossover probability ε. In other words, if the above assumption holds, then for a given channel-optimized encoder as in [7], the receiver knows the current ε exactly and can judge whether there are empty cells among the encoding cells and whether a received index belongs to one of them, because the encoding cells are determined by ε. Once an empty-cell index appears, the receiver should recognize that the received index was detected erroneously. However, in [7], an index that is known to be an error is still passed to the quantizer decoder.

In this section, it is assumed that the decoder can correct every received index of an empty cell using a genie-aided erasure correcting code, which is discussed in the next section. In other words, all encoding cells belonging to the set E are fixed to be empty in the encoder, and a received empty-cell index can be regarded as an erasure to be marked and then recovered in the decoder. Under this assumption, the probability that index j is received given that index i was sent becomes

$$\bar{p}_n\!\left(\pi_n(j) \mid \pi_n(i)\right) = \begin{cases} p_n\!\left(\pi_n(i) \mid \pi_n(i)\right) + \sum_{k \in E} p_n\!\left(\pi_n(k) \mid \pi_n(i)\right), & j = i \\ 0, & j \in E \\ p_n\!\left(\pi_n(j) \mid \pi_n(i)\right), & j \in E^c \text{ and } j \neq i, \end{cases} \qquad i \in E^c. \qquad (13)$$
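The following sketch (our own illustration, not the authors' code) evaluates (13) for CNC-mapped indices and checks that the modified probabilities still sum to one for every non-empty cell:

```python
# Sketch of the genie-aided transition probability (13): the probability mass of
# empty-cell indices is folded back onto the transmitted index i.
def hamming(i, j): return bin(i ^ j).count("1")

def p_bsc(j, i, n, eps):
    h = hamming(i, j)
    return eps**h * (1 - eps)**(n - h)

def cnc(i, n): return i if i < 2**(n - 1) else i ^ 1

def p_bar(j, i, n, eps, E):
    """p_bar_n(pi(j) | pi(i)) of (13), defined for i not in E."""
    if j in E:
        return 0.0
    p = p_bsc(cnc(j, n), cnc(i, n), n, eps)
    if j == i:
        p += sum(p_bsc(cnc(k, n), cnc(i, n), n, eps) for k in E)
    return p

# Sanity check for n = 3: the modified probabilities still sum to one.
E3 = {2, 5}
print(all(abs(sum(p_bar(j, i, 3, 0.1, E3) for j in range(8)) - 1) < 1e-12
          for i in range(8) if i not in E3))
```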

Then, similarly, if the proposed quantizer encoder satisfies the weighted nearest neighbor condition, the encoding cells should satisfy

$$W_i \subseteq R_n(i) \subseteq \overline{W_i}, \qquad i \in E^c,$$

where

$$W_i = \left\{ x : \sum_{j \in E^c} \left(x - y_n(j)\right)^2 p_n\!\left(\pi_n(j) \mid \pi_n(i)\right) + \sum_{j \in E} \left(x - y_n(i)\right)^2 p_n\!\left(\pi_n(j) \mid \pi_n(i)\right) < \sum_{j \in E^c} \left(x - y_n(j)\right)^2 p_n\!\left(\pi_n(j) \mid \pi_n(k)\right) + \sum_{j \in E} \left(x - y_n(k)\right)^2 p_n\!\left(\pi_n(j) \mid \pi_n(k)\right),\ \forall k \neq i \right\}. \qquad (14)$$

It is worth noting that the second term on each side of the inequality in $W_i$ differs from $W_i$ in (3). This term reflects that, upon receiving an empty-cell index, our proposed quantizer decoder is able to correct it. To make (14) easier to solve, it is rewritten as follows.

Lemma 1

For all i, the encoding cells of our proposed EOUQ satisfy,

$$\overline{R_n}(i) = \left\{ x \in [0,1] : \alpha_n(i,k)\, x \ge \beta_n(i,k),\ \forall k \neq i \right\}, \qquad (15)$$

where

$$\alpha_n(i,k) = \sum_{j \in E^c} j \left[ p_n\!\left(\pi_n(j) \mid \pi_n(i)\right) - p_n\!\left(\pi_n(j) \mid \pi_n(k)\right) \right] + \sum_{j \in E} \left[ i \cdot p_n\!\left(\pi_n(j) \mid \pi_n(i)\right) - k \cdot p_n\!\left(\pi_n(j) \mid \pi_n(k)\right) \right] \qquad (16)$$
$$\beta_n(i,k) = 2^{-(n+1)} \left\{ \alpha_n(i,k) + \sum_{j \in E^c} j^2 \left[ p_n\!\left(\pi_n(j) \mid \pi_n(i)\right) - p_n\!\left(\pi_n(j) \mid \pi_n(k)\right) \right] + \sum_{j \in E} \left[ i^2 \cdot p_n\!\left(\pi_n(j) \mid \pi_n(i)\right) - k^2 \cdot p_n\!\left(\pi_n(j) \mid \pi_n(k)\right) \right] \right\}. \qquad (17)$$

After substituting (8), $\alpha_n(i,k)$ and $\beta_n(i,k)$ can be simplified as in the following two corollaries.

Corollary 1

$\alpha_n(i,k)$ can be simplified as follows.

1. i = 0

   (a) 1 ≤ k ≤ 2^{n-1} − 1, k odd; 2^{n-1} ≤ k ≤ 2^n − 2, k even

    α n 0 , k = k + 2 n + 2 k + 2 ε + 2 n + 1 2 k 4 ε 2 + 2 n 1 ε n 1 2 n 1 ε n 2 n k 1 p n 2 n 2 | π n k + k · p n 0 | π n k
   (b) k = 2^n − 1

    α n 0 , 2 n 1 = 2 n + 1 + 2 ε + 2 n + 1 6 ε 2 + 2 n + 1 2 ε n 1 2 n + 1 2 ε n
2. 1 ≤ i ≤ 2^{n-1} − 1, i odd; 2^{n-1} ≤ i ≤ 2^n − 2, i even

   (a) k = 0

    α n i , 0 = i + 2 n 2 i 2 ε + 2 n + 1 + 2 i + 4 ε 2 + 2 n + 1 ε n 1 + 2 n 1 ε n + 2 n i 1 p n 2 n 2 | π n i i p n 0 | π n i
   (b) 1 ≤ k ≤ 2^{n-1} − 1, k odd; 2^{n-1} ≤ k ≤ 2^n − 2, k even; k ≠ i

    α n i , k = i k 2 i k ε + 2 i k ε 2 + 2 n i 1 p n 2 n 2 | π n i i · p n 0 | π n i 2 n k 1 p n 2 n 2 | π n k + k · p n 0 | π n k
   (c) k = 2^n − 1

    α n i , 2 n 1 = 2 n i 1 + 2 n 2 i ε + 2 i 2 ε 2 + 2 n 1 ε n 1 + 2 n + 1 ε n + 2 n i 1 p n 2 n 2 | π n i i p n 0 | π n i
3. i = 2^n − 1

   (a) k = 0

    α n 2 n 1 , 0 = 2 n 1 2 ε 2 n + 1 6 ε 2 2 n + 1 2 ε n 1 + 2 n + 1 2 ε n
   (b) 1 ≤ k ≤ 2^{n-1} − 1, k odd; 2^{n-1} ≤ k ≤ 2^n − 2, k even

    α n 2 n 1 , k = 2 n k 1 2 n 2 k ε 2 k 2 ε 2 2 n 1 ε n 1 + 2 n 1 ε n 2 n k 1 p n 2 n 2 | π n k + k p n 0 | π n k

Corollary 2

$\beta_n(i,k)$ can be simplified as shown in the Appendix (‘The simplification of $\beta_n(i,k)$’).

It follows from the above two corollaries that, for given i and k, $\alpha_n(i,k)$ and $\beta_n(i,k)$ depend only on the single variable ε, so the symbolic toolbox in Matlab can be used to solve the inequality set. The encoding cells $R_n(i)$ are solved for the 3-bit and 4-bit quantizers, as shown in the following two theorems.

Theorem 1

For 0≤ε≤0.5, the encoding cells for the 3-bit quantizer are

R 3 ¯ i = 0 , 1 + 42 ε 3 48 ε 2 + 12 ε 8 1 + 12 ε 3 15 ε 2 + 3 ε , for i = 0 1 + 42 ε 3 48 ε 2 + 12 ε 8 1 + 12 ε 3 15 ε 2 + 3 ε , 5 30 ε 3 + 6 ε 2 3 ε 8 2 6 ε 3 + 3 ε 2 3 ε , for i = 1 ϕ , for i = 2 5 30 ε 3 + 6 ε 2 3 ε 8 2 6 ε 3 + 3 ε 2 3 ε , 1 + 2 ε 3 2 ε 2 2 ε 3 2 ε + 1 , for i = 3 1 + 2 ε 3 2 ε 2 2 ε 3 2 ε + 1 , 11 18 ε 3 + 18 ε 2 21 ε 8 2 6 ε 3 + 3 ε 2 3 ε , for i = 4 ϕ , for i = 5 11 18 ε 3 + 18 ε 2 21 ε 8 2 6 ε 3 + 3 ε 2 3 ε , 7 + 54 ε 3 72 ε 2 + 12 ε 8 1 + 12 ε 3 15 ε 2 + 3 ε , for i = 6 7 + 54 ε 3 72 ε 2 + 12 ε 8 1 + 12 ε 3 15 ε 2 + 3 ε , 1 , for i = 7
(18)

Theorem 2

For 0≤ε≤0.5, the encoding cells for the 4-bit quantizer are

R 4 ¯ i = 0 , 1 + 240 ε 4 223 ε 3 55 ε 2 + 52 ε 16 1 + 30 ε 4 18 ε 3 23 ε 2 + 11 ε , for i = 0 1 + 240 ε 4 223 ε 3 55 ε 2 + 52 ε 16 1 + 30 ε 4 18 ε 3 23 ε 2 + 11 ε , 5 + 240 ε 4 173 ε 3 + 65 ε 2 13 ε 16 2 + 30 ε 4 21 ε 3 + 2 ε 2 + 3 ε , for i = 1 ϕ , for i = 2 5 + 240 ε 4 173 ε 3 + 65 ε 2 13 ε 16 2 + 30 ε 4 21 ε 3 + 2 ε 2 + 3 ε , 9 + 37 ε 3 33 ε 2 4 ε 32 1 + ε 3 + ε 2 2 ε , for i = 3 ϕ , for i = 4 9 + 37 ε 3 33 ε 2 4 ε 32 1 + ε 3 + ε 2 2 ε , 13 + 240 ε 4 335 ε 3 + 115 ε 2 20 ε 16 2 + 30 ε 4 43 ε 3 + 17 ε 2 4 ε , for i = 5 ϕ , for i = 6 13 + 240 ε 4 335 ε 3 + 115 ε 2 20 ε 16 2 + 30 ε 4 43 ε 3 + 17 ε 2 4 ε , 1 2 , for i = 7 1 2 , 19 + 240 ε 4 353 ε 3 + 157 ε 2 44 ε 16 2 + 30 ε 4 43 ε 3 + 17 ε 2 4 ε , for i = 8 ϕ , for i = 9 19 + 240 ε 4 353 ε 3 + 157 ε 2 44 ε 16 2 + 30 ε 4 43 ε 3 + 17 ε 2 4 ε , 23 5 ε 3 + 65 ε 2 60 ε 32 1 + ε 3 + ε 2 2 ε , for i = 10 ϕ , for i = 11 23 5 ε 3 + 65 ε 2 60 ε 32 1 + ε 3 + ε 2 2 ε , 27 + 240 ε 4 163 ε 3 33 ε 2 + 61 ε 16 2 + 30 ε 4 21 ε 3 + 2 ε 2 + 3 ε , for i = 12 ϕ , for i = 13 27 + 240 ε 4 163 ε 3 33 ε 2 + 61 ε 16 2 + 30 ε 4 21 ε 3 + 2 ε 2 + 3 ε , 15 + 240 ε 4 65 ε 3 313 ε 2 + 124 ε 16 1 + 30 ε 4 18 ε 3 23 ε 2 + 11 ε , for i = 14 15 + 240 ε 4 65 ε 3 313 ε 2 + 124 ε 16 1 + 30 ε 4 18 ε 3 23 ε 2 + 11 ε , 1 , for i = 15
(19)
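The closed forms in Theorems 1 and 2 were obtained symbolically; as a cross-check, the cells can also be computed numerically. The sketch below is our own code (not the paper's Matlab script), and it assumes the reconstructed inequality form of Lemma 1, i.e., $\alpha_n(i,k)\,x \ge \beta_n(i,k)$, evaluating (16)–(17) for the CNC assignment and intersecting the resulting half-lines:

```python
# Numerical sketch: encoding cells of the modified EOUQ with CNC via Lemma 1.
def hamming(i, j): return bin(i ^ j).count("1")

def p_bsc(j, i, n, eps):
    h = hamming(i, j)
    return eps**h * (1 - eps)**(n - h)

def cnc(i, n): return i if i < 2**(n - 1) else i ^ 1

def empty_cells(n):
    return set(range(2, 2**(n - 1) - 1, 2)) | set(range(2**(n - 1) + 1, 2**n - 2, 2))

def cell(i, n, eps):
    """Closure of R_n(i), i not in E, from the conditions alpha_n(i,k)*x >= beta_n(i,k)."""
    E = empty_cells(n)
    if i in E:
        return None                       # cells in E are fixed empty by construction
    lo, hi = 0.0, 1.0
    for k in range(2**n):
        if k == i or k in E:
            continue
        a = s = 0.0
        for j in range(2**n):
            pj_i = p_bsc(cnc(j, n), cnc(i, n), n, eps)
            pj_k = p_bsc(cnc(j, n), cnc(k, n), n, eps)
            if j not in E:                # first sums in (16) and (17)
                a += j * (pj_i - pj_k)
                s += j**2 * (pj_i - pj_k)
            else:                         # second sums in (16) and (17)
                a += i * pj_i - k * pj_k
                s += i**2 * pj_i - k**2 * pj_k
        b = (a + s) / 2**(n + 1)
        if a > 0:
            lo = max(lo, b / a)
        elif a < 0:
            hi = min(hi, b / a)
    return (lo, hi) if lo <= hi else None

# Example: endpoints of the eight 3-bit cells at eps = 0.1 (i = 2 and i = 5 are empty).
for i in range(8):
    print(i, cell(i, 3, 0.1))
```

At ε = 0 this construction reduces to the noiseless intervals [i/2^n, (i+1)/2^n], which agrees with the endpoints of Theorem 1 evaluated at ε = 0.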

For i ∉ E, define the quantities

$$I_r(i) = \operatorname*{argmin}_{j \in E^c,\ c_n(j) > c_n(i)} c_n(j), \qquad I_l(i) = \operatorname*{argmax}_{j \in E^c,\ c_n(j) < c_n(i)} c_n(j), \qquad z_n(i) = \sup R_n(i),$$
$$V = \left\{ i : 1 \notin \overline{R}_n(i) \right\} \cap E^c, \qquad I_1 = V^c \cap E^c.$$

Let $D_{\pi_n}^{\mathrm{lower\ bound}}$ denote the end-to-end MSE of the modified EOUQ with index assignment $\pi_n$ and an appended genie-aided erasure correcting code, which serves as the lower bound for the MSE of the modified EOUQ.

Theorem 3

The lower bound for MSE of a modified EOUQ with the CNC index assignment is

D CNC lower bound = 1 3 2 n 1 + 2 2 n 2 + 2 n i V z n 2 i · α n i , I r i i E c j p n π n CNC j | π n CNC I 1 + i E i p n π n CNC j | π n CNC I 1 + 2 2 n i E c j + j 2 p n π n CNC j | π n CNC I 1 + i E i + i 2 p n π n CNC j | π n CNC I 1 .
(20)

Proof

According to (1) and the assumption we make,

D CNC lower bound = i E c j E c p n π n CNC j | π n CNC i × R n i x y n j 2 dx + i E c j E p n π n CNC j | π n CNC i × R n i x y n i 2 dx = i E c j E c p n π n CNC j | π n CNC i × R n i x 2 2 x y n j + y n 2 j dx + i E c j E p n π n CNC j | π n CNC i × R n i x 2 2 x y n i + y n 2 i dx = i E c j E c p n π n CNC j | π n CNC i × R n i x 2 2 x j + 0.5 2 n + j + 0.5 2 n 2 dx + i E c j E p n π n CNC j | π n CNC i × R n i x 2 2 x i + 0.5 2 n + i + 0.5 2 n 2 dx = 1 3 2 n 1 + 2 2 n 2 2 n i E c z n 2 i z n 2 I l i × i E c j p n π n CNC j | π n CNC i + i E i p n π n CNC j | π n CNC i + 2 2 n i E c z n i z n I l i × i E c j + j 2 p n π n CNC j | π n CNC i + i E i + i 2 p n π n CNC j | π n CNC i .

After re-expressing $I_l(i)$ in the above formula in terms of $I_r(i)$, and merging like terms according to the definitions of $\alpha_n(i,k)$ and $\beta_n(i,k)$,

D CNC lower bound = 1 3 2 n 1 + 2 2 n 2 + 2 n i V z n 2 · α n i , I r i j E c j p n π n CNC j | π n CNC I 1 + j E i p n π n CNC j | π n CNC I 1 + 2 2 n j E c j + j 2 p n π n CNC j | π n CNC I 1 + j E i + i 2 p n π n CNC j | π n CNC I 1 .

After substituting (18) and (19), respectively, into Theorem 3, the following results are obtained.

Theorem 4

The lower bound for MSE of a 3-bit modified EOUQ with the CNC index assignment is

D CNC lower bound = 5 + 243 ε + 135 ε 2 + 579 ε 3 66 , 276 ε 6 + 39 , 330 ε 5 13 , 932 ε 4 + 13 , 176 ε 9 59 , 184 ε 8 + 84 , 888 ε 7 768 1 + 12 ε 3 15 ε 2 + 3 ε 2 6 ε 3 + 3 ε 2 3 ε
(21)

Theorem 5

The lower bound for MSE of a 4-bit modified EOUQ with the CNC index assignment is

D CNC lower bound = 11 , 740 ε + 5 , 884 ε 2 476 , 023 ε 3 + 3 , 328 , 344 ε 4 1 , 195 , 090 , 122 ε 12 5 , 134 , 292 , 766 ε 15 + 4 , 009 , 702 , 089 ε 14 801 , 350 , 784 ε 13 + 156 , 129 , 623 ε 8 107 , 373 , 749 ε 7 + 47 , 124 , 939 ε 6 14 , 855 , 907 ε 5 + 1 , 183 , 613 , 619 ε 11 441 , 562 , 375 ε 10 43 , 375 , 642 ε 9 + 2 , 844 , 849 , 870 ε 16 + 155 , 520 , 000 ε 19 518 , 967 , 000 ε 18 143 , 067 , 600 ε 17 + 52 / 12 , 288 1 + 30 ε 4 18 ε 3 23 ε 2 + 11 ε × 2 30 ε 4 + 21 ε 3 2 ε 2 3 ε × 1 + ε 3 + ε 2 2 ε 2 + 30 ε 4 43 ε 3 + 17 ε 2 4 ε
(22)

4 EOUQ with CNC aided by single parity check block code

A good erasure code for our proposed quantizers should have a strong ability to correct erasures but no ability to correct errors, because a code with error correcting capability would improve the performance of the quantizers in [7] just as much as that of our proposed quantizers, so the benefit would be equal. Thus, in this section, we focus on the SPC code to correct empty-cell indices that appear at the receiver, in order to approach the lower bound for the EOUQ with the CNC index assignment.

4.1 SPC Code

The SPC code is one of the most popular error detection codes because it is easy to implement, and it can also correct a single erasure. However, if there are multiple erasures, the typical decoding method for the single-erasure case fails to recover them. In this paper, we present a modified decoding method for the SPC code that handles the multiple-erasure case. The main idea is that, while one of the multiple erasures is being recovered, the other erasures are restored to the values they held before being marked. The multiple-erasure case is thus converted into several instances of the single-erasure case, so the typical single-erasure decoding method remains effective. Figure 2 illustrates the modified decoding algorithm for the SPC code. Since multiple erasures are recovered independently when they exist, the erasure recovery probability $\tilde{P}_c$ of the modified decoding method for the single- or multiple-erasure case equals the erasure recovery probability $P_c$ of the typical decoding method for the single-erasure case, i.e.,

$$\tilde{P}_c = P_c. \qquad (23)$$
Figure 2. An example of the modified decoding algorithm for the SPC code.
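A minimal sketch of the modified column decoder described above (our own code and variable names): each erased bit is re-derived from the received values of all other bits in its column, which is the single-erasure parity rule applied once per erasure:

```python
# Sketch of the modified SPC decoding for one column with possibly multiple erasures.
def recover_column(received, erased):
    """received: the k bits of one column (data bits followed by the parity bit);
    erased: set of row indices marked as erasures. Returns the decoded column."""
    out = list(received)
    for r in erased:
        # Single-erasure rule: with even column parity, the erased bit equals the XOR
        # of all other bits; the other erasures keep their received (pre-marking) values.
        out[r] = sum(received[m] for m in range(len(received)) if m != r) % 2
    return out

# Example: a 5-bit column with rows 1 and 3 marked as erasures.
print(recover_column([1, 0, 0, 1, 1], erased={1, 3}))
```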

Then, the probability P c that a single erasure can be recovered by the SPC typical decoder is given in the following theorem.

Theorem 6

If a single erasure is detected, the probability that the erasure can be recovered by SPC code is

$$P_c = \sum_{\substack{j=0 \\ j\ \mathrm{even}}}^{\bar{k}} C_{k-1}^{\,j}\,(1-\varepsilon)^{k-j-1}\,\varepsilon^{j}, \qquad (24)$$

where $C_{n}^{k}$ denotes the n-choose-k function, $\bar{k} = 2\lfloor (k-1)/2 \rfloor$, and $\lfloor x \rfloor$ denotes the largest integer less than or equal to x.
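Numerically, (24) is the probability that the k − 1 unerased bits of the column carry an even number of channel errors. A small sketch (our own code; the closed-form comparison in the comment is our own check, not taken from the paper):

```python
# Sketch: erasure-recovery probability P_c of Theorem 6.
from math import comb

def p_c(k, eps):
    """P_c of (24): an even number of channel errors among the k-1 unerased bits."""
    k_bar = 2 * ((k - 1) // 2)
    return sum(comb(k - 1, j) * (1 - eps)**(k - j - 1) * eps**j
               for j in range(0, k_bar + 1, 2))

# Cross-check against the standard even-binomial identity:
# sum_{j even} C(k-1, j) (1-eps)^(k-1-j) eps^j = (1 + (1 - 2*eps)^(k-1)) / 2.
print(p_c(5, 0.1), (1 + (1 - 2 * 0.1)**4) / 2)  # both 0.7048
```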

Proof.

If a single erasure appears in the received SPC codeword, all error events can be classified into the following cases.

Case 1. No error: when there are no errors in the column, the erasure can certainly be recovered. The recovery probability in this case is $(1-\varepsilon)^{k}$.

Case 2. One error: when there is one error in the column, the erasure can be recovered only if the single error happens to be located at the erased position. The recovery probability in this case is $(1-\varepsilon)^{k-1}\varepsilon$.

Case 3. Two errors: when there are two errors in the column, the erasure can be recovered only if neither error is located at the erased position. The recovery probability in this case is $C_{k-1}^{2}(1-\varepsilon)^{k-2}\varepsilon^{2}$.

Case 4. Odd number of errors: when there are i (odd) errors in the column, similarly to the one-error case, the erasure can be recovered only if one of the errors happens to be located at the erased position. The recovery probability in this case is $C_{k-1}^{\,i-1}(1-\varepsilon)^{k-i}\varepsilon^{i}$.

Case 5. Even number of errors: when there are j (even) errors in the column, similarly to the two-error case, the erasure can be recovered only if none of the errors is located at the erased position. The recovery probability in this case is $C_{k-1}^{\,j}(1-\varepsilon)^{k-j}\varepsilon^{j}$.

Summing over all cases, the probability that the single erasure can be recovered by the SPC code can be written as

$$P_c = (1-\varepsilon)^{k} + C_{k-1}^{2}(1-\varepsilon)^{k-2}\varepsilon^{2} + \cdots + C_{k-1}^{j}(1-\varepsilon)^{k-j}\varepsilon^{j} + \cdots + C_{k-1}^{1-1}(1-\varepsilon)^{k-1}\varepsilon + C_{k-1}^{3-1}(1-\varepsilon)^{k-3}\varepsilon^{3} + \cdots + C_{k-1}^{i-1}(1-\varepsilon)^{k-i}\varepsilon^{i} + \cdots,$$

where i is odd and j is even. Assuming k is odd, without loss of generality, let i = j + 1, so that

$$\begin{aligned}
P_c &= (1-\varepsilon)^{k} + C_{k-1}^{2}(1-\varepsilon)^{k-2}\varepsilon^{2} + \cdots + C_{k-1}^{j}(1-\varepsilon)^{k-j}\varepsilon^{j} + \cdots \\
&\quad + C_{k-1}^{1-1}(1-\varepsilon)^{k-1}\varepsilon + C_{k-1}^{3-1}(1-\varepsilon)^{k-3}\varepsilon^{3} + \cdots + C_{k-1}^{(j+1)-1}(1-\varepsilon)^{k-j-1}\varepsilon^{j+1} + \cdots \\
&= C_{k-1}^{0}\left[(1-\varepsilon)^{k} + (1-\varepsilon)^{k-1}\varepsilon\right] + C_{k-1}^{2}\left[(1-\varepsilon)^{k-2}\varepsilon^{2} + (1-\varepsilon)^{k-3}\varepsilon^{3}\right] + \cdots \\
&\quad + C_{k-1}^{j}\left[(1-\varepsilon)^{k-j}\varepsilon^{j} + (1-\varepsilon)^{k-j-1}\varepsilon^{j+1}\right] + \cdots + C_{k-1}^{k-1}\left[(1-\varepsilon)^{k-(k-1)}\varepsilon^{k-1} + (1-\varepsilon)^{k-(k-1)-1}\varepsilon^{(k-1)+1}\right] \\
&= C_{k-1}^{0}(1-\varepsilon)^{k-1} + C_{k-1}^{2}(1-\varepsilon)^{k-3}\varepsilon^{2} + \cdots + C_{k-1}^{j}(1-\varepsilon)^{k-j-1}\varepsilon^{j} + \cdots + C_{k-1}^{k-1}\varepsilon^{k-1} \\
&= \sum_{\substack{j=0 \\ j\ \mathrm{even}}}^{k-1} C_{k-1}^{j}(1-\varepsilon)^{k-j-1}\varepsilon^{j}.
\end{aligned}$$

Supposing k is even, also let i=j+1, so that

$$\begin{aligned}
P_c &= (1-\varepsilon)^{k} + C_{k-1}^{2}(1-\varepsilon)^{k-2}\varepsilon^{2} + \cdots + C_{k-1}^{j}(1-\varepsilon)^{k-j}\varepsilon^{j} + \cdots \\
&\quad + C_{k-1}^{1-1}(1-\varepsilon)^{k-1}\varepsilon + C_{k-1}^{3-1}(1-\varepsilon)^{k-3}\varepsilon^{3} + \cdots + C_{k-1}^{(j+1)-1}(1-\varepsilon)^{k-j-1}\varepsilon^{j+1} + \cdots \\
&= C_{k-1}^{0}\left[(1-\varepsilon)^{k} + (1-\varepsilon)^{k-1}\varepsilon\right] + C_{k-1}^{2}\left[(1-\varepsilon)^{k-2}\varepsilon^{2} + (1-\varepsilon)^{k-3}\varepsilon^{3}\right] + \cdots \\
&\quad + C_{k-1}^{j}\left[(1-\varepsilon)^{k-j}\varepsilon^{j} + (1-\varepsilon)^{k-j-1}\varepsilon^{j+1}\right] + \cdots + C_{k-1}^{k-2}\left[(1-\varepsilon)^{k-(k-2)}\varepsilon^{k-2} + (1-\varepsilon)^{k-(k-2)-1}\varepsilon^{(k-2)+1}\right] \\
&= C_{k-1}^{0}(1-\varepsilon)^{k-1} + C_{k-1}^{2}(1-\varepsilon)^{k-3}\varepsilon^{2} + \cdots + C_{k-1}^{j}(1-\varepsilon)^{k-j-1}\varepsilon^{j} + \cdots + C_{k-1}^{k-2}(1-\varepsilon)\varepsilon^{k-2} \\
&= \sum_{\substack{j=0 \\ j\ \mathrm{even}}}^{k-2} C_{k-1}^{j}(1-\varepsilon)^{k-j-1}\varepsilon^{j}.
\end{aligned}$$

Therefore, for any k≥2, the probability that the single erasure can be recovered by SPC code is

$$P_c = \sum_{\substack{j=0 \\ j\ \mathrm{even}}}^{2\lfloor (k-1)/2 \rfloor} C_{k-1}^{\,j}\,(1-\varepsilon)^{k-j-1}\,\varepsilon^{j}. \qquad (25)$$

4.2 SPC block code

In this paper, k − 1 transmitted indices together with parity bits are grouped into a k × n SPC block code, as shown in Figure 3. In an SPC block code, every index is converted to a binary word and placed row by row, and the bits in each column form an SPC code. If an index in a row is detected to belong to the empty-cell set, all entries in that row are marked as an erasure word. Here, erasure word denotes the entire set of erased bits in one row, and a bit of the erasure word is called an erasure bit. The modified SPC decoding method shown in Figure 2 is then used to recover every erasure bit column by column. Figure 4 illustrates the decoding algorithm for a 3-bit quantizer aided by a 5 × 3 SPC block code; a code sketch of the same procedure is given after the figures. As shown in Figure 4, if multiple rows are marked as erasure words, then while the erasure word in one row is being recovered, the erasure words in the other rows are restored to the values they held before being marked.

Figure 3. The structure of the SPC block code.

Figure 4. An example of the decoding algorithm for a 3-bit quantizer aided by a 5×3 SPC block code.
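The sketch below (our own code and names) walks through the scheme of Figures 3 and 4 for one block: k − 1 data rows plus a parity row are formed at the transmitter, and at the receiver any row whose index falls in the empty-cell set is re-derived column by column from the remaining rows:

```python
# Sketch of the k x n SPC block encoder and erasure-correcting decoder.
def to_bits(i, n):
    """n-bit binary word of index i, most significant bit first."""
    return [(i >> (n - 1 - b)) & 1 for b in range(n)]

def from_bits(bits):
    v = 0
    for b in bits:
        v = (v << 1) | b
    return v

def spc_encode(indices, n):
    """Form the k x n block: k-1 index rows plus one even-parity row."""
    rows = [to_bits(i, n) for i in indices]
    parity = [sum(col) % 2 for col in zip(*rows)]
    return rows + [parity]

def spc_decode(block, n, empty_set):
    """Mark rows whose index lies in the empty-cell set and re-derive each of their
    bits, column by column, from the received values of all other rows."""
    data = block[:-1]
    erased = {r for r, bits in enumerate(data) if from_bits(bits) in empty_set}
    decoded = []
    for r, bits in enumerate(data):
        if r not in erased:
            decoded.append(from_bits(bits))
            continue
        rec = [sum(block[m][c] for m in range(len(block)) if m != r) % 2
               for c in range(n)]
        decoded.append(from_bits(rec))
    return decoded

# Example for the 3-bit quantizer (empty cells {2, 5}) and a 5 x 3 block:
block = spc_encode([1, 3, 4, 6], 3)
block[1] = to_bits(2, 3)              # pretend index 3 was received as empty-cell index 2
print(spc_decode(block, 3, {2, 5}))   # -> [1, 3, 4, 6] when the other rows are clean
```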

To avoid confusion, we define I as the transmitted index, K as the input of the SPC block decoder, and J as the output of the SPC block decoder. Then, the SPC block code-aided transition probability $\tilde{p}_n(j \mid i)$ can be defined as follows.

Theorem 7

If aided by the SPC block code, the transition probability $\tilde{p}_n(j \mid i)$ that index j is output from the SPC block decoder given that index i was transmitted can be written as

$$\tilde{p}_n(j \mid i) = \begin{cases} p_n(j \mid i) + \sum_{k \in E} p_n(k \mid i) \left(1 - P_c\right)^{H_n(i,j)} P_c^{\,n - H_n(i,j)}, & j \in E^c \\ \sum_{k \in E} p_n(k \mid i) \left(1 - P_c\right)^{H_n(i,j)} P_c^{\,n - H_n(i,j)}, & j \in E. \end{cases} \qquad (26)$$

Proof.

$$\begin{aligned}
\tilde{p}_n(j \mid i) &\triangleq \tilde{p}_n(J = j \mid I = i) = \frac{\tilde{p}_n(J = j, I = i)}{\tilde{p}_n(I = i)} = \frac{\sum_{k} \tilde{p}_n(J = j, K = k, I = i)}{\tilde{p}_n(I = i)} \\
&= \frac{\sum_{k} \tilde{p}_n(J = j \mid K = k, I = i)\, \tilde{p}_n(K = k, I = i)}{\tilde{p}_n(I = i)} = \sum_{k} \tilde{p}_n(J = j \mid K = k, I = i)\, \tilde{p}_n(K = k \mid I = i) \\
&= \sum_{k} \tilde{p}_n(J = j \mid K = k, I = i)\, p_n(k \mid i),
\end{aligned} \qquad (27)$$

where $p_n(k \mid i)$ is defined in (4). Obviously, $\tilde{p}_n(K = k \mid I = i) = p_n(k \mid i)$.

Case 1: k ∉ E (the index input into the decoder does not belong to the empty cells)

$$\tilde{p}_n(J = j \mid K = k, I = i) = \begin{cases} 1, & k = j \text{ and } j \in E^c \\ 0, & k \neq j. \end{cases} \qquad (28)$$

This is because, if the input index does not belong to an empty cell, it is not marked as an erasure and is not changed by the SPC block decoder.

Case 2: k ∈ E (the index input into the decoder belongs to the empty cells)

In this case, $\tilde{p}_n(J = j \mid K = k, I = i)$ denotes the probability that index j is output from the SPC block decoder, given that index i was sent and index k, which belongs to the empty cells, is received and input into the SPC block decoder. All bits in the row where index k lies are marked as an erasure word. According to the proposed SPC block decoding algorithm shown in Figure 4, every erasure bit is recovered, column by column, by the modified SPC decoder of Figure 2. If multiple rows are marked as erasure words, then while the erasure word in one row is being recovered, the erasure words in the other rows are restored to the values they held before being marked. Recalling (23), the recovery probability $\tilde{P}_c$ for one erasure bit can be obtained from Theorem 6. Let N denote the number of bits of an erasure word that fail to be recovered; then the number of bits that are recovered successfully equals n − N. Then,

$$\tilde{p}_n(J = j \mid K = k, I = i) = \left(1 - \tilde{P}_c\right)^{N} \tilde{P}_c^{\,n - N} = \left(1 - P_c\right)^{N} P_c^{\,n - N}. \qquad (29)$$

Obviously, $N = H_n(i,j)$, where $H_n(i,j)$ is the Hamming distance between the n-bit binary words i and j.

Therefore,

$$\tilde{p}_n(J = j \mid K = k, I = i) = \left(1 - P_c\right)^{H_n(i,j)} P_c^{\,n - H_n(i,j)}. \qquad (30)$$

After substituting (28) and (30), (27) can be rewritten as

$$\tilde{p}_n(j \mid i) = \begin{cases} p_n(j \mid i) + \sum_{k \in E} p_n(k \mid i) \left(1 - P_c\right)^{H_n(i,j)} P_c^{\,n - H_n(i,j)}, & j \in E^c \\ \sum_{k \in E} p_n(k \mid i) \left(1 - P_c\right)^{H_n(i,j)} P_c^{\,n - H_n(i,j)}, & j \in E. \end{cases} \qquad (31)$$
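The transition probability of Theorem 7 is straightforward to evaluate; the sketch below is our own code (helpers re-declared so it runs standalone), and it also checks that $\tilde{p}_n(\cdot \mid i)$ sums to one for every i:

```python
# Sketch: SPC block code-aided transition probability of (26)/(31).
from math import comb

def hamming(i, j): return bin(i ^ j).count("1")

def p_bsc(j, i, n, eps):
    h = hamming(i, j)
    return eps**h * (1 - eps)**(n - h)

def empty_cells(n):
    return set(range(2, 2**(n - 1) - 1, 2)) | set(range(2**(n - 1) + 1, 2**n - 2, 2))

def p_c(k, eps):
    return sum(comb(k - 1, j) * (1 - eps)**(k - j - 1) * eps**j
               for j in range(0, 2 * ((k - 1) // 2) + 1, 2))

def p_tilde(j, i, n, eps, k):
    """p~_n(j | i) of (26): direct reception plus column-wise recovery of an erased row."""
    E, Pc = empty_cells(n), p_c(k, eps)
    h = hamming(i, j)
    rec = sum(p_bsc(m, i, n, eps) for m in E) * (1 - Pc)**h * Pc**(n - h)
    return rec if j in E else p_bsc(j, i, n, eps) + rec

# Sanity check: the outputs form a probability distribution over j for every i.
print(all(abs(sum(p_tilde(j, i, 3, 0.1, 5) for j in range(8)) - 1) < 1e-12
          for i in range(8)))
```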

Then, MSD can be written as

$$D_{\mathrm{CNC}}^{\mathrm{SPC}} = \sum_{i} \sum_{j} \tilde{p}_n\!\left(\pi_n^{\mathrm{CNC}}(j) \mid \pi_n^{\mathrm{CNC}}(i)\right) \int_{R_n(i)} \left(x - y_n(j)\right)^2 dx = \sum_{i} \sum_{j} \tilde{p}_n\!\left(\pi_n^{\mathrm{CNC}}(j) \mid \pi_n^{\mathrm{CNC}}(i)\right) \left[ \frac{x^3}{3} - y_n(j)\, x^2 + y_n^2(j)\, x \right]_{R_n(i)}.$$

Now, $D_{\mathrm{CNC}}^{\mathrm{SPC}}$ is a function of ε and k. Thus, we can use the symbolic toolbox in Matlab to obtain the exact expression for each special case, as follows.

For 3-bit quantizer, K=3

D CNC K = 3 = 7 , 962 , 624 ε 21 66 , 686 , 976 ε 20 + 264 , 508 , 416 ε 19 669 , 171 , 456 ε 18 + 1 , 226 , 192 , 256 ε 17 1 , 745 , 986 , 752 ε 16 + 2 , 013 , 793 , 056 ε 15 1 , 922 , 961 , 168 ε 14 + 1 , 526 , 564 , 664 ε 13 992 , 715 , 912 ε 12 + 509 , 652 , 360 ε 11 192 , 400 , 668 ε 10 + 44 , 669 , 958 ε 9 1 , 219 , 602 ε 8 2 , 647 , 899 ε 7 + 71 , 307 ε 6 + 553 , 449 ε 5 180 , 846 ε 4 + 4 , 059 ε 3 + 2 , 763 ε 2 + 501 ε + 10 / 768 2 + 6 ε 3 3 ε 2 + 3 ε 2 × 1 + 12 ε 3 15 ε 2 + 3 ε 2
(32)

For 3-bit quantizer, K=4

D CNC K = 4 = 63 , 700 , 992 ε 24 629 , 047 , 296 ε 23 + 2 , 944 , 180 , 224 ε 22 8 , 785 , 760 , 256 ε 21 + 18 , 977 , 504 , 256 ε 20 31 , 903 , 206 , 912 ε 19 + 43 , 683 , 065 , 856 ε 18 50 , 100 , 968 , 448 ε 17 + 48 , 911 , 316 , 480 ε 16 40 , 974 , 579 , 648 ε 15 + 29 , 499 , 005 , 376 ε 14 18 , 128 , 322 , 720 ε 13 + 9 , 340 , 103 , 424 ε 12 3 , 904 , 109 , 052 ε 11 + 1 , 250 , 869 , 332 ε 10 273 , 128 , 574 ε 9 + 26 , 813 , 814 ε 8 + 3 , 148 , 215 ε 7 + 92 , 601 ε 6 947 , 925 ε 5 + 277 , 830 ε 4 7 , 539 ε 3 3 , 735 ε 2 501 ε 10 / 768 1 + 12 ε 3 15 ε 2 + 3 ε 2 × 2 + 6 ε 3 3 ε 2 + 3 ε 2
(33)

For 3-bit quantizer, K=5

D CNC K = 5 = 509 , 607 , 936 ε 27 5 , 796 , 790 , 272 ε 26