Bootstrapped low complex iterative LDPC decoding algorithms for free-space optical communication links
EURASIP Journal on Wireless Communications and Networking volume 2023, Article number: 78 (2023)
Abstract
In recent years, much research has been devoted to free-space optical (FSO) communication. The unregulated spectrum, low implementation cost, and robust security of FSO systems lead to a wide range of applications for FSO links, from terrestrial to satellite communications. However, the fundamental limitation of FSO links is fading caused by atmospheric turbulence (AT), which significantly reduces link performance. Atmospheric turbulence is best characterized as a random phenomenon caused by changes in the air's refractive index over time. Numerous probability density functions have been proposed to model the randomness of AT channels: the Log-Normal (LN) model is used for weak atmospheric turbulence, while the Gamma–Gamma (G–G) model is selected for moderate and strong turbulence. The impacts of geometric losses, weather attenuation, and misalignment errors are addressed using the LN and G–G channels. Channel coding, such as low-density parity-check (LDPC) codes, is one of the possible solutions for mitigating these FSO channel impairments. In this article, the Weighted Bit Flipping (Algorithm (1)), Implementation Efficient Reliability Ratio Weighted Bit Flipping (Algorithm (2)), and Min-Sum (Algorithm (3)) algorithms are compared and evaluated over FSO atmospheric turbulence channels. In addition, two novel algorithms are proposed to reduce the complexity or enhance the bit error rate (BER) performance of LDPC decoding over FSO channels. The results show an impressive improvement of the coded FSO system when employing the proposed algorithms, compared to the existing LDPC decoders for FSO communications, in terms of all comparison parameters.
1 Introduction
Optical carriers make it possible to explore opportunities in wireless communications that remain untapped. Integrating optical carriers with electromagnetic-wave-based wireless communication systems has a significant impact on enabling and supporting future-generation heterogeneous wireless communications, and it supports an expanded range of applications and services.
Significant technical and operational leverage is achieved through several technologies, for example, free-space optics (FSO), wireless systems using radio frequency (RF) [1,2,3,4], satellite communications [5], backup operations employing optical fiber, and HDTV transmission.
FSO systems still suffer from various limitations, including atmospheric turbulence, weather-induced attenuation, and geometric losses, as well as scintillation of the laser beam caused by variations in refractive index, temperature, pressure, and wind [6]. Atmospheric turbulence is modeled using statistical models that fit the experimental results: the Log-Normal (LN) model [7] for weak turbulence and the Gamma–Gamma (G–G) model [8] for moderate and strong atmospheric turbulence regimes. Fading caused by atmospheric and weather conditions such as dust, rain, fog, snow, and haze has a significant impact on the performance of the FSO system [9, 10].
Various modulation techniques are employed to mitigate atmospheric turbulence according to their energy and spectral efficiencies and their non-coherent or coherent detection, for example, pulse position modulation (PPM) [11], on-off keying (OOK) [12], pulse width modulation (PWM) [13], multiple PPM [14], binary phase-shift keying (BPSK) [15], space shift keying [16], and digital pulse interval modulation (DPIM) [17]. Moreover, a new multipoint-to-multipoint signal-space diversity (SSD) cooperative FSO scheme is analyzed under G–G and LN channel models for various users utilizing different modulation levels in [16].
In [18], the performance of different modulation schemes, combined or not with the Space Diversity Reception Technique (SDRT), employed in FSO systems is investigated. The results are examined in terms of energy-per-bit to noise spectral density ratio (\(E_{b}/N_{o}\)), bit error rate (BER), and outage probability (OP). An M-ary Quadrature Amplitude Modulation (M-QAM) and Orthogonal Frequency-Division Multiplexing (OFDM) based FSO system can communicate over a long distance (7000 m) and reach data rates ranging from 4.1 Gbit/s (minimum) to 10.94 Gbit/s (maximum).
An optical communication receiving system was proposed in [19] utilizing regulation control based on the detector gain factor. It computed the scintillation variance from the real-time received signal and established a transfer function relationship between the gain and the scintillation variance factor, allowing closed-loop control of the detector gain factor and improving the receiving system's \(E_{b}/N_{o}\).
A significant analysis of FSO-based systems was performed in [20] by changing the beam's divergence under attenuating weather conditions such as snow and rain. The simulations use single-channel CSRZ-FSO (carrier-suppressed return-to-zero/free-space optical) systems with a 40 Gbps capacity at two different transceiver separation distances. Furthermore, meteorological attenuation conditions (wet snow, dry snow, and light, medium, and heavy rain) are considered when modeling the communication link. It was established that dry snow and heavy rain cause the most pronounced attenuation in terms of Q-factor.
A cooperative FSO communication system employing polar codes was proposed in [21]. The average upper bound of the bit error rate over the atmospheric turbulence channel was used to construct the polar code's frozen bit set. The frozen bit set information of the Source (S) to Relay (R) link is then retrieved using its connection with the frozen bit set of the S to Destination (D) link. Finally, the initial information is recovered by jointly decoding the information bits of the S–R and S–D links and applying equal gain combining. The BER performance of the cooperative FSO communication system can be enhanced by at least 0.5 dB, and the outage probability can be lowered below \(10^{-7}\).
Error control coding algorithms are among the most promising mitigation techniques for the atmospheric turbulence of FSO channels. Polar codes are analyzed in [22] to select the code required to achieve a \(10^{-9}\) bit error rate under weak atmospheric turbulence; scintillation indices of 0.12 and 0.2 are examined via Monte Carlo simulations. Employing a successive cancelation list (SCL) decoder delivers coding gains of 2.5 and 0.3 dB compared to the successive cancelation (SC) decoder, and over the FSO channel with a scintillation index of 0.31, a 2.5 dB coding gain is delivered. Also, in [23], polar codes are introduced and compared with LDPC codes; the authors conducted long-distance terrestrial FSO experiments over a 7.8 km link. LDPC codes achieved a lower BER than polar codes, while polar codes have lower complexity than LDPC codes.
The authors in [24] assessed uncoded and coded FSO communication system performance, adopting Bose–Chaudhuri–Hocquenghem (BCH) and low-density parity-check codes. Direct detection and intensity modulation are considered in both systems. The performance of the coded communication system was examined under various channel conditions (G–G with fast fading and slow fading channels representing various atmospheric turbulence cases). Additionally, the authors derived the mathematical expressions for the capacity of the fast fading channel and the outage capacity of the slow fading channel. They found that using error control codes (ECCs) alone in a slow fading channel yields limited BER improvement, while combining interleaving with ECCs in this channel mitigates burst errors and enhances the performance and the coding gain. The authors also show that there is no further enhancement in the coding gain in fast fading channels, and the same effect is seen in various atmospheric turbulence conditions.
The study in [25] proposed a soft decision algorithm termed the dynamically adjusted log-likelihood ratio (LLR) algorithm. Despite their enormous complexity, soft decision algorithms are well known for their high coding gain, and the proposed algorithm showed excellent immunity against various atmospheric turbulence conditions. In addition, the authors of [26] proposed a quasi-cyclic (QC) LDPC code with multiple-pulse-position modulation (MPPM) combined with iterative decoding using a simplified approximation to mitigate fading caused by atmospheric turbulence.
Most recently published research reveals that soft decision LDPC decoding algorithms are the best candidates for FSO channels. However, enhanced hard decision algorithms were proposed in [27] to improve FSO channel performance and perform close to soft decision algorithms. The algorithms introduced in that study are Weighted Bit Flipping (Algorithm (1)) and Implementation Efficient Reliability Ratio Weighted Bit Flipping (Algorithm (2)); the latter performed better than Algorithm (1) against weak, moderate, and strong atmospheric turbulence.
FSO communication channels are characterized by intense channel impairments, which impose the need for further enhancement, as FSO links require high data rate transmission with the fewest errors. Moreover, previous hard and soft decision LDPC decoding algorithms suffered from poor BER performance or high complexity, so more robust LDPC decoders are needed.
Novel hard and hybrid LDPC decoding algorithms are presented in this paper, termed Modified Implementation Efficient Reliability Ratio Weighted Bit Flipping (Proposed Algorithm (1)) and Bootstrapped Modified Implementation Efficient Reliability Ratio Weighted Bit Flipping (Proposed Algorithm (2)), to decrease the complexity or enhance the BER performance of the whole coded FSO system.
To the best of our knowledge, no recently published works have considered LDPC decoding algorithms such as Proposed Algorithm (1) and Proposed Algorithm (2) for enhancing FSO atmospheric turbulence channels. Furthermore, weather impacts such as attenuation, geometric losses, and misalignment errors are considered for the G–G and LN atmospheric channels. The Monte Carlo simulation results for both proposed algorithms show impressive enhancements of the coded FSO communication system compared with existing decoders in terms of BER performance and complexity.
The paper is organized as follows: Sect. 3 presents the FSO system model; Sect. 4 presents the FSO channel models; Sect. 5 illustrates the LDPC encoding and decoding algorithms; Sect. 6 presents the simulation results; finally, the paper is concluded in Sect. 7.
2 Methods/experimental
This study targets enhancement of FSO communication systems through forward error correction, represented by LDPC codes. Two novel LDPC decoding algorithms are proposed and evaluated over different reported channel models for FSO communications. The comparison parameters are the BER performance and the complexity of the whole system. The simulation tool used to express the new contributions over FSO channels is MATLAB 2018a.
3 FSO communication system model
As shown in Fig. 1, in the FSO communication system model the binary source data are LDPC encoded and mapped using the on-off keying technique. The resulting electrical signal is transformed into an optical beam by a laser at the transmitter side. The transmitted optical beam is exposed to weather attenuation, atmospheric turbulence, and path losses. The analytical expression of the received electrical signal r(t) is
$$r(t) = \eta I\, y(t) + n(t),$$

where y(t) is the transmitted electrical signal, \(\eta\) represents the responsivity of the detector, n(t) is the additive white Gaussian noise (AWGN) with variance \(\sigma ^{2}_{n} = N_{o}/2\) and zero mean, h represents the channel gain, \(I_{o}\) is the received light intensity without the effects of channel turbulence, and \(I = I_{o}h\) is the received signal intensity [28].
At the receiver, the photodiode collects the transmitted optical beam, and the maximum likelihood detector transforms the received electrical signal into binary format. The LDPC decoders then perform iterative decoding until the codeword is corrected, or a best-effort estimate is output once the predetermined number of iterations is reached.
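As a rough illustration of the received-signal model above, the following Python sketch simulates uncoded OOK transmission and threshold detection; the responsivity, intensity, and noise values are illustrative assumptions, not the paper's simulation parameters, and the LDPC coding and fading stages are omitted for brevity.

```python
import numpy as np

# Toy sketch of r(t) = eta*I*y(t) + n(t) with maximum-likelihood threshold
# detection. Parameter values are assumptions for illustration only.
rng = np.random.default_rng(1)
eta, I, N0 = 0.8, 1.0, 0.02              # responsivity, intensity, noise PSD
y = rng.integers(0, 2, 10_000)           # OOK symbols (LDPC-coded in the paper)
n = rng.normal(0.0, np.sqrt(N0 / 2), y.size)
r = eta * I * y + n                      # received electrical signal
b = (r > eta * I / 2).astype(int)        # midpoint decision threshold
ber = np.mean(b != y)                    # uncoded bit error rate
```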
4 Channel models
Multiple mathematical channel models have been proposed to specify atmospheric turbulence in weaktostrong cases. The channel models are described by their probability density functions (pdf) and are shown next.
4.1 Weak atmospheric turbulence FSO channel model
The LN channel model is used to model weak atmospheric turbulence. The pdf of the LN distribution is given by [29]

$$f_{h}(h) = \frac{1}{2h\sqrt{2\pi \sigma ^{2}}}\,\exp \!\left( -\frac{(\ln h - 2\mu )^{2}}{8\sigma ^{2}}\right) , \quad h > 0,$$
where \(h_i=\exp {(2Z)}\) is the channel coefficient, with Z an independent and identically distributed (i.i.d.) Gaussian random variable (RV) with mean \(\mu\), standard deviation \(\sigma\), and variance \(\sigma ^2\). To ensure that the fading channel neither amplifies nor attenuates the average power, the fading coefficients are normalized as \({\textbf{E}}[h_i] = e^{2(\mu +\sigma ^2)} = 1\), i.e., \(\mu = -\sigma ^2\).
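The LN fading coefficients above can be sampled directly; a minimal Python sketch, with an illustrative (assumed) turbulence strength \(\sigma\), confirms the unit-mean normalization:

```python
import numpy as np

# Weak-turbulence Log-Normal fading: h = exp(2Z), Z ~ N(mu, sigma^2),
# with mu = -sigma^2 so that E[h] = exp(2(mu + sigma^2)) = 1.
rng = np.random.default_rng(0)
sigma = 0.25                              # assumed turbulence strength
mu = -sigma**2                            # normalization: E[h] = 1
h = np.exp(2 * rng.normal(mu, sigma, 200_000))
```

The sample mean of `h` is close to one, so the fading neither amplifies nor attenuates the average power, matching the normalization above.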
4.2 Moderatetostrong channel model
The G–G channel model is used to model moderate-to-strong atmospheric turbulence, where \(h_i\) is a G–G-distributed RV with the following pdf:

$$f_{h}(h) = \frac{2(\alpha \beta )^{\frac{\alpha +\beta }{2}}}{\Gamma (\alpha )\Gamma (\beta )}\, h^{\frac{\alpha +\beta }{2}-1}\, K_{\alpha -\beta }\!\left( 2\sqrt{\alpha \beta h}\right) ,$$

with \(K_{\nu }(\cdot )\) the modified Bessel function of the second kind, equivalently expressible in terms of Meijer's G-function,
where \(G^{m,n}_{p,q}[\cdot ]\) is Meijer's G-function [30, Eq. (9.301)] and \(\Gamma (\cdot )\) is the Gamma function [30, Eq. (8.310)]. In a scattering environment, \(\alpha\) and \(\beta\) are the effective numbers of large-scale and small-scale eddies, respectively, and they are related through the Rytov variance (\(\sigma _l^2\)).
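A G–G variate can be generated as the product of two independent unit-mean Gamma variates, one per eddy scale; a minimal sketch with assumed \(\alpha\) and \(\beta\) values (not the paper's parameters):

```python
import numpy as np

# Gamma-Gamma fading: h = h_x * h_y, where h_x and h_y are unit-mean Gamma
# variates modeling large- and small-scale eddies. alpha/beta are assumed.
rng = np.random.default_rng(0)
alpha, beta = 4.0, 1.9                    # large-/small-scale eddy parameters
hx = rng.gamma(shape=alpha, scale=1 / alpha, size=500_000)
hy = rng.gamma(shape=beta, scale=1 / beta, size=500_000)
h = hx * hy                               # Gamma-Gamma channel coefficient
```

Since each factor has unit mean, `h` also has unit mean, so the average optical power is preserved by the fading model.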
4.3 Pointing error
For the pointing error effect, \(\xi\) denotes the ratio between the equivalent beam radius at the receiver \(W_{L_{\text {eq}}}\) and the pointing error displacement standard deviation \(\sigma _{s}\), as shown in Eq. 5: \(\xi = W_{L_{\text {eq}}} / (2\sigma _{s})\),
where \(W_{L_{\text {eq}}}\) is calculated by \(W^{2}_{L_{\text {eq}}} = W^{2}_{L}\, \sqrt{\pi } \,\)erf\((v) / (2v \exp (-v^{2}))\) with \(v = \sqrt{\pi }\,a / (\sqrt{2}\,W_{L})\), erf\((\cdot )\) is the error function, and a represents the radius of a circular detector aperture [31].
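A quick numeric check of the pointing-error geometry above; the aperture radius, beam waist, and jitter values below are illustrative assumptions, not values from the paper:

```python
import math

# Equivalent beam width and pointing-error ratio xi = W_Leq / (2*sigma_s).
# All physical values are assumed for illustration (in meters).
a, W_L, sigma_s = 0.05, 1.2, 0.15         # aperture radius, beam waist, jitter
v = math.sqrt(math.pi) * a / (math.sqrt(2) * W_L)
W_Leq_sq = W_L**2 * math.sqrt(math.pi) * math.erf(v) / (2 * v * math.exp(-v**2))
xi = math.sqrt(W_Leq_sq) / (2 * sigma_s)  # larger xi -> milder pointing error
```

For small v the equivalent beam is slightly wider than the physical beam, and a tight jitter (small \(\sigma_s\)) yields a large \(\xi\), i.e., a mild pointing-error penalty.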
5 LDPC encoding and decoding algorithms
As illustrated, the optical channel models impose serious impairments on FSO communications. LDPC codes are selected to mitigate these impairments due to their excellent performance over severe channels. Besides, they offer high flexibility, since they support different decision schemes: LDPC decoders are classified into three categories, hard, soft, and hybrid decision [32]. This flexibility helps in selecting the proper decoder for optical communication links.
5.1 LDPC encoding
The construction of LDPC codes mainly relies on a parity check matrix characterized by its sparseness. An efficient encoding procedure proposed by Richardson in [33] operates directly on the parity check matrix, as an alternative to converting it into a generator matrix, thereby preserving the sparseness of the H matrix at the cost of extra preprocessing [33]. The Gaussian elimination technique is the primary method for constructing an encoder for LDPC codes; its outcome is a matrix in (approximate) lower triangular form, as shown in Fig. 2. The codeword vector \({\textbf {x}}\) is split into a systematic part \({\textbf {s}}\) and a parity part \({\textbf {p}}\), so that \({\textbf {x}} = [{\textbf {s}}, {\textbf {p}}]\). The encoder then works as follows: (1) fill the vector \({\textbf {s}}\) with the (N–M) data symbols; (2) compute the parity symbols using the "back substitution" technique.
Using \(O(n^{3})\) preprocessing calculations, the encoding algorithm converts the matrix H into the required form. After this preprocessing, the encoding process consumes \(O(n^{2})\) operations, since the transformed part of the matrix loses its sparseness: approximately \(n^{2}\,\frac{r(1-r)}{2}\) XOR operations are required to finalize encoding, where r is the code rate. Although the encoding algorithm proposed in [33] thus requires quadratic calculations, the constant factor \(\frac{r(1-r)}{2}\) multiplying the \(n^{2}\) term is usually small, so encoding remains practical even for long block lengths.
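The two encoding steps above can be sketched in a few lines; the parity check matrix below is a toy example (not from the paper) already in the form [A | T] with T lower triangular and unit diagonal, so each parity bit follows from the previous ones by back substitution.

```python
import numpy as np

# Toy sketch of lower-triangular LDPC encoding by back substitution.
# H = [A | T]; T is lower triangular with a unit diagonal (values assumed).
H = np.array([
    [1, 0, 1, 1, 0, 0],
    [0, 1, 1, 1, 1, 0],
    [1, 1, 0, 0, 1, 1],
], dtype=int)
M, N = H.shape
A, T = H[:, :N - M], H[:, N - M:]

def encode(s):
    """Step 1: fill systematic part s (N-M bits); step 2: back-substitute p."""
    p = np.zeros(M, dtype=int)
    for i in range(M):
        p[i] = (A[i] @ s + T[i, :i] @ p[:i]) % 2
    return np.concatenate([s, p])

x = encode(np.array([1, 0, 1]))
assert not (H @ x % 2).any()              # every parity check is satisfied
```

Because T has a unit diagonal, row i of H determines \(p_i\) directly from the data bits and the previously computed parities, which is exactly the back-substitution step described above.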
5.2 LDPC decoding algorithms
There are three types of LDPC decoding algorithms. The first category comprises hard decision algorithms, represented by the fundamental algorithm termed "Bit-Flipping" (BF), proposed together with its variants in [32]; it has low hardware complexity but limited error correction capability. The second category is the soft decision algorithms, characterized by immense complexity but impressive BER performance. Finally, the third category is the hybrid decoding algorithms, which compromise between the low complexity of the hard decision algorithms and the outstanding BER performance of the soft decision ones.
5.3 Hybrid decision LDPC decoding algorithms
5.3.1 Algorithm (1)
The algorithm (1), termed Weighted Bit Flipping (WBF), was proposed in [34]. It improves the "BF" decoding algorithm's error correction ability by retaining soft reliability information about the data symbols in its decoding decisions; the additional decoding complexity is the price of this performance enhancement.
The algorithm (1) decoding starts by recognizing the least reliable variable node connected to every check node. The following equation defines this step:

$$n_{\min }(m) = \arg \min _{n \in N(m)} |y_{n}|,$$

where \(n_{\text {min}}\) represents the index of the variable node with the smallest soft value among those connected to check node m.
The absolute minimum component in the received sequence is calculated as in [34], where \(\mid y_{n} \mid\) is the absolute value of \(y_{n}\), characterizing the reliability of the received message: the larger \(\mid y_{n} \mid\), the more reliable the hard decision bit \(b_{n}\) corresponding to \(y_{n}\). Each variable node's error term \(E_{n}\) is determined as follows:

$$E_{n} = \sum _{m \in M(n)} (2s_{m} - 1)\, |y_{n_{\min }(m)}|,$$
where the syndrome bit \(s_{m}\) belongs to check node m, and \(E_{n}\) represents the weighted checksum connected to code bit n; the bit with the largest \(E_{n}\) is flipped. The procedure of the WBF algorithm is explained as follows.
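The WBF procedure can be sketched compactly; the parity check matrix and soft values below are toy assumptions, and the BPSK-style mapping (negative soft value means bit 1) is also an assumption for illustration:

```python
import numpy as np

# Hedged sketch of Weighted Bit Flipping: per iteration, flip the bit whose
# checks are weighted as most unreliable. H, y, and the mapping are assumed.
def wbf_decode(H, y, max_iter=20):
    b = (y < 0).astype(int)              # hard decisions from soft values
    # w[m] = |y| of the least reliable variable node of check m
    w = np.array([np.min(np.abs(y)[H[m] == 1]) for m in range(H.shape[0])])
    for _ in range(max_iter):
        s = H @ b % 2                    # syndrome
        if not s.any():
            break                        # valid codeword found
        E = ((2 * s - 1) * w) @ H        # error term E_n per variable node
        b[np.argmax(E)] ^= 1             # flip the bit with largest E_n
    return b
```

A failed check (\(s_m = 1\)) contributes \(+w_m\) to every bit it involves, while a satisfied check contributes \(-w_m\), so the bit with the largest \(E_n\) is the most suspicious one.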
5.3.2 Algorithm (2)
It is observed that the reliability ratio-based bit flipping (RRWBF) algorithm proposed in [35] consumes many operations, so a vital modification was performed to lower the complexity of the RRWBF algorithm while keeping its BER improvement over the WBF algorithm. A lower-complexity calculation term \(T_{m}\) was proposed in [36], resulting in the algorithm termed Implementation Efficient Reliability Ratio WBF (Algorithm (2)). By using \(T_{m}\) in place of the reliability ratio factor, the decoding time of the RRWBF algorithm is reduced:

$$T_{m} = \sum _{n \in N(m)} |y_{n}|,$$

and the error term \(E_{n}\) is calculated by:

$$E_{n} = \frac{1}{|y_{n}|}\sum _{m \in M(n)} (2s_{m} - 1)\, T_{m}.$$
Algorithm 2 is briefly explained as follows.
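The weighting of Algorithm (2) can be sketched as follows; since the paper's equations are not reproduced here, this uses one common formulation of the implementation-efficient reliability ratio from the literature, and the H and y values are toy assumptions:

```python
import numpy as np

# Hedged sketch of IERRWBF-style error terms (assumed formulation): each
# check m is weighted by T_m, the sum of |y_n| over its variable nodes, and
# each bit's error term is divided by its own reliability |y_n|.
def ierrwbf_error_terms(H, y):
    b = (y < 0).astype(int)              # hard decisions from soft values
    s = H @ b % 2                        # syndrome
    T = H @ np.abs(y)                    # T_m = sum of |y_n| over N(m)
    return ((2 * s - 1) * T) @ H / np.abs(y)
```

Dividing by \(|y_n|\) means an unreliable bit (small magnitude) attached to failed checks gets a large error term, making it the flipping candidate.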
5.4 Soft decision LDPC decoding algorithms
5.4.1 Algorithm (3)
Soft decision algorithms are derived from the Belief Propagation (BP) algorithm proposed in [37]. These algorithms have a complexity of \(O(2\,M\rho + 4N\gamma )\) per decoding iteration [37]. Reduced-complexity decoding algorithms are in turn derived from the BP algorithm.
The Min-Sum algorithm (Algorithm (3)) is a soft decision algorithm with a low level of complexity, derived from the BP algorithm in [37]. The procedure of Min-Sum decoding is as follows [37]:
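The check node update of Min-Sum replaces BP's hyperbolic tangent rule with a sign product and a minimum of magnitudes; a compact sketch over a toy parity check matrix and assumed channel LLRs (not the paper's code or parameters):

```python
import numpy as np

# Hedged sketch of flooding Min-Sum LDPC decoding. H and llr are assumed
# toy values; llr > 0 favors bit 0, llr < 0 favors bit 1.
def min_sum_decode(H, llr, max_iter=20):
    M, N = H.shape
    msg = H * llr                        # initial variable-to-check messages
    for _ in range(M * 0 + max_iter):
        ext = np.zeros_like(msg, dtype=float)
        for m in range(M):
            idx = np.flatnonzero(H[m])   # variable nodes of check m
            for n in idx:
                others = idx[idx != n]   # exclude the target edge
                sign = np.prod(np.sign(msg[m, others]))
                ext[m, n] = sign * np.min(np.abs(msg[m, others]))
        total = llr + ext.sum(axis=0)    # posterior LLR per bit
        b = (total < 0).astype(int)
        if not (H @ b % 2).any():
            break                        # all parity checks satisfied
        msg = H * (total - ext)          # remove own check's contribution
    return b
```

Because only a minimum and sign products are needed per check node, Min-Sum avoids the transcendental functions of BP at a small BER cost.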
5.5 Proposed LDPC decoding algorithms
5.5.1 Proposed algorithm (1)
The main shortcoming of the latter iterative decoders is the extended time spent in the decoding process, especially in the check node and variable node steps. The decoding algorithm (2) proposed in [36] needs revision to address this concern: it was observed that at low \(E_{b}/N_{o}\) values, algorithm (2) consumes a large number of iterations without any improvement in the resulting BER. As the number of iterations of the decoding algorithm increases, more computation time is required.
The proposed algorithm (1) adds a decision step to detect the situations illustrated in the last paragraph. It restricts the iteration loop by choosing either to proceed with decoding or to terminate the loop once the oscillation phenomenon occurs. The procedure executes a syndrome check at every iteration. A three-entry register is proposed because two iterations are required to return to the same syndrome vector: if the same bit in the decoded codeword is flipped twice, the decoder returns to its initial state, so three entries are sufficient to detect the oscillation. Accordingly, three flip-flops form the smallest register that can accumulate the received binary vectors and correlate the first and third ones. Each new entry is stored in the first flip-flop, and the others are shifted down, replacing the oldest input. Once oscillation is detected, no further performance improvement will occur, and decoding ends.
This additional condition lowers the complexity without any impact on the performance of algorithm (2). The oscillation phenomenon examined in the last paragraph is demonstrated in Fig. 3, and the procedure of proposed algorithm (1) is illustrated for further clarification.
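The three-entry stopping criterion described above can be sketched as a small shift register; the syndrome vectors used below are illustrative assumptions:

```python
from collections import deque

# Hedged sketch of the proposed three-entry oscillation detector: keep the
# last three syndromes in a shift register and stop when the newest equals
# the one from two iterations earlier (a period-2 flip/unflip cycle).
def make_oscillation_check():
    history = deque(maxlen=3)            # three "flip-flops"
    def should_stop(syndrome):
        history.appendleft(tuple(syndrome))
        return len(history) == 3 and history[0] == history[2]
    return should_stop

stop = make_oscillation_check()
assert not stop([1, 0, 1])               # first syndrome
assert not stop([0, 1, 1])               # different syndrome, keep decoding
assert stop([1, 0, 1])                   # repeats the first one -> terminate
```

Calling `should_stop` once per iteration implements the decision step: decoding proceeds while it returns `False` and terminates as soon as the first and third register entries match.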
5.5.2 Proposed algorithm (2)
A significant deficiency of the Bootstrapped WBF (BWBF) algorithm proposed in [38] is that the threshold value \(\beta\), which differentiates reliable from unreliable variable nodes, is determined offline, leading to massive complexity. Besides, this method does not guarantee the precision of the threshold \(\beta\), which is determined according to the channel state at the initial decoding step, while the FSO channel condition suffers from excessive variations. Therefore, a new method to discriminate between unreliable and reliable variable nodes is vital for FSO channels: a substitute is needed for the pre-calculated bootstrap threshold, whose computation time lowers efficiency over FSO channels.
The proposed algorithm (2) achieves this requirement by promoting the bootstrap step: it replaces the threshold-\(\beta\) method of distinguishing the reliability of variable nodes, which takes longer because the threshold is pre-calculated offline before decoding starts. Instead, the proposed algorithm (2) inspects the unreliable check nodes by computing the syndrome vector, which discriminates between reliable and unreliable check nodes: each syndrome bit \(s_{m}\) with a nonzero value is linked to an unreliable check node.
Every unreliable check node is linked to the variable nodes defined by N(m). The variable node with the lowest soft value connected to each unreliable check node m is counted as unreliable, since it causes the unreliability of the check node to which it is connected. By locating the lowest soft values of the variable nodes linked to each unreliable check node, the bootstrap step substitutes the soft values of the unreliable variable nodes with better ones. In this way, all unreliable variable nodes are extracted with greater precision, without identifying the channel state or presetting a threshold as in the BWBF algorithm. The complexity of the proposed algorithm is identical to that of the BWBF algorithm, excluding the revoked comparator that employed a predetermined threshold; only a few steps are needed to distinguish the unreliable bits. The complexity of proposed algorithm (2) is \(O((2M'\rho + 4N' \gamma ) + N_{h}(M\rho + N\gamma ))\), where \(M'< M\) and \(N' < N\). To obtain the lowest soft value of the variable nodes, the values of \(N'\) and \(M'\) are precalculated, in addition to the minimum function for each check node. The proposed algorithm (2) is demonstrated below for additional clarification.
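The node-selection part of the modified bootstrap can be sketched as follows; the replacement of the flagged soft values with better ones is omitted, and H and y are toy assumptions:

```python
import numpy as np

# Hedged sketch of the modified bootstrap selection step: instead of an
# offline threshold (as in BWBF), use the syndrome to locate unreliable
# checks and flag the smallest-magnitude variable node of each.
def unreliable_variable_nodes(H, y):
    b = (y < 0).astype(int)              # hard decisions from soft values
    s = H @ b % 2                        # nonzero bits = unreliable checks
    flagged = set()
    for m in np.flatnonzero(s):
        idx = np.flatnonzero(H[m])       # N(m): variable nodes of check m
        flagged.add(idx[np.argmin(np.abs(y[idx]))])
    return sorted(flagged)
```

No channel-dependent threshold is needed: the syndrome itself points at the failed checks, and the minimum-magnitude search singles out the variable nodes most likely responsible.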
6 Simulation results
Simulation results are presented in this section to validate the analysis derived in this paper and to prove the improvements due to the proposed decoding algorithms. All reported FSO channel models are considered in the comparison between all presented LDPC decoders, including the proposed algorithms, since in real life the communication channel model is unknown at the receiver of the FSO link. The following parameters are considered in all conducted analyses: the targeted BER for FSO channels is \(10^{-6}\), \(\lambda\) = 1550 nm, \(\ell\) = 1000 m, and \(\alpha =\) 0.43 dB/km for clear weather and intense sunlight conditions. Each \(E_{b}/N_{o}\) value receives no more than \(10^{7}\) bits. The parameters utilized in the simulations are listed in Table 1.
In Fig. 4, the BER of the proposed algorithms is compared with that of the other published algorithms over the weak atmospheric turbulence FSO channel. As delineated in Fig. 4, the proposed algorithm (1) achieves the same BER levels as algorithm (2) at all \(E_{b}/N_{o}\) values. The proposed algorithm (2) enhances the BER compared to the other algorithms by at least 3 dB, especially over algorithm (1), while the gain over algorithm (2) is about 1 dB. Besides, it gets close to the soft decision algorithm (3), which is characterized by superior BER performance.
The BER comparison between the employed algorithms is presented in Fig. 5 for moderate atmospheric turbulence. As shown in Fig. 5, the proposed algorithm (1) reaches the same BER levels as algorithm (2), as expected. The proposed algorithm (2) improves the BER by at least 3 dB compared to the latter algorithms at all \(E_{b}/N_{o}\) values, demonstrating its superiority over the other algorithms and bringing its BER performance closer to the soft decision algorithm (3). It is observed that the gap between Min-Sum and the proposed algorithm (2) widens due to the excessive impairments imposed by moderate atmospheric turbulence.
As presented in Fig. 6, the BER comparison was performed under the strong atmospheric turbulence channel model. The gap between algorithm (3) and the other hard or hybrid algorithms increases due to the excessive channel impairments caused by strong atmospheric turbulence. The proposed algorithm (1) achieves exactly the BER of algorithm (2) at all simulated \(E_{b}/N_{o}\) values; the main contribution required from the proposed algorithm (1) is to minimize the decoding time while maintaining the same BER performance as the original algorithm (2). The proposed algorithm (2) outperforms the other algorithms, especially at higher \(E_{b}/N_{o}\) values; this behavior is due to the severe impairments imposed by the strong atmospheric turbulence channel.
The average number of iterations consumed by each decoder is another factor to consider when evaluating LDPC decoding algorithms over FSO turbulence channels; it indicates how few iterations each algorithm needs to reach the required BER. Figure 7 depicts the average number of iterations versus \(E_{b}/N_{o}\) for the weak atmospheric turbulence channel. The proposed algorithm (2) requires the lowest average number of iterations compared to the other algorithms under study, especially at \(E_{b}/N_{o}\) values from 8 to 10 dB. At higher \(E_{b}/N_{o}\) values, the proposed algorithm (2) has the second-lowest average number of iterations after the proposed algorithm (1). As shown in Fig. 7, algorithm (1) consumes the highest average number of iterations due to its inferior decoding performance over FSO atmospheric turbulence channels.
An interesting observation in Fig. 8 is that the average number of iterations of all LDPC decoding algorithms over this atmospheric turbulence channel stays at exactly the same value at low \(E_{b}/N_{o}\) values. At \(E_{b}/N_{o}\) = 12 dB, both algorithm (2) and the proposed algorithm (2) have the lowest average number of iterations compared to the other algorithms in the study. The lowest average number of iterations is achieved by the proposed algorithm (1) at \(E_{b}/N_{o}\) = 16 dB, with the proposed algorithm (2) in second place at the same \(E_{b}/N_{o}\).
The average number of iterations used by the decoding algorithms in this work for the strong atmospheric turbulence channel is delineated in Fig. 9. All algorithms consume the same average number of iterations at low \(E_{b}/N_{o}\) values due to the excessive impairments of such FSO turbulence channels. At high \(E_{b}/N_{o}\) values, from 19 to 30 dB, algorithm (1) maintains the lowest average iteration consumption compared to the other algorithms; proposed algorithm (1) ranks second at the same \(E_{b}/N_{o}\) values, followed by proposed algorithm (2), with algorithm (2) last.
Convergence is a vital parameter in the evaluation of iterative decoding algorithms. It is noticed from Figs. 10, 11 and 12 that the proposed algorithm (2) achieves the fastest convergence over the three atmospheric turbulence FSO channels. It is also recognized that the BER to which the LDPC decoders converge degrades with the type of FSO atmospheric channel: the weak atmospheric channel has the lowest converged BER, approximately \(4 \times 10^{-5}\), followed by moderate atmospheric turbulence at \(6 \times 10^{-4}\), while the highest BER occurs over the strong atmospheric turbulence channel.
The decoding computation time of all algorithms was compared over all atmospheric turbulence channels. Figure 13 shows the comparison between all LDPC decoders in terms of decoding computation time for the weak atmospheric turbulence channel. The lowest decoding computation time belongs to proposed algorithm (1) over all \(E_{b}/N_{o}\) values, saving the computation time wasted by the other algorithms thanks to its stopping criterion illustrated in the earlier sections. The proposed algorithm (2) attains the second-lowest computation time due to the modified bootstrap, which enhances the reliability of the received bits, leading to fast convergence and saving decoding time. Algorithm (2) obtains the third level of decoding computation time, while algorithm (1) is the highest consumer of decoding computation time.
It is observed in Fig. 14 that all algorithms saturate at low \(E_{b}/N_{o}\): from \(E_{b}/N_{o}\) = 8 dB to 12 dB, they all required the same decoding computation time. This behaviour was expected, as the same saturation occurred over the same atmospheric turbulence channel for the average number of iterations. Over the range \(E_{b}/N_{o}\) = 13 to 16 dB, algorithm (1) has the lowest decoding computation time, while proposed algorithm (1) consumed slightly more time in the same range because it is more complex, targeting a better BER than algorithm (1). Algorithm (2) has the highest decoding computation time, while proposed algorithm (2) consumed time at a rate close to proposed algorithm (1), as it adds a complexity step to improve its BER performance relative to the latter.
For the strong atmospheric turbulence case in Fig. 15, the decoding computation time varies across all \(E_{b}/N_{o}\) values. This behaviour stems from the excessive impairments of the strong-turbulence case, which affect the decoding algorithms randomly, whereas over the other atmospheric turbulence channels every algorithm maintains a consistent performance. As Fig. 15 shows, algorithm (1) consumed the lowest decoding computation time of all algorithms across all \(E_{b}/N_{o}\) values. Proposed algorithm (1) attains second place in decoding computation time thanks to the new stopping step inserted over algorithm (2), which yields this reduction. Proposed algorithm (2) comes third, as it targets enhancing algorithm (1) by applying the modified bootstrap step, which improves the BER of proposed algorithm (1) without excessive added complexity.
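The flavour of a bootstrap-style reliability step can be sketched as below, in the spirit of Nouh and Banihashemi's bootstrap decoding cited in the references; the paper's "modified bootstrap" details are not reproduced here, and the threshold and LLR values are illustrative assumptions. Bits with weak channel LLRs are re-estimated from parity checks whose other participants are all reliable, before iterative decoding starts.

```python
# Hedged sketch of a bootstrap-style pre-processing step: unreliable
# bits (|LLR| below a threshold) get an extra LLR contribution from
# each parity check in which every other participating bit is reliable.

def bootstrap_llrs(H, llr, threshold):
    out = list(llr)
    m, n = len(H), len(H[0])
    reliable = [abs(l) >= threshold for l in llr]
    for j in range(n):
        if reliable[j]:
            continue
        boost = 0.0
        for i in range(m):
            if not H[i][j]:
                continue
            others = [k for k in range(n) if H[i][k] and k != j]
            if others and all(reliable[k] for k in others):
                # parity of the reliable neighbours predicts bit j's sign;
                # the weakest neighbour limits the magnitude (min-sum style)
                sign = 1.0
                for k in others:
                    sign *= 1.0 if llr[k] >= 0 else -1.0
                boost += sign * min(abs(llr[k]) for k in others)
        out[j] = llr[j] + boost
    return out

# toy (7,4) Hamming parity-check matrix; bit 2 is weak and wrong-signed
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
llr = [5.0, 4.0, -0.1, 6.0, 5.0, 4.0, 5.0]
boosted = bootstrap_llrs(H, llr, threshold=1.0)
print(boosted[2])
```

In this toy run the weak bit's LLR is pulled back to the correct (positive) sign before iterative decoding begins, which is how such a step speeds up convergence.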
The resultant average throughput is a crucial parameter for evaluating LDPC decoding algorithms over FSO atmospheric turbulence channels. Figure 16 presents the average throughput comparison for the LDPC decoding algorithms under study over the weak atmospheric turbulence channel. According to Fig. 16, proposed algorithm (2) achieved the highest average throughput of all algorithms, especially at high \(E_{b}/N_{o}\) values, while at \(E_{b}/N_{o}\) = 11 to 13 dB proposed algorithm (1) reached the highest average throughput among all maintained algorithms. This variation is due to the varying behaviour of the channel. Over the same turbulent channel, all algorithms under study saturated at the same average throughput value from exactly \(E_{b}/N_{o}\) = 8 to 10 dB. Algorithm (2) maintained the lowest average throughput at most \(E_{b}/N_{o}\) values.
The average throughput comparison among the LDPC decoding algorithms over the moderate atmospheric turbulence channel is displayed in Fig. 17. As shown in Fig. 17, algorithm (1) maintained the highest average throughput across all \(E_{b}/N_{o}\) values, and proposed algorithm (1) achieved second place compared to the other algorithms. Algorithm (2) and proposed algorithm (2) maintained the same average throughput along all \(E_{b}/N_{o}\) values. It is clear that all algorithms saturate at the same average throughput level exactly from \(E_{b}/N_{o}\) = 8 to 13 dB.
In Fig. 18, the average throughput of all algorithms over the strong atmospheric turbulence channel is compared. Saturation is attained for all algorithms across all \(E_{b}/N_{o}\) values. Algorithm (1) accomplished the highest average throughput for all \(E_{b}/N_{o}\) values; this mirrors its decoding computation time behaviour in strong atmospheric turbulence, which, as mentioned before, is due to the excessive impairments in such a channel. The remaining algorithms attained a constant average throughput owing to their inferior capability of mitigating strong atmospheric turbulence channel impairments.
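The throughput metric used in such comparisons can be computed as information bits delivered per second of decoding time, averaged over frames. The frame size and timings below are made-up numbers for illustration, not values from the paper.

```python
# Hypothetical illustration of average throughput:
# total information bits decoded divided by total decoding time.

def average_throughput(info_bits_per_frame, decode_times):
    """decode_times: per-frame decoding times in seconds; returns bit/s."""
    total_bits = info_bits_per_frame * len(decode_times)
    return total_bits / sum(decode_times)

# e.g. K = 512 information bits per frame, four decoded frames
times = [0.004, 0.005, 0.0045, 0.0055]
tput = average_throughput(512, times)
print(round(tput / 1e3, 1), "kbit/s")  # → 107.8 kbit/s
```

A faster-converging decoder shortens the entries of `decode_times` and thus raises this figure directly, which is why the stopping criterion and bootstrap step translate into the throughput gains reported above.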
7 Discussion
In this work, two novel LDPC decoding algorithms are proposed to enhance FSO communication performance and reduce the complexity of the overall system. All results show the improvement provided by the proposed LDPC algorithms over the various FSO channel models reported in the literature. The comparison factors are BER, the number of consumed iterations, decoding time, the convergence of the LDPC decoding algorithms, and the resultant throughput of the proposed algorithms.
8 Conclusion
This paper evaluated existing LDPC decoding algorithms, and proposed new ones, for the atmospheric turbulence channels of FSO communication systems. LDPC decoding algorithms fall into three decision categories: hard, soft, and hybrid; all are considered in this evaluation. The evaluation was based on crucial performance metrics for comparing LDPC decoders: BER, average number of iterations, convergence, decoding computation time, and average throughput. All atmospheric turbulence channel models were considered. Furthermore, two novel algorithms are proposed to gain further improvement over all the optical communication channel models considered in this work. According to the simulation results, the proposed algorithms delivered impressive performance on all evaluation parameters considered in this work compared to the existing algorithms.
Availability of data and materials
Data sharing is not applicable to this article as no data sets were generated or analyzed during the current study.
Notes
The Meijer G-function is a standard built-in function available in most popular mathematical software packages, such as Maple and Mathematica.
Abbreviations
LDPC: Low-density parity check
FSO: Free-space optical
WBF: Weighted Bit Flipping
RRWBF: Reliability Ratio Weighted Bit Flipping
References
M.Z. Chowdhury, M.K. Hasan, M. Shahjalal, M.T. Hossan, Y.M. Jang, Optical wireless hybrid networks: trends, opportunities, challenges, and research directions. IEEE Commun. Surv. Tutor. 22(2), 930–966 (2020)
M. Chowdhury, M. Hossan, A. Islam, Y. Jang, A comparative survey of optical wireless technologies: architectures and applications. IEEE Access 6, 9819–9840 (2018)
A. Mansour, R. Mesleh, M. Abaza, New challenges in wireless and free space optical communications. Opt. Lasers Eng. 89, 95–108 (2017)
M.A. Khalighi, M. Uysal, Survey on free space optical communication: a communication theory perspective. IEEE Commun. Surv. Tutor. 16(4), 2231–2258 (2014)
J. Yin, T. Dong, Y. Su, H. Shi, Y. Zhou, K. Xu, A novel 16 Gb/s free space optical communication scheme for the integration of satellite communication and ranging, in 2019 Asia Communications and Photonics Conference (ACP), Chengdu (IEEE, 2019)
K. Kiasaleh, Performance of APD-based, PPM free-space optical communication systems in atmospheric turbulence. IEEE Trans. Commun. 53(9), 1455–1461 (2005)
D.A. Luong, T.C. Thang, A.T. Pham, Effect of Avalanche photodiode and thermal noises on the performance of binary phase-shift keying subcarrier-intensity modulation/free-space optical systems over turbulence channels. IET Commun. 7(8), 738–744 (2013)
G.K. Rodrigues, V.G.A. Carneiro, A.R. da Cruz, M.T.M.R. Giraldi, Evaluation of the strong turbulence impact over free-space optical links. Opt. Commun. 305, 42–47 (2013)
E. Bayaki, D.S. Michalopoulos, R. Schober, EDFA-based all-optical relaying in free-space optical systems. IEEE Trans. Commun. 60(12), 3797–3807 (2012)
C.K. Datsikas, K.P. Peppas, N.C. Sagias, G.S. Tombras, Serial free-space optical relaying communications over Gamma–Gamma atmospheric turbulence channels. J. Opt. Commun. Netw. 2(8), 576–586 (2010)
S.G. Wilson, M. Brandt-Pearce, Q. Cao, J.H. Leveque, Free-space optical MIMO transmission with Q-ary PPM. IEEE Trans. Commun. 53(8), 1402–1412 (2005)
Z. Ghassemlooy, W.O. Popoola, Terrestrial free-space optical communications, in Mobile and Wireless Communications Network Layer and Circuit Level Design (2010), pp. 355–392
Y. Fan, R.J. Green, Comparison of pulse position modulation and pulse width modulation for application in optical communications. SPIE Opt. Eng. 46(6), 065001 (2007)
H. Sugiyama, K. Nosu, MPPM: a method for improving the band-utilization efficiency in optical PPM. IEEE/OSA J. Lightwave Technol. 7(3), 465–471 (1989)
R. Lange, B. Smutny, B. Wandernoth, R. Czichy, D. Giggenbach, 142 km, 5.625 Gbps free-space optical link based on homodyne BPSK modulation, in Proceedings of SPIE, vol. 6105 (2006)
A. Jaiswal, M. Abaza, M.R. Bhatnagar, R. Mesleh, Multipoint-to-multipoint cooperative multiuser SIM free-space optical communication: a signal-space diversity approach. IEEE Access 8, 159244–159259 (2020)
Z. Ghassemlooy, A.R. Hayes, N.L. Seed, E.D. Kaluarachchi, Digital pulse interval modulation for optical communications. IEEE Commun. Mag. 36(12), 95–99 (1998)
R. Sangeetha, C. Hemanth, I. Jaiswal, Performance of different modulation scheme in free space optical transmission—a review. Optik 254, 168675 (2022)
T. Wang, X. Zhao, Y. Song, J. Wang, X. Yu, Y. Zhang, Atmospheric laser communication technology based on detector gain factor regulation control. IEEE Access 9, 43339–43348 (2021)
M.H. Ali, R.I. Ajel, S.A.K. Hussain, Performance analysis of beam divergence propagation through rainwater and snow pack in free space optical communication. Bull. Electr. Eng. Inform. 10(3), 1395–1404 (2021)
Y. Cao, H. Qin, X. Peng, Y. Wang, Z. Zhang, Research on polarization coding cooperative communication scheme for FSO system. Electronics 11(10), 1597 (2022)
N. Mohan, Z. Ghassemlooy, E. Li, M.M. Abadi, S. Zvanovec, R. Hudson, Z. Htay, The BER performance of a FSO system with polar codes under weak turbulence. IET Optoelectron. 16(2), 72–80 (2022)
S. Fujia, E. Okamoto, H. Takenaka, H. Kunimori, H. Endo, M. Fujiwara, R. Shimizu, M. Sasaki, M. Toyoshima, Performance analysis of polar-code transmission experiments over 7.8-km terrestrial free-space optical link using channel equalization, in International Conference on Space Optics—ICSO 2020, vol. 11852 (SPIE, 2021), pp. 2301–2310
N. Gupta, A. Dixit, V.K. Jain et al., Capacity and BER analysis of BCH and LDPC coded FSO communication system for different channel conditions. Opt. Quant. Electron. 53(5), 1–25 (2021)
P. Cao, Q. Rao, J. Yang, X. Liu, LDPC code with dynamically adjusted LLR under FSO turbulence channel. J. Phys. Conf. Ser. 1920, 012023 (2021)
H. Jiang, N. He, X. Liao, W. Popoola, S. Rajbhandari, The BER performance of the LDPCcoded MPPM over turbulence UWOC channels. Photonics 9, 349 (2022)
A.A. Youssef, M. Abaza, A.S. Alatawi, LDPC decoding techniques for free-space optical communications. IEEE Access 9, 133510–133519 (2021)
M.T. Dabiri, S.M.S. Sadough, Performance analysis of all-optical amplify and forward relaying over log-normal FSO channels. J. Opt. Commun. Netw. 10(2), 79–89 (2018)
H. Moradi, H.H. Refai, P.G. LoPresti, A switched diversity approach for multi-receiving optical wireless systems. Appl. Opt. 50(29), 5606–5614 (2011)
M. Abramowitz, I.A. Stegun, Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, 9th edn. (Dover Publications, New York, 1972), p. 924
M.L. Singh, H.S. Gill, M. Singh, S. Kaur et al., An experimental evaluation of link outage due to beam wander in a turbulent FSO link. Wirel. Pers. Commun. 113(4), 2403–2414 (2020)
R.G. Gallager, Low-density parity-check codes. IRE Trans. Inf. Theory 8(1), 21–28 (1962)
T.J. Richardson, R.L. Urbanke, Efficient encoding of low-density parity-check codes. IEEE Trans. Inf. Theory 47(2), 638–656 (2001)
K. Yu, L. Shu, M.P. Fossorier, Low-density parity-check codes based on finite geometries: a rediscovery and new results. IEEE Trans. Inf. Theory 47(7), 2711–2736 (2001)
G. Feng, L. Hanzo, Reliability ratio based weighted bit-flipping decoding for low-density parity-check codes. Electron. Lett. 40(21), 1356–1358 (2004)
C.H. Lee, W. Wolf, Implementation-efficient reliability ratio based weighted bit-flipping decoding for LDPC codes. Electron. Lett. 41(13), 755–757 (2005)
G.D. Forney Jr., On iterative decoding and the two-way algorithm, in Proceedings of International Symposium on Turbo Codes and Related Topics (1997)
A. Nouh, A.H. Banihashemi, Bootstrap decoding of low-density parity-check codes. IEEE Commun. Lett. 6(9), 391–393 (2002)
N.D. Chatzidiamantis, D.S. Michalopoulos, E.E. Kriezis, G.K. Karagiannidis, R. Schober, Relay selection protocols for relay-assisted free-space optical systems. J. Opt. Commun. Netw. 5(1), 92–103 (2013)
A.A. Farid, S. Hranilovic, Outage capacity optimization for free-space optical links with pointing errors. IEEE J. Lightwave Technol. 25(7), 1702–1710 (2007)
G. Soni, J.S. Malhotra, Impact of beam divergence on the performance of free space optical system. Int. J. Sci. Res. Publ. 2(2), 1–5 (2012)
Funding
Open access funding provided by The Science, Technology & Innovation Funding Authority (STDF) in cooperation with The Egyptian Knowledge Bank (EKB). This work was funded by the Science, Technology and Innovation Funding Authority (STDF) in cooperation with the Egyptian Knowledge Bank (EKB).
Author information
Authors and Affiliations
Contributions
AY worked on MATLAB computing and LDPC decoding algorithms comparisons. Also, AY proposed two novel LDPC decoders for FSO communications. The author read and approved the final manuscript.
Corresponding author
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Youssef, A.A. Bootstrapped low complex iterative LDPC decoding algorithms for free-space optical communication links. J Wireless Com Network 2023, 78 (2023). https://doi.org/10.1186/s13638-023-02285-w
DOI: https://doi.org/10.1186/s13638-023-02285-w