Adaptive modulation and coding in underwater acoustic communications: a machine learning perspective
EURASIP Journal on Wireless Communications and Networking volume 2020, Article number: 203 (2020)
Abstract
The increasing demand for exploring and managing the vast marine resources of the planet has underscored the importance of research on advanced underwater acoustic communication (UAC) technologies. However, owing to the severe characteristics of the oceanic environment, underwater acoustic (UWA) propagation experiences nearly the harshest wireless channels in nature. This article adopts the perspective of machine learning (ML) to cope with the major challenges of adaptive modulation and coding (AMC) design in UACs. First, we present an ML AMC framework for UACs. Then, we propose an attention-aided k-nearest neighbor (AkNN) algorithm with simplicity and robustness, based on which an ML AMC approach is designed with immunity to channel modeling uncertainty. Leveraging its online learning ability, such an AkNN-based AMC classifier offers salient capabilities of both sustainable self-enhancement and broad applicability to various operation scenarios. Next, aiming at higher implementation efficiency, we adopt complexity-reduction strategies and present a dimensionality-reduced and data-clustered AkNN (DRDC-AkNN) AMC classifier. Finally, we demonstrate by simulations using actual data collected from three lake experiments that the proposed ML approaches have superior performance over traditional model-based methods.
1 Introduction
Ocean, as the origin of life, covers two-thirds of our planet, supports 90% of the world’s freight traffic, and contains a vast amount of underutilized resources. However, human understanding of the deep ocean is even poorer than that of outer space. Therefore, growing attention needs to be devoted to the research and exploitation of the mysterious ocean. Recently, thanks to the rapid development of related technologies, underwater acoustic communication (UAC) systems have found broad applications, such as environmental monitoring, offshore exploration, disaster detection, and national security [1].
Traditional UAC systems are generally equipped with a fixed set of physical layer (PHY) parameters, corresponding to a single modulation and coding scheme (MCS). However, underwater acoustic (UWA) channels vary temporally and spatially. As a result, it is impossible for a UAC system to cope well with the large variety of UWA channel dynamics using only one fixed MCS [2–5]. To this end, the adaptive modulation and coding (AMC) technique has emerged as an appealing avenue for UAC efficiency improvement: it tracks channel dynamics and adaptively switches among a set of MCSs to achieve the most efficient transmission.
In 1968, as the origin of the AMC technology, Hayes proposed an adaptive scheme where the transmitter uses the channel state information (CSI) fed back from the receiver to adjust parameters [6]. Since then, many research efforts on applying AMC to terrestrial wireless communications have been made. In 1992, Webb presented a variable-rate quadrature amplitude modulation (QAM) system, which offered an attractive solution for bandwidth-restricted microcellular networks [7, 8]. In [9], a bit error rate (BER) comparison was made among various modulation schemes used for AMC, which yielded the optimal signal-to-noise ratio (SNR) range of each scheme. In [10], adaptive systems were introduced by evaluating the performance of some simple QAM schemes in both perfectly known and predicted channels. Moreover, in [11], a cross-layer combination of AMC with the truncated automatic repeat request (ARQ) technology was developed for the communications of secondary users in cognitive radio networks, which can adapt well to the radio conditions and make full use of the available resources.
Unfortunately, in contrast to terrestrial wireless communications, UACs have to face several unique challenges caused by the undesirable UWA channel characteristics, such as the much more complex spatiotemporal channel variability, more severe multipath fading, and more limited bandwidth [12]. As a result, the development of AMC in UACs is far behind its terrestrial-based counterpart. Some existing results are summarized as follows. Stojanovic used the product of Doppler spread and multipath spread as a criterion for switching between coherent and noncoherent communication modes [13]. For UWA orthogonal frequency-division multiplexing (OFDM) systems, Wan et al. utilized the effective signal-to-noise ratio (ESNR) as a new performance metric for AMC [14]. In [15], Shen et al. selected SNR as the switching metric and presented an adaptive multi-mode orthogonal multi-carrier (MOMC) technology.
So far, underwater AMC research has generally focused on model-based methods. Unfortunately, although extensive efforts have been devoted to UWA channel modeling, there is still no general channel model that fits accurately in various practical scenarios (detailed analysis will be given in Section 2.1), due to the high uncertainty and complexity of UWA channels. As such, model-based AMC methods can be either insufficient or inaccurate in practical UAC scenarios. To address this problem, we resort to data-driven machine learning (ML) technology to empower underwater AMC with intelligence, so as to offer immunity to channel modeling uncertainty and thus enable flexible system optimization and sustainable performance improvement. ML methods can make predictions or decisions from data observations without the aid of a specific model.
The recent revival of the ML technology has found wide applications in broad fields, including image/audio processing, economics, and computational biology [16]. Moreover, there are also some interesting results obtained by introducing ML into the field of communications. In terrestrial radios, deep learning (DL) has been advocated for demodulation in OFDM systems [17]. For 5G wireless systems, an efficient online CSI prediction scheme which learns the historical data via deep neural networks (DNNs) has been designed [18]. For noncooperative communication systems, a DL-based method was proposed to perform automatic modulation classification [19], while for UACs, an adaptive and energy-efficient routing protocol for underwater delay/disruption tolerant sensor networks has been proposed [20]. Moreover, NATO has developed a decision tree-based approach that is capable of choosing the modulation scheme with the highest data rate among several predefined single-carrier signals depending on CSI [21]. In [22], a reinforcement learning-based adaptive transmission strategy was presented for time-varying UWA channels, which formulates the adaptive problem as a partially observable Markov decision process. These early successes illuminate the feasibility and potential benefits of applying ML in wireless communication systems.
In this paper, we focus on a novel ML-based AMC framework for UACs. Therein, the AMC procedure is formulated as a classifier that has been trained on a pre-organized and labeled database (i.e., training set). After performing model training to establish the functional mapping, we treat such a classifier as a black box, with the input being the real-time channel state and the output being the corresponding optimal MCS. Further, we adopt an online learning mechanism to enable continuous classifier updating during AMC operation. In doing so, our strategy has salient capabilities of both sustainable self-enhancement and broad applicability to diverse UAC scenarios. The main contributions of this paper are summarized as follows:

This paper resorts to the perspective of ML and gives a complete ML AMC framework for UACs, which consists of not only the specific classification algorithm but also the procedure of data preprocessing and labeling. The latter is essential to the success of ML but is often overlooked in generic ML literature.

A new online learning attention-aided k-nearest neighbor (AkNN) AMC classifier based on supervised learning is proposed, which enables a novel implementation of AMC with immunity to channel modeling uncertainty.

Aiming at higher implementation efficiency, we further design an improved approach called the dimensionality-reduced and data-clustered AkNN (DRDC-AkNN) AMC classifier, which yields lower complexity by performing feature dimensionality reduction and training set condensation.

The above contributions have been verified by extensive simulations using actual data collected from lake experiments.
The remainder of this paper is organized as follows. Section 2 analyzes the reason for the lack of a general UWA channel model and then defines the system model of ML-based AMC. Section 3 describes our proposed AkNN-based AMC method. Section 4 focuses on the implementation efficiency improvement of the AkNN-based method and designs the DRDC-AkNN AMC classifier. Section 5 presents the simulation results. Finally, Section 6 concludes this paper and discusses some future directions.
2 System model
In this section, we first explore the reason for the current lack of a general model for UWA channels. Then, we define the system model of AMC in UACs. Next, we formulate the AMC procedure as a classification problem from an ML perspective and discuss the considered ML algorithm, followed by an introduction of the MCSs to be used.
2.1 Analysis of UWA channel model
Since almost all electromagnetic frequencies are severely absorbed and dispersed in water, underwater information transmission is conducted dominantly by acoustic waves [23]. As summarized in Table 1, UWA channels suffer from much more complicated distortions and interferences than their terrestrial wireless counterparts, and thus pose serious performance-degrading factors to UACs.
Recently, due to its interpretability and simplicity, the ray-tracing model has been widely used to formulate the propagation of UWA waves; it assumes that the sound energy propagates along some eigenrays from the source (i.e., transmitter, denoted by TX) to the destination (i.e., receiver, denoted by RX). Therein, following Snell’s law, acoustic rays always bend toward the region with lower propagation velocity. Letting T, S, and z denote temperature, salinity, and depth, respectively, we can calculate the speed of UWA waves (denoted by c) empirically as

$$ c = 1449.2 + 4.6T - 0.055T^{2} + 0.00029T^{3} + (1.34 - 0.01T)(S - 35) + 0.016z. $$
This formula reveals that any change in these measurements, which are crucial for UAC, will result in a variation of c, which can induce refraction of acoustic ray paths [3, 24]. Consequently, the complexity of UWA propagation comes from the irregularity of the sound speed profile (SSP), which describes the speed of sound in water at different vertical levels. However, as the marine environment is a typically inhomogeneous medium with strong dynamics from seasonal changes and day-night temperature variations, there is still no widely accepted method that can effectively and accurately predict the complicated SSP variations [25]. Such complexity further makes it quite challenging to construct accurate and general UWA channel models in an affordable manner.
2.2 System model for AMC in UACs
Considering a node-to-node UAC link from the TX to the RX, we define the system model of AMC as depicted in Fig. 1. Once receiving a data frame that has been encapsulated into acoustic waveforms at the jth (j∈{1,2,...,J}) time instance, the RX first performs channel estimation to sense the channel condition \(\mathbf {h}_{j}\in \mathcal {H}\), where \(\mathcal {H}\) represents the set of all the observed h_{j}. Each h_{j} can be represented by a P-dimensional CSI feature set \(\mathbf {f}_{j} \in \mathbb {R}^{P}\), in the form of

$$ \mathbf{f}_{j} = \left[f_{j1}, f_{j2}, \ldots, f_{jP}\right]^{T}, $$
where f_{jp} is the pth measured CSI feature. Then, according to the obtained f_{j}, a proper MCS \(m_{i} \in \mathcal {M}, i \in \{1,2,...,I\}\) that best matches it will be selected as the optimal solution m_{opt} under a specific policy π

$$ m_{\text{opt}} = \pi\left(\mathbf{f}_{j}\right), $$
and then fed back to the TX. Given the harsh UWA channel dynamics, it is necessary to develop and maintain a finite set of allowable MCS realizations (i.e., \(\mathcal {M}\)) for trading off throughput and reliability in practice, where each m_{i} defines a channel coding scheme with rate R_{c} plus a modulation scheme with rate R_{b}, and the corresponding actual physical layer data rate is [26, 27]

$$ R_{i} = R_{c} \cdot R_{b}. $$
Next, once notified of a new m_{opt}, the TX will switch to this scheme immediately for subsequent transmissions.
Note that the above-mentioned policy π is a mapping from channel quality measurements to the MCS to be picked [28]. Depending on the application scenario, π can aim either to maximize the throughput or to minimize the bit error rate (BER). In this paper, for the purpose of maximizing the link throughput R_{i} while satisfying a certain BER constraint φ (i.e., BER_{i}≤φ), the index of the desired solution for a given channel state is selected according to

$$ i_{\text{opt}}=\arg\max_{i\in\{1,2,...,I\}} R_{i}, \quad \text{s.t.}\ \text{BER}_{i}\leq\varphi. $$(5)
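As a minimal sketch, the throughput-maximizing policy under the BER constraint can be implemented as follows. The candidate MCS table and BER estimates below are illustrative placeholders, not the actual schemes of Table 3.

```python
# Illustrative sketch of the MCS-selection policy: among candidate MCSs
# whose estimated BER satisfies BER_i <= phi, pick the one with the
# highest physical-layer data rate R_i = Rc * Rb. The MCS table here is
# a made-up example, not the paper's Table 3.

def select_mcs(mcs_table, ber_estimates, phi=1e-3):
    """Return the index of the highest-rate MCS with BER_i <= phi,
    falling back to the most robust (lowest-rate) MCS if none qualifies."""
    rates = [rc * rb for rc, rb in mcs_table]
    feasible = [i for i, ber in enumerate(ber_estimates) if ber <= phi]
    if not feasible:
        return min(range(len(rates)), key=lambda i: rates[i])
    return max(feasible, key=lambda i: rates[i])

# (Rc, Rb) pairs for three illustrative MCSs, ordered by increasing rate
mcs_table = [(0.5, 200.0), (0.5, 400.0), (0.5, 800.0)]
best = select_mcs(mcs_table, ber_estimates=[1e-5, 5e-4, 2e-2], phi=1e-3)
```

Here the fastest MCS violates the BER constraint, so the policy settles on the second scheme, the highest-rate feasible one.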
2.3 AMC as classification: an ML perspective
To improve the efficiency of AMC systems, designing an appropriate MCS switching algorithm is of great importance. Existing AMC methods for terrestrial wireless communications can be categorized into two groups: one is based on instantaneous CSI (ICSI) obtained from channel estimation, while the other is based on statistical link information (SLI) inferred through long-term observations or historical knowledge. Unfortunately, due to the complicated SSP variation in the UWA environment, the ICSI-based methods often fail to work effectively for UAC, since there is no general channel model that accurately represents the complicated UWA propagation effects. Meanwhile, the SLI-based methods hinge on long-term channel statistics and thus suffer severely from slow response to fast dynamics and sudden changes in UAC links. These drawbacks of conventional methods motivate us to develop ML-aided AMC approaches for performance improvement.
Turning to the perspective of ML, the AMC procedure can be formulated as a classification problem that aims to partition \(\mathbb {R}^{P}\) into non-overlapping feasible regions for each m_{i}. As Fig. 2 depicts, AMC is equivalent to a classifier G(·)

$$ m_{\text{opt}} = G\left(\mathbf{f}_{j}\right). $$
As such, we further propose a novel framework of ML-based AMC for UAC systems. As illustrated in Fig. 3, it is appealing for tracking and adapting to complex UWA scenarios, with immunity to channel modeling uncertainty.
2.4 Classification algorithm for MLbased AMC
Generally, typical ML algorithms can be classified into four broad categories depending on the nature of the dataset for learning or the feedback mechanism available to the learning system. They are supervised learning (SL), unsupervised learning (UL), semi-supervised learning (SSL), and reinforcement learning (RL), where SL algorithms are more adept at solving classification problems due to their ability to infer an input-output mapping function from labeled training data.
In this paper, we adopt the kNN algorithm to investigate the potential of ML for AMC in UACs and to obtain the AMC classifier G(·). As a nonparametric method among the most popular SL approaches, kNN is often used as a benchmark for more complex algorithms, such as the support vector machine (SVM) and the deep neural network (DNN), thanks to its simplicity and robustness, which lead to acceptable results even with small training sets [29].
Assume a training set T

$$ T = \left\{\left(\mathbf{x}_{n}, y_{n}\right)\right\}_{n=1}^{N}, $$
where y_{n} denotes the labeled membership of each observation and x_{n}=(f_{n1},f_{n2},...,f_{nP}) represents the associated feature values. Once given a query ω, the kNN algorithm first searches in T to find its k nearest neighbors according to some specific distance measurement d(·), where the Euclidean distance

$$ d\left(\boldsymbol{\omega}, \mathbf{x}_{n}\right) = \left\|\boldsymbol{\omega} - \mathbf{x}_{n}\right\|_{2} = \sqrt{\sum_{p=1}^{P}\left(f_{\omega p} - f_{np}\right)^{2}} $$
is the one most widely utilized. Then, kNN proceeds to the voting stage and labels ω with the class y_{ω} to which the majority of the k neighbors belong. Such a process can be expressed as

$$ y_{\omega} = \arg\max_{y}\sum_{k}\delta\left(y, y_{k}\right), $$
where the summation runs over the k nearest neighbors of ω and δ is the Kronecker delta function that equals 1 if y=y_{k} and 0 otherwise.
However, since different distances reflect different degrees of similarity, the information provided by each of the k nearest neighbors to support the classification is obviously of different importance. Thus, directly adopting the conventional kNN algorithm, where each neighbor has an equal weight in the voting stage, will inevitably degrade the classification performance or even lead to incorrect results. To address this issue, we resort to the attention mechanism and propose the AkNN algorithm for the underwater AMC task. As a cognitive process of selectively concentrating on a few features while ignoring others, the attention mechanism can help ML models assign different weights to each part of the input, extract more critical and important information, and make more accurate judgments without incurring more costs in model computation and storage [30, 31].
In the AkNN algorithm, the specific job of the attention mechanism is to produce a set of w_{k} for the concerned neighbors, where w_{k} denotes the weight of the kth nearest neighbor of ω. Then, by assigning nearer neighbors higher w_{k}, attention can dynamically highlight the importance of different neighbors in the voting stage. Thus, we have

$$ y_{\omega} = \arg\max_{y}\sum_{k} w_{k}\,\delta\left(y, y_{k}\right). $$
Note that attention weights can be trained, or predefined based on some sort of correlation metric, or even be Gaussian shaped with tunable parameters. In this work, we set w_{k} to the Squared Inversion (SI) kernel, i.e.,

$$ w_{k} = \frac{1}{d^{2}\left(\boldsymbol{\omega}, \mathbf{x}_{k}\right)}. $$
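A minimal NumPy sketch of the attention-aided vote with the SI kernel is given below; the toy feature data and the small k are illustrative only.

```python
import numpy as np

def aknn_predict(X_train, y_train, query, k=15, eps=1e-12):
    """Attention-aided kNN vote: each of the k nearest neighbors
    contributes weight w_k = 1/d^2 (Squared Inversion kernel).
    eps guards against division by zero for exact matches."""
    d = np.linalg.norm(X_train - query, axis=1)      # Euclidean distances
    nn = np.argsort(d)[:k]                           # indices of k nearest
    weights = 1.0 / (d[nn] ** 2 + eps)               # SI attention weights
    scores = {}
    for idx, w in zip(nn, weights):                  # weighted voting stage
        scores[y_train[idx]] = scores.get(y_train[idx], 0.0) + w
    return max(scores, key=scores.get)

# Toy example: two well-separated classes in a 2-D feature space
X = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [1.1, 1.0]])
y = np.array([0, 0, 1, 1])
label = aknn_predict(X, y, np.array([0.05, 0.0]), k=3)
```

With k=3, the two very close class-0 neighbors dominate the vote even though a class-1 point is also among the three nearest, so the query is labeled 0.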
2.5 MCS model
In this work, we adopt convolutional-coded multi-carrier multiple frequency shift keying (CC-MC-MFSK) as the transmission scheme to evaluate the proposed ML-based AMC system, where Fig. 4 depicts its structure.
2.5.1 MC-MFSK
With the advances in UAC technologies, considerable efforts have been made in the design of modulation schemes. From FSK and phase-shift keying (PSK), through orthogonal frequency-division multiplexing (OFDM), to the latest orthogonal signal-division multiplexing (OSDM) [32–34], these modulation schemes have been investigated extensively and proven useful in the harsh oceanic environment.
In this paper, we adopt the MC-MFSK scheme, which combines the techniques of MFSK and OFDM to transmit information in parallel over multiple orthogonal subchannels [35, 36]. As such, this method not only inherits the robust performance of MFSK but also integrates the high spectral efficiency of OFDM. Moreover, by introducing the parameter of frequency diversity, an additional degree of freedom in the MCS table design can be obtained to broaden its scope of application.
2.5.2 Convolutional code
To reduce the transmission errors caused by the noise and interference in UWA channels, there has been extensive work on the design of error-correcting codes (ECCs), which improve error-control capability at the price of adding redundancy to the original message. Among existing ECC approaches, such as Reed-Solomon (RS) codes, low-density parity-check (LDPC) codes, and turbo codes, the simple convolutional code with Viterbi decoding is selected as the coding scheme in the following discussions, thanks to its good tradeoff between error-correcting performance and implementation complexity. To be more specific, the adopted convolutional code has coding rate R_{c}=1/2, constraint length 7, and generator polynomials (171,133).
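A toy encoder for such a rate-1/2, constraint-length-7 code with the octal generators 171 and 133 can be sketched as follows. Conventions for bit ordering and tail flushing vary across implementations, so this is an illustration of the structure rather than a bit-exact reproduction of the paper's transmitter.

```python
# Sketch of a rate-1/2 convolutional encoder with constraint length K=7
# and the (171, 133) octal generator polynomials. Each input bit is
# shifted into a 7-bit register; the two output bits are parities of the
# register masked by each generator. Bit-order conventions are assumed.

def conv_encode(bits, g1=0o171, g2=0o133, K=7):
    """Encode a list of 0/1 bits; returns 2*(len(bits)+K-1) coded bits."""
    state = 0
    out = []
    for b in bits + [0] * (K - 1):           # flush with K-1 zero tail bits
        state = ((state << 1) | b) & ((1 << K) - 1)
        out.append(bin(state & g1).count("1") % 2)  # first parity stream
        out.append(bin(state & g2).count("1") % 2)  # second parity stream
    return out

coded = conv_encode([1, 0, 1, 1])
```

Four message bits plus six tail bits yield twenty coded bits, consistent with the rate-1/2 expansion.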
3 AkNN AMC for UAC
In this section, we present a novel ML framework for AMC in UAC systems, where an online learning AkNN classifier serves as the switching method for predicting the optimal MCS to maximize the link throughput.
3.1 System assumptions
Specifically, we consider the following assumptions in our AkNN AMC method:

Accurate channel knowledge. We assume that through channel estimation, the RX obtains CSI accurately, thus enabling a highquality training process.

Perfect feedback. Generally, RX informs TX of the selected MCS by sending a message through the feedback channel. In this paper, we assume an errorfree feedback stage.
3.2 AkNN classifier
Figure 5 illustrates the architecture of our AkNN AMC method, where a two-stage process is conducted. During the offline training stage, the mapping function between the input CSI and the output MCS modes is established by training the AkNN classifier iteratively until a certain stopping criterion is satisfied, e.g., reaching an expected model prediction accuracy. Therein, the training set is constructed based on the signal samples generated from the predefined \(\mathcal {M}\) for various kinds of \(\mathcal {H}\). During the online deployment stage, the trained classifier is applied to analyze the real-time input CSI vector ω and generate the optimal MCS that best matches the practical UAC channel conditions. Further, an online learning mechanism is incorporated to update the AMC classifier as new data arrive, so as to constantly improve the applicability of the model. The AkNN AMC method is summarized in Algorithm 1.
In the framework of the proposed AkNN AMC discussed above, some critical steps need further clarification. Next, we elaborate on two techniques of model training: feature set selection and training set construction.
3.3 Feature set selection
To apply AkNN to AMC, we start with collecting a set of synthetic and real labeled data from both simulations and field experiments. Without loss of generality, various UWA channel models and test scenarios are used to generate the input channel data, each of which is represented by a P-dimensional feature set.
To support good training accuracy, P should be assigned a large value to provide enough information; otherwise, the capability of our AkNN classifier will be seriously restricted. However, due to the so-called curse of dimensionality, each dimension added to \(\mathbb {R}^{P}\) leads to a significant computational complexity increase in both feature extraction and model training [37]. As such, there is an important tradeoff between information sufficiency and computational efficiency. To this end, the current practice is to preset the feature space by experience or prior knowledge. In this work, we construct a six-dimensional feature set \(\mathbf {f} \in \mathbb {R}^{6}\) to represent different UWA channel conditions by extracting the following CSI parameters: signal-to-noise ratio (SNR), time delay spread (τ_{max}), time delay of the strongest path (τ_{hmax}), total power of the first three paths (e_{3}), total power of all paths (e_{total}), and the normalized amplitude of the first path (h_{1}). Note that e_{3}, e_{total}, and h_{1} are derived from the normalized raw channel impulse response at each observation instance, where the amplitude of each path has been scaled to [0,1] by dividing by the absolute value of the strongest path amplitude. Besides, e_{3} and e_{total} reflect the energy distribution of the first three paths and of all paths, respectively, which relates to the complexity of the channel structure.
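For illustration, these six features might be extracted from an estimated impulse response roughly as follows. The path-detection threshold (10% of the peak) and the sampling convention are assumptions for this sketch, not values taken from the paper.

```python
import numpy as np

def extract_features(h, snr_db, fs):
    """Illustrative extraction of the six CSI features from a raw
    channel impulse response h sampled at rate fs (Hz). The 10%-of-peak
    path-detection threshold is an assumption, not the paper's choice."""
    amp = np.abs(h) / np.max(np.abs(h))          # scale amplitudes to [0, 1]
    paths = np.where(amp >= 0.1)[0]              # taps treated as paths
    tau_max = (paths[-1] - paths[0]) / fs        # time delay spread
    tau_hmax = np.argmax(amp) / fs               # delay of strongest path
    powers = np.sort(amp[paths] ** 2)[::-1]      # per-path powers, descending
    e3 = powers[:3].sum()                        # power of first three paths
    e_total = powers.sum()                       # power of all paths
    h1 = amp[paths[0]]                           # amplitude of first path
    return np.array([snr_db, tau_max, tau_hmax, e3, e_total, h1])

f = extract_features(np.array([0.0, 0.8, 0.3, 0.05, 0.1]), snr_db=12.0, fs=1.0)
```

The returned vector is one row of the training data before labeling and feature scaling.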
3.4 Training set construction
As the foundation of ML techniques, training data is an essential set of input information that enables ML algorithms to learn the underlying principles and extract key features. For the proposed AkNN AMC classifier, the constructed training set has to involve the corresponding BER, denoted by BER_{ij}(m_{i},f_{j}), of each m_{i} in all kinds of f_{j}. Once the required information of each observation

$$ \mathbf{o}_{ij} = \left(\mathbf{f}_{j}, m_{i}, \text{BER}_{ij}\right) $$

is made available, we first store the observations in the corresponding subsets according to m_{i}, i.e.,

$$ T_{0m_{i}} = \left\{\mathbf{o}_{ij}\right\}_{j=1}^{J}, $$

and then merge all subsets to form the training set:

$$ T_{0} = \bigcup_{i=1}^{I} T_{0m_{i}}. $$
So far, an original training set has been successfully constructed, as illustrated in Fig. 6. However, as an important step before training starts, further preprocessing of T_{0} is needed to turn the raw data into a cleaner and more suitable format for the AMC task.
3.4.1 “One-to-one” mapping
Each original \(T_{0m_{i}}\) includes the observations of m_{i} in all the possible channels, thus making the mapping relationship between \(\mathcal {M}\) and \(\mathcal {H}\), provided by the whole training set, one-to-many. Unfortunately, such a mapping relation will significantly confuse the classifier and make it impossible to determine the optimal MCS for each specific f_{j} through training. To solve this problem, we use Eq. (5) to modify the sets and only retain information of the desired m_{opt}, so as to obtain a one-to-one mapping function for model training. Then, the processed \(T_{0m_{i}}\) can be expressed as

$$ T_{m_{i}}^{\prime} = \left\{\left(\mathbf{x}_{n}, y_{n}=m_{i}\right)\right\}_{n=1}^{N_{i}}, $$
with N_{i} denoting the number of observations retained in the ith subset.
3.4.2 Feature scaling
Since the various features included in T_{0} can hardly have a consistent magnitude, a feature with a wider range of values will dominate the distance calculated by AkNN, which means that the influence of other features will be overpowered and a significant loss in training accuracy will result. To address this issue, we perform feature normalization across all variables. Specifically, for each f_{p}, its normalized counterpart \(f_{p}^{\dag }\) can be calculated via

$$ f_{p}^{\dag} = \frac{f_{p} - f_{p\min}}{f_{p\max} - f_{p\min}}, $$

with f_{pmin} and f_{pmax} denoting the minimum and maximum values of f_{p}, respectively. After feature scaling, a new training set T with normalized feature quantities is obtained. Letting \(N=\sum _{i=1}^{I}{N_{i}}\) represent the total number of observations belonging to the different \(T_{m_{i}}^{\prime }\), we have the data matrix of the whole channel observations as

$$ \mathbf{X} = \left[\mathbf{x}_{1}, \mathbf{x}_{2}, \ldots, \mathbf{x}_{N}\right]^{T} \in \mathbb{R}^{N \times P}. $$
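The feature-scaling step can be sketched as a column-wise min-max normalization of the data matrix:

```python
import numpy as np

def minmax_scale(X):
    """Column-wise min-max normalization of the N x P data matrix X,
    mapping each feature f_p to [0, 1] as in the feature-scaling step.
    Constant columns are left at 0 to avoid division by zero."""
    f_min = X.min(axis=0)
    f_max = X.max(axis=0)
    span = np.where(f_max > f_min, f_max - f_min, 1.0)
    return (X - f_min) / span

# Two features with very different magnitudes (e.g., SNR in dB and a
# delay spread in seconds) end up on the same [0, 1] scale.
X = np.array([[10.0, 0.001], [20.0, 0.003], [30.0, 0.002]])
X_scaled = minmax_scale(X)
```

Without this step, the first column would dominate every Euclidean distance computed by AkNN.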
Note that all the training sets discussed in this paper have been preprocessed by the above-mentioned two steps.
4 An efficiency-enhancing AkNN AMC approach
With the ability to implicitly learn the uncertain and complex UWA channel, the proposed AkNN AMC classifier achieves higher efficiency and wider applicability than traditional model-based approaches. However, before deploying it in practical scenarios, we should pay enough attention to the inherent complexity of the kNN algorithm and make efforts to improve its implementation efficiency.
4.1 Complexity of the AkNN classifier
The implementation complexity of the AkNN classifier can generally be divided into two major aspects: (1) the storage complexity of large amounts of training data and (2) the computational complexity of searching for the nearest neighbors. Specifically, to implement the proposed classifier in practice, a major difficulty is that enough memory must be reserved to store all the training data. Moreover, since the search for the nearest neighbors requires computing and sorting the distances to all stored observations, the proposed classifier will be computationally intensive when facing huge amounts of data or a high-dimensional feature space. These adverse characteristics pose significant challenges for the proposed AkNN AMC method to achieve good performance in actual deployment.
4.2 DRDC-AkNN classifier
To overcome the aforementioned challenges, we design an improved approach with lower complexity than the previous solution, called the DRDC-AkNN classifier. Figure 7 illustrates the architecture of this new approach by highlighting its two significant improvements: (1) feature dimensionality reduction through principal component analysis (PCA) and (2) training set condensation via k-means data clustering.
4.2.1 Dimensionality reduction
As a frequently used technique in data analysis, PCA provides a tool to seek linear combinations of the original variables that retain maximal variance and thus minimize information loss over the feature transformation. In the DRDC-AkNN classifier, we adopt PCA to reduce the complexity induced by high feature dimensionality. Let X^{′} denote the column-wise centralized form of the original N×P data matrix X, which contains N observations, each represented by a P-dimensional feature set. We perform singular value decomposition (SVD) on X^{′} and obtain

$$ \mathbf{X}^{\prime} = \mathbf{U}_{N \times N}\boldsymbol{\Sigma}_{N \times P}\mathbf{V}_{P \times P}^{T}, $$
where the singular values in Σ_{N×P} are sorted in descending order. Then, the columns of U_{N×N}Σ_{N×P} are the principal components (PCs), the PC loadings are represented by the corresponding columns of V_{P×P}, and the sample variance of the qth PC can be calculated as \(\Sigma _{qq}^{2}/(N-1)\) [38]. Generally, holding more than an expected ratio ψ of the total variance, i.e.,

$$ \frac{\sum_{q=1}^{Q}\Sigma_{qq}^{2}}{\sum_{q=1}^{P}\Sigma_{qq}^{2}} \geq \psi, $$
the first Q PCs are retained to compactly represent the original data for training. In this way, a great dimensionality reduction can be achieved by PCA through converting \(\mathbf {f} \in \mathbb {R}^{P}\) into a lower dimensional subspace \(\bar {\mathbf {f}} \in \mathbb {R}^{Q}\), i.e.,

$$ \bar{\mathbf{f}} = \mathbf{V}_{P \times Q}^{T}\,\mathbf{f}, $$

where V_{P×Q} consists of the first Q columns of V_{P×P}.
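A minimal sketch of this SVD-based dimensionality reduction, assuming NumPy and the variance-ratio criterion above:

```python
import numpy as np

def pca_reduce(X, psi=0.95):
    """SVD-based PCA: center X column-wise, keep the smallest Q whose
    leading PCs hold at least a ratio psi of the total variance, and
    return the reduced data together with the loading matrix V_Q."""
    Xc = X - X.mean(axis=0)                      # column-wise centralization
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    var_ratio = np.cumsum(s ** 2) / np.sum(s ** 2)
    Q = int(np.searchsorted(var_ratio, psi) + 1) # smallest Q with ratio >= psi
    V_Q = Vt[:Q].T                               # first Q PC loadings (P x Q)
    return Xc @ V_Q, V_Q

# Synthetic 6-feature data with one redundant (linearly dependent) feature
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))
X[:, 3] = X[:, 0] * 2.0
X_red, V_Q = pca_reduce(X, psi=0.99)
```

A new query is mapped into the reduced space with the same `V_Q` before the AkNN distance computation.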
4.2.2 Data clustering
Another efficiency-improving measure is to cluster each class of training samples and then use only some representative observations for model training. Considering its efficiency and robustness in cluster analysis, we adopt the k-means technique to perform data condensation. Given a training set as depicted in Fig. 6, k-means is performed in each subset \(T_{m_{i}}\) to partition the N_{i} observations into V (V≤N_{i}) clusters as

$$ T_{m_{i}} = \left\{\mathbf{s}_{i1}, \mathbf{s}_{i2}, \ldots, \mathbf{s}_{iV}\right\}, $$
with c_{iv} denoting the corresponding centroid of each s_{iv}. Specifically, such a procedure can be accomplished by iterating the following two steps until the stopping criterion is satisfied, i.e., the assignments no longer change when the centroids are updated:

Step 1 (data assignment). Assign each observation to the cluster of the nearest c_{iv}, where the squared Euclidean distance is used, i.e.,
$$ s^{(t)}=\arg\min_{\mathbf{c}_{iv}\in \mathbb{C}}\left\|\mathbf{o}_{in_{i}}-\mathbf{c}_{iv}^{(t)}\right\|_{2}^{2}, $$(23)

Note that the initial c_{iv} are some randomly selected points from \(T_{m_{i}}\).

Step 2 (centroid update). Once an assignment is finished, recalculate the mean of each new cluster to update its centroid as
$$ \mathbf{c}_{iv}^{(t+1)}=\frac{1}{\left\|\mathbf{s}^{(t)}\right\|_{0}}\sum_{\mathbf{o}_{iv}\in{\mathbf{s}^{(t)}}}\mathbf{o}_{iv}. $$(24)
Finally, using the obtained centroids to represent each corresponding cluster, we obtain an efficient form of the training set

$$ T_{\text{DRDC}} = \left\{\left(\mathbf{c}_{iv}, m_{i}\right)\right\}_{i=1,\,v=1}^{I,\,V}. $$
Figure 8 shows its architecture. Therein, the number of features in the training set is first reduced from NP to NQ via DR and then further decreased to VIQ after DC. Assuming that b-bit memory is required for the storage of each feature or label, we compare the complexity of the DRDC-AkNN AMC classifier with that of the previous AkNN approach in Table 2. Remarkably, this novel design with enhanced computational efficiency is demonstrated to be effective.
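The two-step clustering procedure of Eqs. (23) and (24), applied to one subset, can be sketched as follows; the random initialization and toy data are illustrative.

```python
import numpy as np

def kmeans_condense(T_i, V, n_iter=100, seed=0):
    """Condense one subset T_{m_i} (an N_i x Q array) into V centroids by
    Lloyd-style k-means, alternating the assignment step (Eq. (23)) and
    the centroid-update step (Eq. (24)) until assignments stabilize."""
    rng = np.random.default_rng(seed)
    centroids = T_i[rng.choice(len(T_i), V, replace=False)]  # random init
    for _ in range(n_iter):
        # Step 1: assign each observation to its nearest centroid
        d2 = ((T_i[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        assign = d2.argmin(axis=1)
        # Step 2: recompute each centroid as the mean of its cluster
        new_centroids = np.array([
            T_i[assign == v].mean(axis=0) if np.any(assign == v)
            else centroids[v] for v in range(V)])
        if np.allclose(new_centroids, centroids):  # assignments stabilized
            break
        centroids = new_centroids
    return centroids

# Toy subset: two tight groups of observations in a 2-D reduced space
T_i = np.vstack([np.zeros((20, 2)), np.ones((20, 2)) * 5.0])
C = kmeans_condense(T_i, V=2)
```

Only the V centroids (with their class label m_i) are kept for training, which is where the storage and search savings of Table 2 come from.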
4.2.3 Online learning
Moreover, with the ability of online learning, the DRDC-AkNN AMC method is able to continuously improve its understanding of the UWA environment. Therein, T_{DRDC} is updated by tuning the centroid c_{iv} of each cluster as a new sample ω arrives, i.e.,

$$ \mathbf{c}_{iv} \leftarrow \frac{n_{iv}\,\mathbf{c}_{iv} + \bar{\boldsymbol{\omega}}}{n_{iv}+1}, $$
where n_{iv} is the number of observations included in the cluster before the new arrival, and \(\bar {{\omega }}\) is the DR-processed ω. The DRDC-AkNN AMC procedure is summarized in Algorithm 2.
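Since the centroid is the running mean of its cluster, the online update amounts to folding the new DR-processed sample into that mean; a sketch:

```python
import numpy as np

def update_centroid(c_iv, n_iv, omega_bar):
    """Online-learning update: fold the new (DR-processed) sample
    omega_bar into the running mean of its cluster, then bump the
    per-cluster observation count."""
    c_new = (n_iv * c_iv + omega_bar) / (n_iv + 1)
    return c_new, n_iv + 1

# A cluster of 4 observations with centroid (1, 2) absorbs a new sample
c = np.array([1.0, 2.0])
c, n = update_centroid(c, n_iv=4, omega_bar=np.array([6.0, 7.0]))
```

This keeps each centroid exactly equal to the mean of all samples ever assigned to its cluster, without storing those samples.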
5 Simulation results
To evaluate the performance of the two proposed ML AMC approaches, several simulations have been conducted in CC-MC-MFSK UAC systems with three predefined MCSs, as depicted in Table 3. For data gathering, we collected a large set of real-world channel measurements from three previous field experiments conducted at Ganhe reservoir (October 2011), Fuxian lake (July 2013), and Danjiangkou reservoir (June 2016) [39]. Figure 9 shows the configurations, and Table 4 provides the mean value of each selected feature in these experiments. These data are then organized and labeled. Specifically, for each channel condition, the corresponding MCS is labeled by testing each candidate and selecting the best one according to Eq. (5). Eventually, a dataset of 1656 observations is made available, with labels covering all three MCS values.
Further, according to different simulation purposes, two categories of training sets are constructed, as depicted in Tables 5 and 6, respectively. The first category is used to train and optimize the AMC classifier, aiming to validate the attention mechanism, select the k value, etc. To this end, each training set is a randomly extracted part of the whole 1656 observations. The second category is used to evaluate the online learning ability of this AMC approach when deployed in practice, where each training set includes all the observations from a specific lake environment. Notably, throughout the simulations, we adopt k-fold cross-validation with k=10 [40] to calculate the corresponding classification accuracy (η) for AMC.
5.1 Analysis of the AkNN AMC
5.1.1 Impact of different k values
As a key hyperparameter in kNN, k is the number of instances taken into account when determining affinity with the different classes. However, a value of k that leads to high prediction accuracy is challenging to derive. Specifically, small values of k may amplify undesired noise effects, while large values make the system computationally expensive and can even degrade accuracy once k exceeds a certain point. In Fig. 10, for k values ranging from 1 to 55, we investigate the performance of the AkNN classifier on all the training sets listed in Table 5. Therein, η is found to improve rapidly as k increases at first. However, this trend slows down and almost saturates once k is greater than 15. Consequently, we set k to 15 in this work.
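To make the role of k concrete, the sweep can be reproduced in miniature with a plain majority-vote kNN on synthetic two-class data (the attention weighting of AkNN is omitted, and the feature dimensions and class structure here are invented for illustration):

```python
import numpy as np

def knn_predict(X_train, y_train, x, k):
    """Plain majority-vote kNN: label x by the most common class among
    its k nearest training points (AkNN's attention weighting omitted)."""
    d = np.linalg.norm(X_train - x, axis=1)   # distances to all training points
    nearest = y_train[np.argsort(d)[:k]]      # labels of the k nearest
    return np.bincount(nearest).argmax()      # majority vote

# synthetic two-class data standing in for channel-feature observations
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (100, 3)), rng.normal(3, 1, (100, 3))])
y = np.array([0] * 100 + [1] * 100)

# leave-one-out accuracy for several candidate k values
accs = {}
for k in (1, 5, 15, 55):
    hits = [knn_predict(np.delete(X, i, 0), np.delete(y, i), X[i], k) == y[i]
            for i in range(len(X))]
    accs[k] = np.mean(hits)
```

On such easy data all k values do well; the saturation effect reported in Fig. 10 emerges on the noisier real channel features, where a moderate k averages out label noise without washing out class boundaries.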
5.1.2 AkNN AMC versus traditional SNR-based AMC
To better understand the nature of AkNN AMC performance, we learn the mapping from input channel conditions to output MCSs by training an AkNN classifier on training set 6. The learned results are evaluated on test data in terms of the optimality of the predicted MCS (Fig. 11) and the achieved average throughput (aTP) and BER (Fig. 12), in comparison with a traditional model-based method that adopts only the SNR as the MCS switching metric. Therein, aTP at the εth SNR level is calculated via

\( \mathrm {aTP}_{(\varepsilon)} = \frac {1}{N_{t(\varepsilon)}} \sum _{i \in \hat {\mathcal {I}}} \hat {n}_{i(\varepsilon)} \, R_{i} \left (1 - \mathrm {BER}_{i(\varepsilon)}\right), \)

where R_{i} is the data rate of MCS m_{i}, N_{t(ε)} denotes the total number of observations under such a condition, and \(\hat {\mathcal {I}}\), \(\hat {n}_{i(\varepsilon)}\), and BER_{i(ε)} represent the set of indexes of the optimal MCSs, the number of correct optimal solutions in m_{i}, and the corresponding BER, respectively. Noticeably, since channels are represented by multidimensional features rather than a single SNR, each SNR may correspond to multiple optimal MCS choices with different data rates, and hence aTP and BER do not vary monotonically with SNR. Instead, the BER curves stay rather flat around the required BER threshold, while the aTP improves as SNR increases. As confirmed by Figs. 11 and 12, the AkNN AMC obtains near-ideal solutions in tracking channel dynamics under different operation scenarios, thanks to its immunity to channel modeling uncertainty and its powerful multidimensional feature analysis capability. Therefore, our intelligent ML system is demonstrated to offer better AMC performance than its model-based counterpart, in terms of broad applicability to various operation scenarios.
5.1.3 The learning curve of AkNN AMC classifier
Equipped with the online learning mechanism, the proposed AMC design has the capability of adapting to various changing and unknown environments. To investigate whether it works in practice, we use the second category of data for further simulations. As illustrated in Fig. 13, an initial AMC classifier is built through offline training using training set 9, which achieves a prediction accuracy of 90.4% in the UWA environment of GH. Next, we deploy this classifier to DJKh12 (i.e., training set 7) and FXHh2 (i.e., training set 8). Thanks to the learning ability, our AMC system is found to achieve a steadily improving prediction accuracy, and finally reaches an acceptable AMC performance, i.e., η≥0.9. Such results suggest that the proposed online-learning AMC classifier can intelligently extend its range of applicable scenarios.
5.2 Analysis of the DRDC-AkNN AMC
5.2.1 Effectiveness of DRDC
First, the first category of training sets is adopted to evaluate the effectiveness of the DRDC processing. During the DR procedure, to determine the number of selected PCs, we adopt Eq. (20) and set ψ=90%, which requires the retained PCs to cumulatively explain more than 90% of the total amount of information contained in the raw data. Therefore, according to the explained variance (EV) and cumulative explained variance (CEV) of the PCs depicted in Fig. 14, the first three PCs are enough to satisfy Eq. (20). In addition, we present the PC loadings of training set 1 in Table 7, where each PC is a linear transformation of the original variables.
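The PC-count rule amounts to keeping the smallest number of components whose cumulative explained variance reaches ψ. A numpy sketch of that rule, using synthetic stand-in data since the experimental features are not reproduced here:

```python
import numpy as np

def num_pcs_for_cev(X, psi=0.90):
    """Return how many principal components are needed so the cumulative
    explained variance (CEV) first reaches psi (here psi = 90%)."""
    Xc = X - X.mean(axis=0)
    # eigenvalues of the covariance matrix = variance along each PC
    eigvals = np.linalg.eigvalsh(np.cov(Xc, rowvar=False))[::-1]  # descending
    cev = np.cumsum(eigvals) / eigvals.sum()
    return int(np.searchsorted(cev, psi) + 1)

# synthetic 6-feature data whose variance concentrates in 3 latent directions,
# mimicking the situation where 3 PCs satisfy the 90% CEV criterion
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3)) @ rng.normal(size=(3, 6)) \
    + 0.05 * rng.normal(size=(500, 6))
n_pcs = num_pcs_for_cev(X)
```

Because the latent signal here is rank three with only faint noise, at most three PCs are needed to pass the 90% threshold, mirroring the behavior reported for Fig. 14.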
Once the dataset processed by DR is made available, we adopt the elbow method [41] to explore the optimal value of V for the DC operation. Using training set 1 as an example, Fig. 15 shows J_{c} as a function of V, where J_{c} denotes the cost function

\( J_{c} = \sum _{v=1}^{V} \sum _{\bar {{\omega }} \in C_{v}} \left \| \bar {{\omega }} - c_{iv} \right \|^{2}, \)

i.e., the sum of squared errors (SSE) between the samples in each cluster C_{v} and the corresponding centroid c_{iv}. Remarkably, with the increase of V, the curves first drop sharply and then slowly approach zero. Aiming at a good tradeoff between J_{c} and V, we set the optimal V for each m_{i} to 2, which is the elbow of the curves and beyond which the returns diminish as V keeps increasing [41]. The same analysis yields the same optimal V for the other training sets.
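The elbow search itself is a small loop: run k-means for each candidate V and record J_c. A minimal Lloyd's-algorithm sketch on synthetic blobs (the real DR-processed features are not reproduced here):

```python
import numpy as np

def kmeans_sse(X, V, iters=50, seed=0):
    """Lloyd's k-means; returns J_c, the within-cluster sum of squared
    errors (SSE) that the elbow method plots against V."""
    rng = np.random.default_rng(seed)
    c = X[rng.choice(len(X), V, replace=False)]   # init centroids from data
    for _ in range(iters):
        # assign each sample to its nearest centroid
        labels = np.argmin(((X[:, None, :] - c[None]) ** 2).sum(-1), axis=1)
        # recompute centroids, keeping the old one if a cluster empties
        c = np.array([X[labels == v].mean(0) if np.any(labels == v) else c[v]
                      for v in range(V)])
    return float(((X - c[labels]) ** 2).sum())

# two well-separated blobs: J_c collapses from V=1 to V=2, then flattens,
# so V=2 is the elbow -- the same choice made for each m_i in the paper
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 0.3, (80, 3)), rng.normal(4, 0.3, (80, 3))])
sse = {V: kmeans_sse(X, V) for V in (1, 2, 3, 4)}
```

The sharp drop followed by a flat tail is exactly the shape of the curves in Fig. 15: past the elbow, extra clusters only shave marginal SSE while inflating the stored model.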
Table 8 compares the training performance with and without DRDC, in terms of prediction accuracy and system complexity. As expected, the DRDC processing yields a complexity reduction of nearly 190.8% (i.e., the unreduced classifier costs almost 2.9 times as much) at the price of an average accuracy loss of 2.5%. Moreover, given the online learning during actual deployment, DRDC will play an even more crucial role in system efficiency as the training set keeps expanding.
To make the DRDC process more intuitive, Fig. 16 shows the detailed variation of training set 6 along this processing chain, where the observations of different MCSs are represented in different colors. The DR operation first converts the sophisticated six-dimensional original samples into an easily visualized set of only three dimensions. Then, further processed by DC, the previous 1656 six-dimensional observations are successfully represented by only six three-dimensional data points, thus offering an excellent efficiency enhancement to our ML-aided AMC system.
5.2.2 Learning curve of the DRDC-AkNN AMC classifier
Following the same simulation procedure, Fig. 17 presents the learning curve of the DRDC-AkNN AMC classifier and compares its performance with that of its AkNN counterpart. Remarkably, despite the introduction of DRDC, the prediction accuracy of our ML-aided AMC classifier suffers only a slight loss of no more than 10%. Moreover, the DRDC-AkNN AMC classifier is much more efficient and still maintains excellent learning ability, thus enabling continuously increasing applicability in actual deployment.
6 Conclusion and future work
This article turns to an ML perspective to cope with the major challenges of AMC design in the harsh underwater environment. The proposed online-learning AkNN classifier based on SL enables a novel implementation of AMC with excellent immunity to channel modeling uncertainty. Moreover, to handle the inherent high-complexity issues, we further present the DRDC-AkNN classifier for feature dimensionality reduction and data condensation, which offers a great complexity reduction compared to the AkNN approach and facilitates an easier implementation of AMC systems.
While the two proposed ML methods expand the applicability of AMC systems in UACs compared with traditional model-based approaches, many issues remain for future work. Currently, to reduce the demand for computing resources and training time, the features used to train the AMC model are manually extracted from CSI based on experience. However, this reliance on experience may degrade system performance, since there are no best-practice rules on which features are crucial for MCS switching in underwater AMC. To alleviate this problem, it may be necessary to investigate a DL-based AMC framework, categorized as a fully data-driven solution, that enables sustainable model improvement by automatically detecting and generating more complex and higher-level features from raw data sources.
Availability of data and materials
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
Abbreviations
UAC: Underwater acoustic communication
UWA: Underwater acoustic
ML: Machine learning
AMC: Adaptive modulation and coding
AkNN: Attention-aided k-nearest neighbor
DRDC-AkNN: Dimensionality-reduced and data-clustered AkNN
PHY: Physical layer
MCS: Modulation and coding scheme
CSI: Channel state information
DL: Deep learning
DNN: Deep neural network
TX: Transmitter
RX: Receiver
SSP: Sound speed profile
SLI: Statistical link information
SL: Supervised learning
UL: Unsupervised learning
SSL: Semi-supervised learning
RL: Reinforcement learning
References
1. I. F. Akyildiz, D. Pompili, T. Melodia, Underwater acoustic sensor networks: research challenges. Ad Hoc Netw. 3(3), 257–279 (2005).
2. M. Badiey, Y. Mu, J. A. Simmen, S. E. Forsythe, Signal variability in shallow-water sound channels. IEEE J. Oceanic Eng. 25(4), 492–500 (2000).
3. M. Stojanovic, J. Preisig, Underwater acoustic communication channels: propagation models and statistical characterization. IEEE Commun. Mag. 47(1), 84–89 (2009).
4. A. Song, M. Badiey, H. Song, W. S. Hodgkiss, M. B. Porter, the KauaiEx Group, Impact of ocean variability on coherent underwater acoustic communications during the Kauai experiment (KauaiEx). J. Acoust. Soc. Am. 123(2), 856–865 (2008).
5. M. Chitre, S. Shahabudeen, L. Freitag, M. Stojanovic, in OCEANS 2008. Recent advances in underwater acoustic communications & networking (IEEE, Quebec City, 2008), pp. 1–10.
6. J. Hayes, Adaptive feedback communications. IEEE Trans. Commun. Technol. 16(1), 29–34 (1968).
7. W. T. Webb, QAM: the modulation scheme for future mobile radio communications? Electron. & Commun. Eng. J. 4(4), 167–176 (1992).
8. W. Webb, R. Steele, Variable rate QAM for mobile radio. IEEE Trans. Commun. 43(7), 2223–2230 (1995).
9. M. Rajesh, B. Shrisha, N. Rao, H. Kumaraswamy, in 2016 IEEE Int'l Conf. Recent Trends in Electron., Inform. & Commun. Technol. (RTEICT). An analysis of BER comparison of various digital modulation schemes used for adaptive modulation (IEEE, Bangalore, 2016), pp. 241–245.
10. A. Svensson, An introduction to adaptive QAM modulation schemes for known and predicted channels. Proc. IEEE 95(12), 2322–2336 (2007).
11. Y. Yang, H. Ma, S. Aissa, Cross-layer combining of adaptive modulation and truncated ARQ under cognitive radio resource requirements. IEEE Trans. Veh. Technol. 61(9), 4020–4030 (2012).
12. K. H. Park, M. A. Imran, P. Casari, H. Kulhandjian, H. Chen, A. Abdi, F. Dalgleish, IEEE Access special section editorial: underwater wireless communications and networking. IEEE Access 7, 52288–52294 (2019).
13. A. Benson, J. Proakis, M. Stojanovic, in OCEANS 2000 MTS/IEEE Conf. and Exhibition. Conf. Proc. (Cat. No. 00CH37158), 2. Towards robust adaptive acoustic communications (IEEE, Providence, 2000), pp. 1243–1249.
14. L. Wan, H. Zhou, X. Xu, Y. Huang, S. Zhou, Z. Shi, J. H. Cui, Adaptive modulation and coding for underwater acoustic OFDM. IEEE J. Oceanic Eng. 40(2), 327–336 (2015).
15. X. Shen, J. Huang, Q. Zhang, C. He, Achieving high speed UWA communication with adaptive MOMC technology. J. Northwestern Polytechnical University 25(1), 147 (2007).
16. M. J. Er, Y. Zhou, Theory and Novel Applications of Machine Learning (IntechOpen, 2009).
17. H. Ye, G. Y. Li, B. H. Juang, Power of deep learning for channel estimation and signal detection in OFDM systems. IEEE Wirel. Commun. Lett. 7(1), 114–117 (2018).
18. C. Luo, J. Ji, Q. Wang, X. Chen, P. Li, Channel state information prediction for 5G wireless communications: a deep learning approach. IEEE Trans. Netw. Sci. Eng. 7(1), 227–236 (2018).
19. Y. Wang, J. Yang, M. Liu, G. Gui, LightAMC: lightweight automatic modulation classification using deep learning and compressive sensing. IEEE Trans. Veh. Technol. 69(3), 3491–3495 (2020).
20. T. Hu, Y. Fei, in 2010 IEEE Int'l Symp. Model., Anal. and Simul. of Comput. and Telecommun. Syst. An adaptive and energy-efficient routing protocol based on machine learning for underwater delay tolerant networks (IEEE, Miami Beach, 2010), pp. 381–384.
21. K. Pelekanakis, L. Cazzanti, G. Zappa, J. Alves, in 2016 IEEE Third Underwater Commun. and Netw. Conf. (UComms). Decision tree-based adaptive modulation for underwater acoustic communications (IEEE, Lerici, 2016), pp. 1–5.
22. C. Wang, Z. Wang, W. Sun, D. R. Fuhrmann, Reinforcement learning-based adaptive transmission in time-varying underwater acoustic channels. IEEE Access 6, 2541–2558 (2018).
23. J. Heidemann, M. Stojanovic, M. Zorzi, Underwater sensor networks: applications, advances and challenges. Philos. T. R. Soc. A 370(1958), 158–175 (2012).
24. L. R. LeBlanc, F. H. Middleton, An underwater acoustic sound velocity data model. J. Acoust. Soc. Am. 67(6), 2055–2062 (1980).
25. A. Ahmed, M. Younis, in 2017 IEEE Int'l Conf. Commun. (ICC). Distributed real-time sound speed profiling in underwater environments (Paris, 2017), pp. 1–7.
26. A. J. Goldsmith, S. G. Chua, Adaptive coded modulation for fading channels. IEEE Trans. Commun. 46(5), 595–602 (1998).
27. L. Huang, L. Zhang, Y. Wang, Q. Zhang, in 2019 IEEE International Conference on Signal, Information and Data Processing (ICSIDP), Chongqing, China. A two-dimensional strategy of adaptive modulation and coding for underwater acoustic communication systems (2019), pp. 1–5.
28. A. Misra, V. Krishnamurthy, S. Schober, in IEEE 6th Workshop on Signal Process. Adv. in Wirel. Commun., 2005. Stochastic learning algorithms for adaptive modulation (IEEE, New York, 2005), pp. 756–760.
29. C. M. Bishop, Pattern Recognition and Machine Learning (Information Science and Statistics) (Springer-Verlag, New York, 2006).
30. G. W. Lindsay, Attention in psychology, neuroscience, and machine learning. Front. Comput. Neurosci. 14, 29 (2020).
31. M. Luong, H. Pham, C. D. Manning, in Proc. 2015 Conf. Empir. Methods in Nat. Lang. Process. (EMNLP). Effective approaches to attention-based neural machine translation (ACL, Lisbon, 2015), pp. 1412–1421.
32. J. Han, L. Zhang, Q. Zhang, G. Leus, Low-complexity equalization of orthogonal signal-division multiplexing in doubly-selective channels. IEEE Trans. Signal Process. 67(4), 915–929 (2018).
33. J. Han, Y. Wang, L. Zhang, G. Leus, Time-domain oversampled orthogonal signal-division multiplexing underwater acoustic communications. J. Acoust. Soc. Am. 145(1), 292–300 (2019).
34. J. Han, S. P. Chepuri, Q. Zhang, G. Leus, Iterative per-vector equalization for orthogonal signal-division multiplexing over time-varying underwater acoustic channels. IEEE J. Oceanic Eng. 44(1), 240–255 (2019).
35. R. Sinha, R. D. Yates, in Veh. Technol. Conf. Fall 2000. IEEE VTS Fall VTC2000. 52nd Veh. Technol. Conf. (Cat. No. 00CH37152), 1. An OFDM based multicarrier MFSK system (IEEE, Boston, 2000), pp. 257–264.
36. C. X. Gao, H. X. Yang, F. Yuan, E. Cheng, Underwater acoustic communication system based on MC-MFSK. Appl. Mech. Mater. 556, 4897–4900 (2014).
37. K. Beyer, J. Goldstein, R. Ramakrishnan, U. Shaft, in Int'l Conf. Database Theory. When is "nearest neighbor" meaningful? (Springer, Berlin, 1999), pp. 217–235.
38. H. Zou, T. Hastie, R. Tibshirani, Sparse principal component analysis. J. Comput. Graph. Stat. 15(2), 265–286 (2006).
39. T. Yang, S. Huang, in Proc. the 11th ACM Int'l Conf. Underwater Netw. & Syst. Building a database of ocean channel impulse responses for underwater acoustic communication performance evaluation: issues, requirements, methods and results (ACM, Shanghai, 2016), pp. 1–8.
40. T. T. Wong, P. Y. Yeh, Reliable accuracy estimates from k-fold cross validation. IEEE Trans. Knowl. Data Eng. 32(8), 1586–1594 (2020).
41. T. M. Kodinariya, P. R. Makwana, Review on determining number of cluster in k-means clustering. Int. J. Adv. Res. Comput. Sci. Manage. Stud. 1(6), 90–95 (2013).
Acknowledgements
The authors would like to thank all the referees for their constructive and insightful comments on this paper.
Funding
This work was supported in part by the Science, Technology and Innovation Commission of Shenzhen Municipality (Grant No. JCYJ20180306170932431), in part by the National Key R&D Program of China (Grant No. 2016YFC1400203), in part by the National Natural Science Foundation of China (Grant Nos. 61771394, 61531015, and 61801394), and in part by the Natural Science Basic Research Plan in Shaanxi Province of China (Grant No. 2018JM6042).
Author information
Contributions
Conceptualization, L.H. and Z.T.; methodology, L.H. and Z.T.; software, L.H.; validation, Q.Z.; formal analysis, L.H.; investigation, L.H. and L.Z.; data curation, L.H.; writing—original draft preparation, L.H.; writing—review and editing, W.T., Y.W., and C.H.; visualization, L.H. All authors read and approved the final manuscript.
Ethics declarations
Competing interests
The authors declare that they have no competing interests.
Additional information
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Huang, L., Zhang, Q., Tan, W. et al. Adaptive modulation and coding in underwater acoustic communications: a machine learning perspective. J Wireless Com Network 2020, 203 (2020). https://doi.org/10.1186/s13638-020-01818-x
DOI: https://doi.org/10.1186/s13638-020-01818-x