Robust beamforming design for UAV communications based on integrated sensing and communication
EURASIP Journal on Wireless Communications and Networking, volume 2023, Article number: 88 (2023)
Abstract
Integrated sensing and communication (ISAC) has emerged as a promising technique for various wireless communication applications. In this paper, we investigate a beamforming design method based on ISAC waveforms in unmanned aerial vehicle (UAV) communications. An integrated state prediction and beamforming design framework is presented. We utilize the target states from sensing algorithms to improve the prediction performance. Based on the predicted states, we formulate the mathematical form of the communication interruption probability. To enhance the beamforming performance, we propose a design approach that satisfies both sensing and communication metrics and the communication interruption constraints. We show that the proposed method achieves robust communication under the integrated state prediction and beamforming design framework. Simulation results show that by using the ISAC signal, our method significantly lowers the communication interruption probability in the beamforming process and achieves better communication performance.
1 Introduction
As a promising technique, integrated sensing and communication (ISAC) has attracted much interest in the next generation of wireless communications. With the growing intelligence of people's daily life and industrial production, scenarios that require both high-rate communication and high-accuracy sensing have become common [1]. ISAC can realize both functions with one waveform and one set of hardware equipment. Conventionally, research in the two fields rarely intersected: the spectrum resources dedicated to sensing and to communications were usually separated to avoid mutual interference. In recent years, however, more and more scenarios with both needs have emerged. For example, when multiple unmanned aerial vehicles (UAVs) team up and perform tasks in unknown environments, the UAVs need to sense the environment in real time while maintaining seamless communications with the controller or each other to guarantee timely offloading and scheduling. Other scenarios, such as smart factories and autonomous driving, also call for joint use of the two modules. On the other hand, the ever-increasing number of wireless communication applications has crowded the existing allocated frequency bands, pushing the commonly used bands toward higher frequencies, for example, millimeter-wave bands [2]. Notably, these bands are often used for high-resolution short-range sensing, for example, by millimeter-wave radars [3]. Consequently, researchers have begun to investigate the integration of sensing and communication in one system, i.e., the ISAC system. It is an effective solution to alleviate the potential spectrum tension and reduce the hardware overhead in these cases. It is envisioned that ISAC will support various applications ranging from sensing-assisted beamforming and millimeter-wave channel estimation to cooperative sensing.
Some early work was done in radar to realize sensing and communication with one waveform. The typical method is to embed the communication information into radar waveforms and transmit messages during detection. For example, the authors in [4] used differential quadrature reference phase shift keying (DQPSK) to encode data streams and modulate chirp signals to support the multi-functionality of military radio frequency (RF) subsystems. To avoid mutual jamming, the authors in [5] utilized different pseudo-noise (PN) codes to spread digital streams and used the obtained waveforms for sensing. However, the radar-signal-based methods fall far short of practical communication needs, since modulations based on traditional radar waveforms are quite inefficient. Therefore, ISAC attracted little interest in academic research or industrial production in the early years. Later, in 2011, the authors in [6] proposed an insightful method that performs a sensing algorithm on orthogonal frequency division multiplexing (OFDM)-modulated communication signals. It proved that OFDM signals, while satisfying existing protocols, offer quite acceptable sensing performance. Henceforth, methods that utilize or adjust communication signals for sensing to realize ISAC began to attract attention. Given that the preambles of communication frames often have good correlation properties, the authors in [7] proposed to utilize the preamble of the IEEE 802.11ad WLAN standard for range and velocity estimation. The method is shown to support Gbps data rates and cm-level sensing resolution at the same time. Considering the similarity in the use of antenna arrays, the authors in [8] enabled both multiple-input multiple-output (MIMO) radar detection and multiuser MIMO (MU-MIMO) communication in one system by designing the conventional communication beamforming matrix to match the radar beam pattern.
In summary, plenty of simulated and practical results have confirmed ISAC's practicality in next-generation wireless communications.
When it comes to high-frequency communications, a massive multiple-input multiple-output (mMIMO) system is often adopted to combat severe channel loss. Meanwhile, the mMIMO technique can also help to measure the target angle during sensing and realize the MIMO radar function when needed [9]. Hence, a large portion of research on ISAC is based on mMIMO assumptions. Although mMIMO provides high-rate communication and high-resolution sensing, the pencil-like beam link is easily interrupted by misalignment, especially when the target is moving. A robust beamforming design is therefore needed to steer the transmission beam in the right direction. Traditional beam alignment methods are typically based on location information provided by the target's global positioning system (GPS) [10, 11]. The base node then predicts the target's location and forms a directional beam. But the GPS-based method has some inherent drawbacks: it has relatively low precision and is vulnerable to blockage. Besides, it needs a maintained uplink channel to receive the target's GPS messages, which incurs additional overhead and may cause the location information to expire. In this respect, ISAC can be a competitive technique for beam alignment. On the one hand, it does not need satellite assistance, so it performs well in blockage scenarios, such as indoor navigation or relaying. On the other hand, ISAC can acquire fresher and higher-precision information than GPS. Most importantly, it is a proactive method that can significantly reduce the uplink overhead.
As mentioned above, millimeter-wave communication, with its abundant spectrum resources, is increasingly adopted in UAV communications [12,13,14], where antenna arrays and beamforming techniques are often introduced to provide beamforming gain. While millimeter-wave links experience severe fading and high penetration loss, UAVs serving as aerial access points can significantly enhance the coverage and quality of communications [15]. In this paper, we focus on the scenario where the UAV serves as an aerial access point providing highly reliable, low-latency communications while maintaining a robust beamforming link to the base station (BS). However, the mobility of UAVs can still cause frequent channel variations. Therefore, a robust beamforming algorithm is required to counteract the frequent channel variations caused by 3-dimensional (3D) narrow beams and the rapid movements of UAVs [16,17,18]. ISAC-based beamforming, with its fast updates and low uplink overhead, is a competitive candidate. Some research has been conducted on ISAC-based beamforming [19, 20], but mainly in street scenarios, where an ISAC module mounted on the roadside unit (RSU) performs sensing-assisted beam alignment toward automobiles on the road. In this paper, we apply the ISAC technique to UAV communications. Compared to the street scenario, the mobility of UAVs is much higher: while the motion of an automobile is approximately one-dimensional along the road, a UAV can move in any three-dimensional direction in the air, which makes state prediction more complicated. Meanwhile, communication with UAVs is more likely to be interrupted due to the larger beam space and higher mobility, which is also the focus of our work. We aim to mitigate these frequent interruptions with the proposed ISAC method. The main contributions of this work are summarized as follows:

An innovative integrated state prediction and beamforming design framework based on ISAC is proposed to solve the interruption problem in UAV communications. It combines the sensing algorithm with the communication interruption analysis and realizes improved link robustness.

A new state prediction scheme is presented, where additional information from the radial velocity is introduced. We utilize the coupling of the three-dimensional velocities to achieve better estimates of the target's states.

The interruption probability is accordingly formulated to measure the beamforming performance, considering both beam misalignment and communication outage. An analytically solvable form of the interruption probability is derived based on the predicted states.

A robust beamforming scheme is designed that satisfies the given interruption constraints along with both sensing and communication metrics. The beam pointing direction, beamwidth, and time fraction of the preamble are optimized simultaneously.
The remainder of this paper is organized as follows. An ISAC-based beamforming model for UAV communications is presented in Sect. 2. State prediction and interruption analysis are discussed in Sect. 3. In Sect. 4, a robust beamforming design is introduced. Numerical results are presented in Sect. 5. Conclusions are drawn in Sect. 6.
2 Methods and system model
In this section, we present a model in which the BS communicates with the target UAV in the mmWave band using a massive \(N_x \times N_z\) uniform planar array (UPA). The UAV serves as an aerial access point providing highly reliable, low-latency communications. For brevity, we refer to them as the base node and the UAV node in this paper. To maintain a high-throughput communication link, the base node forms a 3D analog beam toward the UAV node. Since the high mobility of the UAV combined with the narrow beam may render the link unstable, we introduce an ISAC-based beam design algorithm to improve the beamforming performance.
As shown in Fig. 1, we place the base node at the Cartesian coordinate origin (0, 0, 0), which forms a beam pointing in direction \((\theta _B^v,\theta _B^h)\). We further denote the position of the UAV node as (x, y, z). Our aim is to efficiently predict the state of the UAV, analyze the interruption probability of the communication link, and adjust the beam to realize robust beamforming. We first provide the general framework of our ISAC-based beamforming system.
2.1 General framework
This framework is primarily structured on the standard beamforming procedure [21], with the proposed ISAC method embedded. It can be divided into the following five processes:
(a) Initial access
Initially, we need to establish a directional communication link between the base node and the UAV node. At this point, we have no prior knowledge of the UAV's position or the channel state. State-of-the-art beam training algorithms, for example, the one in [22], can be used to accomplish the initial access process.
(b) Transmission/receiving echoes
Once the transmission link is established, the base node starts to send ISAC signals to the UAV node. While part of the signal is received by the UAV node, some of it is reflected back to the base node. These echoes contain information about the UAV's motion state, which can be utilized for state prediction.
(c) State prediction
Through radar algorithms, the UAV's motion state, i.e., angle, distance, and radial velocity, is obtained from the ISAC echoes. Based on the specific motion model, the base node can predict the UAV's state in the next frame.
(d) Interruption analysis
Since the prediction is not perfectly accurate and the transmission beam can only cover a limited area, the communication link is easily interrupted. To measure the beam alignment performance, we further formulate the communication interruption probability based on the prediction results.
(e) Beamforming design
With the aforementioned interruption analysis, we introduce a set of beamforming design criteria in our model. It not only considers the interruption probability but also satisfies both sensing and communication metrics.
This process is subsequently repeated by looping through (b)-(e). If an interruption occurs, the process restarts from (a).
2.2 Signal model
As shown in Fig. 2, we choose the preamble-based form of the ISAC technique since it is easy to implement in practice. We denote i as the frame index and T as the frame length, which is preset as a constant. Each frame consists of two blocks: a preamble block and a transmission block. The preamble block is used to allocate resources, estimate the channel state for communication purposes, and obtain motion states for sensing purposes. The transmission block is used to send messages as usual. We denote the length of the preamble block as \(T^p\) and that of the transmission block as \(T^d\), where \(T=T^p+T^d\). With T fixed, \(T^p\) and \(T^d\) can be adjusted for a better sensing-communication trade-off.
In this paper, we mainly focus on the one-to-one scenario, which can easily be extended to multi-target ones. We denote the data stream generated at the ith frame as \(s_i(t)\) and the transmitted waveform at the base node as follows:
where \(\textbf{f}_i(t)\in \mathbb {C}^{N_xN_z\times 1}\) is the analog transmit beamforming vector, which is used to adjust the pointing direction of the analog beam and \(t\in [0, T]\) denotes the time in one frame.
(a) Communication signal model:
Since the communication target is a UAV, the channel is dominated by the LoS channel. Therefore, the channel matrix can be expressed as follows:
where \(h_i\) is the flat fading channel coefficient. \(\nu _{c,i}\) is the phase offset due to the Doppler shift and \(\textbf{a}_t^h(\theta _v,\theta _h) \in \mathbb {C}^{1 \times N_x N_z}\) is the array response vector. \(\theta _v,\theta _h\) denote the polar and azimuthal angles, respectively.
The channel coefficient \(h_i\) in (2) is formulated as follows:
where \(\alpha _c = \lambda e^{j2\pi d/\lambda }/{4\pi d}\) [23] is the path loss, \(\lambda\) is the wavelength, and d is the target distance. \(h_r,h_p\) represent the normalized multipath fading and the pointing errors [24], respectively. The subscript i on the right side of (3) has been omitted for brevity.
Since the communication link is dominated by the LoS channel, \(h_r\) in (3) is considered as Rician fading model and given as follows:
in which K is the ratio between the power in the LoS path and the power in the non-LoS (NLoS) paths. \(h^{\textrm{nLoS}}\) denotes the Rayleigh fading channel accounting for NLoS components, which is assumed to be a circularly symmetric complex Gaussian (CSCG) random variable [11].
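The Rician model above can be sampled directly: a deterministic LoS component weighted by \(\sqrt{K/(K+1)}\) plus a CSCG NLoS component weighted by \(\sqrt{1/(K+1)}\). The following sketch assumes the common unit-power normalization \(E[|h_r|^2]=1\); the paper's exact normalization may differ.

```python
import numpy as np

def sample_rician(K, size=1, rng=None):
    """Draw samples of the normalized small-scale fading h_r under the Rician
    model: a deterministic LoS term plus a CSCG NLoS term h^nLoS, with K the
    LoS-to-NLoS power ratio, normalized so that E[|h_r|^2] = 1."""
    rng = np.random.default_rng(rng)
    # CSCG variable with unit variance (real and imaginary parts each 1/2)
    nlos = (rng.standard_normal(size) + 1j * rng.standard_normal(size)) / np.sqrt(2)
    return np.sqrt(K / (K + 1)) + np.sqrt(1 / (K + 1)) * nlos
```

As K grows, the channel approaches the deterministic LoS-only case, consistent with the sparse ground-to-air channel assumption.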
The pointing error \(h_p\) in (3) is caused by the UAV deviating from the beam center of the BS node. According to [25], \(h_p\) is formulated as follows:
where \(A_0\) is the fraction of the collected power with no deviation at distance d, r is the distance of deviation from the center, and \(w_{\textrm{eq}}\) is the equivalent beamwidth. Let
with a being the aperture of the receiving antenna and \(w_d\) being the radius of the transmitting beam footprint at distance d, \(A_0\) and \(w_{\textrm{eq}}^2\) are then calculated as follows:
where \(\text {erf}(\cdot )\) is the Gauss error function.
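A minimal numeric sketch of this pointing-error model, following the Gaussian-beam closed forms of the paper's reference [25] (Farid and Hranilovic); the expressions for \(v\), \(A_0\), and \(w_{\textrm{eq}}^2\) below are the standard ones and should match (5)-(7) up to notation:

```python
from math import erf, exp, pi, sqrt

def pointing_error_gain(r, a, w_d):
    """Pointing-error gain h_p = A0 * exp(-2 r^2 / w_eq^2) under the Gaussian
    beam model: a is the receive aperture radius, w_d the beam footprint
    radius at distance d, and r the radial deviation from the beam center."""
    v = sqrt(pi) * a / (sqrt(2) * w_d)
    A0 = erf(v) ** 2  # fraction of power collected with no deviation (r = 0)
    w_eq2 = w_d ** 2 * sqrt(pi) * erf(v) / (2 * v * exp(-v ** 2))
    return A0 * exp(-2 * r ** 2 / w_eq2)
```

The gain peaks at \(A_0\) for perfect alignment and decays as a Gaussian in the deviation r, which is what drives the interruption analysis later.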
Then, the received signal at the UAV node can be expressed as follows:
where \(n_{c,i}\) is additive complex Gaussian noise that follows \(\mathcal{C}\mathcal{N}\left( 0, \sigma ^{2}_{c,i}\right)\). \(p_i\) is the transmit power. \(M_t = N_x \times N_z\) is the number of transmit antennas.
(b) Sensing signal model:
When signals are transmitted to the UAV node, some of them are reflected back and received by the receive array at the base node. These signals contain information about the target's motion states and can be used to improve the performance of state prediction and beamforming design. Similarly, the sensing channel matrix is constructed as follows:
in which \(\beta _s\) is the sensing channel coefficient, expressed as \(\beta _s = \lambda \sqrt{\sigma _{\textrm{RCS}}}/{8\pi ^{\frac{3}{2}} d^2}\) [26]. \(\sigma _{\textrm{RCS}}\) is the target's radar cross-section, \(\nu _{s,i}\) is the Doppler shift, and \(a_r^h(\theta _v,\theta _h)\) is the receive array response vector.
Since the ground-to-air channel is sparse, we assume that the received signal consists mainly of the echo from the target UAV plus unrelated noise. Therefore, the received echo at the base node is expressed as follows:
where \(n_{s,i}\) is additive complex Gaussian noise that follows \(\mathcal{C}\mathcal{N}\left( 0, \sigma ^{2}_{s,i}\right)\). \(M_r\) is the number of receive antennas, and \(\tau _i\) is the time delay.
2.3 Beam model
As shown in Fig. 3, we model the beam as a uniform cone beam to simplify the beamforming design. That is, within the cone-shaped beam coverage, the gain is a fixed value; otherwise, the gain is approximately zero. Therefore, besides the beam pointing direction \((\theta _B^v,\theta _B^h)\), we further define the 3D beamwidth as \(\psi _i\) and the beam coverage area as \(BC_i\). The detailed mathematical form of \(BC_i\) is analyzed in the following section.
Based on the coneshaped beam pattern assumption, we further define the beamforming gain as follows:
in which \(G_{Bm, i}\) is inversely proportional to the square of the beamwidth and is much larger than \(G_{Bs, i}\). The received communication signal at the UAV node and the reflected sensing signal at the base node can be reformulated, respectively, as follows:
where we replace the array directional gain part with the assumed beam gain.
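The uniform cone-beam assumption above can be sketched as follows. The mainlobe gain scaling \(G_{B0}/\psi^2\) reflects that the gain is inversely proportional to the squared beamwidth; the constants `G_B0` and `G_side` are illustrative placeholders, not values from the paper:

```python
import numpy as np

def beam_gain(theta_v, theta_h, theta_B_v, theta_B_h, psi, G_B0=1.0, G_side=1e-3):
    """Idealized cone-beam gain: a constant mainlobe gain G_B0/psi^2 inside
    the cone of width psi around the pointing direction (theta_B_v, theta_B_h),
    and a small constant sidelobe gain elsewhere (approximately zero)."""
    def unit(tv, th):  # unit vector for polar angle tv, azimuth th
        return np.array([np.sin(tv) * np.cos(th),
                         np.sin(tv) * np.sin(th),
                         np.cos(tv)])
    # angle between the target direction and the beam axis
    cos_angle = np.clip(unit(theta_v, theta_h) @ unit(theta_B_v, theta_B_h), -1.0, 1.0)
    return G_B0 / psi ** 2 if np.arccos(cos_angle) <= psi / 2 else G_side
```

This captures the trade-off exploited later: a narrower beam gives higher gain but a smaller coverage cone, hence a higher misalignment risk.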
2.4 Measurement model
The preamble part is used for parameter estimation due to its good correlation property. Therefore, we mainly focus on echoes of the preamble part in (13).
As for the interference caused by echoes of data blocks, we assume it is perfectly canceled by the state-of-the-art successive interference cancelation (SIC) technique [27]. Meanwhile, we assume the channel coefficients and target states to be invariant during a frame. Thus, we can use the parameters in the current frame for the prediction and design in the next frame.
After match-filtering (14) over the delay and Doppler grid, we get that
where \(G_{mf}\) is the total gain after matched filtering, [l, m] is the range-Doppler bin, and \(n_{s,i}[l,m]\) is the noise on the \([l,m]^{\textrm{th}}\) bin. Based on (15), we can obtain the time delay \(\tau _i\) and Doppler shift \(\nu _{s, i}\) by searching for the peak of the grid after eliminating the clutter's interference. The distance \(d_i\) and radial velocity \(v_i^r\) can then be calculated by
where c is the speed of light, and \(f_c\) is the frequency of the carrier.
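The conversion from the matched-filter peak to motion states follows the standard monostatic radar relations, which should match the expressions above: the echo travels the round trip, so \(d = c\tau /2\), and the Doppler shift maps to radial velocity as \(v^r = c\nu /(2f_c) = \lambda \nu /2\).

```python
C = 299_792_458.0  # speed of light (m/s)

def delay_doppler_to_state(tau, nu, f_c):
    """Map a round-trip delay tau (s) and Doppler shift nu (Hz) from the
    matched-filter peak to distance (m) and radial velocity (m/s):
    d = c*tau/2 and v_r = c*nu/(2*f_c)."""
    return C * tau / 2, C * nu / (2 * f_c)
```

For example, with the paper's carrier \(f_c = 30\) GHz, a 1 kHz Doppler shift corresponds to a radial velocity of roughly 5 m/s.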
In addition, the orientation angle of the target UAV can be obtained by the multiple signal classification (MUSIC) algorithm, which is known to have superior performance for angle estimation.
In practice, the tracking of the UAV and the beamforming algorithms are both performed at the base node, which requires only one additional antenna array to receive the sensing echoes compared to conventional UAV communications. The additional array can also be used to receive regular communication signals in the uplink. Since the base station provides high-precision positioning, the frequency and accuracy requirements for UAV self-positioning are relaxed, which reduces the payload and overhead of the UAV to some extent.
3 State prediction and interruption analysis
In this section, we utilize the states obtained in Sect. 2 to predict the UAV's motion state at the \((i+1)\)th frame and derive the mathematical form of the interruption probability. We denote the UAV's motion state at the current frame as \(\varvec{x}_{i}=\left[ \theta _{i}^h, \theta _{i}^v, d_{i}, v_{i}^{h}, v_{i}^{v}, v_{i}^{r}\right]\), whose entries stand for the azimuthal angle, polar angle, distance, azimuthal velocity, polar velocity, and radial velocity, respectively.
3.1 State prediction
To achieve predictive beamforming, we mainly focus on \(\theta _{i+1}^h, \theta _{i+1}^v\) and \(d_{i+1}\) among them. Based on the obtained motion states at the current frame, we have
Through the sensing algorithm, we can only obtain \(\theta _{i}^h, \theta _{i}^v\), \(d_{i}, v_{i}^{r}\) and other states at past epochs, but not the specific values of \(v_{i}^{h}, v_{i}^{v}\). That means we can derive a straightforward value of \(d_{i+1}\), but not of \(\theta _{i+1}^h, \theta _{i+1}^v\). So, our first goal here is to provide additional information for \(\theta _{i+1}^h, \theta _{i+1}^v\) based on the measured values at hand. That is, we need to build a relationship between \(v_{i}^h, v_{i}^v\) and the other states in \(\varvec{x}_i\).
Ideally, the object's radial and tangential velocities are orthogonal and therefore cannot be connected mathematically. But in reality, the target's 3D velocity changes are usually coupled. So, to fit the actual movement model of the target UAV, we make the following assumptions about its three-dimensional motion [28]:
(a) Horizontal model: In the 2D XY plane, the target UAV keeps moving at a constant speed. That is
where \(v^p\) denotes the horizontal-plane component of the UAV's speed. \(\mathcal {N}(\cdot )\) denotes the normal distribution. The subscript \(i-1\) refers to the \((i-1)\)th frame. For brevity, the subscript i of parameters in the current frame is omitted here and below.
(b) Vertical model: In the Z direction, the target UAV maintains a slowly changing velocity motion model. That is
where \(v^z\) denotes the vertical component of the UAV's speed.
Meanwhile, we assume that the change of \(v^p\) is orthogonal to that of \(v^z\). In such scenarios, the UAV moves mainly in a horizontal plane and has little or limited vertical maneuvering. Our work is easily extended to scenarios with other motion models, since all we need is a coupled motion model that brings more information to the tangential velocities \(v_{i}^{h}, v_{i}^{v}\).
With the above assumptions on the motion model, we can construct the relationship between the two systems using the velocity synthesis formula as follows:
where v represents the overall scalar velocity value.
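The assumed motion model can be sketched as a simple trajectory generator: the horizontal speed \(v^p\), the heading, and the vertical speed \(v^z\) each evolve as Gaussian random walks (near-constant horizontal speed, slowly varying climb rate). The noise scales `sig_*` and the explicit heading state are illustrative modeling choices, not parameters from the paper:

```python
import numpy as np

def simulate_track(n_frames, T, v_p0, heading0, v_z0,
                   sig_p=0.1, sig_head=0.05, sig_z=0.05, rng=None):
    """Generate a 3D UAV track under the assumed motion model: horizontal
    speed v_p and vertical speed v_z perturbed by independent Gaussian noise
    each frame of length T, starting from the given initial state."""
    rng = np.random.default_rng(rng)
    pos = np.zeros((n_frames + 1, 3))
    v_p, head, v_z = v_p0, heading0, v_z0
    for i in range(n_frames):
        v_p += rng.normal(0, sig_p)      # horizontal speed perturbation
        head += rng.normal(0, sig_head)  # heading perturbation
        v_z += rng.normal(0, sig_z)      # vertical speed perturbation
        vel = np.array([v_p * np.cos(head), v_p * np.sin(head), v_z])
        pos[i + 1] = pos[i] + vel * T
    return pos
```

A generator of this kind is what produces the "real" track against which the prediction algorithm is evaluated in Sect. 5.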
Based on the transformation relationship between the Cartesian coordinate system and the spherical coordinate system, we can also construct projection transformations of the UAVâ€™s 3D velocities as follows:
where \(v^r,v^v,v^h\) represent the UAV's radial, polar tangential, and azimuthal tangential velocities in the spherical coordinate system, and \((\theta _0^v,\theta _0^h)\) is the orientation angle of the UAV node at the current frame. \(v^x,v^y,v^z\) represent the UAV's velocity components in the three dimensions of the Cartesian coordinate system. From the first row in (23), we can derive that
in which \(D=\frac{\cos \theta _{0}^{h}}{\sin \theta _{0}^{h}}, E=\frac{\cos \theta _{0}^{v}}{\sin \theta _{0}^{v} \sin \theta _{0}^{h}}, F=\frac{v^{r}}{\sin \theta _{0}^{v} \sin \theta _{0}^{h}}\). Substituting (24) into the last two rows in (23), we can obtain the following distributions
The value of \(v^r\) can be obtained with sensing algorithms, and the distribution of \(v^z\) is assumed as in (21). Combining (19), (20), (21), and (24), we can also obtain the distribution of \(v^x\). Therefore, the distributions of \(v^h\) and \(v^v\) can be determined as \(f_1\) and \(f_2\).
Substituting (25) into (18), we can further derive the distributions of \(\theta _{i+1}^h\) and \(\theta _{i+1}^v\). Besides the value of \(d_{i+1}\), we can now bring more information into the state prediction procedure with the deduced distributions of \(\theta _{i+1}^h\) and \(\theta _{i+1}^v\).
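As a sanity check of (24), assume the standard spherical convention consistent with the definitions of D, E, and F (polar angle \(\theta^v\) measured from the z-axis, azimuth \(\theta^h\) in the xy-plane): the radial velocity is then the projection \(v^r = v^x\sin \theta^v\cos \theta^h + v^y\sin \theta^v\sin \theta^h + v^z\cos \theta^v\), and solving for \(v^y\) yields exactly \(v^y = F - Dv^x - Ev^z\). The numbers below are arbitrary example values:

```python
import numpy as np

# example angles (rad) and velocity components (m/s) - assumed values
tv, th = 0.8, 0.6
vx, vy, vz = 3.0, -2.0, 0.5

# measured radial velocity: projection of the Cartesian velocity on the
# radial unit vector (sin tv cos th, sin tv sin th, cos tv)
v_r = vx * np.sin(tv) * np.cos(th) + vy * np.sin(tv) * np.sin(th) + vz * np.cos(tv)

# coefficients D, E, F as defined after (24)
D = np.cos(th) / np.sin(th)
E = np.cos(tv) / (np.sin(tv) * np.sin(th))
F = v_r / (np.sin(tv) * np.sin(th))

vy_rec = F - D * vx - E * vz  # v^y recovered from the measured v^r
assert abs(vy_rec - vy) < 1e-12
```

This is the step that injects the sensed radial velocity into the otherwise unobservable tangential components.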
3.2 Interruption analysis
In this subsection, we investigate the occurrence of communication interruption when communicating with the target UAV via beamforming. Typically, an interruption occurs when the base node cannot form an acceptable communication link toward the UAV node; that is, the communication SNR is lower than the required threshold \(\gamma _{\textrm{th}}\). We define the communication interruption probability as \(P_{\textrm{int}}\) and the successful communication probability as \(P_{\textrm{suc}}\). Taking the beam alignment gain into account, we can get that
in which \(\gamma _c\) denotes the communication SNR, \(\mathcal {H}_0\) denotes successful beam alignment, and \(\mathcal {H}_1\) denotes beam misalignment.
In our scenario, successful beam alignment means that the target UAV is within the beam coverage, that is
Correspondingly, \(\mathcal {H}_1\) is represented as follows:
According to (12), we can obtain that
Based on the aforementioned cone beam assumption, \(G_{B,i+1}\) is assigned as follows:
where \(G_{B0}\) is the normalized beamforming gain.
Then, we can derive that [29]
where \(Q_1\) is the first-order Marcum Q-function, and \(\sigma _q\) is the Rician channel coefficient. Combining (29), (30), and (31), we can obtain the two corresponding conditional probabilities in (26).
In the previous subsection, we derived the distributions of the target's angles \((\theta _{i+1}^v,\theta _{i+1}^h)\), which can be used to enhance the prediction performance. According to (27), the relationship between the alignment probability and the derived distributions is formulated as follows:
Meanwhile, there is \(P(\mathcal {H}_1)=1-P(\mathcal {H}_0)\).
Equation (32) states that the alignment probability \(P(\mathcal {H}_0)\) is the total probability mass of the target-angle distributions that falls within the beam coverage. According to (32), \(P(\mathcal {H}_0)\) is affected by the beam direction, beamwidth, and other sensing and communication parameters.
To guide the beamforming design, we first need to obtain a specific form of (32) that directly relates \(P(\mathcal {H}_0)\) to the beamforming design parameters. Since the integration region \(\hbox{BC}_{i+1}\) is conical, the integral is difficult to solve directly. So, we perform a coordinate system rotation, after which the beam direction \((\theta _B^h,\theta _B^v)\) is aligned with the z-axis. That is, all coordinates in the original coordinate system are rotated around the z-axis by \(\theta _B^h\), then around the y-axis by \(\theta _B^v\), both clockwise. To be concise, we omit the frame index \(i+1\). After the coordinate transformation, the beam coverage region in the new coordinate system is denoted in the spherical coordinate system as follows:
or in the Cartesian coordinate system as follows:
in which \(P^{\prime }=\left[ x^{\prime }, y^{\prime }, z^{\prime }\right] ^{T}\) denotes the transformed coordinate of the target. We further denote the original coordinate as \(P=\left[ x, y, z\right] ^{T}\). And the coordinate rotation matrix R is formulated as follows:
where \(R\left( Z, \theta ^{h}\right)\) represents a clockwise rotation around the z-axis by \(\theta ^{h}\), followed by \(R\left( Y, \theta ^{v}\right)\), a clockwise rotation around the y-axis by \(\theta ^{v}\). Then, we left-multiply the original coordinates by the rotation matrices in the rotation order.
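A numeric sketch of this rotation, assuming "clockwise" corresponds to negative angles in the right-hand convention: composing the two rotations maps the beam axis \((\theta^v_B, \theta^h_B)\) onto the +z axis, which is what makes the conical region's limits explicit.

```python
import numpy as np

def rotation_to_beam_axis(theta_v, theta_h):
    """R = R(Y, theta_v) R(Z, theta_h): rotate about the z-axis by theta_h,
    then about the y-axis by theta_v (both clockwise), so the beam direction
    (theta_v, theta_h) lands on the +z axis."""
    ch, sh = np.cos(theta_h), np.sin(theta_h)
    cv, sv = np.cos(theta_v), np.sin(theta_v)
    Rz = np.array([[ ch,  sh, 0.0],
                   [-sh,  ch, 0.0],
                   [0.0, 0.0, 1.0]])
    Ry = np.array([[ cv, 0.0, -sv],
                   [0.0, 1.0, 0.0],
                   [ sv, 0.0,  cv]])
    return Ry @ Rz

def unit_dir(tv, th):
    """Unit vector for polar angle tv and azimuth th."""
    return np.array([np.sin(tv) * np.cos(th), np.sin(tv) * np.sin(th), np.cos(tv)])
```

After the rotation, the beam coverage is simply the set of directions within \(\psi /2\) of the z-axis, so the integral bounds \(g_0, g_1\) follow directly.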
Combining (34), (35) and (36), we can derive a solvable integral region form
where \(g_0\left( \theta ^h\right)\) and \(g_1\left( \theta ^h\right)\) are the corresponding lower and upper bounds of \(\theta ^v\).
The detailed derivation is shown in Appendix A.
After the coordinate system transformation, the conical integration region is converted into explicit limits on \(\theta _{i+1}^h, \theta _{i+1}^v\). Therefore, the integration limits in Eq. (26) are clarified. But we still do not know the specific forms of the distributions of \(\theta _{i+1}^h\) and \(\theta _{i+1}^v\). In the previous subsection, we showed that these distributions can be determined from the parameters at hand but are difficult to write explicitly. So, we derive the result backward. Substituting (18) into (26), we obtain
After layers of transformation from \(\theta _{i+1}^h,\theta _{i+1}^v\) to \(v^p,v^z\), a solvable form of \(P(\mathcal {H}_0)\) is eventually derived:
The detailed derivation is shown in Appendix B.
Substituting (39) into (26), we can now formulate the relationship between the interruption probability and the beam direction, beamwidth, and other parameters.
4 Beamforming design
In this section, we introduce a robust beamforming design satisfying both sensing and communication metrics, along with the proposed communication interruption constraints. Our beamforming design problem is expressed as
where \(T^p\) is the length of the preamble, \(\theta _B\) is the beam direction, and \(\psi\) is the beamwidth.
R is the equivalent transmission rate which is formulated as follows:
where \(\alpha = \frac{T^p}{T}\) denotes time fraction of the preamble in a frame.
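A small sketch of the equivalent rate, assuming the common form \(R = (1-\alpha )W\log _2(1+\gamma _c)\) in which only the data block of length \(T - T^p\) carries payload; the exact prefactors in (41) may differ:

```python
import numpy as np

def effective_rate(W, gamma_c, T_p, T):
    """Equivalent transmission rate with a preamble of length T_p in a frame
    of length T: the preamble fraction alpha = T_p / T carries no payload."""
    alpha = T_p / T
    return (1 - alpha) * W * np.log2(1 + gamma_c)
```

This makes the sensing-communication trade-off explicit: a longer preamble improves the sensing CRLBs (constraints (40c)-(40d)) but linearly shrinks the throughput.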
\(P_{\textrm{int}\_\textrm{th}}\) is the interruption probability threshold. Constraint (40b) reflects our robustness preference in the design problem: we first need to keep the interruption probability at a low level, and only then consider maximizing throughput. Since the state prediction is based on the sensed parameters, constraints (40c) and (40d) ensure that the estimation is reliable. \(\sigma _{d}\) and \(\sigma _{v^r}\) are the Cramér-Rao lower bounds (CRLBs) of the distance estimation and the radial velocity estimation, and \(\sigma _{d_{\textrm{th}}}\) and \(\sigma _{v^r_{\textrm{th}}}\) are the corresponding thresholds. \(\sigma _{d}^2\) can be expressed as [3]
where c is the speed of light, W is the bandwidth, and \(\gamma _s\) is the sensing SNR, which can be approximated as \(\gamma _s = \frac{\gamma _c}{4\pi d^2}\times \sigma _{\textrm{RCS}}\). \(\sigma _{v^r}^2\) can be expressed as follows:
where \(\lambda\) is the wavelength.
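The CRLBs can be sketched numerically following the classic delay/Doppler bounds (e.g., from the paper's reference [3]): \(\sigma _\tau ^2 \sim 1/(8\pi ^2 W^2\gamma _s)\) and \(\sigma _\nu ^2 \sim 1/(8\pi ^2 (T^p)^2\gamma _s)\), mapped to distance and radial velocity via \(d = c\tau /2\) and \(v^r = \lambda \nu /2\). The exact constants in (42)-(43) may differ; what matters for the design problem is the monotone scaling in \(W\), \(T^p\), and \(\gamma _s\):

```python
import numpy as np

C = 299_792_458.0  # speed of light (m/s)

def crlb_range_velocity(W, T_p, f_c, gamma_s):
    """Illustrative CRLB variances for distance and radial velocity as
    functions of bandwidth W, preamble length T_p, carrier f_c, and
    sensing SNR gamma_s (classic delay/Doppler bounds, assumed constants)."""
    lam = C / f_c
    var_tau = 1.0 / (8 * np.pi ** 2 * W ** 2 * gamma_s)   # delay CRLB
    var_nu = 1.0 / (8 * np.pi ** 2 * T_p ** 2 * gamma_s)  # Doppler CRLB
    return (C / 2) ** 2 * var_tau, (lam / 2) ** 2 * var_nu
```

Both variances shrink with the sensing SNR, and the velocity bound shrinks with the preamble length, which is exactly why constraints (40c)-(40d) push \(T^p\) upward against the rate objective.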
Although (40) is a non-convex optimization problem, the solution is simple. We take a two-step approach: the design problem is split into two parts that are optimized alternately. The first part is
Constraint (44b) is highly non-convex due to the complicated form of \(P_{\textrm{int}}\). But the two optimization variables can both be quantized as \(\theta _B \in R_{\theta _B}\) and \(\psi \in R_{\psi }\), where
in which \(\theta _0\) is the minimum beam direction resolution, and
in which \(\frac{\pi }{2^k}\) is the narrowest beamwidth, and k is a positive integer. Iterating over all possible values in \(R_{\theta _B}\) and \(R_{\psi }\), we can obtain the optimal solution of (44). Note that due to the maximum speed limit of the UAV, \(R_{\theta _B}\) can be further reduced to lower the computational overhead. Therefore, (44) can be solved in constant complexity.
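The exhaustive search over the discretized grids can be sketched as follows; `p_int` and `rate` are callables standing in for the interruption probability and rate expressions derived in Sect. 3, not the paper's exact closed forms:

```python
import numpy as np

def solve_beam_subproblem(p_int, rate, theta_grid, psi_grid, p_int_th):
    """Grid search for the first subproblem (44): over discretized beam
    directions theta_grid and beamwidths psi_grid, discard pairs violating
    the interruption constraint and return the feasible pair with maximum
    rate, together with that rate."""
    best, best_rate = None, -np.inf
    for theta in theta_grid:
        for psi in psi_grid:
            if p_int(theta, psi) > p_int_th:
                continue  # violates the interruption constraint (44b)
            r = rate(theta, psi)
            if r > best_rate:
                best, best_rate = (theta, psi), r
    return best, best_rate
```

Since the grid sizes are fixed by the angular resolution and the beamwidth quantization, the search cost is constant per frame, consistent with the complexity claim above.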
The second part of (40) is constructed as
Note that \(\sigma _{d}^{2}\) and \(\sigma _{v^r}^{2}\) are monotonically related to \(T^p\) and \(\psi\), so (47) is easy to solve and has constant complexity. Alternately solving the two optimization problems once or twice, we can obtain the optimal values of \(T^p\), \(\theta _B\), and \(\psi\). The overall complexity of (40) is \(\mathcal {O}(1)\) due to the discretization of the optimization variables. The main computational cost of the algorithm lies in the sensing process to obtain the target's accurate states, as discussed in Sect. 2.4.
By solving (40), we can realize the proposed robust beamforming design, including the length of the preamble, the beam pointing direction, and the beamwidth. Fundamentally, the CRLB constraints must be satisfied to realize periodic target sensing. While satisfying the interruption constraints, the capacity may decrease, but this significantly reduces the beam recovery overhead and the cost of interruptions.
5 Results and discussion
In this section, simulations are presented to verify the performance of our algorithm. The real 3D track of the target UAV is generated based on the model assumed in (19), (20), and (21) in Sect. 3.1. In this scenario, we can obtain the radial velocity in sensing, which provides additional information for the angle estimation and improves the prediction performance. Our algorithm can easily be extended to scenarios with other motion models as long as the velocity transformation relationship is replaced accordingly. The frame duration is set as \(T = 0.2\) s, which means the sensing process is performed every 0.2 s. The operating frequency is set as \(f_c = 30\) GHz, and the bandwidth is set as \(W = 1.76\) GHz [7]. We also set the target radar cross-section as \(\sigma _{\textrm{RCS}} = 2\,\hbox{m}^2\) [30], a reasonable value for small drones in the millimeter-wave band. To keep track of the target UAV, the accurate state information is updated at the start of each frame by the sensing algorithm, and that of the next frame is predicted at the same time.
Figure 4 shows the tracking performance of our prediction algorithm. The tracks over time in 3D view are presented in Fig. 4a, and the corresponding distance estimation differences are depicted in Fig. 4b. The initial velocity of the target UAV is set at [5, 5, 0.5] m/s, assuming high-speed UAVs [31]. In Fig. 4a, the blue line denotes the actual movement trajectory of the UAV. The red line denotes the locations predicted by our algorithm over time, and the yellow line is obtained by the traditional GPS-based algorithm as in [11], which uses the locations' differentials instead of actual velocities. In this paper, we only consider the performance gap between algorithms, ignoring possible differences in other aspects, such as latency and precision, that are related to practical applications. As shown in Fig. 4a, the red line is almost always close to the blue line, while the yellow line deviates most of the time. This shows that the GPS-based track performs poorly when the UAV changes its direction frequently, while our predicted track almost precisely fits the actual track. Similarly, in Fig. 4b, it can be observed that in most cases our distance estimation errors are much smaller than those of the GPS-based algorithm. This is because we have introduced an additional dimension of radial velocity, which carries information about the direction of the target's movement. In the middle part of the trajectory, around the 25th frame, the two algorithms have similarly good performance since there are few changes in the target's moving direction. As shown in Fig. 4a, the UAV is flying in a nearly straight line there, enabling both algorithms to achieve lower estimation errors. It is noted that the differences of the predicted track increase slightly in this part, because the change in the magnitude of the speed perturbs our algorithm.
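The qualitative difference between the two predictors can be illustrated with a toy sketch (the motion values below are illustrative assumptions, not the paper's simulation): the GPS-based predictor in [11] differentiates successive positions, while the ISAC-aided predictor uses the velocity recovered from sensing.

```python
def predict_gps(p_prev, p_curr, T):
    """GPS-style prediction as in [11]: estimate velocity from the
    positions' differential, then extrapolate one frame ahead."""
    v_est = [(c - p) / T for p, c in zip(p_prev, p_curr)]
    return [c + v * T for c, v in zip(p_curr, v_est)]

def predict_isac(p_curr, v_sensed, T):
    """ISAC-aided prediction: extrapolate with the velocity recovered from
    sensing (radial velocity plus the motion model) instead of a differential."""
    return [c + v * T for c, v in zip(p_curr, v_sensed)]
```

When the UAV turns, the sensed velocity already reflects the new heading, whereas the positional differential still points along the old one — which is exactly the regime where the yellow track in Fig. 4a deviates.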
Figure 5 depicts the changes of the interruption probability as in (26). In Fig. 5a, both the proposed and GPS-based algorithms as in [11] are simulated with different beamwidths, presented as solid lines and dashed lines with cross marks, respectively. The x-axis is the standard deviation of the noise, \(\sigma _c\). As \(\sigma _c\) increases, the communication SNR \(\gamma _c\) decreases, and the interruption probability increases accordingly from 0 to 1. Meanwhile, as the beamwidth decreases from \(\pi /4\) to \(\pi /8\), \(\pi /16\), and \(\pi /32\), the lines tend to shift toward the right. This is because when the beam gets narrower, the beamforming gain increases inversely with the square of the beamwidth; the SNR then increases so that the link can resist stronger noise. As \(\sigma _c\) keeps increasing, the interruption probability of all lines gradually reaches 1 due to low SNR. On the other hand, the lines also tend to shift upward, meaning that narrower beams sometimes cause more interruptions. The reason is that while narrower beams bring higher SNR, they also create additional difficulties in the beam alignment process: the beam coverage area is reduced proportionally to the square of the beamwidth, thus decreasing the beam alignment probability \(P(\mathcal {H}_0)\). The interruption probability is then kept at a certain level even if the SNR is high. Since the lines of the same color share the same beamwidth, we can see that in each pair the dashed one usually has a higher interruption probability. This shows that our proposed beam design approach can markedly improve the stability of the communication link compared to the GPS-based approach, since we introduce additional radial velocity information, which significantly improves the accuracy of position estimation. It is also noted that when the beamwidth is \(\psi = \pi /4\) or \(\pi /8\), the pairs of lines in blue and red converge. This is because the beam is wide enough to cover almost every possible location of the target UAV, so the slight difference in beam direction is not critical. As a result, when the SNR is high enough, the interruption probability may reach 0.
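The SNR-versus-alignment tradeoff described above can be reproduced qualitatively with a toy Monte Carlo model; all numerical constants here are illustrative assumptions, and the true \(P_{\textrm{int}}\) is given by (26):

```python
import math, random

def interruption_prob(psi, sigma_angle, sigma_c, snr_th=1.0, g0=1.0,
                      trials=20000, seed=0):
    """Toy Monte Carlo stand-in for (26): an interruption occurs if the target
    falls outside the beam (misalignment) or the in-beam SNR drops below
    snr_th. Beamforming gain scales as 1/psi^2, as discussed in the text."""
    rng = random.Random(seed)
    gain = g0 / psi**2
    fails = 0
    for _ in range(trials):
        err = rng.gauss(0.0, sigma_angle)   # pointing error vs. the true angle
        if abs(err) > psi / 2:              # beam misalignment -> interruption
            fails += 1
            continue
        if gain / sigma_c**2 < snr_th:      # aligned but SNR too low
            fails += 1
    return fails / trials
```

Wide beams with low noise give near-zero interruption probability; very strong noise drives every configuration to 1; and a narrow beam under large pointing error stays interrupted even at high SNR — the floor visible in Fig. 5a.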
In Fig. 5b, we also compare our algorithm with the constant angular velocity algorithm proposed in [32], which assumes that the target UAV moves with the same angular velocity, denoted \(\omega\), as at the previous moment. The dashed lines with cross marks are replaced with the constant-\(\omega\) algorithm with the corresponding beamwidth. The interruption probability of the constant-\(\omega\) algorithm shows a similar trend to that of the GPS-based algorithm, i.e., it goes up as the noise increases and shifts to the right and upward as the beamwidth decreases. It is noted that under the same configuration, the constant-\(\omega\) algorithm tends to have a higher interruption probability. This is because when the target UAV's movement distance per unit of time is relatively large compared to the distance from the BS node, its angular velocity usually cannot remain approximately constant. As the distance from the base station increases, the performance of this algorithm improves, eventually reaching an interruption probability similar to that of the GPS-based algorithm. However, there is still a significant gap between this algorithm and the proposed algorithm due to its lack of additional radial velocity information.
We present the changes of R, \(\sigma _d^2\), and \(\sigma _{v^r}^2\), that is, the achievable rate and the CRLBs of the distance and radial velocity, over time in Fig. 6. The frame indices on the x-axis are selected from the same track as in Fig. 4, with the distance increasing approximately from 80 to 130 m. The y-axis on the left denotes \(\sigma _d\) and \(\sigma _{v^r}\), corresponding to the rising lines, and the y-axis on the right denotes R, corresponding to the decreasing lines. Lines of the same color share the same preamble percentage \(\alpha\). As we can see in the figure, as the frame index grows, and with it the distance of the UAV, the achievable rate decreases continuously while the CRLBs increase by a larger order of magnitude. This is because the communication SNR \(\gamma _c\) is inversely proportional to the square of the distance, with a logarithmic operation afterward, whereas the sensing SNR \(\gamma _s\) is inversely proportional to the fourth power of the distance. This confirms that in an ISAC system, sensing is often much more sensitive to distance than communication. Also, we can see that as \(\alpha\) increases from 0.05 and 0.1 to 0.2, the sensing performance improves while the communication performance worsens, since a longer preamble yields lower CRLBs but squeezes the communication resources. It is noted that \(\sigma _{v^r}^2\) increases more than \(\sigma _{d}^2\) because a longer signal in the time domain typically contributes more to Doppler estimation.
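The role of \(\alpha\) can be sketched numerically. The CRLB forms below are textbook assumptions with the preamble-length dependence made explicit, and all parameter values are illustrative; they are not the paper's exact expressions:

```python
import math

def frame_tradeoff(alpha, T=0.2, W=1.76e9, gamma_c=10.0, gamma_s=1.0,
                   lam=0.01, c=3e8):
    """Illustrative split of one frame: the rate is carried by the (1 - alpha)
    data part, while the CRLBs are computed over the alpha * T preamble.
    Textbook CRLB forms are assumed (not the paper's equations)."""
    rate = (1 - alpha) * W * math.log2(1 + gamma_c)
    g_eff = gamma_s * alpha                # preamble energy grows with alpha
    sigma_d2 = c**2 / (8 * math.pi**2 * W**2 * g_eff)
    sigma_v2 = lam**2 / (8 * math.pi**2 * (alpha * T)**2 * g_eff)
    return rate, sigma_d2, sigma_v2
```

Increasing \(\alpha\) from 0.05 to 0.2 lowers both CRLBs at the cost of rate, and \(\sigma_{v^r}^2\) improves faster than \(\sigma_d^2\) because of the extra \((\alpha T)^2\) factor in the Doppler term — matching the Fig. 6 discussion.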
6 Conclusion
In this paper, we provide a robust solution for beamforming in UAV communications. To solve the problem of frequent communication interruptions caused by high mobility and narrow beams, we propose an ISAC-based framework to realize a robust beamforming design. In particular, we introduce additional state information obtained from the sensing process, which is carried out on the current communication preamble. Simulations show that this clearly enhances the state prediction performance. Then, based on the predicted states, we formulate the interruption probability. A robust beamforming design is then presented, satisfying the derived interruption probability constraints as well as the communication and sensing metrics. Numerical results confirm that our algorithm is capable of reducing communication interruptions.
Availability of data and materials
Not applicable.
Abbreviations
ISAC: Integrated sensing and communication
UAV: Unmanned aerial vehicle
DQPSK: Differential quadrature phase shift keying
RF: Radio frequency
PN: Pseudo-noise
OFDM: Orthogonal frequency division multiplexing
MIMO: Multiple-input multiple-output
mMIMO: Massive multiple-input multiple-output
GPS: Global positioning system
BS: Base station
3D: 3-dimensional
IoT: Internet of things
LoS: Line-of-sight
RSU: Roadside unit
UPA: Uniform planar array
CSCG: Circularly symmetric complex Gaussian
SIC: Successive interference cancellation
MUSIC: Multiple signal classification
CRLB: Cramér–Rao lower bound
References
F. Liu, Y. Cui, C. Masouros, J. Xu, T.X. Han, Y.C. Eldar, S. Buzzi, Integrated sensing and communications: toward dual-functional wireless networks for 6G and beyond. IEEE J. Sel. Areas Commun. 40(6), 1728–1767 (2022)
F. Liu, C. Masouros, A.P. Petropulu, H. Griffiths, L. Hanzo, Joint radar and communication design: applications, state-of-the-art, and the road ahead. IEEE Trans. Commun. 68(6), 3834–3862 (2020)
M.A. Richards, Fundamentals of Radar Signal Processing (McGraw-Hill Education, New York, 2014)
M. Roberton, E. Brown, Integrated radar and communications based on chirped spread-spectrum techniques. In: IEEE MTT-S International Microwave Symposium Digest, vol. 1 (IEEE, 2003), pp. 611–614
S. Xu, Y. Chen, P. Zhang, Integrated radar and communication based on DS-UWB. In: 2006 3rd International Conference on Ultrawideband and Ultrashort Impulse Signals (IEEE, 2006), pp. 142–144
C. Sturm, W. Wiesbeck, Waveform design and signal processing aspects for fusion of wireless communications and radar sensing. Proc. IEEE 99(7), 1236–1259 (2011)
P. Kumari, J. Choi, N. González-Prelcic, R.W. Heath, IEEE 802.11ad-based radar: an approach to joint vehicular communication-radar system. IEEE Trans. Veh. Technol. 67(4), 3012–3027 (2017)
F. Liu, C. Masouros, A. Li, H. Sun, L. Hanzo, MU-MIMO communications with MIMO radar: from coexistence to joint transmission. IEEE Trans. Wirel. Commun. 17(4), 2755–2770 (2018)
B. Nuss, L. Sit, M. Fennel, J. Mayer, T. Mahler, T. Zwick, MIMO OFDM radar system for drone detection. In: 2017 18th International Radar Symposium (IRS) (IEEE, 2017), pp. 1–9
W. Wu, N. Cheng, N. Zhang, P. Yang, W. Zhuang, X. Shen, Fast mmWave beam alignment via correlated bandit learning. IEEE Trans. Wirel. Commun. 18(12), 5894–5908 (2019)
Y. Huang, Q. Wu, T. Wang, G. Zhou, R. Zhang, 3D beam tracking for cellular-connected UAV. IEEE Wirel. Commun. Lett. 9(5), 736–740 (2020)
K. Guo, R. Liu, M. Alazab, R.H. Jhaveri, X. Li, M. Zhu, STAR-RIS-empowered cognitive non-terrestrial vehicle network with NOMA. IEEE Trans. Intell. Veh. 8(6), 3735–3749 (2023)
R. Liu, K. Guo, K. An, F. Zhou, Y. Wu, Y. Huang, G. Zheng, Resource allocation for NOMA-enabled cognitive satellite-UAV-terrestrial networks with imperfect CSI. IEEE Trans. Cognit. Commun. Netw. 1–1 (2023)
K. Guo, R. Liu, C. Dong, K. An, Y. Huang, S. Zhu, Ergodic capacity of NOMA-based overlay cognitive integrated satellite-UAV-terrestrial networks. Chin. J. Electron. 32(2), 273–282 (2023)
Z. Xiao, L. Zhu, Y. Liu, P. Yi, R. Zhang, X.-G. Xia, R. Schober, A survey on millimeter-wave beamforming enabled UAV communications and networking. IEEE Commun. Surv. Tutor. 24(1), 557–610 (2021)
K. Guo, M. Wu, X. Li, H. Song, N. Kumar, Deep reinforcement learning and NOMA-based multi-objective RIS-assisted IS-UAV-TNs: trajectory optimization and beamforming design. IEEE Trans. Intell. Transp. Syst. 1–14 (2023)
L. Zhu, J. Zhang, Z. Xiao, X. Cao, X.-G. Xia, R. Schober, Millimeter-wave full-duplex UAV relay: joint positioning, beamforming, and power control. IEEE J. Sel. Areas Commun. 38(9), 2057–2073 (2020)
L. Zhu, J. Zhang, Z. Xiao, X. Cao, D.O. Wu, X.-G. Xia, 3D beamforming for flexible coverage in millimeter-wave UAV communications. IEEE Wirel. Commun. Lett. 8(3), 837–840 (2019)
F. Liu, W. Yuan, C. Masouros, J. Yuan, Radar-assisted predictive beamforming for vehicular links: communication served by sensing. IEEE Trans. Wirel. Commun. 19(11), 7704–7719 (2020)
W. Yuan, F. Liu, C. Masouros, J. Yuan, D.W.K. Ng, N. González-Prelcic, Bayesian predictive beamforming for vehicular networks: a low-overhead joint radar-communication approach. IEEE Trans. Wirel. Commun. 20(3), 1442–1456 (2020)
M. Giordani, M. Polese, A. Roy, D. Castor, M. Zorzi, A tutorial on beam management for 3GPP NR at mmWave frequencies. IEEE Commun. Surv. Tutor. 21(1), 173–196 (2018)
L. Yang, W. Zhang, Hierarchical codebook and beam alignment for UAV communications. In: 2018 IEEE Globecom Workshops (GC Wkshps) (IEEE, 2018), pp. 1–6
L. Liu, S. Zhang, R. Zhang, CoMP in the sky: UAV placement and movement optimization for multi-user communications. IEEE Trans. Commun. 67(8), 5645–5658 (2019)
G. Xu, N. Zhang, M. Xu, Z. Xu, Q. Zhang, Z. Song, Outage probability and average BER of UAV-assisted dual-hop FSO communication with amplify-and-forward relaying. IEEE Trans. Veh. Technol. (2023)
A.A.A. Boulogeorgos, E.N. Papasotiriou, A. Alexiou, Analytical performance assessment of THz wireless systems. IEEE Access 7, 11436–11453 (2019)
P. Kumari, S.A. Vorobyov, R.W. Heath, Adaptive virtual waveform design for millimeter-wave joint communication-radar. IEEE Trans. Signal Process. 68, 715–730 (2019)
L. Dai, B. Wang, Y. Yuan, S. Han, I. Chih-Lin, Z. Wang, Non-orthogonal multiple access for 5G: solutions, challenges, opportunities, and future research trends. IEEE Commun. Mag. 53(9), 74–81 (2015)
X.R. Li, V.P. Jilkov, Survey of maneuvering target tracking. Part I. Dynamic models. IEEE Trans. Aerosp. Electron. Syst. 39(4), 1333–1364 (2003)
H.L. Song, Y.C. Ko, Beam alignment for high-speed UAV via angle prediction and adaptive beam coverage. IEEE Trans. Veh. Technol. 70(10), 10185–10192 (2021)
C.C. Tsai, C.T. Chiang, W.J. Liao, Radar cross section measurement of unmanned aerial vehicles. In: 2016 IEEE International Workshop on Electromagnetics: Applications and Student Innovation Competition (iWEM) (IEEE, 2016), pp. 1–3
M. Khabbaz, J. Antoun, C. Assi, Modeling and performance analysis of UAV-assisted vehicular networks. IEEE Trans. Veh. Technol. 68(9), 8384–8396 (2019)
L. Yang, W. Zhang, Beam tracking and optimization for UAV communications. IEEE Trans. Wirel. Commun. 18(11), 5367–5379 (2019)
Acknowledgements
Not applicable.
Funding
Not applicable.
Author information
Authors and Affiliations
Contributions
All the authors contributed to the system model, state prediction and interruption analysis, beamforming design, simulations, and the writing of this paper. All authors read and approved the final manuscript.
Corresponding author
Ethics declarations
Ethics approval and consent to participate
Not applicable.
Consent for publication
Not applicable.
Competing interests
The authors declare that they have no competing interests.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Appendices
Appendix A
Substituting (35) into (36), we can obtain
Since (48) is difficult to simplify, we convert (48) into
Substituting (49) into (34), we get
Let \(R_4=\tan \left( \frac{\psi }{2}\right) R_3\), and we get
Let \(R_0=R_0\left( \theta _B^v, \theta _B^h\right) =R_4^T R_4 - R_1^T R_1 - R_2^T R_2\). Since \(R_0\) is symmetric, we unfold (52) as follows:
where \(R_{ij}\) is the element in the ith row and jth column of the matrix \(R_0\). Substituting \(\left\{ \begin{array}{l}x=d \sin \theta ^v \cos \theta ^h \\ y=d \sin \theta ^v \sin \theta ^h \\ z=d \cos \theta ^v\end{array}\right.\) into (53), we get
Let
we get
in which
We denote the solution of (57) as \(R_{a0}\). Since the solution of this inequality depends heavily on \(\text {sgn}(B)\), we first assume \(B > 0\) for the derivation.
We note that if \(X<-1\), the inequality has no solution, i.e., \(R_{a0} = \varnothing\). We further denote the solution interval of the inequality \(X<-1\) as \(R_{a1}\). That is, if \(\theta ^h \in R_{a1}\), there is no \(\theta ^v\) satisfying the points in the beam region. Expanding the inequality \(X<-1\), we get that
where
\(X_1,X_2,X_3,\theta _{\text {temp2}},X0\) are all related to beam configurations only. The inequality is similar in form to (57), so we perform similar derivations. Since the solution of this inequality depends heavily on \(\text {sgn}(X_1)\), we first assume \(X_1 > 0\) for the derivation.

i.
If \(X0\ge 1\), the inequality (61) has no solution. That is, in the case of these beam configurations, \(X>-1\) is always true, and (57) always has solutions. In this case, \(R_{a1} = \varnothing\).

ii.
If \(X0\le -1\), the inequality (61) always holds. That is, \(X<-1\) always holds, which means that for these beam configurations the beam region cannot be expressed. This cannot be true. So after derivation, \(X0>-1\) always holds.

iii.
If \(-1<X0<1\), solving inequality (61), we can get that
$$\begin{aligned} R_{a1}&= \left( -\frac{1}{2}\arccos (X0)-\frac{1}{2}\theta _{\text {temp2}},\ \frac{1}{2}\arccos (X0)-\frac{1}{2}\theta _{\text {temp2}}\right) \\&\quad \cup \left( \pi -\frac{1}{2}\arccos (X0)-\frac{1}{2}\theta _{\text {temp2}},\ \pi +\frac{1}{2}\arccos (X0)-\frac{1}{2}\theta _{\text {temp2}}\right) \end{aligned}$$(64)
Note that since \(\theta ^h \in (0,2\pi )\), the final interval should be \(R_{a1}\cap (0,2\pi )\) and hence varies with different beam configurations. Due to space limitations, we omit this part of the proof.
Given the above, we could obtain \(R_{a1}\) under different beam configurations. If \(\theta ^h\in R_{a1}\), \(R_{a0} = \varnothing\).
If \(X\ge 1\), it means that in this interval of \(\theta ^h\), all points with \(\theta ^v\in (0,\frac{1}{2}\pi )\) are in the beam region. This cannot be true. Therefore, after derivation, we find that \(X< 1\) always holds.
If \(-1<X<1\), the inequality (57) can be normally solved. We denote the solution interval of the inequality \(-1<X<1\) as \(R_{b1}\); then \(R_{a1}\cup R_{b1} = (0,2\pi )\) and \(R_{a1}\cap R_{b1} = \varnothing\). Solving (57), we get that
Note that since \(\theta ^v \in (0,\frac{1}{2}\pi )\), the final interval should be \(R_{a0}(\theta ^h)\cap (0,\frac{1}{2}\pi )\), and hence varies with different \(\theta ^h\) intervals. Due to space limitations, we omit this part of the proof.
Now, we can get a clear form of beam coverage
where \(R_{b1}\) is the solution interval of \(-1<X<1\), and \(R_{a0}(\theta ^h)\) is the corresponding solution interval of (57).
Then, looking back at (57), where we assumed \(B>0\): the sign of B changes over different intervals of \(\theta ^h\). Solving the inequality \(B>0\), we get an interval of \(\theta ^h\), denoted as \(R_{a2}\). Due to space limitations, the specific form of \(R_{a2}\) and its overlap with \(R_{a1}\) are omitted here. When \(\theta ^h\in R_{a2}\), \(BC_i\) can be obtained by the above derivations. When \(B\le 0\), we denote the corresponding interval of \(\theta ^h\) as \(R_{b2}\); then \(R_{a2}\cup R_{b2} = (0,2\pi )\) and \(R_{a2}\cap R_{b2} = \varnothing\). When \(\theta ^h\in R_{b2}\), the derivations change slightly:

i.
Replace X with \(X^\prime = -X\) in \(R_{a0}(\theta ^h)\).

ii.
$$\begin{aligned} BC_i=\left\{ \left( \theta ^v,\theta ^h\right) \mid \theta ^v \in R_{b0}(\theta ^h),\ \theta ^h \in R_{b1}\right\} \end{aligned}$$(67)
where \(R_{a0}\cup R_{b0} = (0,\frac{1}{2}\pi )\) and \(R_{a0}\cap R_{b0} = \varnothing\).
Similarly, looking back at (61), where we assumed \(X_1>0\): the sign of \(X_1\) actually changes with different beam configurations. When \(X_1\le 0\), the derivations change slightly:

i.
Replace X0 with \(X0^\prime = -X0\) in \(R_{a1}\).

ii.
$$\begin{aligned} BC_i=\left\{ \left( \theta ^v,\theta ^h\right) \mid \theta ^v \in R_{a0}(\theta ^h),\ \theta ^h \in R_{a1}\right\} \end{aligned}$$(68)
Appendix B
Substituting (18) into (26), we have
After arranging the integral limits of (38), we get that
Based on (23) and (24), we obtain that
Substituting (71) into (70), we get that
and the integral region \(D_0\) is the region enclosed by
Combining (24) and (20), we further replace \(v^x\) with \(v^p\) and \(v^z\) and get that
Taking the partial derivative of both sides of (24), we get that
Substituting it into (74), we finally obtain the solvable form
where \(p(v^p)\) and \(p(v^z)\) are mutually independent and both follow Gaussian distributions.
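Since \(p(v^p)\) and \(p(v^z)\) are independent Gaussians, the integral over \(D_0\) in (76) can be evaluated numerically; a Monte Carlo sketch, with a placeholder region test standing in for the boundary conditions in (73):

```python
import random

def interruption_integral(in_region, mu_p, sig_p, mu_z, sig_z,
                          n=50000, seed=1):
    """Monte Carlo evaluation of an integral of p(v^p) * p(v^z) over a region
    D_0: because the two densities are independent Gaussians (as stated after
    (76)), sampling each and counting hits inside D_0 estimates the integral.
    `in_region` is a placeholder for the boundary conditions in (73)."""
    rng = random.Random(seed)
    hits = sum(
        in_region(rng.gauss(mu_p, sig_p), rng.gauss(mu_z, sig_z))
        for _ in range(n)
    )
    return hits / n
```

With the actual region from (73) plugged in, this reproduces \(P_{\textrm{int}}\) numerically without evaluating (76) in closed form.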
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
About this article
Cite this article
Zhang, H., Yang, T., Wu, X. et al. Robust beamforming design for UAV communications based on integrated sensing and communication. J Wireless Com Network 2023, 88 (2023). https://doi.org/10.1186/s13638-023-02300-0
DOI: https://doi.org/10.1186/s13638-023-02300-0