
Robust beamforming design for UAV communications based on integrated sensing and communication


Integrated sensing and communication (ISAC) has emerged as a promising technique for various wireless communication applications. In this paper, we investigate a beamforming design method based on ISAC waveforms for unmanned aerial vehicle (UAV) communications. An integrated state prediction and beamforming design framework is presented. We utilize the target states obtained from sensing algorithms to improve the prediction performance. Based on the predicted states, we formulate the mathematical form of the communication interruption probability. To enhance the beamforming performance, we propose a design approach that satisfies both sensing and communication metrics as well as the communication interruption constraints. We show that the proposed method achieves robust communication under the integrated state prediction and beamforming design framework. Simulation results show that, by using the ISAC signal, our method significantly lowers the communication interruption probability in the beamforming process and achieves better communication performance.

1 Introduction

As a promising technique, integrated sensing and communication (ISAC) has attracted much interest in the next generation of wireless communications. With the growing intelligence of daily life and industrial production, scenarios that need both high-rate communications and high-accuracy sensing have become common [1]. ISAC can realize both functions with one waveform and one set of hardware equipment. Conventionally, research in the two fields has rarely intersected. The spectrum resources dedicated to sensing and communications are usually separated to avoid mutual interference. In recent years, however, more and more scenarios with both needs have emerged. For example, when multiple unmanned aerial vehicles (UAVs) team up and perform tasks in unknown environments, the UAVs need to sense the environment in real time while maintaining seamless communications with the controller or each other to guarantee timely offloading and scheduling. Other scenarios like smart factories or autonomous driving also call for joint use of the two modules. On the other hand, the ever-increasing number of wireless communication applications has crowded the existing allocated frequency bands and pushed the commonly used bands toward higher frequencies, for example, millimeter-wave bands [2]. Notably, these bands are often used for high-resolution short-range sensing, for example, by millimeter-wave radars [3]. Consequently, researchers have begun to investigate the integration of sensing and communication in one system, i.e., the ISAC system. It is an effective solution to alleviate the possible spectrum tension and reduce the hardware overhead in these cases. It is envisioned that ISAC will support various applications ranging from sensing-assisted beamforming and millimeter-wave channel estimation to cooperative sensing.

Some early work was done in radar to realize sensing and communication with one waveform. The typical method is to embed the communication information into radar waveforms and realize message transmission during detection. For example, the authors in [4] used the differential quadrature reference phase shift keying (DQPSK) method to encode data streams and modulate chirp signals to support the multifunctionality of military radio frequency (RF) subsystems. To avoid mutual jamming, the authors in [5] utilized different pseudo-noise (PN) codes to spread digital streams and used the obtained waveforms for sensing. However, the radar-signal-based methods are far from meeting practical communication needs, since modulations based on traditional radar waveforms are highly inefficient. Therefore, ISAC attracted little interest in academic research or industrial production in the early years. Later, in 2011, the authors in [6] proposed an insightful method that performs a sensing algorithm on orthogonal frequency division multiplexing (OFDM)-modulated communication signals. It proved that OFDM signals, while satisfying existing protocols, deliver quite acceptable sensing performance. Henceforth, methods that utilize or adjust communication signals for sensing to realize ISAC started to gain attention. Given that the preambles of communication frames often have good correlation properties, the authors in [7] proposed to utilize the preamble of the IEEE 802.11ad WLAN standard for range and velocity estimation. The method is shown to support Gbps data rates and cm-level sensing resolution at the same time. Considering the similarity in the use of antenna arrays, the authors in [8] enabled both multiple-input multiple-output (MIMO) radar detection and multi-user multiple-input multiple-output (MU-MIMO) communication in one system by designing the conventional communication beamforming matrix to match the radar beam pattern.
In summary, ample simulation and experimental results have confirmed ISAC’s practicality in next-generation wireless communications.

When it comes to high-frequency communications, a massive multiple-input multiple-output (mMIMO) system is often adopted to counteract severe channel loss. Meanwhile, the mMIMO technique can also help to measure the target angle during sensing and realize the MIMO radar function when needed [9]. Hence, a considerable portion of research on ISAC is based on mMIMO assumptions. Although mMIMO can provide high-rate communication and high-resolution sensing, the pencil-like beam link is easily interrupted by misalignment, especially when the target is moving. A robust beamforming design is therefore needed to steer the transmission beam in the right direction. Traditional beam alignment methods are typically based on location information provided by the target’s global positioning system (GPS) [10, 11]. The base node then predicts the target’s location and forms a directional beam. But the GPS-based method has some inherent drawbacks. It has relatively low precision and is vulnerable to blockage. Besides, it needs a maintained uplink channel to receive the target’s GPS message, which incurs additional overhead and may cause the location information to become stale. In this respect, ISAC is a competitive technique for beam alignment. On the one hand, it does not need the satellite’s assistance, so it performs well in blockage scenarios, such as indoor navigation or relaying. On the other hand, ISAC can acquire fresher and higher-precision information than GPS. Most importantly, it is a proactive method that can significantly reduce the uplink overhead.

As mentioned above, millimeter-wave communication, with its abundant spectrum resources, is increasingly adopted in UAV communications [12,13,14], where antenna arrays and beamforming techniques are often introduced to provide beamforming gain. While millimeter-wave signals experience severe fading and high penetration loss, UAVs serving as aerial access points can significantly enhance the coverage and quality of communications [15]. In this paper, we focus on the scenario where the UAV serves as an aerial access point providing highly reliable and low-latency communications while maintaining a robust beamforming link to the base station (BS). However, the mobility of UAVs can still cause frequent channel variations. Therefore, a robust beamforming algorithm is required to counteract the frequent channel variations caused by 3-dimensional (3D) narrow beams and the rapid movements of UAVs [16,17,18]. ISAC-based beamforming, with its fast updates and lower uplink overhead, is a competitive candidate. Some research has been conducted on ISAC-based beamforming [19, 20], but these studies are mainly set in street scenarios, where an ISAC module mounted on the roadside unit (RSU) performs sensing-assisted beam alignment toward an automobile on the road. In this paper, we apply the ISAC technique to UAV communications. Compared to the street scenario, the mobility of UAVs is much higher. While the motion model of an automobile is approximately one-dimensional along the road, UAVs can move in any three-dimensional direction in the air, which makes the state prediction more complicated. Meanwhile, communication with UAVs is more likely to be interrupted due to the larger beam space and higher mobility, which is also the focus of our work. We aim to mitigate these frequent interruptions with the proposed ISAC method. The main contributions of this work are summarized as follows:

  • An innovative integrated state prediction and beamforming design framework based on ISAC is proposed to solve the interruption problem in UAV communications. It combines the sensing algorithm and the communication interruption analysis, and realizes improved link robustness.

  • A new state prediction scheme is presented, where additional information from the radial velocity is introduced. We utilize the coupling of the three-dimensional velocities to achieve better estimations of the target’s states.

  • The interruption probability is accordingly formulated to measure the performance of beamforming, considering both the misalignment and the communication outage. An analytically solvable form of the interruption probability is derived based on the predicted states.

  • A robust beamforming scheme is designed, which satisfies the given interruption constraints, along with both sensing and communication metrics. The beam pointing direction, beamwidth, and time fraction of the preamble are optimized simultaneously.

The remainder of this paper is organized as follows. An ISAC-based beamforming model for UAV communications is presented in Sect. 2. State prediction and interruption analysis are discussed in Sect. 3. In Sect. 4, a robust beamforming design is introduced. Numerical results are presented in Sect. 5. Conclusions are drawn in Sect. 6.

2 Methods and system model

In this section, we provide a model in which the BS communicates with the target UAV in the millimeter-wave band with a massive \(N_x \times N_z\) uniform planar array (UPA). The UAV serves as an aerial access point providing highly reliable and low-latency communications. For brevity, we refer to them as the base node and the UAV node in this paper. To maintain a high-throughput communication link, the base node forms a 3D analog beam toward the UAV node. Since the high mobility of the UAV combined with the narrow beam may render the link unstable, we introduce an ISAC-based beam design algorithm to improve the beamforming performance.

As shown in Fig. 1, we put the base node at the Cartesian coordinate origin (0, 0, 0), which forms a beam pointing at direction \((\theta _B^v,\theta _B^h)\). We further denote the position of the UAV node as (x, y, z). Our aim is to efficiently predict the state of the UAV, analyze the interruption probability of the communication link, and adjust the beam to realize robust beamforming. We first provide the general framework of our ISAC-based beamforming system.

Fig. 1 ISAC-based beamforming model

2.1 General framework

This framework is primarily structured on the standard beamforming procedure [21], with the proposed ISAC method embedded. It can be divided into the following five processes:

(a) Initial access

Initially, we need to establish a directional communication link between the base node and the UAV node. At this time, we have no prior knowledge of the UAV’s position or the channel state. State-of-the-art beam training algorithms, for example, that in [22], can be used to accomplish the initial access process.

(b) Transmission/receiving echoes

Once the transmission link is established, the base node starts to send ISAC signals to the UAV node. While part of the signal is received by the UAV node, some of it is reflected back to the base node. These echoes contain information about the UAV’s motion state, which can be utilized for state prediction.

(c) State prediction

Through radar algorithms, the UAV’s state of motion, i.e., angle, distance, and radial velocity, is obtained from the ISAC echoes. Based on the specific motion model, the base node can predict the UAV’s state in the next frame.

(d) Interruption analysis

Since the prediction is not accurate, and the transmission beam can only cover a limited area, the communication link is easily interrupted. To measure the beam alignment performance, we further formulate the communication interruption probability based on the prediction results.

(e) Beamforming design

With the aforementioned interruption analysis, we introduce a set of beamforming design criteria in our model. It not only considers the interruption probability but also satisfies both sensing and communication metrics.

Processes (b)–(e) are then performed in a loop. If an interruption occurs, the process is restarted from (a).

2.2 Signal model

As shown in Fig. 2, we choose the preamble-based form of the ISAC technique since it is easy to implement in practice. We denote i as the frame index and T as the frame length, which is pre-set as a constant. Each frame consists of two blocks: a preamble block and a transmission block. The preamble block is used to allocate resources, estimate the channel states for communication purposes, and obtain motion states for sensing purposes. The transmission block is used to send messages as usual. We denote the length of the preamble block as \(T^p\) and that of the transmission block as \(T^d\), where \(T=T^p+T^d\). With T fixed, \(T^p\) and \(T^d\) can be adjusted for a better sensing and communication trade-off.

Fig. 2 Frame structure

In this paper, we mainly focus on the one-to-one scenario, which can easily be extended to multi-target ones. We denote the data stream generated at the ith frame as \(s_i(t)\) and the transmitted waveform at the base node as follows:

$$\begin{aligned} \textbf{x}_i(t)=\textbf{f}_i(t)s_i(t) \end{aligned}$$

where \(\textbf{f}_i(t)\in \mathbb {C}^{N_xN_z\times 1}\) is the analog transmit beamforming vector, which is used to adjust the pointing direction of the analog beam and \(t\in [0, T]\) denotes the time in one frame.

(a) Communication signal model:

Since the communication target is a UAV, the channel is dominated by the LoS channel. Therefore, the channel matrix can be expressed as follows:

$$\begin{aligned} \textbf{H}_i = h_i e^{j2\pi \nu _{c,i}t}\textbf{a}_t^h(\theta _v,\theta _h) \end{aligned}$$

where \(h_i\) is the flat fading channel coefficient. \(\nu _{c,i}\) is the phase offset due to the Doppler shift and \(\textbf{a}_t^h(\theta _v,\theta _h) \in \mathbb {C}^{1 \times N_x N_z}\) is the array response vector. \(\theta _v,\theta _h\) denote the polar and azimuthal angles, respectively.

The channel coefficient \(h_i\) in (2) is formulated as follows:

$$\begin{aligned} h_i=\alpha _c h_r h_p \end{aligned}$$

where \(\alpha _c = \lambda e^{j2\pi d/\lambda }/{4\pi d}\) [23] is the path loss, \(\lambda\) is the wavelength, and d is the target distance. \(h_r,h_p\) represent the normalized multipath fading and the pointing errors [24]. The subscript i on the right side of (3) has been omitted for brevity.

Since the communication link is dominated by the LoS channel, \(h_r\) in (3) is considered as Rician fading model and given as follows:

$$\begin{aligned} h_r=\left( \sqrt{\frac{K}{K+1}} +\sqrt{\frac{1}{K+1}} h^{\textrm{nLoS}}\right) \end{aligned}$$

in which K is the ratio between the power in the LoS path and the power in the non-LoS (NLoS) paths. \(h^{\textrm{nLoS}}\) denotes the Rayleigh fading channel accounting for NLoS components, which is assumed to be a circularly-symmetric complex Gaussian (CSCG) random variable [11].
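As a minimal numerical sketch of this fading model, the coefficient \(h_r\) in (4) can be sampled as a deterministic LoS term plus a scaled CSCG term. The function name and the unit-variance normalization of \(h^{\textrm{nLoS}}\) are our assumptions, chosen so that \(E[|h_r|^2]=1\):

```python
import numpy as np

def sample_rician_fading(K, size=1, rng=None):
    """Sample the normalized Rician coefficient h_r of (4): a deterministic
    LoS term sqrt(K/(K+1)) plus a CSCG NLoS term scaled by sqrt(1/(K+1)).
    h^nLoS is normalized to unit variance here (our assumption), giving
    E[|h_r|^2] = 1."""
    rng = np.random.default_rng() if rng is None else rng
    # unit-variance CSCG: real and imaginary parts each N(0, 1/2)
    h_nlos = (rng.standard_normal(size) + 1j * rng.standard_normal(size)) / np.sqrt(2)
    return np.sqrt(K / (K + 1)) + np.sqrt(1 / (K + 1)) * h_nlos
```

Larger K concentrates the samples around the deterministic LoS value, matching the LoS-dominated channel assumed here.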

The pointing error \(h_p\) in (3) is caused by the UAV deviating from the beam center of the BS node. According to [25], \(h_p\) is formulated as follows:

$$\begin{aligned} h_p = A_0 \exp {\left( -\frac{2r^2}{w_{\textrm{eq}}^2}\right) } \end{aligned}$$

where \(A_0\) is the fraction of the collected power with no deviation at distance d, r is the distance of deviation from the center, and \(w_{\textrm{eq}}\) is the equivalent beamwidth. Let

$$\begin{aligned} u = \frac{\sqrt{\pi }a}{\sqrt{2}w_d} \end{aligned}$$

with a being the aperture of the receiving antenna and \(w_d\) being the radius of the transmitting beam footprint at distance d, \(A_0\) and \(w_{\textrm{eq}}^2\) are then calculated as follows:

$$\begin{aligned} A_0 = [\text {erf}(u)]^2,\quad w_{\textrm{eq}}^2 = w_d^2 \frac{\sqrt{\pi }\text {erf}(u)}{2u\exp {(-u^2})} \end{aligned}$$

where \(\text {erf}(\cdot )\) is the Gauss error function.
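The pointing-error model of (5)–(7) can be sketched numerically as follows; the function and parameter names are ours, with `scipy.special.erf` supplying the error function:

```python
import numpy as np
from scipy.special import erf

def pointing_error_gain(r, a, w_d):
    """Pointing-error factor h_p of (5)-(7).

    r   : deviation distance of the UAV from the beam center
    a   : aperture of the receiving antenna
    w_d : radius of the transmit beam footprint at distance d
    """
    u = np.sqrt(np.pi) * a / (np.sqrt(2.0) * w_d)                           # (6)
    A0 = erf(u) ** 2                                                        # collected fraction at r = 0
    w_eq_sq = w_d**2 * np.sqrt(np.pi) * erf(u) / (2.0 * u * np.exp(-u**2))  # (7)
    return A0 * np.exp(-2.0 * r**2 / w_eq_sq)                               # (5)
```

The gain peaks at \(A_0\) for zero deviation and decays with the squared deviation, as (5) states.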

Then, the received signal at the UAV node can be expressed as follows:

$$\begin{aligned} y_{c,i}(t)=\sqrt{p_i}\sqrt{M_t}\textbf{H}_i\textbf{x}_i(t)+n_{c,i}(t) \end{aligned}$$

where \(n_{c,i}\) is additive complex Gaussian noise that follows \(\mathcal{C}\mathcal{N}\left( 0, \sigma ^{2}_{c,i}\right)\). \(p_i\) is the transmit power. \(M_t = N_x \times N_z\) is the number of transmit antennas.

(b) Sensing signal model:

When signals are transmitted to the UAV node, some signals are reflected back and received by the receive array at the base node. These signals contain information about the target’s motion states and can be used to improve the performance of states prediction and beamforming design. Similarly, the sensing channel matrix is constructed as follows:

$$\begin{aligned} \textbf{H}_{s,i}=\beta _{s}e^{j2\pi \nu _{s,i}t}\textbf{a}_r(\theta _v,\theta _h)\textbf{a}_t^H(\theta _v,\theta _h) \end{aligned}$$

in which \(\beta _s\) is the sensing channel coefficient, expressed as \(\beta _s = \lambda \sqrt{\sigma _{\textrm{RCS}}}/{8\pi ^{\frac{3}{2}} d^2}\) [26]. \(\sigma _{\textrm{RCS}}\) is the target’s radar cross-section, \(\nu _{s,i}\) is the Doppler shift, and \(\textbf{a}_r(\theta _v,\theta _h)\) is the receive array response vector.

Since the ground-to-air channel is sparse, we assume that the received signal is dominated by the echo from the target UAV, plus unrelated noise. Therefore, the received echo at the base node is expressed as follows:

$$\begin{aligned} y_{s,i}(t)=\sqrt{p_i}\sqrt{M_tM_r}\textbf{H}_{s,i}\textbf{x}_i(t-\tau _{i})+n_{s,i}(t) \end{aligned}$$

where \(n_{s,i}\) is additive complex Gaussian noise that follows \(\mathcal{C}\mathcal{N}\left( 0, \sigma ^{2}_{s,i}\right)\). \(M_r\) is the number of receive antennas, and \(\tau _i\) is the time delay.

2.3 Beam model

As shown in Fig. 3, we model the beam as a uniform cone beam to simplify the beamforming design. That is, in the cone-shaped beam coverage, the gain is a fixed value; otherwise, the gain is approximately zero. Therefore, besides the beam pointing direction \((\theta _B^v,\theta _B^h)\), we further define the 3D beam width as \(\psi _i\) and the beam coverage area as \(BC_i\). The detailed mathematical form of \(BC_i\) is analyzed in the following section.

Fig. 3 Cone beam model

Based on the cone-shaped beam pattern assumption, we further define the beamforming gain as follows:

$$\begin{aligned} G_{B,i}=\left\{ \begin{array}{ll} G_{Bm,i}, &{}\quad (\theta _v,\theta _h)\in BC_i\\ G_{Bs,i}, &{} \quad \text {else } \end{array}\right. \end{aligned}$$

in which \(G_{Bm, i}\) is inversely proportional to the square of the beamwidth and is much larger than \(G_{Bs, i}\). The received communication signal at the UAV node and the reflected sensing signal at the base node can be reformulated, respectively, as follows:

$$\begin{aligned} y_{c,i}(t)=\sqrt{p_i}\sqrt{M_t} h_i e^{j2\pi \nu _{c,i}t}\sqrt{G_{B,i}}s_i(t)+n_{c,i}(t) \end{aligned}$$
$$\begin{aligned} y_{s,i}(t)=\sqrt{p_i}\sqrt{M_tM_r}\beta _{s}e^{j2\pi \nu _{s,i}t}\textbf{a}_r(\theta _v,\theta _h)\sqrt{G_{B,i}}s_i(t-\tau _{i})+n_{s,i}(t) \end{aligned}$$

where we replace the array directional gain part with the assumed beam gain.
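A minimal sketch of this cone-beam assumption, with the mainlobe gain taken inversely proportional to the squared beamwidth; the helper names and the \(G_{B0}\) normalization constant are ours:

```python
import numpy as np

def cone_beam_gain(theta_v, theta_h, theta_Bv, theta_Bh, psi, G_B0, G_side=0.0):
    """Uniform cone-beam gain: a constant mainlobe gain G_B0/psi**2 inside
    the cone of half-angle psi/2 around the pointing direction
    (theta_Bv, theta_Bh), and a small sidelobe gain G_side elsewhere."""
    def unit(tv, th):
        # spherical (polar tv, azimuth th) -> Cartesian unit vector
        return np.array([np.sin(tv) * np.cos(th),
                         np.sin(tv) * np.sin(th),
                         np.cos(tv)])
    cos_angle = np.clip(np.dot(unit(theta_v, theta_h),
                               unit(theta_Bv, theta_Bh)), -1.0, 1.0)
    return G_B0 / psi**2 if np.arccos(cos_angle) <= psi / 2 else G_side
```

The angular test against \(\psi /2\) encodes the beam coverage \(BC_i\); narrowing \(\psi\) trades coverage for gain.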

2.4 Measurement model

The preamble part is used for parameter estimation due to its good correlation property. Therefore, we mainly focus on echoes of the preamble part in (13)

$$\begin{aligned} y_{s,i}^p(t)=\sqrt{p_i}\sqrt{M_tM_r}\beta _{s}e^{j2\pi \nu _{s,i}t}\textbf{a}_r(\theta _v,\theta _h)\sqrt{G_{B,i}}s_i^p(t-\tau _{i})+n_{s,i}(t) \end{aligned}$$

As for the interference caused by echoes of the data blocks, we assume it is perfectly canceled by the state-of-the-art successive interference cancelation (SIC) technique [27]. Meanwhile, we assume the channel coefficients and target states to be invariant during a frame. Thus, we can use the parameters in the current frame to do the predictions and design for the next frame.

After match-filtering (14) with delay and Doppler grid, we get that

$$\begin{aligned} y_{s,i}[l,m] = G_{mf}\int _0^{T^p}s_i^p(t-\tau _i)s_{i}^{p*}(t-\tau [l])e^{j2\pi (\nu _{s,i}-\nu [m])t}dt + n_{s,i}[l,m] \end{aligned}$$

where \(G_{mf}\) is the total gain after matched filtering, \([l,m]\) is the range and Doppler bin, and \(n_{s,i}[l,m]\) is the noise on the \([l,m]^{\textrm{th}}\) bin. Based on (15), we can obtain the time delay \(\tau _i\) and Doppler shift \(\nu _{s, i}\) by searching for the peak of the grid after eliminating the clutter’s interference. The distance \(d_i\) and radial velocity \(v_i^r\) can then be calculated by

$$\begin{aligned} d_i = \frac{\tau _i \times c}{2} \end{aligned}$$
$$\begin{aligned} v_i^r = \frac{\nu _{s,i} \times c}{2f_c} \end{aligned}$$

where c is the speed of light, and \(f_c\) is the frequency of the carrier.
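These two conversions can be written directly in code (a trivial sketch; the function name is ours):

```python
C = 299_792_458.0  # speed of light in m/s

def delay_doppler_to_range_velocity(tau, nu, f_c):
    """Convert the matched-filter estimates of round-trip delay tau (s)
    and Doppler shift nu (Hz) at carrier frequency f_c (Hz) into the
    target distance d_i = tau*c/2 and radial velocity v_i^r = nu*c/(2*f_c).
    The factor 2 accounts for the two-way propagation of the echo."""
    d = tau * C / 2.0
    v_r = nu * C / (2.0 * f_c)
    return d, v_r
```

For example, at a 28 GHz carrier, a 2 µs round-trip delay corresponds to roughly a 300 m range.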

In addition, the orientation angle of the target UAV could be obtained by the multiple signal classification (MUSIC) algorithm, which is known to have superior performance for angle estimation.

In practice, the tracking of the UAV and the beamforming algorithms are both performed at the base node, which requires only one additional antenna array to receive the sensing echoes compared to conventional UAV communications. The additional array can also be used to receive regular communication signals in the uplink process. Thanks to the high-precision positioning at the base station, the frequency and accuracy requirements for UAV self-positioning are also reduced, which decreases the payload and overhead of the UAV to some extent.

3 State prediction and interruption analysis

In this section, we utilize the states obtained in Sect. 2 to predict the UAV’s motion state at the \((i+1)\)th frame and derive the mathematical form of interruption probability. We denote the UAV’s motion state at the current frame as \(\varvec{x}_{i}=\left[ \theta _{i}^h, \theta _{i}^v, d_{i}, v_{i}^{h}, v_{i}^{v}, v_{i}^{r}\right]\), each of which stands for azimuthal angle, polar angle, distance, azimuthal velocity, polar velocity, and radial velocity separately.

3.1 State prediction

To achieve predictive beamforming, we mainly focus on \(\theta _{i+1}^h, \theta _{i+1}^v\) and \(d_{i+1}\) among them. Based on the obtained motion states at the current frame, we have

$$\begin{aligned} d_{i+1}&= d_{i}+v_{i}^{r}\times T \\ \theta _{i+1}^h&= \theta _{i}^h+{v_{i}^{h}}/d_{i}\times T \\ \theta _{i+1}^v&= \theta _{i}^v+{v_{i}^{v}}/d_{i}\times T \end{aligned}$$

Through the sensing algorithm, we can only obtain \(\theta _{i}^h, \theta _{i}^v\), \(d_{i}, v_{i}^{r}\) and other states at past epochs, but not the specific values of \(v_{i}^{h}, v_{i}^{v}\). That means we can directly compute \(d_{i+1}\), but not \(\theta _{i+1}^h\) or \(\theta _{i+1}^v\). So, our first goal here is to provide additional information for \(\theta _{i+1}^h, \theta _{i+1}^v\) based on the measured values at hand. That is, we need to build a relationship between \(v_{i}^h, v_{i}^v\) and the other states in \(\varvec{x}_i\).
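For reference, the one-step prediction of (18) can be sketched as follows, assuming estimates or samples of the tangential velocities \(v_i^h, v_i^v\) are available (the function name is ours):

```python
def predict_state(theta_h, theta_v, d, v_h, v_v, v_r, T):
    """One-step prediction of (18): the distance advances with the radial
    velocity, and each angle advances with the corresponding tangential
    velocity divided by the distance, over one frame of length T."""
    d_next = d + v_r * T
    theta_h_next = theta_h + (v_h / d) * T
    theta_v_next = theta_v + (v_v / d) * T
    return theta_h_next, theta_v_next, d_next
```

Applying this update to samples of \(v^h, v^v\) rather than point estimates yields samples of the predicted angles, which is what the interruption analysis uses.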

Ideally, the object’s radial and tangential velocities are orthogonal and, therefore, cannot be connected mathematically. But in reality, the target’s 3D velocity changes are usually coupled. So, to fit the actual movement model of the target UAV, we make the following assumptions about the three-dimensional motion of the target [28]:

(a) Horizontal model In the 2D X-Y plane, the target UAV keeps moving at a constant speed. That is

$$\begin{aligned} v^{p} \sim \mathcal {N}\left( v_{i-1}^{p},\left( \sigma ^{p}\right) ^{2}\right) \end{aligned}$$
$$\begin{aligned} \left( v^{p}\right) ^{2}=\left( v^{x}\right) ^{2}+\left( v^{y}\right) ^{2} \end{aligned}$$

where \(v^p\) denotes the component in the horizontal plane of the UAV’s speed. \(\mathcal {N}(\cdot )\) denotes the normal distribution. The subscript \(i-1\) means in the \((i-1)\)th frame. To be brief, the subscript i of parameters in the current frame is omitted here and below.

(b) Vertical model In the Z direction, the target UAV maintains a slowly changing velocity motion model. That is

$$\begin{aligned} v^{z} \sim \mathcal {N}\left( v_{i-1}^{z},\left( \sigma ^{z}\right) ^{2}\right) \end{aligned}$$

where \(v^z\) denotes the component in the vertical direction of the UAV’s speed.

Meanwhile, we assume that the change of \(v^p\) is orthogonal to that of \(v^z\). In such scenarios, the UAV moves mainly in a horizontal plane and has little or limited vertical maneuvering. Our work is easily extended to scenarios with other motion models, since what we need is to bring more information to the tangential velocities \(v_{i}^{h}, v_{i}^{v}\) based on a coupled motion model.

With the above assumptions on the motion model, we can construct the relationship between the two systems using the velocity synthesis formula as follows:

$$\begin{aligned} v^{2}=(v^{r})^2 + (v^{v})^2 + (v^{h})^2 = (v^{x})^2+(v^{y})^2 + (v^{z})^2 \end{aligned}$$

where v represents the overall scalar velocity value.

Based on the transformation relationship between the Cartesian coordinate system and the spherical coordinate system, we can also construct projection transformations of the UAV’s 3D velocities as follows:

$$\begin{aligned} \left\{ \begin{array}{l} v^{r}=v^{x} \sin \theta _{0}^{v} \cos \theta _{0}^{h}+v^{y} \sin \theta _{0}^{v} \sin \theta _{0}^{h}+v^{z} \cos \theta _{0}^{v} \\ v^{v}=v^{x} \cos \theta _{0}^{v} \cos \theta _{0}^{h}+v^{y} \cos \theta _{0}^{v} \sin \theta _{0}^{h}-v^{z} \sin \theta _{0}^{v} \\ v^{h}=-v^{x} \sin \theta _{0}^{h}+v^{y} \cos \theta _{0}^{h} \end{array}\right. \end{aligned}$$

where \(v^r,v^v,v^h\) represents the UAV’s radial, polar tangential, and azimuthal tangential velocity in the spherical coordinate system, and \((\theta _0^v,\theta _0^h)\) is the orientation angle of the UAV node at the current frame. \(v^x,v^y,v^z\) represent the UAV’s velocity components of the three dimensions in the Cartesian coordinate system. From the first row in (23), we can derive that

$$\begin{aligned} v^{y}=\frac{v^{r}-v^{x} \sin \theta _{0}^{v} \cos \theta _{0}^{h}-v^{z} \cos \theta _{0}^{v}}{\sin \theta _{0}^{v} \sin \theta _{0}^{h}}=D v^{x}+E v^{z}+F \end{aligned}$$

in which \(D=-\frac{\cos \theta _{0}^{h}}{\sin \theta _{0}^{h}}, E=-\frac{\cos \theta _{0}^{v}}{\sin \theta _{0}^{v} \sin \theta _{0}^{h}}, F=\frac{v^{r}}{\sin \theta _{0}^{v} \sin \theta _{0}^{h}}\). Substituting (24) into the last two rows of (23), we can obtain the following distributions

$$\begin{aligned} v^{h}=f_{1}\left( v^{x},v^z, v^r\right) \\ v^{v}=f_{2}\left( v^{x},v^z,v^r\right) \end{aligned}$$

The value of \(v^r\) can be obtained with sensing algorithms, and the distribution of \(v^z\) is assumed as in (21). Combining (19)–(21) and (24), we can also obtain the distribution of \(v^x\). Therefore, the distributions of \(v^h\) and \(v^v\) can be determined as \(f_1\) and \(f_2\).

Substituting (25) into (18), we can further derive the distributions of \(\theta _{i+1}^h\) and \(\theta _{i+1}^v\). Besides the value of \(d_{i+1}\), we can now bring more information to the state prediction procedure with the deduced distributions of \(\theta _{i+1}^h\) and \(\theta _{i+1}^v\).
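The whole velocity-coupling step can be sketched via Monte Carlo sampling: draw \(v^p\) and \(v^z\) from the motion models (19) and (21), use the measured \(v^r\) with (24) to reduce the speed constraint (20) to a quadratic in \(v^x\), and project to tangential samples via the last two rows of (23). The function name, the equal-weight treatment of the two quadratic roots, and the discarding of infeasible draws are our modeling assumptions:

```python
import numpy as np

def sample_tangential_velocities(v_r, v_p_prev, v_z_prev, sigma_p, sigma_z,
                                 theta_v0, theta_h0, n=10_000, rng=None):
    """Return Monte Carlo samples of the tangential velocities (v^h, v^v)
    implied by the motion models (19)/(21), the measured radial velocity
    v^r, and the projection relations (23)-(24)."""
    rng = np.random.default_rng() if rng is None else rng
    v_p = rng.normal(v_p_prev, sigma_p, n)   # horizontal speed model (19)
    v_z = rng.normal(v_z_prev, sigma_z, n)   # vertical speed model (21)

    sv, cv = np.sin(theta_v0), np.cos(theta_v0)
    sh, ch = np.sin(theta_h0), np.cos(theta_h0)
    D, E, F = -ch / sh, -cv / (sv * sh), v_r / (sv * sh)

    # (v^x)^2 + (D v^x + E v^z + F)^2 = (v^p)^2  ->  quadratic in v^x
    G = E * v_z + F
    a, b, c = 1.0 + D**2, 2.0 * D * G, G**2 - v_p**2
    disc = b**2 - 4.0 * a * c
    ok = disc >= 0.0                           # keep samples with a real solution
    sign = rng.choice([-1.0, 1.0], size=n)     # root-sign ambiguity (assumption)
    v_x = (-b[ok] + sign[ok] * np.sqrt(disc[ok])) / (2.0 * a)
    v_y = D * v_x + E * v_z[ok] + F

    # tangential projections, last two rows of (23)
    v_h = -v_x * sh + v_y * ch
    v_v = v_x * cv * ch + v_y * cv * sh - v_z[ok] * sv
    return v_h, v_v
```

Feeding these samples through the angle updates in (18) yields empirical distributions of \(\theta _{i+1}^h\) and \(\theta _{i+1}^v\).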

3.2 Interruption analysis

In this subsection, we investigate the occurrence of communication interruption when communicating with the target UAV via beamforming. Typically, an interruption occurs when the base node cannot form an acceptable communication link toward the UAV node; that is, the communication SNR is lower than the required threshold \(\gamma _{\textrm{th}}\). We define the communication interruption probability as \(P_{\textrm{int}}\) and the successful communication probability as \(P_{\textrm{suc}}\). Taking the beam alignment gain into account, we can get that

$$\begin{aligned} P_{\textrm{suc}}&=1-P_{\textrm{int}}=P(\mathcal {H}_0)P(\gamma _c>\gamma _{\textrm{th}}|\mathcal {H}_0)+P(\mathcal {H}_1)P(\gamma _c>\gamma _{\textrm{th}}|\mathcal {H}_1) \end{aligned}$$

in which \(\gamma _c\) means the communication SNR. \(\mathcal {H}_0\) denotes successful beam alignment, and \(\mathcal {H}_1\) denotes beam misalignment.

In our scenario, successful beam alignment means that the target UAV is within the beam coverage, that is

$$\begin{aligned} \left( \theta _{i+1}^v,\theta _{i+1}^h\right) \in BC_{i+1} \end{aligned}$$

Correspondingly, \(\mathcal {H}_1\) is represented as follows:

$$\begin{aligned} \left( \theta _{i+1}^v,\theta _{i+1}^h\right) \notin BC_{i+1} \end{aligned}$$

According to (12), we can obtain that

$$\begin{aligned} \gamma _c&= \frac{\left| \sqrt{p_{i+1}}\sqrt{M_t}h_{i+1}e^{j2\pi \nu _{c,i+1}t}\sqrt{G_{B,i+1}}\right| ^2}{\sigma _c^2} \\ &= \frac{p_{i+1}M_t h_{i+1}^2G_{B,i+1}}{\sigma _c^2} \end{aligned}$$

Based on the aforementioned cone beam assumption, \(G_{B,i+1}\) is assigned as follows:

$$\begin{aligned} G_{B,i+1}=\left\{ \begin{array}{ll} \frac{G_{B0}}{\psi ^2}, &{}\quad (\theta _v,\theta _h)\in BC_{i+1}\\ 0, &{}\quad \text {else} \end{array}\right. \end{aligned}$$

where \(G_{B0}\) is normalized beamforming gain.

Then, we can derive that [29]

$$\begin{aligned} P(\gamma _c>\gamma _{\textrm{th}}) = Q_1\left( \frac{\sqrt{\gamma _c}}{\sigma _q},\frac{\sqrt{\gamma _{\textrm{th}}}}{\sigma _q}\right) \end{aligned}$$

where \(Q_1\) is Marcum’s Q-function, and \(\sigma _q\) is the Rician channel coefficient. Combining (29), (30), and (31), we can obtain the two corresponding conditional probabilities in (26).
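Marcum’s Q-function in (31) can be evaluated through its standard relation to the noncentral chi-square distribution; this numerical shortcut and the function names are our choices, not part of the paper:

```python
import numpy as np
from scipy.stats import ncx2

def marcum_q1(a, b):
    """First-order Marcum Q-function Q_1(a, b) via the identity
    Q_1(a, b) = P(X > b**2), where X follows a noncentral chi-square
    distribution with 2 degrees of freedom and noncentrality a**2."""
    return ncx2.sf(b**2, df=2, nc=a**2)

def success_prob_given_alignment(gamma_c, gamma_th, sigma_q):
    """Conditional success probability P(gamma_c > gamma_th) of (31)."""
    return marcum_q1(np.sqrt(gamma_c) / sigma_q, np.sqrt(gamma_th) / sigma_q)
```

The identity avoids implementing the Q-function’s Bessel series by hand, and `scipy.stats.ncx2.sf` handles the tail accurately.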

In the previous subsection, we have derived distributions of the target’s angle \((\theta _{i+1}^v,\theta _{i+1}^h)\), which can be used to enhance the prediction performance. According to (27), the relationship between alignment probability and derived distributions is formulated as follows:

$$\begin{aligned} P(\mathcal {H}_0)=\iint _{\textrm{BC}_{i+1}} p\left( \theta ^h_{i+1}\right) p\left( \theta ^v_{i+1}\right) d \theta ^v_{i+1} d \theta ^h_{i+1} \end{aligned}$$

Meanwhile, \(P(\mathcal {H}_1)=1-P(\mathcal {H}_0)\) holds.

Equation (32) means that the alignment probability \(P(\mathcal {H}_0)\) is the total probability that the distributions of the target’s angle fall within the beam coverage. According to (32), \(P(\mathcal {H}_0)\) is affected by the beam direction, beam width, and other parameters of sensing and communication.
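As a numerical illustration of (32), if the predicted angles are approximated as independent Gaussians and the beam coverage is approximated by a rectangular angular window (both simplifications are ours), the double integral factors into a product of two one-dimensional Gaussian probabilities:

```python
from scipy.stats import norm

def alignment_prob_rect(mu_h, sig_h, mu_v, sig_v, bc_h, bc_v):
    """Alignment probability P(H0) of (32) under two simplifications made
    here for illustration: the predicted angles theta^h and theta^v are
    independent Gaussians, and the beam coverage is the rectangle
    bc_h = (h_min, h_max) x bc_v = (v_min, v_max)."""
    p_h = norm.cdf(bc_h[1], mu_h, sig_h) - norm.cdf(bc_h[0], mu_h, sig_h)
    p_v = norm.cdf(bc_v[1], mu_v, sig_v) - norm.cdf(bc_v[0], mu_v, sig_v)
    return p_h * p_v
```

The misalignment probability then follows as \(P(\mathcal {H}_1)=1-P(\mathcal {H}_0)\).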

To guide the beamforming design, we first need to obtain a specific form of (32) that directly relates \(P(\mathcal {H}_0)\) to the beamforming design parameters. Since the integral region \(\hbox{BC}_{i+1}\) is conical, the integration is difficult to solve directly. So, we perform a coordinate system rotation, after which the beam direction \((\theta _B^h,\theta _B^v)\) is positioned along the z-axis. That is, all coordinates in the original coordinate system are rotated around the z-axis by \(\theta _B^h\), then around the y-axis by \(\theta _B^v\), both clockwise. To be concise, we omit the frame index \(i+1\). After the coordinate transformation, the beam coverage region in the new coordinate system is denoted in the spherical coordinate system as follows:

$$\begin{aligned} BC^{\prime }=\left\{ \theta _v^{\prime },\theta _h^{\prime }|\theta _{v}^{\prime }<\frac{1}{2} \psi \right\} \end{aligned}$$

or in the Cartesian coordinate system as follows:

$$\begin{aligned} BC^{\prime }=\left\{ x^{\prime },y^{\prime },z^{\prime }|z^{\prime }\tan \left( \frac{\psi }{2}\right) > \sqrt{x^{\prime 2}+y^{\prime 2}}\right\} \end{aligned}$$

in which \(P^{\prime }=\left[ x^{\prime }, y^{\prime }, z^{\prime }\right] ^{T}\) denotes the transformed coordinate of the target, and \(P=\left[ x, y, z\right] ^{T}\) denotes the original coordinate. The coordinate rotation matrix R is formulated as follows:

$$\begin{aligned} R&= R\left( \theta _{B}^{v}, \theta _{B}^{h}\right) =R\left( Y, \theta ^{v}\right) R\left( Z, \theta ^{h}\right) \\ &= \left[ \begin{array}{ccc} \cos \theta _{B}^{v} &{} 0 &{} -\sin \theta _{B}^{v} \\ 0 &{} 1 &{} 0 \\ \sin \theta _{B}^{v} &{} 0 &{} \cos \theta _{B}^{v} \end{array}\right] \left[ \begin{array}{ccc} \cos \theta _{B}^{h} &{} \sin \theta _{B}^{h} &{} 0 \\ -\sin \theta _{B}^{h} &{} \cos \theta _{B}^{h} &{} 0 \\ 0 &{} 0 &{} 1 \end{array}\right] \\ &= \left[ \begin{array}{ccc} \cos \theta _{B}^{v} \cos \theta _{B}^{h} &{} \cos \theta _{B}^{v} \sin \theta _{B}^{h} &{} -\sin \theta _{B}^{v} \\ -\sin \theta _{B}^{h} &{} \cos \theta _{B}^{h} &{} 0 \\ \sin \theta _{B}^{v} \cos \theta _{B}^{h} &{} \sin \theta _{B}^{v} \sin \theta _{B}^{h} &{} \cos \theta _{B}^{v} \end{array}\right] \end{aligned}$$

where \(R\left( Z, \theta ^{h}\right)\) represents a clockwise rotation around the z-axis by \(\theta ^{h}\), followed by \(R\left( Y, \theta ^{v}\right)\), a clockwise rotation around the y-axis by \(\theta ^{v}\). Then, we left-multiply the original coordinates by the rotation matrices in rotation order:

$$\begin{aligned} P^{\prime } =R\left( Y, \theta ^{v}\right) R\left( Z, \theta ^{h}\right) \cdot P =R\left( \theta _{B}^{v}, \theta _{B}^{h}\right) \cdot P \end{aligned}$$
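As a sanity check of the rotation in (36), the composite matrix should map the unit vector along the beam direction onto the z-axis. A minimal numpy sketch (the angle values are arbitrary placeholders):

```python
import numpy as np

theta_h_B, theta_v_B = 0.8, 0.3   # assumed beam direction, for illustration

def rotation_matrix(theta_v, theta_h):
    """R(Y, theta_v) @ R(Z, theta_h): clockwise about z, then about y."""
    cv, sv = np.cos(theta_v), np.sin(theta_v)
    ch, sh = np.cos(theta_h), np.sin(theta_h)
    Rz = np.array([[ch, sh, 0.0], [-sh, ch, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cv, 0.0, -sv], [0.0, 1.0, 0.0], [sv, 0.0, cv]])
    return Ry @ Rz

R = rotation_matrix(theta_v_B, theta_h_B)

# Unit vector along the beam direction (same spherical convention as the paper)
beam = np.array([np.sin(theta_v_B) * np.cos(theta_h_B),
                 np.sin(theta_v_B) * np.sin(theta_h_B),
                 np.cos(theta_v_B)])
aligned = R @ beam   # should be (0, 0, 1): beam axis mapped onto the z-axis
```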

Combining (34), (35), and (36), we can derive a solvable form of the integral region:

$$\begin{aligned} B C=\left\{ \theta _v, \theta _h \mid g_0\left( \theta ^h\right)<\theta ^v<g_1\left( \theta ^h\right) , 0<\theta ^h<2 \pi \right\} \end{aligned}$$

where \(g_0\left( \theta ^h\right)\) and \(g_1\left( \theta ^h\right)\) are the corresponding lower and upper bounds of \(\theta ^v\).

The detailed derivation is shown in Appendix A.

After the coordinate system transformation, the conical integral region is converted into explicit limits on \(\theta _{i+1}^h\) and \(\theta _{i+1}^v\), so the integral limits in Eq. (26) are clarified. However, we still lack the explicit forms of the distributions of \(\theta _{i+1}^h\) and \(\theta _{i+1}^v\). In the previous subsection, we proved that these distributions are determined by the parameters at hand but are difficult to write explicitly. So, we derive the probability backward. Substituting (18) into (26), we obtain

$$\begin{aligned} P(\mathcal {H}_0)=\frac{T^{2}}{d_{0}^{2}} \int _{\beta _{0}}^{\beta _{1}} \int _{\alpha _{0}}^{\alpha _{1}} p_{0}\left( v^{v}, v^{h}\right) d v^{h} d v^{v} \end{aligned}$$

After successive transformations from \(\theta _{i+1}^h,\theta _{i+1}^v\) to \(v^p,v^z\), a solvable form of \(P(\mathcal {H}_0)\) is eventually derived:

$$\begin{aligned} P(\mathcal {H}_0)&= \left( A_{0} B_{1}-A_{1} B_{0}\right) \frac{T^{2}}{d_{0}^{2}} \iint _{D_{1}} p\left( v^{p}\right) p\left( v^{z}\right) \\ &\quad \frac{v^{p}}{\sqrt{\left( D^{2}+1\right) \left( v^{p}\right) ^{2}-E^{2}\left( v^{z}\right) ^{2}-2 E F v^{z}}} d v^{p} d v^{z} \end{aligned}$$

The detailed derivation is shown in Appendix B.

Substituting (39) into (26), we can now formulate the relationship between the interruption probability and the beam direction, beamwidth, and other parameters.

4 Beamforming design

In this section, we introduce a robust beamforming design satisfying both sensing and communication metrics, along with the proposed communication interruption constraints. Our beamforming design problem is expressed as

$$\begin{aligned} \max _{T^p,\theta _B,\psi } \quad R \ \end{aligned}$$
$$\begin{aligned} \hbox{s.t.} \quad P_{\textrm{int}}\le&P_{\textrm{int}\_\textrm{th}} \end{aligned}$$
$$\begin{aligned} \sigma _{d}^{2}\le&\sigma _{d_{\textrm{th}}}^{2} \end{aligned}$$
$$\begin{aligned} \sigma _{v^r}^{2}\le&\sigma _{v^r_{\textrm{th}}}^{2} \end{aligned}$$
$$\begin{aligned} 0<&T^p < T \end{aligned}$$

where \(T^p\) is the length of the preamble, \(\theta _B\) is the beam direction, and \(\psi\) is the beamwidth.

R is the equivalent transmission rate, which is formulated as follows:

$$\begin{aligned} R = (1 - \alpha )\log _2(1+\gamma _c) \end{aligned}$$

where \(\alpha = \frac{T^p}{T}\) denotes the time fraction of the preamble within a frame.

\(P_{\textrm{int}\_\textrm{th}}\) is the interruption probability threshold. Constraint (40b) reflects our robustness preference in the design problem: we first keep the interruption probability at a low level and then consider maximizing throughput. Since the state prediction is based on the sensed parameters, constraints (40c) and (40d) ensure that the estimation is reliable. \(\sigma _{d}^2\) and \(\sigma _{v^r}^2\) are the Cramér-Rao lower bounds (CRLB) of the distance estimation and the radial velocity estimation, and \(\sigma _{d_{\textrm{th}}}^2\) and \(\sigma _{v^r_{\textrm{th}}}^2\) are the corresponding thresholds. \(\sigma _{d}^2\) can be expressed as [3]

$$\begin{aligned} \sigma _{d}^{2}=\frac{3c^{2}}{8 \pi ^2 W^{3} \alpha T \gamma _s} \end{aligned}$$

where c is the speed of light, W is the bandwidth, and \(\gamma _s\) is the sensing SNR, which can be approximated as \(\gamma _s = \frac{\gamma _c}{4\pi d^2}\times \sigma _{\textrm{RCS}}\). \(\sigma _{v^r}^2\) can be expressed as follows:

$$\begin{aligned} \sigma _{v^r}^{2}=\frac{3\lambda ^{2}}{8\pi ^2 W \alpha ^3 T^3 \gamma _s} \end{aligned}$$

where \(\lambda\) is the wavelength.
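For concreteness, the two CRLB expressions and the equivalent rate can be evaluated directly. The sketch below plugs in the carrier, bandwidth, and frame-duration values used in Sect. 5; the SNR arguments are arbitrary placeholders.

```python
import numpy as np

c = 3e8            # speed of light (m/s)
W = 1.76e9         # bandwidth (Hz), as in the simulations
T = 0.2            # frame duration (s)
lam = c / 30e9     # wavelength at f_c = 30 GHz

def sigma_d2(alpha, gamma_s):
    """Distance-estimation CRLB: 3 c^2 / (8 pi^2 W^3 alpha T gamma_s)."""
    return 3 * c**2 / (8 * np.pi**2 * W**3 * alpha * T * gamma_s)

def sigma_vr2(alpha, gamma_s):
    """Radial-velocity CRLB: 3 lambda^2 / (8 pi^2 W alpha^3 T^3 gamma_s)."""
    return 3 * lam**2 / (8 * np.pi**2 * W * alpha**3 * T**3 * gamma_s)

def rate(alpha, gamma_c):
    """Equivalent rate (1 - alpha) log2(1 + gamma_c), in bits/s/Hz."""
    return (1 - alpha) * np.log2(1 + gamma_c)
```

A longer preamble (larger \(\alpha\)) lowers both CRLBs but reduces the equivalent rate, which is exactly the tension the optimization in (40) resolves.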

Although (40) is a non-convex optimization problem, its solution is simple. We take a two-step approach: the design problem is split into two parts that are optimized alternately. The first part is

$$\begin{aligned} \max _{\theta _B,\psi } \quad R \end{aligned}$$
$$\begin{aligned} \hbox{s.t.} \quad P_{\textrm{int}}\le&P_{\textrm{int}\_\textrm{th}} \end{aligned}$$

The constraint (44b) is highly non-convex due to the complicated form of \(P_{\textrm{int}}\). However, the two optimization variables can both be quantized as \(\theta _B \in R_{\theta _B}\) and \(\psi \in R_{\psi }\), where

$$\begin{aligned} R_{\theta _B} = \left\{ (\theta _v,\theta _h)|\theta _v = m \times \theta _0, m = 1,...,\frac{\pi }{2\theta _0}; \theta _h = n \times \theta _0, n = 1,...,\frac{2\pi }{\theta _0}\right\} \end{aligned}$$

in which \(\theta _0\) is the minimum beam direction resolution, and

$$\begin{aligned} R_{\psi } = \left\{ \psi |\psi = \frac{\pi }{2},\frac{\pi }{4},...,\frac{\pi }{2^k}\right\} \end{aligned}$$

in which \(\frac{\pi }{2^k}\) is the narrowest beamwidth, and k is a positive integer. Iterating over all possible values in \(R_{\theta _B}\) and \(R_{\psi }\), we can obtain the optimal solution of (44). It is noted that due to the maximum speed limit of the UAV, \(R_{\theta _B}\) can be further reduced to lower the computational overhead. Therefore, (44) can be solved with constant complexity, since the search space has a fixed size.
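The exhaustive search over the quantized sets can be sketched as follows. The rate and interruption-probability models below are hypothetical stand-ins (the paper's actual \(P_{\textrm{int}}\) comes from (26) and (39)); only the search structure over \(R_{\theta _B}\) and \(R_{\psi }\) is the point.

```python
import numpy as np
from itertools import product

theta0 = np.pi / 16        # assumed minimum beam-direction resolution
k_max = 5                  # narrowest beamwidth is pi / 2**k_max
P_int_th = 0.2             # assumed interruption-probability threshold

def rate(psi):
    # Stand-in: narrower beam -> higher beamforming gain -> higher rate.
    return np.log2(1 + (np.pi / psi)**2)

def p_int(theta_v, theta_h, psi):
    # Hypothetical model, for illustration only: misalignment risk grows
    # with the pointing error relative to the beam half-width.
    err = np.hypot(theta_v - 0.5, theta_h - 1.0)
    return float(np.clip(err / (psi / 2), 0.0, 1.0))

grid_v = np.arange(theta0, np.pi / 2 + 1e-9, theta0)
grid_h = np.arange(theta0, 2 * np.pi + 1e-9, theta0)
grid_psi = [np.pi / 2**k for k in range(1, k_max + 1)]

best, best_rate = None, -np.inf
for tv, th, psi in product(grid_v, grid_h, grid_psi):
    if p_int(tv, th, psi) <= P_int_th and rate(psi) > best_rate:
        best, best_rate = (tv, th, psi), rate(psi)
```

In this toy instance, the narrower beams offer a higher rate but violate the interruption constraint, so the search settles on the widest beamwidth, mirroring the robustness-versus-throughput trade-off in (44).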

The second part of (40) is constructed as

$$\begin{aligned} \max _{T^p,\psi } \quad R \end{aligned}$$
$$\begin{aligned} \hbox{s.t.}\quad \sigma _{d}^{2}\le&\sigma _{d_{\textrm{th}}}^{2} \end{aligned}$$
$$\begin{aligned} \sigma _{v^r}^{2}\le&\sigma _{v^r_{\textrm{th}}}^{2} \end{aligned}$$
$$\begin{aligned} 0<&T^p < T \end{aligned}$$

Note that \(\sigma _{d}^{2}\) and \(\sigma _{v^r}^{2}\) are monotonic in \(T^p\) and \(\psi\), so (47) is easy to solve and also has constant complexity. Alternately solving the two subproblems once or twice, we obtain the optimal values of \(T^p\), \(\theta _B\), and \(\psi\). The overall complexity of (40) is \(\mathcal {O}(1)\) due to the discretization of the optimization variables. The main computational cost of the algorithm lies in the sensing process that obtains the target's accurate states, as discussed in Sect. 2.4.
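The monotonicity argument for the second subproblem can be sketched directly: both CRLBs shrink as the preamble fraction grows, so the rate-optimal choice is the smallest feasible \(\alpha\). The sensing SNR below is an assumed placeholder.

```python
import numpy as np

c, W, T = 3e8, 1.76e9, 0.2     # values from the simulation setup
lam = c / 30e9
gamma_s = 5.0                  # assumed sensing SNR (placeholder)

def sigma_d2(alpha):
    return 3 * c**2 / (8 * np.pi**2 * W**3 * alpha * T * gamma_s)

def sigma_vr2(alpha):
    return 3 * lam**2 / (8 * np.pi**2 * W * alpha**3 * T**3 * gamma_s)

def smallest_feasible_alpha(d2_th, vr2_th, grid=np.linspace(0.01, 0.99, 99)):
    """Both CRLBs decrease monotonically in alpha, so scanning the grid
    upward returns the smallest preamble fraction meeting both thresholds,
    which in turn maximizes the equivalent rate (1 - alpha) log2(1 + SNR)."""
    for a in grid:
        if sigma_d2(a) <= d2_th and sigma_vr2(a) <= vr2_th:
            return float(a)
    return None                # thresholds unreachable within one frame

# Example: thresholds chosen so that alpha around 0.1 is just feasible
alpha_star = smallest_feasible_alpha(sigma_d2(0.1), sigma_vr2(0.1))
```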

By solving (40), we realize the proposed robust beamforming design, including the length of the preamble, the beam pointing direction, and the beamwidth. Fundamentally, the CRLB constraints must be satisfied to realize periodic target sensing. Satisfying the interruption constraints may decrease the capacity, but it significantly reduces the beam recovery overhead and the cost of interruptions.

5 Results and discussion

In this section, simulations are shown to verify the performance of our algorithm. The real 3D track of the target UAV is generated based on the assumed model in (19), (20), and (21) in Sect. 3.1. In this scenario, we can obtain the radial velocity in sensing, which provides additional information for the angle estimation and improves the prediction performance. Our algorithm can easily be extended to scenarios with other motion models as long as the velocity transformation relationship is replaced accordingly. The frame duration is set as \(T = 0.2\) s, which means the sensing process is performed every 0.2 s. The operation frequency is set as \(f_c = 30\) GHz, and the bandwidth is set as \(W = 1.76\) GHz [7]. We also set the target radar cross-section as \(\sigma _{\textrm{RCS}} = 2\,\hbox{m}^2\) [30], a reasonable value for small drones in the millimeter-wave band. To keep track of the target UAV, the accurate state information is updated at the start of each frame by the sensing algorithm, and the state of the next frame is predicted at the same time.

Figure 4 shows the tracking performance of our prediction algorithm. The tracks over time in 3D view are presented in Fig. 4a, and the corresponding distances between predicted and real locations are depicted in Fig. 4b. The initial velocity of the target UAV is set to [5, 5, 0.5] m/s, assuming a high-speed UAV [31]. In Fig. 4a, the blue line denotes the actual movement trajectory of the UAV, the red line denotes the locations predicted by our algorithm over time, and the yellow line is obtained by the traditional GPS-based algorithm as in [11], which uses differentials of the locations instead of actual velocities. In this paper, we only consider the performance gap between algorithms, ignoring possible differences in other aspects, such as latency and precision, that are related to practical applications. As shown in Fig. 4a, the red line stays close to the blue line almost everywhere, while the yellow line deviates most of the time. This shows that the GPS-based track performs poorly when the UAV changes its direction frequently, while our predicted track fits the actual track almost precisely. Similarly, in Fig. 4b, it can be observed that in most cases our distance estimation errors are much smaller than those of the GPS-based algorithm. This is because we have introduced an additional dimension of radial velocity, which carries information about the direction of the target's movement. In the middle part of the trajectory, around the 25th frame, the two algorithms show similarly good performance since there are few changes in the target's moving direction. As shown in Fig. 4a, the UAV is flying in a nearly straight line, enabling both algorithms to achieve lower estimation errors. It is noted that the error of the predicted track increases slightly in this part because the change in speed perturbs our algorithm.

Fig. 4

Real-time tracking performance of the target UAV. a The tracks over time. b The distances between the predicted locations and the real locations

Figure 5 depicts the changes of the interruption probability as in (26). In Fig. 5a, both the proposed algorithm and the GPS-based algorithm as in [11] are simulated with different beamwidths, presented as solid lines and dashed lines with cross marks, respectively. The x-axis is the standard deviation of the noise \(\sigma _c\). As \(\sigma _c\) increases, the communication SNR \(\gamma _c\) decreases, and the interruption probability accordingly increases from 0 to 1. Meanwhile, as the beamwidth decreases from \(\pi /4\) to \(\pi /8\), \(\pi /16\), and \(\pi /32\), the lines tend to shift toward the right. This is because when the beam gets narrower, the beamforming gain increases inversely with the square of the beamwidth; the SNR then increases so that the link can resist stronger noise. As \(\sigma _c\) keeps increasing, the interruption probability of all lines gradually reaches 1 due to low SNR. On the other hand, the lines also tend to shift upward, meaning that narrower beams sometimes cause more interruptions. The reason is that while narrower beams bring higher SNR, they also create additional difficulties in the beam alignment process: the beam coverage area is reduced proportionally to the square of the beamwidth, thus decreasing the beam alignment probability \(P(\mathcal {H}_0)\). The interruption probability then stays at a certain level even if the SNR is high. Since lines of the same color share the same beamwidth, we can see that in each pair the dashed line usually has a higher interruption probability. This shows that our proposed beam design approach can markedly improve the stability of the communication link compared to the GPS-based approach, since we introduce additional radial velocity information, which significantly improves the accuracy of position estimation. Also, it is noted that for beamwidths \(\psi = \pi /4\) and \(\pi /8\), the pairs of lines in blue and red converge. This is because the beam is wide enough to cover almost every possible location of the target UAV, so the slight difference in beam direction is not critical. As a result, when the SNR is high enough, the interruption probability can reach 0.

Fig. 5

Interruption probability of the communication link. a Comparisons with the GPS-based algorithm. b Comparisons with the constant-\(\omega\) algorithm

In Fig. 5b, we also compare our algorithm with the constant angular velocity algorithm proposed in [32], which assumes that the target UAV keeps the same angular velocity, denoted as \(\omega\), as at the previous moment. The dashed lines with cross marks now correspond to the constant-\(\omega\) algorithm with the corresponding beamwidths. The interruption probability of the constant-\(\omega\) algorithm shows a similar trend to the GPS-based algorithm: it goes up as the noise increases and shifts toward the right and upward as the beamwidth decreases. It is noted that under the same configuration, the constant-\(\omega\) algorithm tends to have a higher interruption probability. This is because when the target UAV's movement distance per unit time is relatively large compared to its distance from the BS node, its angular velocity usually cannot remain approximately constant. As the distance from the base station increases, the performance of this algorithm improves, eventually reaching an interruption probability similar to that of the GPS-based algorithm. However, there is still a significant gap between this algorithm and the proposed algorithm due to its lack of additional radial velocity information.

Fig. 6

CRLB and achievable rate

We present the changes of R, \(\sigma _d^2\), and \(\sigma _{v^r}^2\), that is, the achievable rate and the CRLBs of distance and radial velocity, over time in Fig. 6. The frame index on the x-axis is selected from the same track as in Fig. 4, with the distance increasing approximately from 80 to 130 m. The y-axis on the left denotes \(\sigma _d^2\) and \(\sigma _{v^r}^2\), corresponding to the rising lines, and the y-axis on the right denotes R, corresponding to the decreasing lines. Lines of the same color share the same preamble fraction \(\alpha\). As shown in the figure, with the growth of the frame index, and thus the distance of the UAV, the achievable rate decreases continuously while the CRLBs increase by a larger order of magnitude. This is because the communication SNR \(\gamma _c\) is inversely proportional to the square of the distance, with a logarithmic operation afterward, whereas the sensing SNR \(\gamma _s\) is inversely proportional to the fourth power of the distance. It confirms that in an ISAC system, sensing is often much more sensitive to distance than communication. Also, we can see that as \(\alpha\) increases from 0.05 to 0.1 and 0.2, the sensing performance improves and the communication performance worsens, since a longer preamble lowers the CRLBs but squeezes communication resources. It is noted that \(\sigma _{v^r}^2\) varies more with \(\alpha\) than \(\sigma _{d}^2\) does, because a longer signal in the time domain typically contributes more to Doppler estimation.
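The different distance sensitivities quoted above (\(\gamma _c \propto d^{-2}\) versus \(\gamma _s \propto d^{-4}\)) are easy to check numerically. The link-budget constant below is an arbitrary placeholder.

```python
import numpy as np

K = 1e6              # assumed link-budget constant (placeholder)
SIGMA_RCS = 2.0      # target RCS (m^2), as in the simulations

def gamma_c(d):
    """Communication SNR scales with d^-2."""
    return K / d**2

def gamma_s(d):
    """Sensing SNR: gamma_c / (4 pi d^2) * RCS, i.e., d^-4 overall."""
    return gamma_c(d) / (4 * np.pi * d**2) * SIGMA_RCS

# Doubling the range (80 m -> 160 m) costs about 2 bits/s/Hz at high SNR,
# but cuts the sensing SNR by a factor of 16.
rate_drop = np.log2(1 + gamma_c(80)) - np.log2(1 + gamma_c(160))
snr_ratio = gamma_s(80) / gamma_s(160)
```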

6 Conclusion

In this paper, we provide a robust solution for beamforming in UAV communications. To solve the problem of frequent communication interruptions caused by high mobility and narrow beams, we propose an ISAC-based framework to realize a robust beamforming design. In particular, we introduce additional state information obtained from the sensing process, which is realized by the current communication preamble. Simulations show that this clearly enhances the state prediction performance. Then, based on the predicted states, we formulate the interruption probability. A robust beamforming design is then presented, satisfying the derived interruption probability constraints as well as communication and sensing metrics. Numerical results confirm that our algorithm is capable of reducing communication interruptions.

Availability of data and materials

Not applicable.



Abbreviations

ISAC: Integrated sensing and communication
UAV: Unmanned aerial vehicle
DQPSK: Differential quadrature phase shift keying
RF: Radio frequency
OFDM: Orthogonal frequency division multiplexing
MIMO: Multiple-input multiple-output
mMIMO: Massive multiple-input multiple-output
GPS: Global positioning system
BS: Base station
IoT: Internet of things
RSU: Roadside unit
UPA: Uniform planar array
CSCG: Circularly-symmetric complex Gaussian
SIC: Successive interference cancellation
MUSIC: Multiple signal classification
CRLB: Cramér–Rao lower bound

References

1. F. Liu, Y. Cui, C. Masouros, J. Xu, T.X. Han, Y.C. Eldar, S. Buzzi, Integrated sensing and communications: toward dual-functional wireless networks for 6G and beyond. IEEE J. Sel. Areas Commun. 40(6), 1728–1767 (2022)
2. F. Liu, C. Masouros, A.P. Petropulu, H. Griffiths, L. Hanzo, Joint radar and communication design: applications, state-of-the-art, and the road ahead. IEEE Trans. Commun. 68(6), 3834–3862 (2020)
3. M.A. Richards, Fundamentals of Radar Signal Processing (McGraw-Hill Education, New York, 2014)
4. M. Roberton, E. Brown, Integrated radar and communications based on chirped spread-spectrum techniques. In: IEEE MTT-S International Microwave Symposium Digest, vol. 1 (IEEE, 2003), pp. 611–614
5. S. Xu, Y. Chen, P. Zhang, Integrated radar and communication based on DS-UWB. In: 2006 3rd International Conference on Ultrawideband and Ultrashort Impulse Signals (IEEE, 2006), pp. 142–144
6. C. Sturm, W. Wiesbeck, Waveform design and signal processing aspects for fusion of wireless communications and radar sensing. Proc. IEEE 99(7), 1236–1259 (2011)
7. P. Kumari, J. Choi, N. González-Prelcic, R.W. Heath, IEEE 802.11ad-based radar: an approach to joint vehicular communication-radar system. IEEE Trans. Veh. Technol. 67(4), 3012–3027 (2017)
8. F. Liu, C. Masouros, A. Li, H. Sun, L. Hanzo, MU-MIMO communications with MIMO radar: from co-existence to joint transmission. IEEE Trans. Wirel. Commun. 17(4), 2755–2770 (2018)
9. B. Nuss, L. Sit, M. Fennel, J. Mayer, T. Mahler, T. Zwick, MIMO OFDM radar system for drone detection. In: 2017 18th International Radar Symposium (IRS) (IEEE, 2017), pp. 1–9
10. W. Wu, N. Cheng, N. Zhang, P. Yang, W. Zhuang, X. Shen, Fast mmWave beam alignment via correlated bandit learning. IEEE Trans. Wirel. Commun. 18(12), 5894–5908 (2019)
11. Y. Huang, Q. Wu, T. Wang, G. Zhou, R. Zhang, 3D beam tracking for cellular-connected UAV. IEEE Wirel. Commun. Lett. 9(5), 736–740 (2020)
12. K. Guo, R. Liu, M. Alazab, R.H. Jhaveri, X. Li, M. Zhu, STAR-RIS-empowered cognitive non-terrestrial vehicle network with NOMA. IEEE Trans. Intell. Veh. 8(6), 3735–3749 (2023)
13. R. Liu, K. Guo, K. An, F. Zhou, Y. Wu, Y. Huang, G. Zheng, Resource allocation for NOMA-enabled cognitive satellite-UAV-terrestrial networks with imperfect CSI. IEEE Trans. Cognit. Commun. Netw. (2023)
14. K. Guo, R. Liu, C. Dong, K. An, Y. Huang, S. Zhu, Ergodic capacity of NOMA-based overlay cognitive integrated satellite-UAV-terrestrial networks. Chin. J. Electron. 32(2), 273–282 (2023)
15. Z. Xiao, L. Zhu, Y. Liu, P. Yi, R. Zhang, X.-G. Xia, R. Schober, A survey on millimeter-wave beamforming enabled UAV communications and networking. IEEE Commun. Surv. Tutor. 24(1), 557–610 (2021)
16. K. Guo, M. Wu, X. Li, H. Song, N. Kumar, Deep reinforcement learning and NOMA-based multi-objective RIS-assisted is-UAV-TNS: trajectory optimization and beamforming design. IEEE Trans. Intell. Transp. Syst. (2023)
17. L. Zhu, J. Zhang, Z. Xiao, X. Cao, X.-G. Xia, R. Schober, Millimeter-wave full-duplex UAV relay: joint positioning, beamforming, and power control. IEEE J. Sel. Areas Commun. 38(9), 2057–2073 (2020)
18. L. Zhu, J. Zhang, Z. Xiao, X. Cao, D.O. Wu, X.-G. Xia, 3-D beamforming for flexible coverage in millimeter-wave UAV communications. IEEE Wirel. Commun. Lett. 8(3), 837–840 (2019)
19. F. Liu, W. Yuan, C. Masouros, J. Yuan, Radar-assisted predictive beamforming for vehicular links: communication served by sensing. IEEE Trans. Wirel. Commun. 19(11), 7704–7719 (2020)
20. W. Yuan, F. Liu, C. Masouros, J. Yuan, D.W.K. Ng, N. González-Prelcic, Bayesian predictive beamforming for vehicular networks: a low-overhead joint radar-communication approach. IEEE Trans. Wirel. Commun. 20(3), 1442–1456 (2020)
21. M. Giordani, M. Polese, A. Roy, D. Castor, M. Zorzi, A tutorial on beam management for 3GPP NR at mmWave frequencies. IEEE Commun. Surv. Tutor. 21(1), 173–196 (2018)
22. L. Yang, W. Zhang, Hierarchical codebook and beam alignment for UAV communications. In: 2018 IEEE Globecom Workshops (GC Wkshps) (IEEE, 2018), pp. 1–6
23. L. Liu, S. Zhang, R. Zhang, CoMP in the sky: UAV placement and movement optimization for multi-user communications. IEEE Trans. Commun. 67(8), 5645–5658 (2019)
24. G. Xu, N. Zhang, M. Xu, Z. Xu, Q. Zhang, Z. Song, Outage probability and average BER of UAV-assisted dual-hop FSO communication with amplify-and-forward relaying. IEEE Trans. Veh. Technol. (2023)
25. A.-A.A. Boulogeorgos, E.N. Papasotiriou, A. Alexiou, Analytical performance assessment of THz wireless systems. IEEE Access 7, 11436–11453 (2019)
26. P. Kumari, S.A. Vorobyov, R.W. Heath, Adaptive virtual waveform design for millimeter-wave joint communication-radar. IEEE Trans. Signal Process. 68, 715–730 (2019)
27. L. Dai, B. Wang, Y. Yuan, S. Han, I. Chih-Lin, Z. Wang, Non-orthogonal multiple access for 5G: solutions, challenges, opportunities, and future research trends. IEEE Commun. Mag. 53(9), 74–81 (2015)
28. X.R. Li, V.P. Jilkov, Survey of maneuvering target tracking. Part I: Dynamic models. IEEE Trans. Aerosp. Electron. Syst. 39(4), 1333–1364 (2003)
29. H.-L. Song, Y.-C. Ko, Beam alignment for high-speed UAV via angle prediction and adaptive beam coverage. IEEE Trans. Veh. Technol. 70(10), 10185–10192 (2021)
30. C.-C. Tsai, C.-T. Chiang, W.-J. Liao, Radar cross section measurement of unmanned aerial vehicles. In: 2016 IEEE International Workshop on Electromagnetics: Applications and Student Innovation Competition (iWEM) (IEEE, 2016), pp. 1–3
31. M. Khabbaz, J. Antoun, C. Assi, Modeling and performance analysis of UAV-assisted vehicular networks. IEEE Trans. Veh. Technol. 68(9), 8384–8396 (2019)
32. L. Yang, W. Zhang, Beam tracking and optimization for UAV communications. IEEE Trans. Wirel. Commun. 18(11), 5367–5379 (2019)




Author information

Contributions

All the authors contributed to the system model, state prediction and interruption analysis, beamforming design, simulations, and the writing of this paper. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Tao Yang.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


Appendix A

Substituting (35) into (36), we can obtain

$$\begin{aligned} P^{\prime }&= R\left( \theta _{B}^{v}, \theta _{B}^{h}\right) \cdot P \\ &= \left[ \begin{array}{ccc} \cos \theta _{B}^{v} \cos \theta _{B}^{h} &{} \cos \theta _{B}^{v} \sin \theta _{B}^{h} &{} -\sin \theta _{B}^{v} \\ -\sin \theta _{B}^{h} &{} \cos \theta _{B}^{h} &{} 0 \\ \sin \theta _{B}^{v} \cos \theta _{B}^{h} &{} \sin \theta _{B}^{v} \sin \theta _{B}^{h} &{} \cos \theta _{B}^{v} \end{array}\right] \left[ \begin{array}{ccc} x\\ y\\ z \end{array}\right] \end{aligned}$$

Since (48) is difficult to simplify directly, we rewrite it as

$$\begin{aligned} P^{\prime } =R\left( \theta _{B}^{v}, \theta _{B}^{h}\right) \cdot P =\left[ \begin{array}{c} R_{1} \\ R_{2} \\ R_{3} \end{array}\right] P=\left[ \begin{array}{c} R_{1} P \\ R_{2} P \\ R_{3} P \end{array}\right] \end{aligned}$$

Substituting (49) into (34), we get

$$\begin{aligned} \left\{ \begin{array}{l} x^{\prime }=R_{1} P \\ y^{\prime }=R_{2} P \\ z^{\prime }=R_{3} P \end{array}\right. \end{aligned}$$
$$\begin{aligned} \left( R_{3} P\right) ^{2}\tan ^{2}\left( \frac{\psi }{2}\right) >\left[ \left( R_{1} P\right) ^{2}+\left( R_{2} P\right) ^{2}\right] \end{aligned}$$

Let \(R_4=\tan \left( \frac{\psi }{2}\right) R_3\), and we get

$$\begin{aligned} P^{T}\left( R_{4}^{T} R_{4}-R_{1}^{T} R_{1}-R_{2}^{T} R_{2}\right) P>0 \end{aligned}$$

Let \(R_0=R_0\left( \theta _B^v, \theta _B^h\right) =\left( R_4^T R_4-R_1^T R_1-R_2^T R_2\right)\). Since \(R_0\) is symmetric, we unfold (52) as follows:

$$\begin{aligned} R_{11} x^{2}+2 R_{12} x y+2 R_{13} x z+R_{22} y^{2}+2 R_{23} y z+R_{33} z^{2}>0 \end{aligned}$$

where \(R_{ij}\) is the element in the ith row and jth column of matrix \(R_0\). Substituting \(\left\{ \begin{array}{l}x=d \sin \theta ^v \cos \theta ^h \\ y=d \sin \theta ^v \sin \theta ^h \\ z=d \cos \theta ^v\end{array}\right.\) into (53), we get

$$\begin{aligned} \begin{array}{l} R_{11} \sin ^{2} \theta ^{v} \cos ^{2} \theta ^{h}+2 R_{12} \sin ^{2} \theta ^{v} \sin \theta ^{h} \cos \theta ^{h}+ \\ 2 R_{13} \sin \theta ^{v} \cos \theta ^{v} \cos \theta ^{h}+R_{22} \sin ^{2} \theta ^{v} \sin ^{2} \theta ^{h}+ \\ 2 R_{23} \sin \theta ^{v} \cos \theta ^{v} \sin \theta ^{h}+R_{33} \cos ^{2} \theta ^{v}>0 \end{array} \end{aligned}$$


Letting

$$\begin{aligned} \left\{ \begin{array}{l} A=R_{33} \\ B=R_{11} \cos ^{2} \theta ^{h}+2 R_{12} \sin \theta ^{h} \cos \theta ^{h} + R_{22} \sin ^2 \theta ^h - R_{33} \\ C=2 R_{13} \cos \theta ^{h}+2 R_{23} \sin \theta ^{h} \end{array}\right. \end{aligned}$$

we get

$$\begin{aligned} A+B \sin ^{2} \theta ^{v}+C \sin \theta ^{v} \cos \theta ^{v}&> 0 \end{aligned}$$
$$\begin{aligned} \text {sgn}(B)\cos \left( 2 \theta ^{v}+\theta _{\text {temp}}\right)&< X \end{aligned}$$

in which

$$\begin{aligned} \theta _{\text {temp}}= & {} \arctan \frac{C}{B}, \theta ^v\in (0,\frac{1}{2}\pi ), \theta ^h \in (0,2\pi )\end{aligned}$$
$$\begin{aligned} X= & {} \frac{(2 A+B)}{\sqrt{B^{2}+C^{2}}} \end{aligned}$$

We denote the solution of (57) as \(R_{a0}\). Since the solution of this inequality depends heavily on \(\text {sgn}(B)\), we first assume \(B > 0\) for the derivation.
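The rewriting of (56) into (57) relies on the identity \(A + B\sin^2\theta^v + C\sin\theta^v\cos\theta^v = A + \tfrac{B}{2} - \tfrac{\sqrt{B^2+C^2}}{2}\cos(2\theta^v + \theta_{\text{temp}})\), valid on the \(B>0\) branch of \(\theta_{\text{temp}}=\arctan(C/B)\). It can be verified numerically with random coefficients:

```python
import numpy as np

rng = np.random.default_rng(0)
max_err = 0.0
for _ in range(1000):
    A = rng.normal()
    B = abs(rng.normal()) + 0.1          # the derivation assumes B > 0
    C = rng.normal()
    tv = rng.uniform(0.0, np.pi / 2)
    theta_temp = np.arctan(C / B)        # correct branch since B > 0
    lhs = A + B * np.sin(tv)**2 + C * np.sin(tv) * np.cos(tv)
    rhs = A + B / 2 - 0.5 * np.hypot(B, C) * np.cos(2 * tv + theta_temp)
    max_err = max(max_err, abs(lhs - rhs))
```

Dividing the resulting inequality by \(\sqrt{B^2+C^2}/2\) and isolating the cosine yields exactly the threshold X in (58).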

We note that if \(X<-1\), the inequality has no solution, i.e., \(R_{a0} = \varnothing\). We further denote the solution interval of inequality \(X<-1\) as \(R_{a1}\). That is, if \(\theta ^h \in R_{a1}\), there is no \(\theta ^v\) satisfying the points in the beam region. Expanding the inequality \(X<-1\), we get that

$$\begin{aligned} X_1 \cos ^2 \theta ^h - X_2 \sin \theta ^h \cos \theta ^h + X_3&>0 \end{aligned}$$
$$\begin{aligned} \text {sgn}(X_1)\cos \left( 2\theta ^h+\theta _{\text {temp2}}\right)&>X0 \end{aligned}$$


where

$$\begin{aligned} \left\{ \begin{array}{l} X_1=R_{33}R_{11}-R_{33}R_{22}-R_{13}^2+R_{23}^2 \\ X_2=-2R_{33}R_{12}+2R_{13}R_{23} \\ X_3=R_{33}R_{22}-R_{23}^2 \end{array}\right. \end{aligned}$$
$$\begin{aligned} \theta _{\text {temp2}} = \hbox{arctan}\frac{X_2}{X_1},\quad X0 = -\frac{2X_3+X_1}{\sqrt{X_1^2+X_2^2}} \end{aligned}$$

\(X_1,X_2,X_3,\theta _{\text {temp2}},X0\) depend only on the beam configuration. The inequality is similar in form to (57), so we perform a similar derivation. Since the solution of this inequality depends heavily on \(\text {sgn}(X_1)\), we first assume \(X_1 > 0\) for the derivation.

i. If \(X0\ge 1\), the inequality (61) has no solution. That is, in the case of these beam configurations, \(X>-1\) is always true, and (57) always has solutions. In this case, \(R_{a1} = \varnothing\).

ii. If \(X0\le -1\), the inequality (61) always holds, i.e., \(X<-1\) always holds. This would mean that for these beam configurations the beam region cannot be expressed, which cannot be true. Hence, \(X0>-1\) always holds.

iii. If \(-1<X0<1\), solving inequality (61) gives

    $$\begin{aligned} R_{a1}&= \left( -\frac{1}{2}\hbox{arccos}(X0)-\frac{1}{2}\theta _{\text {temp2}}, \frac{1}{2}\hbox{arccos}(X0)-\frac{1}{2}\theta _{\text {temp2}}\right) \\&\quad \cup \left( \pi -\frac{1}{2}\hbox{arccos}(X0)-\frac{1}{2}\theta _{\text {temp2}}, \pi +\frac{1}{2}\hbox{arccos}(X0)-\frac{1}{2}\theta _{\text {temp2}}\right) \end{aligned}$$

    Note that since \(\theta ^h \in (0,2\pi )\), the final interval should be \(R_{a1}\cap (0,2\pi )\) and hence, varies with different beam configurations. Due to space limitations, we omit this part of the proof.

Given the above, we could obtain \(R_{a1}\) under different beam configurations. If \(\theta ^h\in R_{a1}\), \(R_{a0} = \varnothing\).

If \(X\ge 1\), it would mean that in this interval of \(\theta ^h\), all points with \(\theta ^v\in (0,\frac{1}{2}\pi )\) are in the beam region, which cannot be true. Therefore, \(X<1\) always holds.

If \(-1<X<1\), the inequality (57) can be normally solved. We denote the solution interval of inequality \(-1<X<1\) as \(R_{b1}\), and there are \(R_{a1}\cup R_{b1} = (0,2\pi )\) and \(R_{a1}\cap R_{b1} = \varnothing\). Solving (57), we get that

$$\begin{aligned} R_{a0}(\theta ^h) = \left( \frac{1}{2}\hbox{arccos}(X)-\frac{1}{2}\theta _{\text {temp}}, \pi - \frac{1}{2}\hbox{arccos}(X)-\frac{1}{2}\theta _{\text {temp}}\right) \end{aligned}$$

Note that since \(\theta ^v \in (0,\frac{1}{2}\pi )\), the final interval should be \(R_{a0}(\theta ^h)\cap (0,\frac{1}{2}\pi )\), and hence varies with different \(\theta ^h\) intervals. Due to space limitations, we omit this part of the proof.

Now, we can get a clear form of the beam coverage:

$$\begin{aligned} BC_i=\left\{ (\theta ^v,\theta ^h)\,|\,\theta ^v \in R_{a0}(\theta ^h), \theta ^h \in R_{b1}\right\} \end{aligned}$$

where \(R_{b1}\) is the solution interval of \(-1<X<1\), and \(R_{a0}(\theta ^h)\) is the corresponding solution interval of (57).

Then, look back at (57), where we assumed \(B>0\). The sign of \(B\) changes over different intervals of \(\theta ^h\). Solving the inequality \(B>0\) yields an interval of \(\theta ^h\), denoted \(R_{a2}\). Due to space limitations, the specific form of \(R_{a2}\) and its overlap with \(R_{a1}\) are omitted here. When \(\theta ^h\in R_{a2}\), \(BC_i\) can be obtained by the above derivations. But when \(B\le 0\), we denote the corresponding interval of \(\theta ^h\) as \(R_{b2}\), so that \(R_{a2}\cup R_{b2} = (0,2\pi )\) and \(R_{a2}\cap R_{b2} = \varnothing\). When \(\theta ^h\in R_{b2}\), the derivations change slightly:

  1. i.

    Replace X with \(X^\prime = -X\) in \(R_{a0}(\theta ^h)\).

  2. ii.
    $$\begin{aligned} BC_i=\left\{ (\theta ^v,\theta ^h)\,|\,\theta ^v \in R_{b0}(\theta ^h), \theta ^h \in R_{b1}\right\} \end{aligned}$$

    where \(R_{a0}\cup R_{b0} = (0,\frac{1}{2}\pi ), R_{a0}\cap R_{b0} = \varnothing\).

Similarly, look back at (61), where we assumed \(X_1>0\). In fact, the sign of \(X_1\) changes with the beam configuration. When \(X_1\le 0\), the derivations change slightly:

  1. i.

    Replace \(X_0\) with \(X_0^\prime = -X_0\) in \(R_{a1}\).

  2. ii.
    $$\begin{aligned} BC_i=\left\{ (\theta ^v,\theta ^h)\,|\,\theta ^v \in R_{a0}(\theta ^h), \theta ^h \in R_{a1}\right\} \end{aligned}$$

Appendix B

Substituting (18) into (26), we have

$$\begin{aligned} P(\mathcal {H}_0)&=\frac{T^{2}}{d_{0}^{2}} \int _{-\frac{d_{0}}{T} \theta _{0}^{h}}^{\frac{d_{0}}{T}\left( 2\pi -\theta _{0}^{h}\right) } \int _{\frac{d_{0}}{T}\left( g_0\left( \theta _{0}^{h}+\frac{v^{h}}{d_{0}} T\right) -\theta _{0}^{v}\right) }^{\frac{d_{0}}{T}\left( g_1\left( \theta _{0}^{h}+\frac{v^{h}}{d_{0}} T\right) -\theta _{0}^{v}\right) }p_{0}\left( v^{v}, v^{h}\right) d v^{v} d v^{h} \end{aligned}$$

After rearranging the integral limits of (38), we get

$$\begin{aligned} P(\mathcal {H}_0)=\frac{T^{2}}{d_{0}^{2}} \int _{\beta _{0}}^{\beta _{1}} \int _{\alpha _{0}}^{\alpha _{1}} p_{0}\left( v^{v}, v^{h}\right) d v^{h} d v^{v} \end{aligned}$$

Based on (23) and (24), we obtain

$$\begin{aligned} v^{v}&= A_{0} v^{x}+B_{0} v^{z}+C_{0} \\ A_{0}&= 0, B_{0}=-\frac{1}{\sin \theta _{0}^{v}}, C_{0}=\frac{\cos \theta _{0}^{v}}{\sin \theta _{0}^{v}} v^{r} \\ v^{h}&= A_{1} v^{x}+B_{1} v^{z}+C_{1} \\ A_{1}&= -\frac{1}{\sin \theta _{0}^{h}}, B_{1}=-\frac{\cos \theta _{0}^{v} \cos \theta _{0}^{h}}{\sin \theta _{0}^{v} \sin \theta _{0}^{h}}, C_{1}=\frac{\cos \theta _{0}^{v} \cos \theta _{0}^{h}}{\sin \theta _{0}^{v} \sin \theta _{0}^{h}} v^{r} \end{aligned}$$

Substituting (71) into (70), we get

$$\begin{aligned} & P(\mathcal {H}_0) \\ & \quad = \frac{T^{2}}{d_{0}^{2}} \iint _{D_{0}} p_{0}\left( A_{0} v^{x}+B_{0} v^{z}+C_{0}, A_{1} v^{x}+B_{1} v^{z}+C_{1}\right) \left| \frac{\partial \left( v^{v}, v^{h}\right) }{\partial \left( v^{x}, v^{z}\right) }\right| d v^{x} d v^{z} \\ & \quad =\frac{T^{2}}{d_{0}^{2}} \iint _{D_{0}} p_{1}\left( v^{x}, v^{z}\right) \left( A_{0} B_{1}-A_{1} B_{0}\right) d v^{x} d v^{z} \end{aligned}$$

and the integral region \(D_0\) is the region enclosed by

$$\begin{aligned} \alpha _{0}&= A_{1} v^{x}+B_{1} v^{z}+C_{1} \\ \alpha _{1}&= A_{1} v^{x}+B_{1} v^{z}+C_{1} \\ \beta _{0}&= A_{0} v^{x}+B_{0} v^{z}+C_{0} \\ \beta _{1}&= A_{0} v^{x}+B_{0} v^{z}+C_{0} \end{aligned}$$
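The Jacobian used in the change of variables above can be verified numerically: since \((v^x,v^z)\mapsto (v^v,v^h)\) is affine, its Jacobian determinant is the constant \(A_0B_1-A_1B_0\). A minimal sketch with hypothetical values for \(\theta _0^v\), \(\theta _0^h\), and \(v^r\):

```python
import numpy as np

# Hypothetical angles in (0, pi/2) and radial velocity (not from the paper).
th_v, th_h = 0.6, 1.1
v_r = 2.0

# Coefficients as defined in the derivation.
A0, B0, C0 = 0.0, -1/np.sin(th_v), np.cos(th_v)/np.sin(th_v)*v_r
A1 = -1/np.sin(th_h)
B1 = -np.cos(th_v)*np.cos(th_h)/(np.sin(th_v)*np.sin(th_h))
C1 = np.cos(th_v)*np.cos(th_h)/(np.sin(th_v)*np.sin(th_h))*v_r

def fwd(vx, vz):
    """Affine map (v^x, v^z) -> (v^v, v^h)."""
    return np.array([A0*vx + B0*vz + C0, A1*vx + B1*vz + C1])

# Finite-difference Jacobian of the affine map equals the closed-form constant.
eps = 1e-6
J = np.column_stack([(fwd(eps, 0) - fwd(0, 0))/eps,
                     (fwd(0, eps) - fwd(0, 0))/eps])
assert np.isclose(np.linalg.det(J), A0*B1 - A1*B0)
print(A0*B1 - A1*B0)
```

With these coefficients the determinant reduces to \(-1/(\sin \theta _0^v \sin \theta _0^h)\), so in a density transformation it should enter through its absolute value.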

Combining (24) and (20), we further express \(v^x\) in terms of \(v^p\) and \(v^z\), and get

$$\begin{aligned} & P(\mathcal {H}_0) \\ &\quad =\frac{T^{2}}{d_{0}^{2}} \iint _{D_{1}} p_{1}\left( v^{p}, v^{z}\right) \left( A_{0} B_{1}-A_{1} B_{0}\right) \left| \frac{\partial \left( v^{x}, v^{z}\right) }{\partial \left( v^{p}, v^{z}\right) }\right| d v^{p} d v^{z} \\ & \quad =\frac{T^{2}}{d_{0}^{2}} \iint _{D_{1}} p_{1}\left( v^{p}, v^{z}\right) \left( A_{0} B_{1}-A_{1} B_{0}\right) \left( \frac{\partial v^{x}}{\partial v^{p}}\right) d v^{p} d v^{z} \end{aligned}$$

Taking the partial derivative of both sides of (24), we get

$$\begin{aligned} \frac{\partial v^x}{\partial v^p}=\frac{v^p}{\sqrt{\left( D^2+1\right) \left( v^p\right) ^2-E^2\left( v^z\right) ^2-2 E F v^z}} \end{aligned}$$

Substituting it into (74), we finally obtain the solvable form

$$\begin{aligned} P(\mathcal {H}_0)&= \left( A_{0} B_{1}-A_{1} B_{0}\right) \frac{T^{2}}{d_{0}^{2}} \iint _{D_{1}} p\left( v^{p}\right) p\left( v^{z}\right) \\ &\quad \frac{v^{p}}{\sqrt{\left( D^{2}+1\right) \left( v^{p}\right) ^{2}-E^{2}\left( v^{z}\right) ^{2}-2 E F v^{z}}} d v^{p} d v^{z} \end{aligned}$$

where \(p(v^p)\) and \(p(v^z)\) are the densities of \(v^p\) and \(v^z\), which are mutually independent and Gaussian.
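As a sketch of how this final expression can be evaluated, the snippet below approximates the double integral with a simple midpoint rule. All constants (\(A_0,B_0,A_1,B_1,D,E,F,T,d_0\), the Gaussian moments) and the rectangular stand-in for \(D_1\) are hypothetical placeholders; in the paper they follow from the beam geometry and the predicted state. The Jacobian factor is taken in absolute value so the result is nonnegative.

```python
import numpy as np

# Hypothetical constants standing in for the paper's geometry-derived values.
T, d0 = 0.1, 50.0
A0, B0 = 0.0, -1.2
A1, B1 = -1.1, -0.9
D, E, F = 1.0, 0.1, 0.1
mu_p, sig_p = 1.5, 0.2   # v^p ~ N(mu_p, sig_p^2), assumed
mu_z, sig_z = 0.0, 0.2   # v^z ~ N(mu_z, sig_z^2), assumed

def gauss(v, mu, sig):
    """Gaussian probability density."""
    return np.exp(-(v - mu)**2/(2*sig**2))/(np.sqrt(2*np.pi)*sig)

# Midpoint rule over a rectangle standing in for the region D_1; the rectangle
# is chosen so the expression under the square root stays positive.
vp = np.linspace(1.0, 2.0, 400)
vz = np.linspace(-0.5, 0.5, 400)
VP, VZ = np.meshgrid(vp, vz, indexing="ij")
root = np.sqrt((D**2 + 1)*VP**2 - E**2*VZ**2 - 2*E*F*VZ)
integrand = gauss(VP, mu_p, sig_p)*gauss(VZ, mu_z, sig_z)*VP/root
dA = (vp[1] - vp[0])*(vz[1] - vz[0])

# Jacobian factor in absolute value, as required for a density transformation.
P_H0 = abs(A0*B1 - A1*B0)*(T**2/d0**2)*integrand.sum()*dA
print(P_H0)
```

The grid resolution and the stand-in region control the accuracy; with the placeholder values above the result is a small positive probability, as expected for an interruption event.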

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit


About this article


Cite this article

Zhang, H., Yang, T., Wu, X. et al. Robust beamforming design for UAV communications based on integrated sensing and communication. J Wireless Com Network 2023, 88 (2023).
