Open Access

Artificial noise-aided biobjective transmitter optimization for service integration in multi-user MIMO broadcast channel

EURASIP Journal on Wireless Communications and Networking 2017, 2017:132

https://doi.org/10.1186/s13638-017-0912-5

Received: 15 February 2017

Accepted: 7 July 2017

Published: 25 July 2017

Abstract

This paper considers an artificial noise (AN)-aided transmit design for multi-user MIMO systems with integrated services. Specifically, two sorts of service messages are combined and served simultaneously: one multicast message intended for all receivers and one confidential message intended for only one receiver, which must be kept perfectly secure from all other, unauthorized receivers. Our interest lies in the joint design of the input covariances of the multicast message, the confidential message, and the AN, such that the achievable secrecy rate and multicast rate are simultaneously maximized. This problem is identified as a secrecy rate region maximization (SRRM) problem in the context of physical-layer service integration. Since this biobjective optimization problem is inherently complex to solve, we put forward two different scalarization methods to convert it into a scalar optimization problem. First, we propose to prefix the multicast rate as a constant, so that the original biobjective problem is converted into a secrecy rate maximization (SRM) problem with a quality of multicast service (QoMS) constraint. By varying the constant, we can obtain different Pareto optimal points. The resulting SRM problem can be iteratively solved via a provably convergent difference-of-concave (DC) algorithm. In the second method, we aim to maximize the weighted sum of the secrecy rate and the multicast rate. By varying the weight vector, one can also obtain different Pareto optimal points. We show that this weighted sum rate maximization (WSRM) problem can be recast into a primal-decomposable form, which is amenable to alternating optimization (AO). Finally, we compare these two scalarization methods in terms of their overall performance and computational complexity via theoretical analysis as well as numerical simulation, based on which new insights can be drawn.

Keywords

Physical-layer service integration; Artificial noise; Convex optimization; Secrecy rate region

1 Introduction

1.1 Background

Recently, physical-layer service integration (PHY-SI), a technique that combines multicast service and confidential service into one integrated service for one-time transmission at the physical layer, has received much attention in wireless communications. For one thing, PHY-SI caters to the demands for high transmission rates and secure communication, which have been identified as key targets to be effectively addressed by fifth-generation (5G) wireless systems [1]. For another, compared to the conventional upper-layer-based approach, PHY-SI enables coexisting services to share the same resources by solely exploiting the physical characteristics of wireless channels, thereby significantly increasing the spectral efficiency. This property makes PHY-SI a prominent approach to satisfying the ever-increasing need for radio spectrum. PHY-SI could also find a wide range of applications in commercial and military domains. For example, many commercial applications, e.g., advertisement, digital television, and Internet telephony, are expected to provide personalized service customization. As a consequence, confidential service and public service are collectively provided to satisfy the demands of different user groups. In battlefield scenarios, it is essential to propagate commands with different security levels to the front line: public information should be distributed to all soldiers, while confidential information can only be accessed by specific soldiers. Such emerging applications lead to a crucial problem: how can the security of the confidential service be established without compromising the quality of the public service?

1.2 Related works

Let us first briefly review physical-layer security, a technique that lays the foundation for research on PHY-SI. The broadcast nature of the wireless medium makes privacy an inherent concern. Physical-layer security has recently been playing an increasingly important role in wireless communications. It can secure communications information-theoretically at the physical layer without using secret keys, whose distribution or management may become difficult in, e.g., ad hoc wireless networks. Different strategies against eavesdropping have been developed with various levels of channel state information (CSI) available at the transmitter (see the comprehensive overview in [2–6]). Liu and Poor first coined the term confidential broadcasting in [7, 8] and established the corresponding secrecy capacity region. In confidential broadcasting, a transmitter broadcasts multiple confidential messages to all receivers. Each confidential message is intended for one specified receiver but is required to be perfectly secret from the others. Some efforts have been made in, e.g., [9, 10] to maximize the sum secrecy rate in the scenario of confidential broadcasting.

The study of PHY-SI can be traced back to Csiszár and Körner's seminal work [11]. In the basic model of PHY-SI, a transmitter sends a common message to two receivers and simultaneously sends a confidential message intended only for one receiver, kept perfectly secure from the other. Under a discrete memoryless broadcast channel (DMBC) setup, Csiszár and Körner gave a closed-form expression for the maximum rate region that can be achieved reliably under the secrecy constraint (i.e., the secrecy capacity region). In recent years, this line of work has gained renewed interest, especially in various multi-antenna scenarios, such as Gaussian broadcast channels [12–15] and bidirectional relay channels [16, 17]. In [12], the authors extended the results of [11] to a general MIMO Gaussian case by adopting the channel-enhancement argument. Further, the works [13, 14] considered the case with two confidential messages intended for two different receivers; the resulting secrecy capacity region is proved to be attainable by combining secret dirty-paper coding (S-DPC) with Gaussian superposition coding. Furthermore, in [16] and [17], Wyrembelski and Boche amalgamated broadcast service, multicast service, and confidential service in bidirectional relay networks, in which a relay adds a multicast message for all nodes and a confidential message for only one node besides establishing the conventional bidirectional communication. Nonetheless, the main goal of the aforementioned papers is to obtain capacity results or to characterize coding strategies that lead to certain rate regions [18]. For implementation efficiency, it is also important to treat physical-layer service integration from a signal processing point of view. In particular, optimal or complexity-efficient transmit strategies have to be characterized, so that the achieved performance can reach or approach the boundary of the secrecy capacity region. Such strategies are usually obtained by solving optimization problems, which generally turn out to be nonconvex. Moreover, most works on PHY-SI end once a certain characterization of a rate region is derived.

Recently, to bridge the gap between the above information-theoretic results and practical implementation, there has been growing interest in analyzing PHY-SI from a signal processing point of view. In [12], the authors proposed a re-parameterization method to devise transmit strategies achieving the secrecy boundary performance. However, this method is only applicable to a very simple two-user MISO scenario. Besides, it involves solving a sequence of convex feasibility problems, which is computationally expensive. As an improvement, the work [19] proposed a quality-of-service (QoS)-based method to seek boundary-achieving transmit strategies. Its basic idea is to establish the tradeoff between the secrecy rate and the multicast rate by maximizing the secrecy rate while keeping the multicast rate above a given threshold. This method has been demonstrated to be effective in characterizing the secrecy boundary and has thus triggered research efforts toward extending the result to more general and realistic settings. Notable results include extensions to the multi-user [20] and imperfect-CSI [21, 22] settings. Even so, relatively little work has focused on the MIMO channel setup, owing to the intractability of the associated optimization problems. In [23], the authors circumvented this intractability by proposing a generalized singular value decomposition (GSVD)-based transmission scheme. Using GSVD, the multicast message and the confidential message can be perfectly decoupled, and the resulting problem is easier to handle. However, this result is not applicable to the general multi-user MIMO case. In addition, it is also interesting to take artificial noise (AN) into consideration, as this technique has been shown to be effective in enhancing transmission security [24–28]. Specifically, the authors in [24–28] showed that AN is of paramount importance to physical-layer security when there exist multiple eavesdroppers in the network, when the CSI of the eavesdropper(s) is imperfectly known at the transmitter, and/or when eavesdroppers are randomly located in the network.

1.3 Main contributions

In this paper, we delve into the AN-aided transmit precoding design in PHY-SI under a general multi-user MIMO case. Specifically, two sorts of service messages are combined and promulgated at the same time: a multicast message intended for all receivers, and a confidential message intended for merely one authorized receiver. The confidential message must be kept perfectly secure from all other unauthorized receivers. Meanwhile, AN is employed to degrade the potential eavesdropping of the unauthorized receivers. This paper aims to jointly optimize the input covariance matrices of the multicast message, confidential message, and AN, to maximize the achievable secrecy and multicast rates simultaneously, or, equivalently, to maximize the achievable secrecy rate region. This secrecy rate region maximization (SRRM) problem turns out to be a biobjective optimization problem. Since the re-parameterizing method is invalid in a general MIMO case, we develop two effective scalarization methods to convert the biobjective problem into an easier-to-handle scalar version for characterizing its Pareto boundary.
  1.

    In the first method, we propose to fix the multicast rate as a constant. By varying the value of this constant, the method yields different secrecy boundary points. Since the Pareto optimal points must reside on the boundary of the achievable rate region, this method is guaranteed to provide a complete set of Pareto optimal points. Though the resulting secrecy rate maximization (SRM) problem is nonconvex by nature, we show that it actually falls into the context of difference-of-concave (DC) programming [29]. Hence, it can be handled by the classical DC algorithm with a convergence guarantee.

  2.

    As for the second method, a weighted-sum-based scalarization is introduced. The crux of this method is to optimize the weighted sum of the two objectives with different weight vectors. By varying the weight vector, this method gives rise to different Pareto optimal solutions. To solve this weighted sum rate maximization (WSRM) problem, we reveal its hidden decomposability by recasting it into an equivalent form amenable to alternating optimization (AO). An AO algorithm is then employed to solve the WSRM problem, and it can be proved that this algorithm converges to a stationary point of the WSRM problem.

  3.

    It is particularly worth mentioning that though the DC and AO algorithms have been applied to address the issue of physical-layer security before in, e.g., [24, 30, 31], none of these works considered integrating an additional multicast message. Our paper is an initial attempt to study the application of DC and AO to the emerging PHY-SI system, which turns out to be a harder task than its counterpart in physical-layer security due to the coexisting multicast service.


We then compare these two scalarization methods in terms of their overall performance and computational complexity. The comparison reveals that the first method is more effective in finding all Pareto optimal points than the second one. The advantage of the second method lies in its problem structure, which provides the service provider with a means of maximizing the overall revenue. Besides, we show that the DC algorithm is more time-efficient than the AO algorithm at low transmit power. Interestingly, the numerical results indicate that at high transmit power, the AO algorithm becomes the more time-efficient one.

1.4 Organization and notations

This paper is organized as follows. Section 2 provides the system model description and problem formulation. The optimization aspects of our formulated problems are addressed in Sections 3 and 4, corresponding to the first and the second scalarization methods, respectively. The comparison results are given in Section 5. Section 6 presents simulation results to show the efficacy of our proposed methods. Finally, conclusions are drawn in Section 7.

The notation of this paper is as follows. Bold capital and small letters denote matrices and vectors, respectively. (·)^H, rank(·), and Tr(·) represent the conjugate transpose, rank, and trace of a matrix, respectively. \({\mathbb {R}}_{+}\) and \({\mathbb {H}}_{+}^{n}\) denote the set of nonnegative real numbers and the set of n-by-n Hermitian positive semidefinite (PSD) matrices, respectively. The n×n identity matrix is denoted by I_n. \(\mathbf {x}\sim \mathcal {CN}(\mathbf {\mu },\boldsymbol {\Omega })\) denotes that x is a complex circular Gaussian random vector with mean μ and covariance Ω. A⪰0 (A≻0) implies that A is a Hermitian positive semidefinite (definite) matrix. ∥·∥ represents the Euclidean norm of a vector. K represents a proper cone, and K^* represents the dual cone associated with K.

2 System model and problem formulation

We consider the downlink of a multiuser system in which a multi-antenna transmitter serves K receivers, each equipped with multiple antennas. Assume that all receivers have ordered the multicast service and that receiver 1 has further ordered the confidential service1. To enhance the security performance, the transmitter utilizes a fraction of its transmit power to send artificially generated noise to interfere with the unauthorized receivers (eavesdroppers), i.e., receiver 2 to receiver K. We assume in this paper that all receivers are static and that all communication links undergo slow frequency-flat fading.

The received signal at receiver k is modeled as
$$ {{\mathbf{y}}_{k}} = \;{\mathbf{H}_{k}}\mathbf{x} + {{\mathbf{z}}_{k}}, k=1,2,\cdots,K $$
(1)
where \({{\mathbf {H}}_{k}}\in {{\mathbb {C}}^{{{N}_{r,k} \times {N}_{t}}}}\) is the channel response between the transmitter and receiver k; N_t and N_{r,k} are the numbers of antennas employed by the transmitter and the kth receiver, respectively. z_k is independent and identically distributed (i.i.d.) complex Gaussian noise with zero mean and unit variance. \({{\mathbf {x}}}\in {{\mathbb {C}}^{{{N}_{t}}}}\) is the coded transmit message, which consists of three independent components, i.e.,
$$ {\mathbf{x\;}}\; = \;{\mathbf{x}_{0}} + \;{\mathbf{x}_{c}} + \;{\mathbf{x}_{a}}, $$
(2)

where x_0 is the multicast message intended for all receivers, x_c is the confidential message intended for receiver 1, and x_a is the artificial noise. We assume \(\mathbf {x}_{0} \sim \mathcal {CN}(\mathbf {0},\mathbf {Q}_{0})\) and \(\mathbf {x}_{c} \sim \mathcal {CN}(\mathbf {0},\mathbf {Q}_{c})\) [12], where Q_0 and Q_c are the transmit covariance matrices. The AN x_a follows the distribution \(\mathbf {x}_{a} \sim \mathcal {CN}(\mathbf {0},\mathbf {Q}_{a})\), where Q_a is the AN covariance. The CSI on all links is assumed to be perfectly known at the transmitter and the receivers, since all receivers must register with the network to subscribe to the multicast service. In practice, the CSI at the receivers can be obtained from channel estimation of the downlink pilots, while the CSI at the transmitter can be acquired via uplink channel estimation in time division duplex (TDD) systems. The design of a high-quality channel estimation scheme is beyond the scope of this paper. Note that the full-CSI assumption is commonly adopted in the area of physical-layer security/multicasting, especially for MIMO channels [10, 24, 30, 32–36].
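As a quick numerical illustration of the superposition model (2) (a minimal sketch with hypothetical dimensions; the covariances below are randomly generated rather than optimized), the following snippet draws samples of x = x_0 + x_c + x_a and checks that the empirical transmit power matches Tr(Q_0 + Q_c + Q_a):

```python
import numpy as np

rng = np.random.default_rng(0)
Nt = 4  # hypothetical number of transmit antennas

def random_psd(n, trace):
    """Random PSD covariance scaled to a given trace (illustration only)."""
    B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    Q = B @ B.conj().T
    return Q * (trace / np.real(np.trace(Q)))

def sample_cn(Q, num):
    """Draw num samples of a circular complex Gaussian CN(0, Q)."""
    L = np.linalg.cholesky(Q + 1e-12 * np.eye(Q.shape[0]))
    z = (rng.standard_normal((Q.shape[0], num))
         + 1j * rng.standard_normal((Q.shape[0], num))) / np.sqrt(2)
    return L @ z

P = 10.0  # total power budget, split evenly among the three covariances
Q0, Qc, Qa = (random_psd(Nt, P / 3) for _ in range(3))
n = 100_000
x = sample_cn(Q0, n) + sample_cn(Qc, n) + sample_cn(Qa, n)  # x = x0 + xc + xa
emp_power = np.mean(np.sum(np.abs(x) ** 2, axis=0))          # estimates E[||x||^2]
```

Since the three components are independent, E[‖x‖²] = Tr(Q_0 + Q_c + Q_a) = P, which the empirical average reproduces.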

For ease of exposition, let us define \({\mathcal {K}} \buildrel \Delta \over = \{1,2,\ldots,K\}\) and \({{\mathcal {K}}_{e}} \buildrel \Delta \over = {\mathcal {K}}\backslash \{ 1\} \), which denote the index sets of all receivers and of all unauthorized receivers, respectively. Let R_0 and R_c denote the achievable rates associated with the multicast and confidential messages, respectively. Then, an achievable secrecy rate region \(R_{s}({\left \{ {\mathbf {H}_{k}} \right \}_{k \in {\mathcal {K}}}},P)\) is given as the set of nonnegative rate pairs (R_0, R_c) satisfying [12]
$$ \begin{array}{ll} &{R_{0}} \le \mathop {\min }\limits_{k \in {\mathcal{K}}} C_{m,k}({\mathbf{Q}_{0}},{\mathbf{Q}_{c}},{\mathbf{Q}_{a}})\\ &{R_{c}} \le C_{b}({\mathbf{Q}_{c}},{\mathbf{Q}_{a}}) - \mathop {\max}\limits_{k \in {{\mathcal{K}}_{e}}} C_{e,k}({\mathbf{Q}_{c}},{\mathbf{Q}_{a}}), \end{array} $$
(3)
where
$$\begin{array}{ll} C_{m,k}&={\log \left| {\mathbf{I} + {{\left(\mathbf{I} + {\mathbf{H}_{k}}({\mathbf{Q}_{c}} + {\mathbf{Q}_{a}})\mathbf{H}_{k}^{H}\right)}^{- 1}}{\mathbf{H}_{k}}{\mathbf{Q}_{0}}\mathbf{H}_{k}^{H}} \right|},\\ C_{b}&= \log \left| {\mathbf{I} + {{\left(\mathbf{I} + {\mathbf{H}_{1}}{\mathbf{Q}_{a}}\mathbf{H}_{1}^{H}\right)}^{- 1}}{\mathbf{H}_{1}}{\mathbf{Q}_{c}}\mathbf{H}_{1}^{H}} \right|,\\ C_{e,k}&= \log \left| {\mathbf{I} + {{\left(\mathbf{I} + {\mathbf{H}_{k}}{\mathbf{Q}_{a}}\mathbf{H}_{k}^{H}\right)}^{- 1}}{\mathbf{H}_{k}}{\mathbf{Q}_{c}}\mathbf{H}_{k}^{H}} \right|, \end{array} $$
and Tr(Q_0 + Q_c + Q_a) ≤ P, with P being the total transmit power budget at the transmitter.

The secrecy rate region (3) implies that all receivers first decode their common multicast message by treating the confidential message as noise, and then receiver 1 acquires a clean link for the transmission of its exclusive confidential message, where there is no interference from the multicast message.
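To make the region (3) concrete, here is a small numpy sketch (hypothetical dimensions, randomly drawn channels and covariances; not an optimized design) that evaluates a feasible rate pair (R_0, R_c). It uses the identity log|I + (I + A)^{-1}B| = log|I + A + B| − log|I + A|, which is numerically more stable than forming the inverse:

```python
import numpy as np

rng = np.random.default_rng(0)

def logdet2(A):
    """Base-2 log-determinant via slogdet (numerically stable)."""
    return np.linalg.slogdet(A)[1] / np.log(2)

def cap(Hk, Qnum, Qden):
    # log|I + (I + Hk Qden Hk^H)^{-1} Hk Qnum Hk^H|
    #   = log|I + Hk (Qnum + Qden) Hk^H| - log|I + Hk Qden Hk^H|
    n = Hk.shape[0]
    return (logdet2(np.eye(n) + Hk @ (Qnum + Qden) @ Hk.conj().T)
            - logdet2(np.eye(n) + Hk @ Qden @ Hk.conj().T))

def rate_pair(H, Q0, Qc, Qa):
    """Evaluate (R_0, R_c) from (3); H is a list [H_1, ..., H_K]."""
    Cm = [cap(Hk, Q0, Qc + Qa) for Hk in H]    # multicast rates C_{m,k}
    Cb = cap(H[0], Qc, Qa)                     # authorized-link rate C_b
    Ce = [cap(Hk, Qc, Qa) for Hk in H[1:]]     # eavesdropping rates C_{e,k}
    return min(Cm), Cb - max(Ce)

def random_psd(n, trace):
    B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    Q = B @ B.conj().T
    return Q * (trace / np.real(np.trace(Q)))

# Hypothetical setup: Nt = 4, K = 3 receivers with 2 antennas each,
# power budget P split evenly among the three covariances.
Nt, Nr, K, P = 4, 2, 3, 10.0
H = [rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))
     for _ in range(K)]
Q0, Qc, Qa = (random_psd(Nt, P / 3) for _ in range(3))
R0, Rc = rate_pair(H, Q0, Qc, Qa)
```

Note that R_0 is always nonnegative (adding the PSD matrix H_k Q_0 H_k^H cannot decrease the determinant), whereas R_c can be negative for a poorly chosen (Q_c, Q_a), which is precisely why the transmit design matters.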

To maximize this achievable secrecy rate region, our goal is to find the boundary-achieving Q_0, Q_c, and Q_a, which are also known as Pareto optimal solutions to this SRRM problem. Specifically, we must first solve the following optimization problem, which is a biobjective maximization problem with cone \(K=K^{*}={\mathbb {R}}_ +^{2}\),
$$\begin{array}{*{20}l} \mathop{\max}\limits_{{\mathbf{Q}_{0}},{\mathbf{Q}_{c}},{\mathbf{Q}_{a}}}& \left({\text{w.r.t.}}\; {\mathbb {R}}_ +^{2} \right)\; \left({\mathop {\min }\limits_{k \in {\mathcal{K}}} C_{m,k},C_{b} - \mathop {\max}\limits_{k \in {{\mathcal{K}}_{e}}} C_{e,k}} \right)\\ \text{s.t.}\quad&\text{Tr}({\mathbf{Q}_{0}} + {\mathbf{Q}_{c}} + \mathbf{Q}_{a}) \le P, \end{array} $$
(4a)
$$\begin{array}{*{20}l} &{\mathbf{Q}_{0}} \succeq {\boldsymbol{0}}, {\mathbf{Q}_{c}} \succeq {\boldsymbol{0}}, \mathbf{Q}_{a} \succeq {\boldsymbol{0}}, \end{array} $$
(4b)

where, with a slight abuse of notation but for notational simplicity, the explicit dependence of C_{m,k}, C_b, and C_{e,k} on (Q_0, Q_c, Q_a) is omitted. Since the SRRM problem is a biobjective maximization problem, it is necessary to harness some scalarization method to convert it into an easier-to-handle scalar version.

Remark 1

It is also viable to consider the scenario where all receivers order the confidential service and all confidential messages are propagated concurrently by the transmitter, i.e., the integration of multicasting and confidential broadcasting. The merit of this scheme lies in its higher spectral efficiency and lower latency. However, this comes at the expense of much higher operational complexity at the transmitter, especially when the number of users increases. Thus, our considered PHY-SI scheme is particularly desirable in delay-tolerant applications or when the transmitter possesses limited computational capacity for security-related computations.

3 A DC-based approach to the SRRM problem

In this section, we develop our first scalarization method to solve (4). The resulting scalar problem is a secrecy rate maximization (SRM) problem with an imposed quality of multicast service (QoMS) constraint.

3.1 Scalarization

In particular, our method moves the multicast rate maximization part into the constraints; i.e., we fix, for the time being, the multicast rate as a constant τ_ms ≥ 0. As a result, the biobjective SRRM problem (4) reduces to a scalar maximization problem, shown in (5).
$$\begin{array}{*{20}l} R(\tau_{ms})=&\mathop{\max}\limits_{{\mathbf{Q}_{0}},{\mathbf{Q}_{c}},{\mathbf{Q}_{a}}} C_{b}({\mathbf{Q}_{c}},{\mathbf{Q}_{a}})-\mathop {\max }\limits_{k \in {\mathcal{K}}_{e}}C_{e,k}({\mathbf{Q}_{c}},{\mathbf{Q}_{a}})\\ \text{s.t.}\; &\mathop {\min }\limits_{k \in {\mathcal{K}}}C_{m,k}({\mathbf{Q}_{0}},{\mathbf{Q}_{c}},{\mathbf{Q}_{a}}) = {\tau_{ms}}, \end{array} $$
(5a)
$$\begin{array}{*{20}l} &\text{Tr}({\mathbf{Q}_{0}} + {\mathbf{Q}_{c}}+{\mathbf{Q}_{a}}) \le P, \end{array} $$
(5b)
$$\begin{array}{*{20}l} &{\mathbf{Q}_{0}} \succeq {\boldsymbol{0}}, {\mathbf{Q}_{a}} \succeq {\boldsymbol{0}}, {\mathbf{Q}_{c}} \succeq {\boldsymbol{0}}. \end{array} $$
(5c)
In (5), R(τ_ms) is the optimal objective value, and τ_ms can be interpreted as a preset requirement on the multicast rate; accordingly, the constraint (5a) can be interpreted as a QoMS constraint. To guarantee the feasibility of problem (5), τ_ms cannot exceed the threshold τ_max given by
$$ {\tau_{\max }} = \mathop {\max }\limits_{{\mathbf{Q}_{0}} \succeq {\boldsymbol{0}}, \text{Tr}({\mathbf{Q}_{0}}) \le P} \mathop {\min }\limits_{k \in {\mathcal{K}}} \log \left| {\mathbf{I} + {\mathbf{H}_{k}}{\mathbf{Q}_{0}}\mathbf{H}_{k}^{H}} \right|. $$
(6)

The value of τ_max can be obtained numerically by solving (6) with the convex optimization solver CVX [37], since (6) is a convex max-min problem (a pointwise minimum of concave log-det functions over a convex set).
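As a solver-free sanity check on (6) (a sketch with hypothetical dimensions and random channels, not a substitute for CVX), note that the isotropic input Q_0 = (P/N_t)I is feasible for (6), so its worst-case multicast rate lower-bounds τ_max:

```python
import numpy as np

rng = np.random.default_rng(4)

def logdet2(A):
    return np.linalg.slogdet(A)[1] / np.log(2)

# Hypothetical setup: Nt = 4 transmit antennas, K = 3 two-antenna receivers.
Nt, Nr, K, P = 4, 2, 3, 10.0
H = [rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))
     for _ in range(K)]

# Isotropic transmission is feasible (PSD, Tr = P), so its min-rate is a
# lower bound on tau_max; the exact tau_max requires a convex solver.
Q0_iso = (P / Nt) * np.eye(Nt)
tau_lb = min(logdet2(np.eye(Nr) + Hk @ Q0_iso @ Hk.conj().T) for Hk in H)
```

Any feasible Q_0 yields such a lower bound; the gap to the true τ_max depends on how unevenly the power should be steered across the K channels.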

This scalarization method, in fact, enables us to find one boundary point (τ_ms, R(τ_ms)) of the secrecy rate region \(R_{s}({\left \{ {\mathbf {H}_{k}} \right \}_{k \in {\mathcal {K}}}},P)\) by solving (5). All boundary points of \(R_{s}({\left \{ {\mathbf {H}_{k}} \right \}_{k \in {\mathcal {K}}}},P)\) can be found if we traverse all possible τ_ms's lying within [0, τ_max] and store the corresponding optimal objective values. Since every Pareto optimal solution to (4) must reside on the boundary of \(R_{s}({\left \{ {\mathbf {H}_{k}} \right \}_{k \in {\mathcal {K}}}},P)\), i.e., the Pareto optimal set of (4) is a subset of the boundary set of \(R_{s}({\left \{ {\mathbf {H}_{k}} \right \}_{k \in {\mathcal {K}}}},P)\), all Pareto optimal solutions to (4) can also be found by this means.

However, problem (5) is nonconvex. In particular, the determinant equality constraint (5a) is very difficult to handle. To circumvent this difficulty, we turn our attention to the following relaxed problem of (5), in which the equality constraint (5a) is replaced by the inequality constraint (7a).
$$\begin{array}{*{20}l} \tilde R(\tau_{ms}) =&\mathop{\max}\limits_{{\mathbf{Q}_{0}},{\mathbf{Q}_{c}},{\mathbf{Q}_{a}}} C_{b}({\mathbf{Q}_{c}},{\mathbf{Q}_{a}})-\mathop {\max }\limits_{k \in {\mathcal{K}}_{e}}C_{e,k}({\mathbf{Q}_{c}},{\mathbf{Q}_{a}})\\ \text{s.t.}\; &\mathop {\min }\limits_{k \in {\mathcal{K}}}C_{m,k}({\mathbf{Q}_{0}},{\mathbf{Q}_{c}},{\mathbf{Q}_{a}}) \ge {\tau_{ms}}, \end{array} $$
(7a)
$$\begin{array}{*{20}l} &\text{Tr}({\mathbf{Q}_{0}} + {\mathbf{Q}_{c}}+{\mathbf{Q}_{a}}) \le P, \end{array} $$
(7b)
$$\begin{array}{*{20}l} &{\mathbf{Q}_{0}} \succeq {\boldsymbol{0}}, {\mathbf{Q}_{a}} \succeq {\boldsymbol{0}}, {\mathbf{Q}_{c}} \succeq {\boldsymbol{0}}. \end{array} $$
(7c)

Clearly, any optimal solution to (5) is feasible for (7), in the sense that replacing (5a) with (7a) yields a larger feasible set. Hence, \(R(\tau _{ms}) \le \tilde R(\tau _{ms})\) holds in general. Interestingly, we show that \(R(\tau _{ms}) = \tilde R(\tau _{ms})\) is always achieved, i.e., the relaxation incurs no loss of optimality.

Lemma 1

Problem (7) is a tight relaxation to problem (5). In other words, the rate pair \(({\tau _{ms}},\tilde R(\tau _{ms}))\) must be a boundary point of \(R_{s}({\left \{ {\mathbf {H}_{k}} \right \}_{k \in {\mathcal {K}}}},P)\).

Proof

The proof can be easily accomplished by construction. Suppose that the constraint (7a) is satisfied with strict inequality at an optimal solution. Then we can always multiply Q_0 by a scalar ν ∈ (0,1) to make (7a) active, without decreasing the objective value of (7) (which does not depend on Q_0) or violating the total power constraint (7b). This implies that there always exists an optimal solution to (7) such that the constraint (7a) is satisfied with equality, which completes the proof. □

Lemma 1 implies that problem (7) admits an optimal solution \(({\mathbf {Q}^{*}_{0}},{\mathbf {Q}^{*}_{c}},{\mathbf {Q}^{*}_{a}})\) with \(\mathop {\min }\limits _{k \in {\mathcal {K}}}C_{m,k}({\mathbf {Q}^{*}_{0}},{\mathbf {Q}^{*}_{c}},{\mathbf {Q}^{*}_{a}})={\tau _{ms}}\). Hence, \(({\mathbf {Q}^{*}_{0}},{\mathbf {Q}^{*}_{c}},{\mathbf {Q}^{*}_{a}})\) is also optimal for (5). The proof of Lemma 1 reveals that such an optimal solution can always be constructed algorithmically via the following procedure.

Corollary 1

Suppose that \(({\mathbf {Q}^{*}_{0}},{\mathbf {Q}^{*}_{c}},{\mathbf {Q}^{*}_{a}})\) is an optimal solution obtained by solving problem (7). If \(\mathop {\min }\limits _{k \in {\mathcal {K}}}C_{m,k}({\mathbf {Q}^{*}_{0}},{\mathbf {Q}^{*}_{c}},{\mathbf {Q}^{*}_{a}})={\tau _{ms}}\), then output \(({\mathbf {Q}^{*}_{0}},{\mathbf {Q}^{*}_{c}},{\mathbf {Q}^{*}_{a}})\) as an optimal solution of problem (5). Otherwise, solve the equation \(\mathop {\min }\limits _{k \in {\mathcal {K}}}C_{m,k}(\nu {\mathbf {Q}^{*}_{0}},{\mathbf {Q}^{*}_{c}},{\mathbf {Q}^{*}_{a}})={\tau _{ms}}\) for ν via bisection search over the unit interval [0,1], and output \((\nu {\mathbf {Q}^{*}_{0}},{\mathbf {Q}^{*}_{c}},{\mathbf {Q}^{*}_{a}})\) as an optimal solution of problem (5).
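The bisection step of Corollary 1 can be sketched as follows (a minimal numpy illustration on a hypothetical random instance; the covariances here are arbitrary PSD matrices standing in for a solution of (7)). It exploits that the minimum multicast rate is zero at ν = 0 and strictly increasing in ν, so the root is bracketed in [0, 1]:

```python
import numpy as np

rng = np.random.default_rng(1)

def logdet2(A):
    return np.linalg.slogdet(A)[1] / np.log(2)

def min_multicast_rate(H, Q0, Qc, Qa):
    """min_k C_{m,k}(Q0, Qc, Qa) from (3), increasing in the scale of Q0."""
    rates = []
    for Hk in H:
        n = Hk.shape[0]
        rates.append(logdet2(np.eye(n) + Hk @ (Q0 + Qc + Qa) @ Hk.conj().T)
                     - logdet2(np.eye(n) + Hk @ (Qc + Qa) @ Hk.conj().T))
    return min(rates)

def scale_to_active(H, Q0, Qc, Qa, tau_ms, tol=1e-10):
    """Bisection on nu in [0, 1] (Corollary 1) so that
    min_k C_{m,k}(nu*Q0, Qc, Qa) = tau_ms."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        nu = 0.5 * (lo + hi)
        if min_multicast_rate(H, nu * Q0, Qc, Qa) < tau_ms:
            lo = nu
        else:
            hi = nu
    return 0.5 * (lo + hi)

def psd(n):
    B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return B @ B.conj().T

# Hypothetical instance: the target rate is set to half the rate achieved at
# nu = 1, which guarantees a root inside [0, 1].
Nt, Nr, K = 4, 2, 3
H = [rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))
     for _ in range(K)]
Q0, Qc, Qa = psd(Nt), psd(Nt), psd(Nt)
tau_ms = 0.5 * min_multicast_rate(H, Q0, Qc, Qa)
nu_star = scale_to_active(H, Q0, Qc, Qa, tau_ms)
```

About 34 bisection iterations suffice for a bracket of width 1e-10; each iteration costs 2K log-determinants.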

Next, we point out two special cases under which problem (7) is equivalent to problem (5) or, equivalently, under which any optimal solution to (7) is achieved with constraint (7a) active. This is described in the following proposition.

Proposition 1

Suppose that the system configurations satisfy either one of the following conditions:

Condition 1: The number of antennas at the transmitter is larger than that at the authorized receiver, i.e., N_t > N_{r,1}.

Condition 2: The number of antennas at the transmitter is larger than the total number of antennas at all unauthorized receivers, i.e., \(N_{t} > {\sum \nolimits }_{k \in {{\mathcal {K}}_{e}}} N_{r,k}\).

Then, the rate pair \(({\tau _{ms}},\tilde R(\tau _{ms}))\) must be a Pareto optimal point of (4), and all Pareto optimal points of (4) can be obtained by solving (5) with different τ ms ’s lying within the interval [0,τ max].

Proof

The proof can be found in Appendix A. □

Remark 2

Proposition 1 bridges the Pareto optimal points of (4) to the boundary points of \(R_{s}({\left \{ {\mathbf {H}_{k}} \right \}_{k \in {\mathcal {K}}}},P)\). When either Condition 1 or Condition 2 is satisfied, all Pareto optimal points of (4) are also the boundary points of \(R_{s}({\left \{ {\mathbf {H}_{k}} \right \}_{k \in {\mathcal {K}}}},P)\) and vice versa.

3.2 DC iterative algorithm

We now focus on solving the relaxed problem (7) derived in the last subsection. Problem (7) remains nonconvex due to its objective function and constraint (7a). To deal with this, we first equivalently transform it into its epigraph form by introducing a slack variable η, i.e.,
$$\begin{array}{*{20}l} R(\tau_{ms})& =\mathop{\max}\limits_{{\mathbf{Q}_{0}},{\mathbf{Q}_{c}},{\mathbf{Q}_{a}},\eta} C_{b}({\mathbf{Q}_{c}},{\mathbf{Q}_{a}})-\eta\\ \text{s.t.}\; &C_{e,k}({\mathbf{Q}_{c}},{\mathbf{Q}_{a}}) \le \eta, \forall k \in {\mathcal{K}}_{e} \end{array} $$
(8a)
$$\begin{array}{*{20}l} &C_{m,k}({\mathbf{Q}_{0}},{\mathbf{Q}_{c}},{\mathbf{Q}_{a}}) \ge {\tau_{ms}}, \forall k \in {\mathcal{K}} \end{array} $$
(8b)
$$\begin{array}{*{20}l} &\text{Tr}({\mathbf{Q}_{0}} + {\mathbf{Q}_{c}}+{\mathbf{Q}_{a}}) \le P, \end{array} $$
(8c)
$$\begin{array}{*{20}l} &{\mathbf{Q}_{0}} \succeq {\boldsymbol{0}}, {\mathbf{Q}_{a}} \succeq {\boldsymbol{0}}, {\mathbf{Q}_{c}} \succeq {\boldsymbol{0}}. \end{array} $$
(8d)

Next, we will show that problem (8) constitutes a DC-type programming problem, which can be iteratively solved by the DC algorithm.

To begin with, we reformulate the capacity functions C_b(Q_c, Q_a), C_{e,k}(Q_c, Q_a), and C_{m,k}(Q_0, Q_c, Q_a) into a DC-type form, given by
$$\begin{array}{*{20}l} C_{b}({\mathbf{Q}_{c}},{\mathbf{Q}_{a}})&= {\phi_{1}}({\mathbf{Q}_{c}},{\mathbf{Q}_{a}}) - {\varphi_{1}}({\mathbf{Q}_{a}}),\\ C_{e,k}({\mathbf{Q}_{c}},{\mathbf{Q}_{a}})&={\phi_{k}}({\mathbf{Q}_{c}},{\mathbf{Q}_{a}}) - {\varphi_{k}}({\mathbf{Q}_{a}}),\\ C_{m,k}({\mathbf{Q}_{0}},{\mathbf{Q}_{c}},{\mathbf{Q}_{a}})&={\eta_{k}}({\mathbf{Q}_{0}},{\mathbf{Q}_{c}},{\mathbf{Q}_{a}}) - {\phi_{k}}({\mathbf{Q}_{c}},{\mathbf{Q}_{a}}), \end{array} $$
(9)
in which we define
$$\begin{array}{*{20}l} {\phi_{k}}({\mathbf{Q}_{c}},{\mathbf{Q}_{a}}) &= \log \left| {\mathbf{I} + {\mathbf{H}_{k}}({\mathbf{Q}_{c}} + {\mathbf{Q}_{a}})\mathbf{H}_{k}^{H}} \right|,\\ {\varphi_{k}}({\mathbf{Q}_{a}}) &= \log \left| {\mathbf{I} + {\mathbf{H}_{k}}{\mathbf{Q}_{a}}\mathbf{H}_{k}^{H}} \right|,\\ {\eta_{k}}({\mathbf{Q}_{0}},{\mathbf{Q}_{c}},{\mathbf{Q}_{a}}) &= \log \left| {\mathbf{I} + {\mathbf{H}_{k}}({\mathbf{Q}_{0}} + {\mathbf{Q}_{c}} + {\mathbf{Q}_{a}})\mathbf{H}_{k}^{H}} \right|. \end{array} $$
(10)
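As a quick numerical sanity check of the decomposition (9)–(10) (a numpy sketch with randomly drawn, hypothetical matrices, evaluated for receiver 1; the natural log is used since the base only rescales every term equally), the "ratio" forms of C_b and C_{m,1} in (3) indeed equal the stated differences of concave log-det functions:

```python
import numpy as np

rng = np.random.default_rng(2)

def logdet(A):
    """Natural-log determinant via the numerically stable slogdet."""
    return np.linalg.slogdet(A)[1]

def psd(n):
    B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return B @ B.conj().T

Nt, Nr = 4, 2  # hypothetical antenna numbers
H1 = rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))
Q0, Qc, Qa = psd(Nt), psd(Nt), psd(Nt)
I = np.eye(Nr)

# Concave building blocks from (10), evaluated for receiver 1
phi    = logdet(I + H1 @ (Qc + Qa) @ H1.conj().T)       # phi_1(Qc, Qa)
varphi = logdet(I + H1 @ Qa @ H1.conj().T)              # varphi_1(Qa)
eta    = logdet(I + H1 @ (Q0 + Qc + Qa) @ H1.conj().T)  # eta_1(Q0, Qc, Qa)

# Direct "ratio" forms from (3)
Cb = logdet(I + np.linalg.inv(I + H1 @ Qa @ H1.conj().T)
            @ H1 @ Qc @ H1.conj().T)
Cm = logdet(I + np.linalg.inv(I + H1 @ (Qc + Qa) @ H1.conj().T)
            @ H1 @ Q0 @ H1.conj().T)
```

Both identities follow from |I + M^{-1}N| = |M + N|/|M|, which is exactly how (9) splits each rate into a difference of concave functions.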
Substituting (9) into problem (8), we obtain
$$\begin{array}{*{20}l} R(\tau_{ms}) =&\mathop{\max}\limits_{{\mathbf{Q}_{0}},{\mathbf{Q}_{c}},{\mathbf{Q}_{a}},\eta} {\phi_{1}}({\mathbf{Q}_{c}},{\mathbf{Q}_{a}}) - {\varphi_{1}}({\mathbf{Q}_{a}})-\eta\\ \text{s.t.}\; &{\varphi_{k}}({\mathbf{Q}_{a}})-{\phi_{k}}({\mathbf{Q}_{c}},{\mathbf{Q}_{a}})+ \eta \ge 0, \forall k \in {\mathcal{K}}_{e} \end{array} $$
(11a)
$$\begin{array}{*{20}l} &{\eta_{k}}({\mathbf{Q}_{0}},{\mathbf{Q}_{c}},{\mathbf{Q}_{a}}) - {\phi_{k}}({\mathbf{Q}_{c}},{\mathbf{Q}_{a}}) \ge {\tau_{ms}}, \forall k \in {\mathcal{K}} \end{array} $$
(11b)
$$\begin{array}{*{20}l} &\text{Tr}({\mathbf{Q}_{0}} + {\mathbf{Q}_{c}}+{\mathbf{Q}_{a}}) \le P, \end{array} $$
(11c)
$$\begin{array}{*{20}l} &{\mathbf{Q}_{0}} \succeq {\boldsymbol{0}}, {\mathbf{Q}_{a}} \succeq {\boldsymbol{0}}, {\mathbf{Q}_{c}} \succeq {\boldsymbol{0}}. \end{array} $$
(11d)

Since ϕ_k(Q_c, Q_a), φ_k(Q_a), and η_k(Q_0, Q_c, Q_a) are all concave w.r.t. (Q_0, Q_c, Q_a), one can easily see that the objective function of (11) and the constraints (11a) and (11b) are all in difference-of-concave form. This property makes problem (11) fall into the context of DC programming [29], so it can be iteratively solved via the DC algorithm.
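The mechanics of the DC algorithm can be seen on a toy scalar instance (an illustrative sketch unrelated to the paper's matrix problem; the functions u, v and budget P below are hypothetical): to maximize a difference of concave functions u(x) − v(x), replace v by its tangent at the current iterate and solve the resulting concave subproblem, which here admits a closed form.

```python
import numpy as np

# Toy DC program: maximize f(x) = u(x) - v(x) on [0, P], with
# u(x) = ln(1 + 2x) and v(x) = ln(1 + x), both concave.
P = 5.0
u = lambda x: np.log(1.0 + 2.0 * x)
v = lambda x: np.log(1.0 + x)
dv = lambda x: 1.0 / (1.0 + x)  # v'(x)

def dca(x0, iters=50):
    """DC algorithm: linearize v at x_t, then maximize the concave
    surrogate u(x) - v'(x_t) * x over [0, P] in closed form."""
    x, history = x0, []
    for _ in range(iters):
        c = dv(x)
        # stationary point of u(x) - c*x: 2/(1+2x) = c  =>  x = (2/c - 1)/2
        x = float(np.clip((2.0 / c - 1.0) / 2.0, 0.0, P))
        history.append(float(u(x) - v(x)))
    return x, history

x_star, hist = dca(0.0)
```

The surrogate objective values are monotonically nondecreasing, the hallmark convergence property of the DC algorithm; in this toy case f is increasing on [0, P], so the iterates climb to the maximizer x = P.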

Our next endeavor is to present the DC approach to (11) mathematically. Its basic idea is to locally linearize the nonconcave parts of (11) at some feasible point via a Taylor series expansion (TSE) and then iteratively solve the linearized problem. To this end, we introduce the TSE via the following lemma.

Lemma 2

(Chu et al. [ 31 ]) An affine Taylor series approximation of a function \(f({\mathbf {X}}):{{\mathbb R}^{M \times N}} \to {\mathbb R}\) can be expressed at \(\tilde {\mathbf {X}}\) as below.
$$ f\left(\mathbf{X} \right) \approx f({\tilde{\mathbf{X}}}) + {\text{vec}}\left({f'\left(\mathbf{X} \right)} \right)^{H}{\text{vec}}({\mathbf{X} - \tilde{\mathbf{X}}}). $$
(12)
The TSE above enables us to reformulate the primal nonconcave parts of (11) into a linear form. In particular, by applying Lemma 2 and the fact that \((\log|\mathbf{X}|)' = {\text{Tr}}(\mathbf{X}^{-1}\mathbf{X}')\), \({\varphi_{1}}({\mathbf{Q}_{a}})\) can be approximated as
$$\begin{array}{*{20}l} {\varphi_{1}}({\mathbf{Q}_{a}})&=\log \left| {\mathbf{I} + {\mathbf{H}_{1}}{\mathbf{Q}_{a}}\mathbf{H}_{1}^{H}} \right|\\ &\approx {\varphi_{1}}({\tilde{\mathbf{Q}}_{a}})+ ({\text{vec}}\left(\mathbf{S}\right))^{H}{\text{vec}}\left({{\mathbf{Q}_{a}} - {{\tilde{\mathbf{Q}}}_{a}}} \right)\\ &\mathop= \limits^{(a)} {\varphi_{1}}({\tilde{\mathbf{Q}}_{a}})+ {\text{Tr}}\left[\mathbf{S}({\mathbf{Q}_{a}}-{{\tilde{\mathbf{Q}}}_{a}})\right]\\ &\buildrel \Delta \over = {\tilde \varphi_{1}}({\mathbf{Q}_{a}}) \end{array} $$
(13)
in the objective function of (11), where \({\tilde {\mathbf {Q}}_{a}}\) is a given transmit covariance matrix,
$$\mathbf{S} \buildrel \Delta \over = {{\mathbf{H}_{1}^{H}}{\left({\mathbf{I} + {\mathbf{H}_{1}}{\tilde{\mathbf{Q}}_{a}}\mathbf{H}_{1}^{H}} \right)^{- 1}}\mathbf{H}_{1}} $$
and the equality (a) is due to the fact that \({\text{Tr}}({\mathbf{A}^{H}}\mathbf{B}) = ({\text{vec}}(\mathbf{A}))^{H}{\text{vec}}(\mathbf{B})\) for \(\mathbf{A}\) and \(\mathbf{B}\) of appropriate dimensions. Likewise, \({\phi_{k}}({\mathbf{Q}_{c}},{\mathbf{Q}_{a}})\), appearing in the constraints (11a) and (11b), can be approximated as
$$\begin{array}{*{20}l} {\phi_{k}}({\mathbf{Q}_{c}},{\mathbf{Q}_{a}})&=\log \left| {\mathbf{I} + {\mathbf{H}_{k}}({\mathbf{Q}_{c}} + {\mathbf{Q}_{a}})\mathbf{H}_{k}^{H}} \right| \\ &\approx {\phi_{k}}({\tilde{\mathbf{Q}}_{c}},{\tilde{\mathbf{Q}}_{a}})+{\text{Tr}}\left[\mathbf{U}({\mathbf{Q}_{c}}-{\tilde{\mathbf{Q}}_{c}})\right]\\ &\quad+{\text{Tr}}\left[\mathbf{U}({\mathbf{Q}_{a}}-{\tilde{\mathbf{Q}}_{a}})\right]\\ &\buildrel \Delta \over = {\tilde \phi_{k}}({\mathbf{Q}_{c}},{\mathbf{Q}_{a}}), \end{array} $$
(14)
in which U is determined by
$$ \mathbf{U} = \mathbf{H}_{k}^{H}{\left(\mathbf{I} + {\mathbf{H}_{k}}({\tilde{\mathbf{Q}}_{c}} + {\tilde{\mathbf{Q}}_{a}})\mathbf{H}_{k}^{H}\right)^{- 1}}{\mathbf{H}_{k}}. $$
(15)
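To make the linearization concrete, the following minimal NumPy sketch (illustrative only: the channel \(\mathbf{H}_1\), the expansion point, and all dimensions are randomly generated assumptions) builds the gradient matrix \(\mathbf{S}\) of (13), forms the affine approximation, and checks that it is tight at the expansion point and, by concavity of the log-det, upper-bounds \(\varphi_1\) everywhere.

```python
import numpy as np

rng = np.random.default_rng(0)

def logdet(A):
    # numerically stable log-determinant
    return np.linalg.slogdet(A)[1]

def phi(H, Q):
    # phi(Q) = log|I + H Q H^H|
    Nr = H.shape[0]
    return logdet(np.eye(Nr) + H @ Q @ H.conj().T)

def phi_tilde(H, Q, Q_t):
    # affine TSE of phi around Q_t; S = H^H (I + H Q_t H^H)^{-1} H, cf. (13)
    Nr = H.shape[0]
    S = H.conj().T @ np.linalg.inv(np.eye(Nr) + H @ Q_t @ H.conj().T) @ H
    return phi(H, Q_t) + np.trace(S @ (Q - Q_t)).real

def rand_psd(n):
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return A @ A.conj().T

Nt, Nr = 5, 3                      # illustrative dimensions
H1 = rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))
Q_t = rand_psd(Nt)                 # expansion point

# the linearization is tight at the expansion point ...
assert np.isclose(phi_tilde(H1, Q_t, Q_t), phi(H1, Q_t))

# ... and, by concavity of the log-det, upper-bounds phi everywhere
for _ in range(100):
    Q = rand_psd(Nt)
    assert phi(H1, Q) <= phi_tilde(H1, Q, Q_t) + 1e-9
```

The same construction applies verbatim to \({\tilde \phi_{k}}\) in (14), with \(\mathbf{Q}_c + \mathbf{Q}_a\) in place of \(\mathbf{Q}_a\).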
Based on the approximations above, the original QoMS-constrained SRM problem (11) can be reformulated as
$$\begin{array}{*{20}l} {\bar{R}}&(\tau_{ms}) =\mathop{\max}\limits_{{\mathbf{Q}_{0}},{\mathbf{Q}_{c}},{\mathbf{Q}_{a}},\eta} {\phi_{1}}({\mathbf{Q}_{c}},{\mathbf{Q}_{a}}) - {\tilde{\varphi}_{1}}({\mathbf{Q}_{a}})-\eta\\ \text{s.t.}\; &{\varphi_{k}}({\mathbf{Q}_{a}})-{\tilde{\phi}_{k}}({\mathbf{Q}_{c}},{\mathbf{Q}_{a}})+ \eta \ge 0, \forall k \in {\mathcal{K}}_{e} \end{array} $$
(16a)
$$\begin{array}{*{20}l} &{\eta_{k}}({\mathbf{Q}_{0}},{\mathbf{Q}_{c}},{\mathbf{Q}_{a}}) - {\tilde{\phi}_{k}}({\mathbf{Q}_{c}},{\mathbf{Q}_{a}}) \ge {\tau_{ms}}, \forall k \in {\mathcal{K}} \end{array} $$
(16b)
$$\begin{array}{*{20}l} &\text{Tr}({\mathbf{Q}_{0}} + {\mathbf{Q}_{c}}+{\mathbf{Q}_{a}}) \le P, \end{array} $$
(16c)
$$\begin{array}{*{20}l} &{\mathbf{Q}_{0}} \succeq {\boldsymbol{0}}, {\mathbf{Q}_{a}} \succeq {\boldsymbol{0}}, {\mathbf{Q}_{c}} \succeq {\boldsymbol{0}}. \end{array} $$
(16d)
where \({\bar {R}}(\tau _{ms})\) is the optimal objective value of (16), serving as an approximation to \(R(\tau_{ms})\). According to the relationship between a concave function and its Taylor series expansion, it is immediate that
$$ \begin{array}{ll} &{\varphi_{1}}({\mathbf{Q}_{a}}) \le {\tilde{\varphi}_{1}}({\mathbf{Q}_{a}}), \forall {\mathbf{Q}_{a}} \succeq {\boldsymbol{0}},\\ &{\phi_{k}}({\mathbf{Q}_{c}},{\mathbf{Q}_{a}}) \le {\tilde \phi_{k}}({\mathbf{Q}_{c}},{\mathbf{Q}_{a}}), \forall {\mathbf{Q}_{a}} \succeq {\boldsymbol{0}}, {\mathbf{Q}_{c}} \succeq {\boldsymbol{0}}. \end{array} $$
(17)

As a consequence, any feasible solution to (16) should also be feasible to (11), and \({\bar {R}}(\tau _{ms}) \le {R}(\tau _{ms})\) must hold.

The approximated problem (16) is convex with regard to (w.r.t.) \(({\mathbf{Q}_{0}},{\mathbf{Q}_{c}},{\mathbf{Q}_{a}})\), and hence \(({\mathbf{Q}_{0}},{\mathbf{Q}_{c}},{\mathbf{Q}_{a}})\) can be obtained iteratively by solving problem (16) with an off-the-shelf interior-point solver, e.g., via CVX. We summarize the proposed iterative algorithm for solving (11) in Algorithm 1. To trace out the secrecy rate region, we traverse \(\tau_{ms}\) over the interval \([0,\tau_{\max}]\) and store the corresponding objective value of (16).

Remark 3

In Algorithm 1, the initialization of \(({\tilde {\mathbf {Q}}_{c,0}},{\tilde {\mathbf {Q}}_{a,0}})\) strongly influences the total number of iterations. Let us define \(\left ({\mathbf {Q}^{i}_{c}},{\mathbf {Q}^{i}_{a}}\right)\) as the output solution in the ith traversal of \(\tau_{ms}\). The following "warmstart operation" can be adopted to initialize \(({\tilde {\mathbf {Q}}_{c,0}},{\tilde {\mathbf {Q}}_{a,0}})\) for a faster convergence rate:

Warmstart operation: We start the traversal of τ ms from τ ms =τ max. In the first traversal, \({\tilde {\mathbf {Q}}_{c,0}}\) and \({\tilde {\mathbf {Q}}_{a,0}}\) are both initialized as 0. In the ith (i>1) traversal, \(({\tilde {\mathbf {Q}}_{c,0}},{\tilde {\mathbf {Q}}_{a,0}})\) is initialized as the solution output by Algorithm 1 in the (i−1)th traversal.

3.3 Convergence analysis

As one can see, a basic merit of the DC approach lies in its tractability, which caters to numerical optimization via parser-solvers. As an additional merit, the proposed DC approach enjoys a theoretically provable convergence guarantee, as demonstrated in the following proposition.

Proposition 2

Every limit point of \(\left ({{\mathbf {Q}_{0}^{*}},{\mathbf {Q}_{c}^{*}},{\mathbf {Q}_{a}^{*}}} \right)\) generated by Algorithm 1 is a stationary point of problem (7).

Proof

The proof is a direct application of ([29], Theorem 10) and is thus omitted for brevity. □

4 An AO-based approach to the SRRM problem

In this section, we develop the other scalarization method, referred to as the weighted-sum method, to solve (4). The resulting problem is essentially a WSRM problem, which can be solved via an AO-based approach. We should point out that the application of AO to the SRM problem has appeared in existing works, e.g., [24]. Nonetheless, the AO algorithm used in this section is a nontrivial extension of that in [24]: the objective function in [24] contains only a single secrecy rate term, whereas in our scenario an extra multicast rate term is incorporated, which raises new issues, such as the convergence proof, that must be tackled.

4.1 Scalarization

The basic idea of the weighted-sum method is to introduce a so-called weight vector [38] lying in the dual cone \(K^{*}={\mathbb {R}}_ +^{2}\) and then to transform the primal vector optimization problem into a scalar optimization problem. By varying this vector, we can obtain different Pareto optimal solutions of (4).

To put this into context, the Pareto boundary of (3) can be characterized by the solution of
$$ \begin{array}{ll} &\mathop{\max}\limits_{{\mathbf{Q}_{0}},{\mathbf{Q}_{c}},{\mathbf{Q}_{a}},R_{0},R_{c}} R_{0} + \lambda_{c} R_{c}\\ \text{s.t.}\quad &{R_{0}} \le \mathop {\min }\limits_{k \in {\mathcal{K}}} C_{m,k}({\mathbf{Q}_{0}},{\mathbf{Q}_{c}},{\mathbf{Q}_{a}})\\ &{R_{c}} \le C_{b}({\mathbf{Q}_{c}},{\mathbf{Q}_{a}}) - \mathop {\max}\limits_{k \in {{\mathcal{K}}_{e}}} C_{e,k}({\mathbf{Q}_{c}},{\mathbf{Q}_{a}})\\ &\text{(4a)--(4b) satisfied}, \end{array} $$
(18)
in which \(\lambda_{c} \in [0,+\infty)\) and \(\boldsymbol{\lambda}=[1,\lambda_{c}]^{T}\) constitute our weight vector. In general, the optimal \((R_{0},R_{c})\) to (18) is the point where a straight line with slope \(-1/\lambda_{c}\) is tangent to the Pareto boundary. Before proceeding, let us first point out some special cases of problem (18).
  1. When \(\boldsymbol{\lambda}=[1,1]^{T}\), the optimal \((R_{0},R_{c})\) turns out to be the so-called utilitarian point, also referred to as the "sum-rate" point in communications.

  2. The single-service points are the two points where \(R_{0}=0\) and where \(R_{c}=0\), respectively. When \(R_{0}=0\), problem (18) degenerates into a conventional AN-aided SRM problem in the MIMO wiretap channel. When \(R_{c}=0\), the maximum \(R_{0}\) can be derived by solving the same convex optimization problem as (6).

4.2 AO iterative algorithm

We are now in a position to derive tractable approaches to the WSRM problem (18). First, one can notice that, by eliminating the slack variables \(R_{0}\) and \(R_{c}\), problem (18) is equivalent to the following optimization problem.
$$\begin{array}{*{20}l} R(\lambda_{c})=&\mathop{\max}\limits_{{\mathbf{Q}_{0}},{\mathbf{Q}_{c}},{\mathbf{Q}_{a}}} \lambda_{c} (C_{b} - \mathop {\max}\limits_{k \in {{\mathcal{K}}_{e}}} C_{e,k}) + \mathop {\min }\limits_{k \in {\mathcal{K}}} C_{m,k}\\ \text{s.t.}\quad &\text{Tr}({\mathbf{Q}_{0}} + {\mathbf{Q}_{c}} + \mathbf{Q}_{a}) \le P, \end{array} $$
(19a)
$$\begin{array}{*{20}l} &{\mathbf{Q}_{0}} \succeq {\boldsymbol{0}}, {\mathbf{Q}_{c}} \succeq {\boldsymbol{0}}, \mathbf{Q}_{a} \succeq {\boldsymbol{0}}.\vspace*{8pt} \end{array} $$
(19b)

The main obstacle to solving (19) lies in the non-smoothness of its objective function, which precludes many derivative-based iterative algorithms. Hence, we next develop a derivative-free AO iterative algorithm to solve (19). To this end, we first transform the WSRM problem (19) into a form amenable to AO.

Lemma 3

(Li et al. [24]) Let \(\mathbf {E} \in {{\mathbb {C}}^{N \times N}}\) be any matrix satisfying \(\mathbf{E} \succ \mathbf{0}\). Define the function \(f(\mathbf{S})=-{\text{Tr}}(\mathbf{S}\mathbf{E})+ \log\left|\mathbf{S}\right|+N\). Then,
$$ \log \left| {{\mathbf{E}^{- 1}}} \right| = \mathop {\max }\limits_{\mathbf{S} \in {{\mathbb{C}}^{N \times N}},\mathbf{S} \succeq 0} f(\mathbf{S}), $$
(20)

and the optimal solution to the right-hand side (RHS) of (20) is \(\mathbf{S}^{*} = \mathbf{E}^{-1}\).
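Lemma 3 is easy to verify numerically. The sketch below (a randomly generated Hermitian positive definite \(\mathbf{E}\); purely illustrative) confirms that \(f(\mathbf{S})\) attains \(\log|\mathbf{E}^{-1}|\) at \(\mathbf{S}^{*} = \mathbf{E}^{-1}\) and that no other PSD candidate does better.

```python
import numpy as np

rng = np.random.default_rng(1)

N = 4
A = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
E = A @ A.conj().T + N * np.eye(N)        # random Hermitian E > 0

def f(S):
    # f(S) = -Tr(S E) + log|S| + N, cf. Lemma 3
    return -np.trace(S @ E).real + np.linalg.slogdet(S)[1] + N

S_opt = np.linalg.inv(E)                  # claimed maximizer S* = E^{-1}
val_opt = f(S_opt)

# f(S*) equals log|E^{-1}| ...
assert np.isclose(val_opt, np.linalg.slogdet(S_opt)[1])

# ... and no random PSD candidate exceeds it
for _ in range(200):
    B = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
    S = B @ B.conj().T + 1e-3 * np.eye(N)
    assert f(S) <= val_opt + 1e-9
```

The inequality \(f(\mathbf{S}) \le \log|\mathbf{E}^{-1}|\) reduces to \(\sum_i (\log\lambda_i - \lambda_i + 1) \le 0\) over the (positive) eigenvalues \(\lambda_i\) of \(\mathbf{S}\mathbf{E}\), which follows from \(\log x \le x - 1\).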

Applying Lemma 3 to C b , C e,k , and C m,k , one can obtain
$$ \begin{array}{ll} {C_{b}}&=\mathop {\max }\limits_{{\mathbf{S}_{1}} \succeq {\boldsymbol{0}}} {\varphi_{b}}({\mathbf{Q}_{c}},{\mathbf{Q}_{a}},\mathbf{S}_{1}),\\ C_{e,k}&=\mathop {\min }\limits_{{\mathbf{S}_{k}} \succeq {\boldsymbol{0}}} {\varphi_{e,k}}({\mathbf{Q}_{c}},{\mathbf{Q}_{a}},\mathbf{S}_{k}), \forall k \in {\mathcal{K}}_{e},\\ C_{m,k}&=\mathop {\max }\limits_{{\mathbf{U}_{k}} \succeq {\boldsymbol{0}}} {\varphi_{m,k}}({\mathbf{Q}_{0}}, {\mathbf{Q}_{c}},{\mathbf{Q}_{a}},\mathbf{U}_{k}), \forall k \in {\mathcal{K}}, \end{array} $$
(21)
where we define
$$ {{}\begin{aligned} {\varphi_{b}}({\mathbf{Q}_{c}},{\mathbf{Q}_{a}},\mathbf{S}_{1})&= - {\text{Tr}}({\mathbf{S}_{1}}\left(\mathbf{I} + {\mathbf{H}_{1}}{\mathbf{Q}_{a}}\mathbf{H}_{1}^{H}\right)+ \log\left| {{\mathbf{S}_{1}}} \right|+N_{r,1}\\ &\quad+ \log \left| {\mathbf{I} + {\mathbf{H}_{1}}({\mathbf{Q}_{a}} + {\mathbf{Q}_{c}})\mathbf{H}_{1}^{H}} \right|,\\ {\varphi_{e,k}}({\mathbf{Q}_{c}},{\mathbf{Q}_{a}},\mathbf{S}_{k}) &= - \log \left| {{\mathbf{S}_{k}}} \right| - \log \left| {\mathbf{I}+{\mathbf{H}_{k}}{\mathbf{Q}_{a}}\mathbf{H}_{k}^{H}} \right|-N_{r,k}\\ &\quad+{\text{Tr}}\left({\mathbf{S}_{k}}\left(\mathbf{I} + {\mathbf{H}_{k}}({\mathbf{Q}_{a}} + {\mathbf{Q}_{c}})\mathbf{H}_{k}^{H}\right)\right),\\ {\varphi_{m,k}}({\mathbf{Q}_{0}}, {\mathbf{Q}_{c}},{\mathbf{Q}_{a}},\mathbf{U}_{k})&=- {\text{Tr}}({\mathbf{U}_{k}}\left(\mathbf{I} + {\mathbf{H}_{k}}({\mathbf{Q}_{c}}+{\mathbf{Q}_{a}})\mathbf{H}_{k}^{H}\right) + \log\left| {{\mathbf{U}_{k}}} \right|\\ &\quad+ \log \left| {\mathbf{I} + {\mathbf{H}_{k}}({\mathbf{Q}_{0}} + {\mathbf{Q}_{c}} + {\mathbf{Q}_{a}})\mathbf{H}_{k}^{H}} \right|+ N_{r,k}, \end{aligned}} $$
(22)

in which \(\{\mathbf {S}_{k}\}_{k \in {\mathcal {K}}}\) and \(\{\mathbf {U}_{k}\}_{k \in {\mathcal {K}}}\) are slack variables satisfying \(\mathbf{S}_{k} \succeq \mathbf{0}\) and \(\mathbf{U}_{k} \succeq \mathbf{0}\) for all \(k \in {\mathcal {K}}\).
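As a sanity check on (21) and (22), the following NumPy sketch (random channel and covariances, all assumed for illustration) verifies that the minimizing slack variable for \(\varphi_{e,k}\) recovers \(C_{e,k}\) written directly as a difference of log-dets.

```python
import numpy as np

rng = np.random.default_rng(2)

Nt, Nr = 4, 3
H = rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))

def rand_psd(n):
    A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return A @ A.conj().T

def ld(A):
    return np.linalg.slogdet(A)[1]

Qc, Qa = rand_psd(Nt), rand_psd(Nt)
I = np.eye(Nr)

# C_{e,k} written directly as a difference of log-dets
C_e = ld(I + H @ (Qa + Qc) @ H.conj().T) - ld(I + H @ Qa @ H.conj().T)

def phi_e(S):
    # varphi_{e,k} from (22)
    return (-ld(S) - ld(I + H @ Qa @ H.conj().T) - Nr
            + np.trace(S @ (I + H @ (Qa + Qc) @ H.conj().T)).real)

# the minimizing slack variable S* = (I + H (Qa + Qc) H^H)^{-1} attains C_{e,k} ...
S_star = np.linalg.inv(I + H @ (Qa + Qc) @ H.conj().T)
assert np.isclose(phi_e(S_star), C_e)

# ... and any other PSD S can only increase varphi_{e,k}
for _ in range(100):
    S = rand_psd(Nr) + 1e-3 * np.eye(Nr)
    assert phi_e(S) >= C_e - 1e-9
```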

Following the matrix manipulations in [24], we have
$$ \begin{array}{ll} &\mathop {\max }\limits_{k \in {\mathcal{K}}_{e}} \mathop {\min }\limits_{{\mathbf{S}_{k}}} {\varphi_{e,k}}({\mathbf{Q}_{c}},{\mathbf{Q}_{a}},\mathbf{S}_{k})\\ =&\mathop {\min }\limits_{\{ {\mathbf{S}_{k}}\}_{k \in {\mathcal{K}}_{e}}} \mathop {\max }\limits_{k \in {\mathcal{K}}_{e}} {\varphi_{e,k}}({\mathbf{Q}_{c}},{\mathbf{Q}_{a}},\mathbf{S}_{k}), \end{array} $$
(23)
and
$$ \begin{array}{ll} &\mathop {\min }\limits_{k \in {\mathcal{K}}} \mathop {\max }\limits_{{\mathbf{U}_{k}}} {\varphi_{m,k}}({\mathbf{Q}_{0}},{\mathbf{Q}_{c}},{\mathbf{Q}_{a}},\mathbf{U}_{k})\\ =& \mathop {\max }\limits_{\{ {\mathbf{U}_{k}}\}_{k \in {\mathcal{K}}}} \mathop {\min }\limits_{k \in {\mathcal{K}}} {\varphi_{m,k}}({\mathbf{Q}_{0}},{\mathbf{Q}_{c}},{\mathbf{Q}_{a}},\mathbf{U}_{k}). \end{array} $$
(24)
Substituting (21) into (19) and making use of (23) and (24), one can check that problem (19) is equivalent to the following optimization problem.
$$\begin{array}{*{20}l} R(\lambda_{c})=&\mathop {\max }\limits_{{\mathbf{Q}_{0}},{\mathbf{Q}_{c}},{\mathbf{Q}_{a}},\atop \left\{ \mathbf{S}_{k},\mathbf{U}_{k} \right\}_{k \in {\mathcal{K}}}}f({\mathbf{Q}_{0}},{\mathbf{Q}_{c}},{\mathbf{Q}_{a}},\left\{ \mathbf{S}_{k},\mathbf{U}_{k} \right\}_{k \in {\mathcal{K}}})\\ \text{s.t.}\quad&\text{Tr}({\mathbf{Q}_{0}} + {\mathbf{Q}_{c}} + \mathbf{Q}_{a}) \le P, \end{array} $$
(25a)
$$\begin{array}{*{20}l} &{\mathbf{Q}_{0}} \succeq {\boldsymbol{0}}, {\mathbf{Q}_{c}} \succeq {\boldsymbol{0}}, \mathbf{Q}_{a} \succeq {\boldsymbol{0}}, \end{array} $$
(25b)
in which we define
$$ \begin{array}{ll} f\!\!\!\!\!&({\mathbf{Q}_{0}},{\mathbf{Q}_{c}},{\mathbf{Q}_{a}},\left\{ \mathbf{S}_{k},\mathbf{U}_{k} \right\}_{k \in {\mathcal{K}}}) = \\ &\;\lambda_{c} [{\varphi_{b}}({\mathbf{Q}_{c}},{\mathbf{Q}_{a}},\mathbf{S}_{1})- \mathop {\max }\limits_{k \in {\mathcal{K}}_{e}}{\varphi_{e,k}}({\mathbf{Q}_{c}},{\mathbf{Q}_{a}},\mathbf{S}_{k})]\\ &+ \mathop {\min }\limits_{k \in {\mathcal{K}}}{\varphi_{m,k}}({\mathbf{Q}_{0}},{\mathbf{Q}_{c}},{\mathbf{Q}_{a}},\mathbf{U}_{k}). \end{array} $$
(26)
The upshot of this reformulation is that problem (19) becomes primal decomposable. Specifically, problem (25) is convex w.r.t. either \(({\mathbf{Q}_{0}},{\mathbf{Q}_{c}},{\mathbf{Q}_{a}})\) or \((\left \{ \mathbf {S}_{k} \right \}_{k \in {\mathcal {K}}},\left \{ \mathbf {U}_{k} \right \}_{k \in {\mathcal {K}}})\). Hence, AO is a natural choice for solving (25). With \(({\mathbf{Q}_{0}},{\mathbf{Q}_{c}},{\mathbf{Q}_{a}})\) fixed, the optimal \((\left \{ \mathbf {S}_{k} \right \}_{k \in {\mathcal {K}}},\left \{ \mathbf {U}_{k} \right \}_{k \in {\mathcal {K}}})\) admits an analytical expression, according to Lemma 3, given by
$$\begin{array}{*{20}l} &\mathbf{S}_{1}^ * = {\left(\mathbf{I} + {\mathbf{H}_{1}}{\mathbf{Q}_{a}}\mathbf{H}_{1}^{H}\right)^{- 1}}, \end{array} $$
(27a)
$$\begin{array}{*{20}l} &\mathbf{S}_{k}^ * = {\left(\mathbf{I} + {\mathbf{H}_{k}}({\mathbf{Q}_{a}} + {\mathbf{Q}_{c}})\mathbf{H}_{k}^{H}\right)^{- 1}}, \forall k \in {\mathcal{K}}_{e}, \end{array} $$
(27b)
$$\begin{array}{*{20}l} &\mathbf{U}_{k}^ * = {\left(\mathbf{I} + {\mathbf{H}_{k}}({\mathbf{Q}_{a}} + {\mathbf{Q}_{c}})\mathbf{H}_{k}^{H}\right)^{- 1}}, \forall k \in {\mathcal{K}}, \end{array} $$
(27c)
in which we utilize the fact that \(\left \{ \mathbf {S}_{k} \right \}_{k \in {\mathcal {K}}}\) and \(\left \{ \mathbf {U}_{k} \right \}_{k \in {\mathcal {K}}}\) are decoupled among \(\varphi_{b}\), \(\varphi_{e,k}\), and \(\varphi_{m,k}\). Conversely, with \((\left \{ \mathbf {S}_{k} \right \}_{k \in {\mathcal {K}}},\left \{ \mathbf {U}_{k} \right \}_{k \in {\mathcal {K}}})\) fixed, the optimal \(({\mathbf{Q}_{0}},{\mathbf{Q}_{c}},{\mathbf{Q}_{a}})\) can be obtained by solving the following convex optimization problem:
$$ \begin{array}{ll} (\mathbf{Q}_{0}^{*},\mathbf{Q}_{c}^ *&,\mathbf{Q}_{a}^ *) = \\ &\arg \mathop {\max }\limits_{({\mathbf{Q}_{0}},{\mathbf{Q}_{c}},{\mathbf{Q}_{a}}) \in {\mathcal{F}}} f({\mathbf{Q}_{0}},{\mathbf{Q}_{c}},{\mathbf{Q}_{a}},\left\{ \mathbf{S}_{k},\mathbf{U}_{k} \right\}_{k \in {\mathcal{K}}}), \end{array} $$
(28)

where \({\mathcal {F}}\) denotes the feasible set of (19), which is convex.

The whole AO process for solving (25) is given in Algorithm 2. In line 6 of Algorithm 2, the convex subproblem can be solved via CVX. Following a warmstart operation similar to that introduced in Remark 3, the number of iterations of Algorithm 2 can be significantly reduced.

4.3 Convergence analysis

It can be verified that the AO algorithm produces a nondecreasing sequence of objective values of (25). Besides, the following convergence result is guaranteed.

Proposition 3

Suppose that \((\mathbf {Q}_{0}^ n,\mathbf {Q}_{c}^ n,\mathbf {Q}_{a}^ n)\) is the solution generated by the AO algorithm in the nth iteration. Then, the sequence \(\{(\mathbf {Q}_{0}^ n,\mathbf {Q}_{c}^ n,\mathbf {Q}_{a}^ n)\}_{n}\) converges to a stationary point (i.e., a Karush-Kuhn-Tucker (KKT) point) of the primal WSRM problem (19).

Proof

The proof can be found in Appendix B. □

5 Comparison of the proposed methods

In the previous sections, we presented two tractable convex formulations of the SRRM problem (4). This naturally raises the question of their relative merits. In the following subsections, we address this question by comparing their performance and computational complexity in solving (4).

5.1 Performance analysis

As introduced in the preceding sections, the QoMS-based scalarization can yield a complete set of boundary points of \(R_{s}({\left \{ {\mathbf {H}_{k}} \right \}_{k \in {\mathcal {K}}}},P)\), which contains all Pareto optimal points of (4). The resulting scalar problem (7) aims to maximize the secrecy rate while maintaining the QoMS above a given threshold. Predictably, the use of AN should be effective only in the low-QoMS region, since AN exerts a negative effect on the multicasting performance. To meet a high QoMS demand, AN has to be suppressed in the high-QoMS region. This QoMS-constrained SRM is a generalization of the traditional SRM in physical-layer security and provides the transmitter with insight into how to trade off security performance against multicasting performance.

As for the weighted-sum scalarization method, a necessary condition for it to find all Pareto optimal points is that the secrecy rate region be convex. Besides, its performance also depends on the precision of \(\lambda_{c}\). The traversal of \(\lambda_{c}\) should span from zero to an extremely large number with an appropriate step size, so that each Pareto optimal point can be detected. Nonetheless, the weighted-sum problem structure has an interesting pricing interpretation from the field of economics. To elaborate a little further, let us define \(p_{0}\) and \(p_{c}\) as the unit prices for the multicast rate and the secrecy rate, respectively, charged by the service provider. To maximize its revenue, the service provider should solve the WSRM problem in (18) with \(\lambda_{c}=p_{c}/p_{0}\). The use of AN can also be explained in this context: when \(p_{0} \gg p_{c}\), the revenue from the multicast transmission dominates the objective function of (18), and thus eliminating AN helps increase the overall revenue.

In summary, the two scalarization methods suit different application scenarios and provide different insights. Nonetheless, the QoMS-based scalarization can yield all Pareto optimal points, while the weighted-sum scalarization may yield only some of them, depending on the shape of the secrecy rate region.

Remark 4

Besides the QoMS-based and weighted-sum scalarization methods, other scalarization methods have been proposed in the literature to find the complete Pareto set of a biobjective optimization problem, e.g., the weighted Tchebycheff method [39]. However, to implement this method, one has to first obtain the single-service point of the confidential message (cf. (29)) and then solve a highly nonconvex max-min optimization problem.
$$\begin{array}{*{20}l} {R_{c}^{\max}} =& \mathop {\max }\limits_{{\mathbf{Q}_{c}} \succeq {\boldsymbol{0}},{\text{Tr}}({\mathbf{Q}_{c}}) \le P} \log \left| {\mathbf{I} + {\mathbf{H}_{1}}{\mathbf{Q}_{c}}\mathbf{H}_{1}^{H}} \right| \\ &- \mathop {\max }\limits_{k \in {\mathcal{K}}} \log \left| {\mathbf{I} + {\mathbf{H}_{k}}{\mathbf{Q}_{c}}\mathbf{H}_{k}^{H}} \right|. \end{array} $$
(29)

Unfortunately, problem (29) is nonconvex, and so the optimal solution to (29) may not be obtained, which invalidates the use of the weighted Tchebycheff method.

5.2 Complexity analysis

The major computational complexity of the two scalarization methods comes from solving problems (16) and (28). While both problems are convex, neither is in standard semidefinite programming (SDP) form, owing to the logarithm functions therein. To solve them, a successive approximation method embedded with a primal-dual interior-point method (IPM) is employed, e.g., by CVX. The arithmetic complexity of a generic primal-dual IPM for solving a standard SDP is \({\mathcal {O}}(\max {\{ m,n\}^{4}}{n^{1/2}}\log (1/\varepsilon))\) [40], in which m, n, and ε denote the number of linear constraints, the dimension of the positive semidefinite cone, and the solution accuracy, respectively. Therefore, the complexity of solving (16) or (28) is \({\mathcal {O}}({L_{SA}}\max {\{ 2K,{N_{t}}\}^{4}}N_{t}^{1/2}\log (1/\varepsilon))\), where \(L_{SA}\) denotes the number of successive approximations. Since the relation between \(L_{SA}\) and \(N_{t}\) is unknown, this complexity expression is rather loose.

However, by utilizing the following approximation [41]:
$$ \log \left| {\mathbf{I} + {\mathbf{HQ}}{\mathbf{H}^{H}}} \right| = {\text{Tr}}\left({\mathbf{HQ}}{\mathbf{H}^{H}}\right) + {\mathcal{O}}\left(\left\| {{\mathbf{HQ}}{\mathbf{H}^{H}}} \right\|^{2}\right), $$
(30)

all logarithm terms in problems (16) and (28) can be approximated by trace functions at low transmit power. This approximation further converts the convex problems (16) and (28) into SDPs, which makes it possible to obtain a more accurate big-O expression of the computational complexity at low transmit power.
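The accuracy of the trace approximation at low transmit power can be illustrated numerically. The sketch below (natural logarithms; a randomly drawn channel and a unit-trace covariance scaled by the power P, all assumptions of this illustration) shows the relative error shrinking as P decreases.

```python
import numpy as np

rng = np.random.default_rng(3)

Nt, Nr = 4, 3
H = rng.standard_normal((Nr, Nt)) + 1j * rng.standard_normal((Nr, Nt))
B = rng.standard_normal((Nt, Nt)) + 1j * rng.standard_normal((Nt, Nt))
Q1 = B @ B.conj().T
Q1 /= np.trace(Q1).real                  # unit-trace covariance

def exact(P):
    # log|I + H (P Q1) H^H|, natural log
    X = H @ (P * Q1) @ H.conj().T
    return np.linalg.slogdet(np.eye(Nr) + X)[1]

def approx(P):
    # trace approximation (30)
    return np.trace(H @ (P * Q1) @ H.conj().T).real

# the relative error of the trace approximation vanishes as P -> 0
errs = [(approx(P) - exact(P)) / exact(P) for P in (1.0, 0.1, 0.01)]
assert errs[0] > errs[1] > errs[2] > 0
```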

Specifically, consider (16), which has three linear matrix inequality (LMI) constraints of size N t , and 2K LMI constraints of size 1 after introducing the approximation (30). Moreover, for (16), the number of decision variables is on the order \(n_{1}=3N_{t}^{2}+1\). Then, when a generic path-following IPM is used to solve problem (16), the total arithmetic computation cost is on the order of [42]
$$ \begin{array}{ll} &{T_{1}} = \sqrt {2K + 3{N_{t}}}\phi(n_{1}),\\ &\phi(n_{1})={{n_{1}}\left(2K + 3N_{t}^{3}\right) + n_{1}^{2}\left(2K + 3N_{t}^{2}\right) + n_{1}^{3}} \end{array} $$
(31)

with \(n_{1}={\mathcal {O}}\left (3N_{t}^{2}+1\right)\).

On the other hand, for solving (28), we need to introduce two additional slack variables to move the maximum and minimum terms in the objective function of (28) into the constraints. Hence, the number of decision variables is on the order of \(n_{2}=3N_{t}^{2}+2\), and (28) also has three LMI constraints of size \(N_{t}\) and 2K LMI constraints of size 1. The total arithmetic computation cost for solving (28) is on the order of
$$ \begin{array}{ll} {T_{2}} &= \sqrt {2K + 3{N_{t}}}\phi(n_{2}),\\ \phi(n_{2})&={{n_{2}}\left(2K + 3N_{t}^{3}\right) + n_{2}^{2}\left(2K + 3N_{t}^{2}\right) + n_{2}^{3}} \end{array} $$
(32)

with \(n_{2}={\mathcal {O}}\left (3N_{t}^{2}+2\right)\).

Comparing (31) and (32), one can note that the total arithmetic computation costs of solving the two problems are comparable, with \(T_{2}\) slightly greater than \(T_{1}\) since \(n_{2}>n_{1}\). This observation implies that the QoMS-based scalarization is more time-efficient at low transmit power, which is also consistent with our simulation results in Section 6.
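The comparison is easy to reproduce by evaluating (31) and (32) directly. The sketch below (assuming the simulation parameters K = 4 and \(N_{t} = 5\) purely for illustration) confirms that \(T_{2}\) exceeds \(T_{1}\) by only a few percent.

```python
import math

def ipm_cost(K, Nt, n):
    # sqrt(2K + 3Nt) * phi(n), with phi(n) taken from (31)/(32)
    phi = (n * (2 * K + 3 * Nt ** 3)
           + n ** 2 * (2 * K + 3 * Nt ** 2)
           + n ** 3)
    return math.sqrt(2 * K + 3 * Nt) * phi

K, Nt = 4, 5
T1 = ipm_cost(K, Nt, 3 * Nt ** 2 + 1)   # QoMS-based scalarization, n1 = 3*Nt^2 + 1
T2 = ipm_cost(K, Nt, 3 * Nt ** 2 + 2)   # weighted-sum scalarization, n2 = 3*Nt^2 + 2

assert T1 < T2                           # T2 exceeds T1 ...
assert (T2 - T1) / T1 < 0.05             # ... but only marginally
```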

6 Numerical results

In this section, we provide numerical results to illustrate the secrecy rate regions derived from the two proposed methods, compared with two existing strategies. The first is no-AN transmission, i.e., prefixing \(\mathbf{Q}_{a}\) as 0 in problem (4); its achievable secrecy rate region can likewise be derived via the DC and AO algorithms. The second is traditional service integration using time division multiple access (TDMA), which assigns the confidential message and the multicast message to two orthogonal time slots. Its maximum secrecy rate and multicast rate can be obtained by seeking the single-service points of \(R_{s}({\left \{ {\mathbf {H}_{k}} \right \}_{k \in {\mathcal {K}}}},P)\). For fairness of comparison, the secrecy rate and multicast rate achieved by this TDMA-based strategy are halved [17].

In the first subsection, the convergence results of both algorithms are presented. The second subsection gives the comparison between these two algorithms in terms of achievable performance and computational complexity.

6.1 Convergence results

In this subsection, we assume N t =5, N r,k =3 for all \(k \in \mathcal {K}\), and K=4. The channel matrices are randomly generated from an i.i.d. complex Gaussian distribution with zero mean and unit variance. According to Proposition 1, since N t >N r,1, the optimal solution to (7) is attained when the constraint (7a) holds with equality.

First, we evaluate the convergence of the DC algorithm. In particular, we are concerned with whether the primal constraint (7a) is violated by our approximation. Setting \(\tau_{ms}\) to 2 bps/Hz, Fig. 1 shows the convergence of the multicast rate over the iterations for different transmit powers. \({\tilde {\mathbf {Q}}_{c,0}}\) and \({\tilde {\mathbf {Q}}_{a,0}}\) are both initialized as 0. The algorithm stops iterating when the difference between two successive values of \({\bar {R}}(\tau _{ms})\) is less than or equal to \(10^{-4}\). One can observe that the multicast rate ultimately converges to the predefined multicast rate within a limited number of iterations at all tested transmit powers. This observation indicates the efficacy of the TSE in approximating the multicast rate. We also plot the achieved secrecy rates and the approximated secrecy rates in Fig. 2; the general observation from Fig. 1 applies to Fig. 2 as well.
Fig. 1

DC algorithm: convergence of the multicast rate

Fig. 2

DC algorithm: convergence of the secrecy rate

The convergence results of the AO algorithm are presented in Fig. 3, where we set \(\lambda_{c}=1\) to seek the sum-rate point. \(\mathbf {Q}_{c}^{0}\) and \(\mathbf {Q}_{a}^{0}\) are both initialized as \((P/(2N_{t}))\mathbf {I}_{N_{t}}\). The algorithm stops iterating when the difference between two successive values of \({\bar {R}}(\lambda _{c})\) is less than or equal to \(10^{-4}\). As one can observe from Fig. 3, the achieved weighted sum rate increases monotonically and finally converges within a limited number of iterations at all tested transmit powers. In addition, we find that the AN covariance matrix \(\mathbf{Q}_{a}\) output by AO is no longer diagonal. This implies that the associated AN design is spatially selective rather than isotropic, which blocks the eavesdroppers much more effectively. One can also note that the increase in the weighted sum rate is particularly remarkable when the transmit power is high. After all, a higher transmit power means that the transmitter can allocate more power to the confidential message transmission without compromising the multicast performance. The extra power allocated to the confidential message can be used to generate more interference at the eavesdroppers and/or to strengthen the signal reception at the intended receiver, whereby the more remarkable improvement is observed.
Fig. 3

AO algorithm: convergence of the weighted sum rate

6.2 Performance comparison

In this subsection, we focus on two system configurations. The first is the same as in the last subsection. In addition, we consider a second configuration: \(N_{t}=N_{r,1}=4\), \(N_{r,k}=5\) for all \(k \in {\mathcal {K}}_{e}\), and K=4. Under the second configuration, neither Condition 1 nor Condition 2 is satisfied.

First, we show the secrecy rate regions achieved under the first system configuration. Overall results are shown in Fig. 4, with P set to 10 and 20 dB, respectively. Figure 4 reveals two general trends. First, our AN-aided scheme achieves a larger secrecy rate region than the no-AN one; the striking gap indicates the efficacy of AN in expanding the secrecy rate region. However, the gap between the two strategies shrinks dramatically as \(R_{0}\) increases, which agrees with our conjecture in Section 5.1. Second, our proposed strategies, though attaining only a lower bound on \(R_{s}({\left \{ {\mathbf {H}_{k}} \right \}_{k \in {\mathcal {K}}}},P)\), are sufficient to achieve significantly larger secrecy rate regions than the TDMA-based one. This also implies that PHY-SI is an effective approach to improving spectral efficiency. Finally, comparing the achievable performance of the two proposed scalarization methods, one can notice that the performance gap between them is negligible in the tested system configuration, especially when P=10 dB.
Fig. 4

Secrecy rate regions with and without AN (Config 1)

Figure 5 plots the secrecy rate regions achieved under the second system configuration. Still, the secrecy rate region with AN is larger than the one without AN and the one achieved by TDMA. Besides, we can observe two very interesting phenomena. First, when we increase the transmit power from 10 to 20 dB, the secrecy rate regions expand mainly in the horizontal direction. That is, under the second configuration, the increased transmit power mainly benefits the multicast message transmission rather than the confidential message transmission. This can be interpreted via the transmit degrees of freedom (d.o.f.). The total d.o.f. of the unauthorized receivers is \({\sum \nolimits }_{k=2}^{K}{N_{r,k}}=15\), much higher than the transmit d.o.f. \(N_{t}=4\). The high d.o.f. at the unauthorized receivers creates a d.o.f. bottleneck at the transmitter and thus compromises the overall secrecy performance. Second, one can notice that when P=20 dB,
Fig. 5

Secrecy rate regions with and without AN (Config 2)

1) There exist some boundary points residing on a line, marked by the red dashed lines, that are not Pareto optimal to (4). Apparently, these points cannot be detected by the weighted-sum scalarization but can be easily detected by the QoMS-based scalarization.

2) The QoMS-based scalarization detects more Pareto optimal points than the weighted-sum scalarization. This is attributed to the insensitivity of the weighted-sum scalarization to the points residing on an approximately horizontal boundary. To detect these boundary points, one has to precisely adjust the value of λ c to get different tangent points.

6.3 Complexity comparison

Finally, we tabulate in Table 1 the average running times of DC and AO for obtaining one boundary point, under the same setting as Fig. 4. As seen, the DC algorithm runs faster than the AO algorithm when the transmit power is low, which is consistent with our preceding analysis in Section 5.2. At high transmit power, however, the running time of the DC algorithm grows rapidly with P, and the DC algorithm gradually spends more time converging than the AO algorithm. This observation indicates that the two proposed scalarization methods might exhibit a performance-complexity tradeoff at high transmit power.
Table 1

Averaged running times (in seconds)

Method    0 dB     4 dB     8 dB     12 dB    16 dB    20 dB
DC        6.07     8.89     12.91    17.35    21.18    30.84
AO        7.57     11.58    11.04    12.61    13.61    17.11

7 Conclusions

In this paper, we considered the AN-aided transmit design for the multi-user MIMO broadcast channel with a confidential service and a multicast service. The transmit covariance matrices of the confidential message, the multicast message, and the AN were designed to simultaneously maximize the achievable secrecy rate and the achievable multicast rate. To deal with this biobjective optimization problem, two sorts of scalarization were introduced to transform the SRRM problem into a scalar optimization problem. In the QoMS-based scalarization, the scalar problem is an SRM problem with a QoMS constraint, while in the weighted-sum scalarization, it is a WSRM problem. DC and AO algorithms were utilized to solve the QoMS-constrained SRM problem and the WSRM problem, respectively; both converge to stationary points of the primal problems. Further, we gave a detailed comparison of the two proposed scalarization methods. The comparison indicated that at low transmit power, the QoMS-based scalarization is superior to the weighted-sum one in terms of both achievable performance and computational complexity, whereas at high transmit power, the two methods exhibit a tradeoff between achievable performance and computational complexity. Numerical results also confirmed the effectiveness of AN in expanding the secrecy rate region.

As a future direction, it would be interesting to investigate robust service integration schemes that combat the possible CSI uncertainties caused by channel aging, and to take into account application-specific requirements of 5G wireless communication systems, e.g., the mobility of terminals and the overhead of CSI acquisition.

8 Endnote

1 In this paper, we assume that only one receiver orders the confidential service within a single time slot. In practice, this assumption is valid under the case where the confidential service is provided to all receivers in a round-robin manner, i.e., the time slots are assigned to each subscriber of the confidential service in equal portions and in circular order.

9 Appendix A: proof of Proposition 1

First, we claim that problem (7) possesses the following property, provided that Condition 1 or Condition 2 is satisfied.

Property 1

The maximum objective value of problem (7), \(R(\tau_{ms})\), is attained only when the equality in (7a) holds.

Proof

The proof of Property 1 can be accomplished by contradiction. Assume that the maximum value of problem (7) is obtained at the solution \(({\hat {\mathbf {Q}}_{0}},{\hat {\mathbf {Q}}_{c}},{\hat {\mathbf {Q}}_{a}})\) and the equality in (7a) does not hold, i.e.,
$$ {\mathop {\min }\limits_{k \in \mathcal{K}} \log \lvert {\mathbf{I} + {{\left({\mathbf{I} + {\mathbf{H}_{k}}({\hat{\mathbf{Q}}_{c}}+{\hat{\mathbf{Q}}_{a}})\mathbf{H}_{k}^{H}}\right)}^{- 1}}{\mathbf{H}_{k}}{\hat{\mathbf{Q}}_{0}}\mathbf{H}_{k}^{H}} \rvert>\tau_{ms}.} $$

Our next step is to construct a new solution \(({\bar {\mathbf {Q}}_{0}},{\bar {\mathbf {Q}}_{c}}, {\bar {\mathbf {Q}}_{a}})\) from \(({\hat {\mathbf {Q}}_{0}},{\hat {\mathbf {Q}}_{c}},{\hat {\mathbf {Q}}_{a}})\), which achieves a larger objective value and satisfies the constraint (7a) with equality. Let us first elaborate upon the construction method under Condition 1.

1) Case for Condition 1: Specifically, we multiply \({\hat {\mathbf {Q}}_{0}}\) by a scaling factor ξ (0<ξ<1), add a positive semidefinite (PSD) matrix \(\mathbf {E}=\rho \mathbf {I} - \rho \mathbf {H}_{1}^{H}{\left ({\mathbf {H}_{1}}\mathbf {H}_{1}^{H}\right)^{- 1}}{\mathbf {H}_{1}}\) to \({\hat {\mathbf {Q}}_{a}}\), and keep \({\hat {\mathbf {Q}}_{c}}\) constant, i.e., \({\bar {\mathbf {Q}}_{0}}=\xi \hat {\mathbf {Q}}_{0}\), \({\bar {\mathbf {Q}}_{a}}={\hat {\mathbf {Q}}_{a}}+\mathbf {E}\) and \({\bar {\mathbf {Q}}_{c}}=\hat {\mathbf {Q}}_{c}\), where the coefficient ρ controls the power of E. Note that E is the orthogonal complement projector of \(\mathbf {H}_{1}^{H}\), and its existence is guaranteed by Condition 1. To keep the total transmit power constant, the coefficient ρ should be chosen to satisfy \((1 - \xi){\text {Tr}}({\hat {\mathbf {Q}}_{0}}) = {\text {Tr}}(\mathbf {E}) = \rho ({N_{t}} - {N_{r,1}})\), that is, \(\rho = \frac {{(1 - \xi){\text {Tr}}({\hat {\mathbf {Q}}_{0}})}}{{{N_{t}} - {N_{r,1}}}}\). To proceed, we need the following lemma.

Lemma 4

(Weingarten et al. [43]) For matrices \(\mathbf{A} \succeq \mathbf{0}\), \(\boldsymbol{\Delta} \succeq \mathbf{0}\), and \(\mathbf{B} \succ \mathbf{0}\), the following inequality holds:
$$ \frac{{\left| {\mathbf{A} + \mathbf{B}} \right|}}{{\left| \mathbf{B} \right|}} \ge \frac{{\left| {\mathbf{A} + \mathbf{B} + \mathbf{\Delta }} \right|}}{{\left| {\mathbf{B} + \mathbf{\Delta }} \right|}}. $$
(33)
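Although the paper provides no code, Lemma 4 is easy to spot-check numerically. The sketch below (our illustration; the matrix dimension and sample count are arbitrary assumptions) samples random Hermitian PSD matrices and verifies inequality (33):

```python
import numpy as np

rng = np.random.default_rng(2)

def rand_psd(n):
    """Random complex Hermitian PSD matrix of size n x n."""
    X = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return X @ X.conj().T

n = 4
for _ in range(200):
    A, Delta = rand_psd(n), rand_psd(n)
    B = rand_psd(n) + np.eye(n)  # keep B strictly positive definite
    lhs = (np.linalg.det(A + B) / np.linalg.det(B)).real
    rhs = (np.linalg.det(A + B + Delta) / np.linalg.det(B + Delta)).real
    assert lhs >= rhs - 1e-6 * abs(lhs)  # |A+B|/|B| >= |A+B+Delta|/|B+Delta|
```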
Then, by applying Lemma 4, one can obtain
$$\begin{array}{*{20}l} C&_{m,k}({\hat{\mathbf{Q}}_{0}},{\hat{\mathbf{Q}}_{c}},{\hat{\mathbf{Q}}_{a}})\\ &=\log \lvert {\mathbf{I} + {{\left({\mathbf{I} + {\mathbf{H}_{k}}({\hat{\mathbf{Q}}_{c}}+{\hat{\mathbf{Q}}_{a}})\mathbf{H}_{k}^{H}} \right)}^{- 1}}{\mathbf{H}_{k}}{\hat{\mathbf{Q}}_{0}}\mathbf{H}_{k}^{H}} \rvert \\ &> \log \lvert {\mathbf{I} + {{\left({\mathbf{I} + {\mathbf{H}_{k}}({\bar{\mathbf{Q}}_{c}}+{\bar{\mathbf{Q}}_{a}})\mathbf{H}_{k}^{H}} \right)}^{- 1}}{\mathbf{H}_{k}}{\bar{\mathbf{Q}}_{0}}\mathbf{H}_{k}^{H}} \rvert \\ &=C_{m,k}({\bar{\mathbf{Q}}_{0}},{\bar{\mathbf{Q}}_{c}},{\bar{\mathbf{Q}}_{a}}) \end{array} $$
(34)

for any \(k \in {\mathcal {K}}\). Thus, by adjusting the value of ξ, the equality in (7a) can be achieved.

To proceed, we will show that a larger objective value can always be achieved by \(({\bar {\mathbf {Q}}_{0}},{\bar {\mathbf {Q}}_{c}},{\bar {\mathbf {Q}}_{a}})\). By reapplying Lemma 4, it is easy to get
$$\begin{array}{*{20}l} C&_{e,k}({\bar{\mathbf{Q}}_{c}},{\bar{\mathbf{Q}}_{a}})\\ &=\log \lvert\mathbf{I} + {\left(\mathbf{I} + {\mathbf{H}_{k}}{\bar{\mathbf{Q}}_{a}}\mathbf{H}_{k}^{H}\right)^{- 1}}{\mathbf{H}_{k}}{\bar{\mathbf{Q}}_{c}}\mathbf{H}_{k}^{H} \rvert \\ &= \log \lvert {\mathbf{I} + {{\left({\mathbf{I} + {\mathbf{H}_{k}}({\hat{\mathbf{Q}}_{a}}+\mathbf{E})\mathbf{H}_{k}^{H}} \right)}^{- 1}}{\mathbf{H}_{k}}{\hat{\mathbf{Q}}_{c}}\mathbf{H}_{k}^{H}} \rvert \\ &< \log \lvert\mathbf{I} + {\left(\mathbf{I} + {\mathbf{H}_{k}}{\hat{\mathbf{Q}}_{a}}\mathbf{H}_{k}^{H}\right)^{- 1}}{\mathbf{H}_{k}}{\hat{\mathbf{Q}}_{c}}\mathbf{H}_{k}^{H} \rvert, \\ &=C_{e,k}({\hat{\mathbf{Q}}_{c}},{\hat{\mathbf{Q}}_{a}}), \forall k \in {\mathcal{K}}_{e}. \end{array} $$
(35)
Meanwhile, due to \({\mathbf {H}_{1}}\mathbf {E}\mathbf {H}_{1}^{H}=\boldsymbol {0}\), it is easy to see
$$ C_{b}({\bar{\mathbf{Q}}_{c}},{\bar{\mathbf{Q}}_{a}})=C_{b}({\hat{\mathbf{Q}}_{c}},{\hat{\mathbf{Q}}_{a}}). $$
(36)
Combining (35) with (36), we obtain
$$\begin{array}{*{20}l} C_{b}({\bar{\mathbf{Q}}_{c}}&,{\bar{\mathbf{Q}}_{a}})-\mathop {\max}\limits_{k \in {{\mathcal{K}}_{e}}}C_{e,k}({\bar{\mathbf{Q}}_{c}},{\bar{\mathbf{Q}}_{a}})\\ &>C_{b}({\hat{\mathbf{Q}}_{c}},{\hat{\mathbf{Q}}_{a}})-\mathop {\max}\limits_{k \in {{\mathcal{K}}_{e}}}C_{e,k}({\hat{\mathbf{Q}}_{c}},{\hat{\mathbf{Q}}_{a}}), \end{array} $$
(37)

i.e., a larger objective value can be achieved by \(({\bar {\mathbf {Q}}_{0}},{\bar {\mathbf {Q}}_{c}},{\bar {\mathbf {Q}}_{a}})\), which contradicts the original assumption.
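The Case 1 construction lends itself to a direct numerical check. The following sketch (our illustration, with assumed values for \(N_t\), \(N_{r,1}\), and ξ; not code from the paper) verifies that E is PSD, satisfies \({\mathbf {H}_{1}}\mathbf {E}\mathbf {H}_{1}^{H}=\boldsymbol {0}\) so that (36) holds, and preserves the total transmit power:

```python
import numpy as np

rng = np.random.default_rng(0)
Nt, Nr1 = 6, 2        # illustrative sizes with Nt > Nr1, as required by Condition 1
xi = 0.5              # scaling factor, 0 < xi < 1

# Random channel of receiver 1 and a random PSD covariance Q0_hat
H1 = rng.normal(size=(Nr1, Nt)) + 1j * rng.normal(size=(Nr1, Nt))
X = rng.normal(size=(Nt, Nt)) + 1j * rng.normal(size=(Nt, Nt))
Q0_hat = X @ X.conj().T

# rho chosen so that Tr(E) = (1 - xi) * Tr(Q0_hat), preserving total power
rho = (1 - xi) * np.trace(Q0_hat).real / (Nt - Nr1)

# E = rho * (I - H1^H (H1 H1^H)^{-1} H1): scaled orthogonal complement projector of H1^H
E = rho * (np.eye(Nt) - H1.conj().T @ np.linalg.inv(H1 @ H1.conj().T) @ H1)

assert np.all(np.linalg.eigvalsh(E) > -1e-9)                           # E is PSD
assert np.allclose(H1 @ E @ H1.conj().T, 0)                            # H1 E H1^H = 0
assert np.isclose(np.trace(E).real, (1 - xi) * np.trace(Q0_hat).real)  # power preserved
```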

2) Case for Condition 2: The only difference between the proof for Condition 1 and Condition 2 lies in the construction method of \(({\bar {\mathbf {Q}}_{0}},{\bar {\mathbf {Q}}_{c}},{\bar {\mathbf {Q}}_{a}})\). To begin with, let us first define a matrix \({\mathbf {H}_{ua}} \buildrel \Delta \over = {[\mathbf {H}_{2}^{H},\mathbf {H}_{3}^{H}, \cdots,\mathbf {H}_{K}^{H}]^{H}} \in {{\mathbb {C}}^{{\sum \nolimits }_{k \in {{\mathcal {K}}_{e}}} {{N_{r,k}}} \times {N_{t}}}}\), which stacks all of the unauthorized receivers’ channel matrices. Then, we multiply \({\hat {\mathbf {Q}}_{0}}\) by a scaling factor ξ (0<ξ<1), add a PSD matrix \(\mathbf {E}=\rho \mathbf {I} - \rho \mathbf {H}_{ua}^{H}{\left ({\mathbf {H}_{ua}}\mathbf {H}_{ua}^{H}\right)^{- 1}}{\mathbf {H}_{ua}}\) to \({\hat {\mathbf {Q}}_{c}}\), and keep \({\hat {\mathbf {Q}}_{a}}\) constant, i.e., \({\bar {\mathbf {Q}}_{0}}=\xi \hat {\mathbf {Q}}_{0}\), \({\bar {\mathbf {Q}}_{c}}={\hat {\mathbf {Q}}_{c}}+\mathbf {E}\) and \({\bar {\mathbf {Q}}_{a}}=\hat {\mathbf {Q}}_{a}\), where the coefficient ρ controls the power of E. E is the orthogonal complement projector of \(\mathbf {H}_{ua}^{H}\), the existence of which is guaranteed by Condition 2. The coefficient ρ should be chosen to satisfy \(\rho = \frac {{(1 - \xi){\text {Tr}}({\hat {\mathbf {Q}}_{0}})}}{{N_{t}} - {\sum \nolimits }_{k \in {{\mathcal {K}}_{e}}} {{N_{r,k}}}}\) to keep the total transmit power constant.

Again, by exploiting Lemma 4 and carrying out some matrix manipulations, one can verify that \(({\bar {\mathbf {Q}}_{0}},{\bar {\mathbf {Q}}_{c}},{\bar {\mathbf {Q}}_{a}})\) can achieve a larger objective value than \(({\hat {\mathbf {Q}}_{0}},{\hat {\mathbf {Q}}_{c}},{\hat {\mathbf {Q}}_{a}})\) with the constraint (7a) active. This fact contradicts the original assumption.
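The Condition 2 construction can be checked in the same way. In the sketch below (our illustration, with assumed antenna counts), the projector built from the stacked matrix \(\mathbf{H}_{ua}\) is verified to be invisible to every unauthorized receiver while preserving the transmit power:

```python
import numpy as np

rng = np.random.default_rng(1)
Nt = 8                 # illustrative: Nt exceeds the eavesdroppers' total antenna count
Nr = [2, 2, 2]         # receive antennas of the unauthorized receivers 2, ..., K
xi = 0.5               # scaling factor, 0 < xi < 1

# Stack the unauthorized receivers' channels into H_ua
H_list = [rng.normal(size=(n, Nt)) + 1j * rng.normal(size=(n, Nt)) for n in Nr]
H_ua = np.vstack(H_list)

X = rng.normal(size=(Nt, Nt)) + 1j * rng.normal(size=(Nt, Nt))
Q0_hat = X @ X.conj().T
rho = (1 - xi) * np.trace(Q0_hat).real / (Nt - sum(Nr))

# E = rho * orthogonal complement projector of H_ua^H (exists since Nt > sum(Nr))
E = rho * (np.eye(Nt) - H_ua.conj().T @ np.linalg.inv(H_ua @ H_ua.conj().T) @ H_ua)

# The power moved into Q_c via E is invisible to every unauthorized receiver
for Hk in H_list:
    assert np.allclose(Hk @ E @ Hk.conj().T, 0)
assert np.isclose(np.trace(E).real, (1 - xi) * np.trace(Q0_hat).real)
```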

Summarizing the conclusions drawn from the two cases above, we have accomplished the proof of Property 1.

Property 1 makes the proof of ([19], Theorem 1) fully applicable to the proposition here. The remaining part of the proof follows [19] and is omitted for brevity.

10 Appendix B: proof of Proposition 3

First, we introduce slack variables α and β to re-express (8) as
$$\begin{array}{*{20}l} \mathop {\max}\limits_{{\mathbf{Q}_{0}},{\mathbf{Q}_{c}},{\mathbf{Q}_{a}},\alpha,\beta}&\; \lambda_{c} (C_{b} - \beta)+ \alpha\\ \text{s.t.}\quad& {C_{e,k}} \le \beta, \forall k \in {\mathcal{K}}_{e}, \end{array} $$
(38a)
$$\begin{array}{*{20}l} & {C_{m,k}} \ge \alpha, \forall k \in {\mathcal{K}},\\ &\text{(19a)--(19b) are satisfied.} \end{array} $$
(38b)

Equivalently, it suffices to prove that every limit point \((\tilde {\mathbf {Q}}_{0},\tilde {\mathbf {Q}}_{c},\tilde {\mathbf {Q}}_{a})\) of the iterates generated by the AO algorithm, together with \(\tilde \alpha = \mathop {\min }\limits _{k \in {\mathcal {K}}} {C_{m,k}}(\tilde {\mathbf {Q}}_{0},\tilde {\mathbf {Q}}_{c},\tilde {\mathbf {Q}}_{a})\) and \(\tilde \beta = \mathop {\max }\limits _{k \in {\mathcal {K}}_{e}} {C_{e,k}}(\tilde {\mathbf {Q}}_{0},\tilde {\mathbf {Q}}_{c},\tilde {\mathbf {Q}}_{a})\), is a KKT point of (38).

Since the feasible set of \((\mathbf{Q}_{0},\mathbf{Q}_{c},\mathbf{Q}_{a})\) is compact, there must exist a subsequence, denoted by
$${\left\{ \left(\mathbf{Q}_{0}^{{n_{l}}},\mathbf{Q}_{c}^{{n_{l}}},\mathbf{Q}_{a}^{{n_{l}}}, \left\{\mathbf{S}_{k}^{{n_{l}}},\mathbf{U}_{k}^{{n_{l}}}\right\}_{k = 1}^{K}\right)\right\}_{l}}, $$
such that \({\left \{ \left (\mathbf {Q}_{0}^{{n_{l}}},\mathbf {Q}_{c}^{{n_{l}}},\mathbf {Q}_{a}^{{n_{l}}},\left \{ \mathbf {S}_{k}^{{n_{l}}}\right \}_{k = 1}^{K},\left \{ \mathbf {U}_{k}^{{n_{l}}}\right \}_{k = 1}^{K}\right)\right \}_{l}}\) converges to a limit point \(\left (\tilde {\mathbf {Q}}_{0},\tilde {\mathbf {Q}}_{c},\tilde {\mathbf {Q}}_{a},\{\tilde {\mathbf {S}}_{k}\}_{k = 1}^{K},\{\tilde {\mathbf {U}}_{k}\}_{k = 1}^{K}\right)\) as \(l \to \infty\). Next, our proof consists of two steps. First, we will show that the limit point
$$\left(\tilde{\mathbf{Q}}_{0},\tilde{\mathbf{Q}}_{c},\tilde{\mathbf{Q}}_{a},\{\tilde{\mathbf{S}}_{k},\tilde{\mathbf{U}}_{k}\}_{k = 1}^{K}\right) $$
satisfies the following properties.
$$\begin{array}{*{20}l} &\tilde{\mathbf{S}}_{1} = \arg \;\mathop {\max }\limits_{{\mathbf{S}_{1}} \succeq {\boldsymbol{0}}} {\varphi_{b}}({\tilde{\mathbf{Q}}_{c}},{\tilde{\mathbf{Q}}_{a}},{\mathbf{S}_{1}}), \end{array} $$
(39a)
$$\begin{array}{*{20}l} &\tilde{\mathbf{S}}_{k} = \arg \;\mathop {\min }\limits_{{\mathbf{S}_{k}} \succeq {\boldsymbol{0}}} {\varphi_{e,k}}({\tilde{\mathbf{Q}}_{c}},{\tilde{\mathbf{Q}}_{a}},{\mathbf{S}_{k}}),\forall k \in {\mathcal{K}}_{e} \end{array} $$
(39b)
$$\begin{array}{*{20}l} &\tilde{\mathbf{U}}_{k} = \arg \;\mathop {\max }\limits_{{\mathbf{U}_{k}} \succeq {\boldsymbol{0}}} {\varphi_{m,k}}({\tilde{\mathbf{Q}}_{0}},{\tilde{\mathbf{Q}}_{c}},{\tilde{\mathbf{Q}}_{a}},{\mathbf{U}_{k}}),\forall k \in {\mathcal{K}} \end{array} $$
(39c)
$$\begin{array}{*{20}l} &({\tilde{\mathbf{Q}}_{0}},{\tilde{\mathbf{Q}}_{c}},{\tilde{\mathbf{Q}}_{a}})= \\ &\arg\;\mathop {\max}\limits_{({\mathbf{Q}_{0}},{\mathbf{Q}_{c}},{\mathbf{Q}_{a}}) \in {\mathcal{F}}} \;f\left({\mathbf{Q}_{0}},{\mathbf{Q}_{c}},{\mathbf{Q}_{a}},\{\tilde{\mathbf{S}}_{k},\tilde{\mathbf{U}}_{k}\}_{k = 1}^{K}\right). \end{array} $$
(39d)

Second, we will check the KKT conditions of problems (39a)–(39d) to build a bridge between (39) and the KKT conditions of problem (38).

Step 1. By noting that
$$\begin{array}{*{20}l} &\mathbf{S}_{1}^{{n_{l}}} = \arg \;\mathop{\max}\limits_{{\mathbf{S}_{1}} \succeq {\boldsymbol{0}}} {\varphi_{b}}\left(\mathbf{Q}_{c}^{{n_{l}} - 1},\mathbf{Q}_{a}^{{n_{l}} - 1},{\mathbf{S}_{1}}\right), \end{array} $$
(40a)
$$\begin{array}{*{20}l} &\mathbf{S}_{k}^{{n_{l}}} = \arg \;\mathop{\min}\limits_{{\mathbf{S}_{k}} \succeq {\boldsymbol{0}}} {\varphi_{e,k}}\left(\mathbf{Q}_{c}^{n_{l} - 1},\mathbf{Q}_{a}^{{n_{l}} - 1},{\mathbf{S}_{k}}\right), \forall k \in {\mathcal{K}}_{e} \end{array} $$
(40b)
$$\begin{array}{*{20}l} &\mathbf{U}_{k}^{{n_{l}}} = \arg \;\mathop{\max}\limits_{{\mathbf{U}_{k}} \succeq {\boldsymbol{0}}} {\varphi_{m,k}}\left(\mathbf{Q}_{0}^{{n_{l}} - 1},\mathbf{Q}_{c}^{{n_{l}} - 1},\mathbf{Q}_{a}^{{n_{l}} - 1},{\mathbf{U}_{k}}\right), \\ &\forall k \in {\mathcal{K}} \end{array} $$
(40c)
$$\begin{array}{*{20}l} &(\mathbf{Q}_{0}^{{n_{l}}},\mathbf{Q}_{c}^{{n_{l}}},\mathbf{Q}_{a}^{{n_{l}}}) = \\ &\arg\;\mathop {\max}\limits_{({\mathbf{Q}_{0}},{\mathbf{Q}_{c}},{\mathbf{Q}_{a}}) \in {\mathcal{F}}} \;f\left({\mathbf{Q}_{0}},{\mathbf{Q}_{c}},{\mathbf{Q}_{a}},\{ \mathbf{S}_{k}^{{n_{l}}},\mathbf{U}_{k}^{{n_{l}}}\}_{k = 1}^{K}\right), \end{array} $$
(40d)
we have
$$\begin{array}{*{20}l} &{\varphi_{b}}\left(\mathbf{Q}_{c}^{{n_{l}} - 1},\mathbf{Q}_{a}^{{n_{l}} - 1},\mathbf{S}_{1}^{{n_{l}}}\right) \ge {\varphi_{b}}\left(\mathbf{Q}_{c}^{{n_{l}} - 1},\mathbf{Q}_{a}^{{n_{l}} - 1},{\mathbf{S}_{1}}\right), \\ &\forall {\mathbf{S}_{1}} \succeq \boldsymbol{0} \end{array} $$
(41a)
$$\begin{array}{*{20}l} &{\varphi_{e,k}}\left(\mathbf{Q}_{c}^{{n_{l}} - 1},\mathbf{Q}_{a}^{{n_{l}} - 1},\mathbf{S}_{k}^{{n_{l}}}\right) \le {\varphi_{e,k}}\left(\mathbf{Q}_{c}^{{n_{l}} - 1},\mathbf{Q}_{a}^{{n_{l}} - 1},{\mathbf{S}_{k}}\right),\\ &\forall {\mathbf{S}_{k}} \succeq \mathbf{0}, \forall k \in {\mathcal{K}}_{e} \end{array} $$
(41b)
$$\begin{array}{*{20}l} &{\varphi_{m,k}}\left(\mathbf{Q}_{0}^{{n_{l}} - 1},\mathbf{Q}_{c}^{{n_{l}} - 1},\mathbf{Q}_{a}^{{n_{l}} - 1},\mathbf{U}_{k}^{{n_{l}}}\right) \ge {\varphi_{m,k}}\left(\mathbf{Q}_{0}^{{n_{l}} - 1},\right.\\ &\left.\mathbf{Q}_{c}^{{n_{l}} - 1},\mathbf{Q}_{a}^{{n_{l}} - 1},{\mathbf{U}_{k}}{\vphantom{\mathbf{Q}_{0}^{{n_{l}} - 1}}}\right),\forall {\mathbf{U}_{k}} \succeq \mathbf{0}, \forall k \in {\mathcal{K}}, \end{array} $$
(41c)
and for any \(({\mathbf {Q}_{0}},{\mathbf {Q}_{c}},{\mathbf {Q}_{a}}) \in {\mathcal {F}}\), the following inequality holds, i.e.,
$$ \begin{array}{ll} &f\left({\mathbf{Q}_{0}},{\mathbf{Q}_{c}},{\mathbf{Q}_{a}},\{ \mathbf{S}_{k}^{{n_{l}}}\}_{k = 1}^{K},\{ \mathbf{U}_{k}^{{n_{l}}}\}_{k = 1}^{K}\right) \\ \le &f\left(\mathbf{Q}_{0}^{{n_{l}}},\mathbf{Q}_{c}^{{n_{l}}},\mathbf{Q}_{a}^{{n_{l}}},\{ \mathbf{S}_{k}^{{n_{l}}}\}_{k = 1}^{K},\{ \mathbf{U}_{k}^{{n_{l}}}\}_{k = 1}^{K}\right)\\ \le &f\left({\tilde{\mathbf{Q}}_{0}},{\tilde{\mathbf{Q}}_{c}},{\tilde{\mathbf{Q}}_{a}},\{ {\tilde{\mathbf{S}}_{k}}\}_{k = 1}^{K},\{{\tilde{\mathbf{U}}_{k}}\}_{k = 1}^{K}\right), \end{array} $$
(42)

where the second inequality of (42) holds because the AO algorithm yields non-descending objective values. Then, letting \(l \to \infty\) in (41) and (42) leads to (39a)–(39d).

Step 2. Then, it follows from (39a) to (39d) and the positive definiteness of \(\{ {\tilde {\mathbf {S}}_{k}}\}_{k = 1}^{K}\) and \(\{ {\tilde {\mathbf {U}}_{k}}\}_{k = 1}^{K}\) that
$$\begin{array}{*{20}l} &{\nabla_{{\mathbf{S}_{1}}}}{\varphi_{b}}({\tilde{\mathbf{Q}}_{c}},{\tilde{\mathbf{Q}}_{a}},{\tilde{\mathbf{S}}_{1}})=\mathbf{0}, {\tilde{\mathbf{S}}_{1}} \succeq \mathbf{0}, \end{array} $$
(43a)
$$\begin{array}{*{20}l} &{\nabla_{{\mathbf{S}_{k}}}}{\varphi_{e,k}}({\tilde{\mathbf{Q}}_{c}},{\tilde{\mathbf{Q}}_{a}},{\tilde{\mathbf{S}}_{k}}) = {\boldsymbol{0}},{\tilde{\mathbf{S}}_{k}} \succeq {\boldsymbol{0}}, \forall k \in {\mathcal{K}}_{e} \end{array} $$
(43b)
$$\begin{array}{*{20}l} &{\nabla_{{\mathbf{U}_{k}}}}{\varphi_{m,k}}({\tilde{\mathbf{Q}}_{0}},{\tilde{\mathbf{Q}}_{c}},{\tilde{\mathbf{Q}}_{a}},{\tilde{\mathbf{U}}_{k}}) = {\boldsymbol{0}},{\tilde{\mathbf{U}}_{k}} \succeq {\boldsymbol{0}}, \forall k \in {\mathcal{K}}. \end{array} $$
(43c)
By carrying out some matrix manipulations to (43), it is easy to obtain that
$$\begin{array}{*{20}l} &{\tilde{\mathbf{S}}_{1}} = {\left(\mathbf{I} + {\mathbf{H}_{1}}{\tilde{\mathbf{Q}}_{a}}\mathbf{H}_{1}^{H}\right)^{- 1}} \succeq {\boldsymbol{0}}, \end{array} $$
(44a)
$$\begin{array}{*{20}l} &{\tilde{\mathbf{S}}_{k}} = {\left(\mathbf{I} + {\mathbf{H}_{k}}({\tilde{\mathbf{Q}}_{a}} + {\tilde{\mathbf{Q}}_{c}})\mathbf{H}_{k}^{H}\right)^{- 1}}\succeq {\boldsymbol{0}}, \forall k \in {\mathcal{K}}_{e} \end{array} $$
(44b)
$$\begin{array}{*{20}l} &{\tilde{\mathbf{U}}_{k}} = {\left(\mathbf{I} + {\mathbf{H}_{k}}({\tilde{\mathbf{Q}}_{a}} + {\tilde{\mathbf{Q}}_{c}})\mathbf{H}_{k}^{H}\right)^{- 1}}\succeq {\boldsymbol{0}}, \forall k \in {\mathcal{K}}. \end{array} $$
(44c)
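As a side check (our illustration with random channels and assumed sizes, not part of the proof), the closed forms in (44) can be evaluated directly and confirmed to be positive definite:

```python
import numpy as np

rng = np.random.default_rng(3)
Nt, Nr, K = 4, 2, 3    # illustrative sizes; receiver 1 is the legitimate one

def rand_psd(n):
    """Random complex Hermitian PSD matrix of size n x n."""
    X = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return X @ X.conj().T

H = [rng.normal(size=(Nr, Nt)) + 1j * rng.normal(size=(Nr, Nt)) for _ in range(K)]
Qc, Qa = rand_psd(Nt), rand_psd(Nt)
I = np.eye(Nr)

# (44a): auxiliary matrix of the legitimate receiver
S1 = np.linalg.inv(I + H[0] @ Qa @ H[0].conj().T)
# (44b)/(44c): auxiliary matrices of the other receivers share the same closed form
Sk = [np.linalg.inv(I + Hk @ (Qa + Qc) @ Hk.conj().T) for Hk in H[1:]]

# Each matrix is the inverse of a positive definite matrix, hence positive definite
assert np.all(np.linalg.eigvalsh(S1) > 0)
assert all(np.all(np.linalg.eigvalsh(S) > 0) for S in Sk)
```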
Meanwhile, by introducing slack variables α and β, (39d) is shown to be equivalent to
$$\begin{array}{*{20}l} \mathop {\max}\limits_{{\mathbf{Q}_{0}},{\mathbf{Q}_{c}},{\mathbf{Q}_{a}},\alpha,\beta} &\lambda_{c} ({\varphi_{b}}({\mathbf{Q}_{c}},{\mathbf{Q}_{a}},{{\tilde{\mathbf{S}}}_{1}}) - \beta) + \alpha \\ \text{s.t.}\quad &{\varphi_{e,k}}({\mathbf{Q}_{c}},{\mathbf{Q}_{a}},{\tilde{\mathbf{S}}_{k}}) \le \beta, \forall k \in {\mathcal{K}}_{e}, \end{array} $$
(45a)
$$\begin{array}{*{20}l} &{\varphi_{m,k}}({\mathbf{Q}_{0}},{\mathbf{Q}_{c}},{\mathbf{Q}_{a}},{\tilde{\mathbf{U}}_{k}}) \ge \alpha, \forall k \in {\mathcal{K}}, \end{array} $$
(45b)
$$\begin{array}{*{20}l} &({\mathbf{Q}_{0}},{\mathbf{Q}_{c}},{\mathbf{Q}_{a}}) \in {\mathcal{F}}. \end{array} $$
(45c)
It is easy to see that \(({\tilde {\mathbf {Q}}_{0}},{\tilde {\mathbf {Q}}_{c}},{\tilde {\mathbf {Q}}_{a}})\), together with \(\tilde \beta = \mathop {\max }\limits _{k \in {\mathcal {K}}_{e}} {\varphi _{e,k}}(\tilde {\mathbf {Q}}_{c},\tilde {\mathbf {Q}}_{a},{\tilde {\mathbf {S}}_{k}})\) and \(\tilde \alpha = \mathop {\min }\limits _{k \in {\mathcal {K}}} {\varphi _{m,k}}(\tilde {\mathbf {Q}}_{0},\tilde {\mathbf {Q}}_{c},\tilde {\mathbf {Q}}_{a},{\tilde {\mathbf {U}}_{k}})\), is an optimal solution of problem (11). Consequently, \(({\tilde {\mathbf {Q}}_{0}},{\tilde {\mathbf {Q}}_{c}},{\tilde {\mathbf {Q}}_{a}},\tilde \beta,\tilde \alpha)\) satisfies the KKT conditions of (45), shown in (46).
$$ \begin{array}{ll} &\lambda_{c}{\nabla_{{\mathbf{Q}_{c}}}}{\varphi_{b}}({\tilde{\mathbf{Q}}_{c}},{\tilde{\mathbf{Q}}_{a}},{\tilde{\mathbf{S}}_{1}})- \sum\limits_{k \in {\mathcal{K}}_{e}} {{\rho_{k}}{\nabla_{{\mathbf{Q}_{c}}}}{\varphi_{e,k}}({\tilde{\mathbf{Q}}_{c}},{\tilde{\mathbf{Q}}_{a}},{{\tilde{\mathbf{S}}}_{k}})}\\ &\;+\sum\limits_{k \in {\mathcal{K}}} {{\mu_{k}}{\nabla_{{\mathbf{Q}_{c}}}}{\varphi_{m,k}}({\tilde{\mathbf{Q}}_{0}},{\tilde{\mathbf{Q}}_{c}},{\tilde{\mathbf{Q}}_{a}},{\tilde{\mathbf{U}}_{k}})}- \gamma \mathbf{I} + \mathbf{C} = {\boldsymbol{0}}, \\ &\lambda_{c}{\nabla_{{\mathbf{Q}_{a}}}}{\varphi_{b}}({\tilde{\mathbf{Q}}_{c}},{\tilde{\mathbf{Q}}_{a}},{{\tilde{\mathbf{S}}}_{1}})- \sum\limits_{k \in {\mathcal{K}}_{e}} {{\rho_{k}}{\nabla_{{\mathbf{Q}_{a}}}}{\varphi_{e,k}}({\tilde{\mathbf{Q}}_{c}},{\tilde{\mathbf{Q}}_{a}},{{\tilde{\mathbf{S}}}_{k}})}\\ &\;+\sum\limits_{k \in {\mathcal{K}}} {{\mu_{k}}{\nabla_{{\mathbf{Q}_{a}}}}{\varphi_{m,k}}({\tilde{\mathbf{Q}}_{0}},{\tilde{\mathbf{Q}}_{c}},{\tilde{\mathbf{Q}}_{a}},{\tilde{\mathbf{U}}_{k}})}- \gamma \mathbf{I} + \mathbf{A} = {\boldsymbol{0}},\\ &\sum\limits_{k \in {\mathcal{K}}} {{\mu_{k}}{\nabla_{{\mathbf{Q}_{0}}}}{\varphi_{m,k}}({\tilde{\mathbf{Q}}_{0}},{\tilde{\mathbf{Q}}_{c}},{\tilde{\mathbf{Q}}_{a}},{\tilde{\mathbf{U}}_{k}})}- \gamma \mathbf{I} + \mathbf{B} = {\boldsymbol{0}}, \\ &{\varphi_{e,k}}({\tilde{\mathbf{Q}}_{c}},{\tilde{\mathbf{Q}}_{a}},{{\tilde{\mathbf{S}}}_{k}}) \le \tilde \beta, \forall k \in {\mathcal{K}}_{e} \\ &{\rho_{k}}({\varphi_{e,k}}({\tilde{\mathbf{Q}}_{c}},{\tilde{\mathbf{Q}}_{a}},{{\tilde{\mathbf{S}}}_{k}}) - \tilde \beta) = 0, \forall k \in {\mathcal{K}}_{e} \\ &{\varphi_{m,k}}({\tilde{\mathbf{Q}}_{0}},{\tilde{\mathbf{Q}}_{c}},{\tilde{\mathbf{Q}}_{a}},{\tilde{\mathbf{U}}_{k}}) \ge \tilde \alpha, \forall k \in {\mathcal{K}}\\ &{\mu_{k}}({\varphi_{m,k}}({\tilde{\mathbf{Q}}_{0}},{\tilde{\mathbf{Q}}_{c}},{\tilde{\mathbf{Q}}_{a}},{\tilde{\mathbf{U}}_{k}}) - \tilde \alpha) = 0, 
\forall k \in {\mathcal{K}}\\ &{\sum\nolimits}_{k = 1}^{K} {{\rho_{k}}} = 1, {\sum\nolimits}_{k = 1}^{K} {{\mu_{k}}} = 1,\\ &\mathbf{A} \succeq {\boldsymbol{0}},\mathbf{B} \succeq {\boldsymbol{0}},\mathbf{C} \succeq {\boldsymbol{0}},\\ &\gamma \ge 0, {{\rho_{k}}} \ge 0, \forall k \in {\mathcal{K}}_{e}, {{\mu_{k}}} \ge 0, \forall k \in {\mathcal{K}},\\ &{\text{Tr}}({\tilde{\mathbf{Q}}_{0}} +{\tilde{\mathbf{Q}}_{c}} + {\tilde{\mathbf{Q}}_{a}}) \le P,{\tilde{\mathbf{Q}}_{0}} \succeq {\boldsymbol{0}},{\tilde{\mathbf{Q}}_{c}} \succeq {\boldsymbol{0}},{\tilde{\mathbf{Q}}_{a}} \succeq {\boldsymbol{0}},\\ &\gamma ({\text{Tr}}({\tilde{\mathbf{Q}}_{0}} +{\tilde{\mathbf{Q}}_{c}} + {\tilde{\mathbf{Q}}_{a}}) - P) = 0,\\ &{\text{Tr}}(\mathbf{B}{\tilde{\mathbf{Q}}_{0}}) = 0, {\text{Tr}}(\mathbf{C}{\tilde{\mathbf{Q}}_{c}}) = 0,{\text{Tr}}(\mathbf{A}{\tilde{\mathbf{Q}}_{a}}) = 0. \end{array} $$
(46)

In (46), \(\left (\{{\rho _{k}}\}_{k \in {\mathcal {K}}_{e}},\{{\mu _{k}}\}_{k \in {\mathcal {K}}},\gamma,\mathbf {A},\mathbf {B},\mathbf {C}\right)\) are all dual variables pertaining to the constraints in (45).

To proceed, by applying Danskin’s theorem [44], one can verify that the following equalities must hold.
$$ \begin{aligned} {\nabla_{\mathbf{Q}_{c}}}{C_{b}}({\tilde{\mathbf{Q}}_{c}},{\tilde{\mathbf{Q}}_{a}}) &= {\nabla_{\mathbf{Q}_{c}}}{\varphi_{b}}({\tilde{\mathbf{Q}}_{c}},{\tilde{\mathbf{Q}}_{a}},{\tilde{\mathbf{S}}_{1}}),\\ {\nabla_{\mathbf{Q}_{a}}}{C_{b}}({\tilde{\mathbf{Q}}_{c}},{\tilde{\mathbf{Q}}_{a}}) &= {\nabla_{\mathbf{Q}_{a}}}{\varphi_{b}}({\tilde{\mathbf{Q}}_{c}},{\tilde{\mathbf{Q}}_{a}},{\tilde{\mathbf{S}}_{1}}),\\ {\nabla_{\mathbf{Q}_{c}}}{C_{e,k}}({\tilde{\mathbf{Q}}_{c}},{\tilde{\mathbf{Q}}_{a}}) &= {\nabla_{\mathbf{Q}_{c}}}{\varphi_{e,k}}({\tilde{\mathbf{Q}}_{c}},{\tilde{\mathbf{Q}}_{a}},{\tilde{\mathbf{S}}_{k}}), \\ {\nabla_{\mathbf{Q}_{a}}}{C_{e,k}}({\tilde{\mathbf{Q}}_{c}},{\tilde{\mathbf{Q}}_{a}}) &= {\nabla_{\mathbf{Q}_{a}}}{\varphi_{e,k}}({\tilde{\mathbf{Q}}_{c}},{\tilde{\mathbf{Q}}_{a}},{\tilde{\mathbf{S}}_{k}}), \\ {\nabla_{\mathbf{Q}_{c}}}{C_{m,k}}({\tilde{\mathbf{Q}}_{0}},{\tilde{\mathbf{Q}}_{c}},{\tilde{\mathbf{Q}}_{a}}) &= {\nabla_{\mathbf{Q}_{c}}}{\varphi_{m,k}}({\tilde{\mathbf{Q}}_{0}},{\tilde{\mathbf{Q}}_{c}},{\tilde{\mathbf{Q}}_{a}},{\tilde{\mathbf{U}}_{k}}),\\ {\nabla_{\mathbf{Q}_{a}}}{C_{m,k}}({\tilde{\mathbf{Q}}_{0}},{\tilde{\mathbf{Q}}_{c}},{\tilde{\mathbf{Q}}_{a}}) &= {\nabla_{\mathbf{Q}_{a}}}{\varphi_{m,k}}({\tilde{\mathbf{Q}}_{0}},{\tilde{\mathbf{Q}}_{c}},{\tilde{\mathbf{Q}}_{a}},{\tilde{\mathbf{U}}_{k}}), \\ {\nabla_{\mathbf{Q}_{0}}}{C_{m,k}}({\tilde{\mathbf{Q}}_{0}},{\tilde{\mathbf{Q}}_{c}},{\tilde{\mathbf{Q}}_{a}}) &= {\nabla_{\mathbf{Q}_{0}}}{\varphi_{m,k}}({\tilde{\mathbf{Q}}_{0}},{\tilde{\mathbf{Q}}_{c}},{\tilde{\mathbf{Q}}_{a}},{\tilde{\mathbf{U}}_{k}}). \end{aligned} $$
(47)
Then, substituting (44b) and (44c) into \({\varphi _{e,k}}({\tilde {\mathbf {Q}}_{c}},{\tilde {\mathbf {Q}}_{a}},{\tilde {\mathbf {S}}_{k}})\) and \({\varphi _{m,k}}({\tilde {\mathbf {Q}}_{0}},{\tilde {\mathbf {Q}}_{c}},{\tilde {\mathbf {Q}}_{a}},{\tilde {\mathbf {U}}_{k}})\), one can obtain
$$ \begin{aligned} {C_{e,k}}({\tilde{\mathbf{Q}}_{c}},{\tilde{\mathbf{Q}}_{a}}) &= {\varphi_{e,k}}({\tilde{\mathbf{Q}}_{c}},{\tilde{\mathbf{Q}}_{a}},{\tilde{\mathbf{S}}_{k}}), \forall k \in {\mathcal{K}}_{e}\\ {C_{m,k}}({\tilde{\mathbf{Q}}_{0}},{\tilde{\mathbf{Q}}_{c}},{\tilde{\mathbf{Q}}_{a}}) & = {\varphi_{m,k}}({\tilde{\mathbf{Q}}_{0}},{\tilde{\mathbf{Q}}_{c}},{\tilde{\mathbf{Q}}_{a}},{\tilde{\mathbf{U}}_{k}}), \forall k \in {\mathcal{K}}. \end{aligned} $$
(48)
Finally, by plugging (47) and (48) into (46), we obtain
$$ \begin{array}{ll} &\lambda_{c}{\nabla_{{\mathbf{Q}_{c}}}}{C_{b}}({\tilde{\mathbf{Q}}_{c}},{\tilde{\mathbf{Q}}_{a}})- \sum\limits_{k \in {\mathcal{K}}_{e}} {{\rho_{k}}{\nabla_{{\mathbf{Q}_{c}}}}{C_{e,k}}({\tilde{\mathbf{Q}}_{c}},{\tilde{\mathbf{Q}}_{a}})}- \gamma \mathbf{I}\\ &\;+ \mathbf{C}+\sum\limits_{k \in {\mathcal{K}}} {{\mu_{k}}{\nabla_{{\mathbf{Q}_{c}}}}{C_{m,k}}({\tilde{\mathbf{Q}}_{0}},{\tilde{\mathbf{Q}}_{c}},{\tilde{\mathbf{Q}}_{a}})} = {\boldsymbol{0}}, \\ &\lambda_{c}{\nabla_{{\mathbf{Q}_{a}}}}{C_{b}}({\tilde{\mathbf{Q}}_{c}},{\tilde{\mathbf{Q}}_{a}})- \sum\limits_{k \in {\mathcal{K}}_{e}} {{\rho_{k}}{\nabla_{{\mathbf{Q}_{a}}}}{C_{e,k}}({\tilde{\mathbf{Q}}_{c}},{\tilde{\mathbf{Q}}_{a}})}- \gamma \mathbf{I}\\ &\; + \mathbf{A} +\sum\limits_{k \in {\mathcal{K}}} {{\mu_{k}}{\nabla_{{\mathbf{Q}_{a}}}}{C_{m,k}}({\tilde{\mathbf{Q}}_{0}},{\tilde{\mathbf{Q}}_{c}},{\tilde{\mathbf{Q}}_{a}})}= {\boldsymbol{0}}, \\ &\sum\limits_{k \in {\mathcal{K}}}{\mu_{k}}{\nabla_{{\mathbf{Q}_{0}}}}{C_{m,k}}({\tilde{\mathbf{Q}}_{0}},{\tilde{\mathbf{Q}}_{c}},{\tilde{\mathbf{Q}}_{a}})- \gamma \mathbf{I} + \mathbf{B} = {\boldsymbol{0}},\\ &{\rho_{k}}({C_{e,k}}({\tilde{\mathbf{Q}}_{c}},{\tilde{\mathbf{Q}}_{a}}) - \tilde \beta) = 0, \forall k \in {\mathcal{K}}_{e}\\ &{C_{m,k}}({\tilde{\mathbf{Q}}_{0}},{\tilde{\mathbf{Q}}_{c}},{\tilde{\mathbf{Q}}_{a}}) \ge \tilde \alpha, \forall k \in {\mathcal{K}}\\ &{\mu_{k}}({C_{m,k}}({\tilde{\mathbf{Q}}_{0}},{\tilde{\mathbf{Q}}_{c}},{\tilde{\mathbf{Q}}_{a}}) - \tilde \alpha) = 0, \forall k \in {\mathcal{K}}. \end{array} $$
(49)

Remarkably, (49), together with the last six lines of (46), represents the KKT conditions of the WSRM problem (38). This fact completes the proof.

Declarations

Acknowledgements

This work was supported in part by the National Natural Science Foundation of China under grant 61571089 and by the High-Tech Research and Development (863) Program of China under grant 2015AA01A707.

Competing interests

The authors declare that they have no competing interests.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

Authors’ Affiliations

(1)
National Key Laboratory of Science and Technology on Communications, University of Electronic Science and Technology of China

References

  1. JG Andrews, S Buzzi, W Choi, SV Hanly, A Lozano, ACK Soong, JC Zhang, What will 5G be? IEEE J. Sel. Areas Commun. 32(6), 1065–1082 (2014).
  2. Y-S Shiu, SY Chang, H-C Wu, SC-H Huang, H-H Chen, Physical layer security in wireless networks: a tutorial. IEEE Wirel. Commun. 18(2), 66–74 (2011).
  3. B He, X Zhou, TD Abhayapala, Wireless physical layer security with imperfect channel state information: a survey (2013). http://arxiv.org/abs/1307.4146. Accessed June 2013.
  4. Y-WP Hong, P-C Lan, C-CJ Kuo, Enhancing physical-layer secrecy in multiantenna wireless systems: an overview of signal processing approaches. IEEE Signal Process. Mag. 30(5), 29–40 (2013).
  5. A Mukherjee, SA Fakoorian, J Huang, AL Swindlehurst, Principles of physical layer security in multiuser wireless networks: a survey. IEEE Commun. Surv. Tuts. 16(3), 1550–1573 (2014).
  6. Y Liu, H-H Chen, L Wang, Physical layer security for next generation wireless networks: theories, technologies, and challenges. IEEE Commun. Surv. Tuts. 19(1), 347–376 (2017).
  7. R Liu, HV Poor, Secrecy capacity region of a multi-antenna Gaussian broadcast channel with confidential messages. IEEE Trans. Inf. Theory. 55(3), 1235–1249 (2009).
  8. R Liu, T Liu, HV Poor, S Shamai, Multiple-input multiple-output Gaussian broadcast channels with confidential messages. IEEE Trans. Inf. Theory. 56(9), 4215–4227 (2010).
  9. SAA Fakoorian, AL Swindlehurst, On the optimality of linear precoding for secrecy in the MIMO broadcast channel. IEEE J. Sel. Areas Commun. 31(9), 1701–1713 (2013).
  10. D Park, Weighted sum rate maximization of MIMO broadcast and interference channels with confidential messages. IEEE Trans. Wirel. Commun. 15(3), 1742–1753 (2016).
  11. I Csiszár, J Körner, Broadcast channels with confidential messages. IEEE Trans. Inf. Theory. 24(3), 339–348 (1978).
  12. HD Ly, T Liu, Y Liang, Multiple-input multiple-output Gaussian broadcast channels with common and confidential messages. IEEE Trans. Inf. Theory. 56(11), 5477–5487 (2010).
  13. E Ekrem, S Ulukus, Capacity region of Gaussian MIMO broadcast channels with common and confidential messages. IEEE Trans. Inf. Theory. 58(9), 5669–5680 (2012).
  14. R Liu, T Liu, HV Poor, S Shamai, in Proc. IEEE Int. Symp. Inf. Theory (ISIT'2010). MIMO Gaussian broadcast channels with confidential and common messages (IEEE, Austin, 2010), pp. 2578–2582.
  15. R Liu, T Liu, HV Poor, S Shamai (Shitz), New results on multiple-input multiple-output broadcast channels with confidential messages. IEEE Trans. Inf. Theory. 59(3), 1346–1358 (2013).
  16. RF Wyrembelski, H Boche, in Proc. IEEE Global Communication Conf. Workshops. Service integration in multiantenna bidirectional relay networks: public and confidential messages (IEEE, Houston, 2011), pp. 884–888.
  17. RF Wyrembelski, H Boche, Physical layer integration of private, common, and confidential messages in bidirectional relay networks. IEEE Trans. Wirel. Commun. 11(9), 3170–3179 (2012).
  18. R Schaefer, H Boche, Physical layer service integration in wireless networks: signal processing challenges. IEEE Signal Process. Mag. 31(3), 147–156 (2014).
  19. W Mei, Z Chen, J Fang, Secrecy capacity region maximization in Gaussian MISO channels with integrated services. IEEE Signal Process. Lett. 23(8), 1146–1150 (2016).
  20. W Mei, L Li, Z Chen, C Huang, in Proc. IEEE Global Conf. Signal Info. Process. (GlobalSIP). Artificial-noise aided transmit design for multi-user MISO systems with integrated services (IEEE, Orlando, 2015), pp. 1382–1386.
  21. W Mei, Z Chen, C Huang, in Proc. IEEE ICASSP. Robust artificial-noise aided transmit design for multi-user MISO systems with integrated services (IEEE, Shanghai, 2016), pp. 3856–3860.
  22. W Mei, L Li, Z Chen, C Huang, in Proc. IEEE Int. Conf. Commun. Artificial-noise aided transmit design for outage constrained service integration (Kuala Lumpur, 2016), pp. 1–7.
  23. W Mei, Z Chen, J Fang, GSVD-based precoding in MIMO systems with integrated services. IEEE Signal Process. Lett. 23(11), 1528–1532 (2016).
  24. Q Li, M Hong, H-T Wai, Y-F Liu, W-K Ma, Z-Q Luo, Transmit solutions for MIMO wiretap channels using alternating optimization. IEEE J. Sel. Areas Commun. 31(9), 1714–1727 (2013).
  25. Q Li, W-K Ma, Spatially selective artificial-noise aided transmit optimization for MISO multi-Eves secrecy rate maximization. IEEE Trans. Signal Process. 61(10), 2704–2717 (2013).
  26. Z Chu, K Cumanan, Z Ding, M Johnston, SY Le Goff, Robust outage secrecy rate optimizations for a MIMO secrecy channel. IEEE Wirel. Commun. Lett. 4(1), 86–89 (2015).
  27. Z Chu, H Xing, M Johnston, SY Le Goff, Secrecy rate optimizations for a MISO secrecy channel with multiple multiantenna eavesdroppers. IEEE Trans. Wirel. Commun. 15(1), 283–297 (2016).
  28. T-X Zheng, H-M Wang, J Yuan, D Towsley, MH Lee, Multi-antenna transmission with artificial noise against randomly distributed eavesdroppers. IEEE Trans. Commun. 63(11), 4347–4362 (2015).
  29. GR Lanckriet, BK Sriperumbudur, in Proc. Advances Neural Inf. Process. Syst. On the convergence of the concave-convex procedure (NIPS Foundation, Vancouver, 2009), pp. 1759–1767.
  30. B Fang, Z Qian, W Shao, W Zhong, Precoding and artificial noise design for cognitive MIMOME wiretap channels. IEEE Trans. Veh. Technol. 65(8), 6753–6758 (2016).
  31. Z Chu, K Cumanan, Z Ding, M Johnston, SY Le Goff, Secrecy rate optimizations for a MIMO secrecy channel with a cooperative jammer. IEEE Trans. Veh. Technol. 64(5), 1833–1847 (2015).
  32. J Yang, I-M Kim, DI Kim, Optimal cooperative jamming for multiuser broadcast channel with multiple eavesdroppers. IEEE Trans. Wirel. Commun. 12(6), 2840–2852 (2013).
  33. SX Wu, W-K Ma, AM-C So, Physical-layer multicasting by stochastic transmit beamforming and Alamouti space-time coding. IEEE Trans. Signal Process. 61(17), 4230–4245 (2013).
  34. H Zhu, N Prasad, S Rangarajan, Precoder design for physical layer multicasting. IEEE Trans. Signal Process. 60(11), 5932–5947 (2012).
  35. W Lee, H Park, HB Kong, JS Kwak, I Lee, A new beamforming design for multicast systems. IEEE Trans. Veh. Technol. 62(8), 4093–4097 (2013).
  36. B Du, Y Jiang, X Xu, X Dai, Optimum beamforming for MIMO multicasting. EURASIP J. Adv. Signal Process. 2013(121), 1–15 (2013).
  37. M Grant, S Boyd, CVX: Matlab software for disciplined convex programming (2011). http://cvxr.com/cvx. Accessed Apr 2011.
  38. S Boyd, L Vandenberghe, Convex Optimization (Cambridge University Press, Cambridge, 2009).
  39. RT Marler, JS Arora, Survey of multi-objective optimization methods for engineering. Struct. Multidiscip. Optim. 26(6), 369–395 (2004).
  40. Z-Q Luo, W-K Ma, AM-C So, Y Ye, S Zhang, Semidefinite relaxation of quadratic optimization problems. IEEE Signal Process. Mag. 27(3), 20–34 (2010).
  41. K Cumanan, Z Ding, B Sharif, GY Tian, KK Leung, Secrecy rate optimizations for a MIMO secrecy channel with a multiple-antenna eavesdropper. IEEE Trans. Veh. Technol. 63(4), 1678–1690 (2014).
  41. K Cumanan, Z Ding, B Sharif, GY Tian, KK Leung, Secrecy rate optimizations for a MIMO secrecy channel with a multiple-antenna eavesdropper. IEEE Trans. Veh. Technol. 63(4), 1678–1690 (2014).View ArticleGoogle Scholar
  42. A Ben-Tal, A Nemirovski, Lectures on modern convex optimization: analysis, algorithms, and engineering applications. vol. 2 (SIAM, Philadelphia, 2001).View ArticleMATHGoogle Scholar
  43. H Weingarten, Y Steinberg, S Shamai (Shitz), The capacity region of the Gaussian multiple-input multiple-output broadcast channel. IEEE Trans. Inf. Theory. 52(9), 3936–3964 (2006).MathSciNetView ArticleMATHGoogle Scholar
  44. D Bertsekas, Nonlinear programming, 2nd edn. (Athena Scientific, Belmont, 1999).MATHGoogle Scholar

Copyright

© The Author(s) 2017