 Research
 Open Access
Joint source and relay design for two-hop amplify-and-forward relay networks with QoS constraints
 Jafar Mohammadi^{1},
 Feifei Gao^{2},
 Yue Rong^{3} and
 Wen Chen^{4}
https://doi.org/10.1186/1687-1499-2013-108
© Mohammadi et al.; licensee Springer. 2013
Received: 31 May 2012
Accepted: 25 March 2013
Published: 19 April 2013
Abstract
In this paper, we consider the joint design of the source precoding matrix and the relay precoding matrix in a two-hop multiple-input multiple-output relay network. The goal is to find a pair of matrices that minimizes the power consumption while meeting preselected quality of service constraints, defined as the mean square error of each data stream. Using majorization theory, we simplify the matrix-valued optimization problem into a scalar-valued one. We then propose a lower bound and an upper bound of the original problem, both in convex forms. Specifically, the latter is solved by a multi-level water-filling algorithm that is much more efficient than directly applying the interior point method. Numerical examples corroborate the proposed studies and also demonstrate the tightness of both bounds to the original problem.
Keywords
 Joint precoding design
 MIMO relay networks
 Quality of service
 Majorization
 Convex optimization
1 Introduction
Relay networks have recently attracted much attention because of their promising capability of achieving reliable communication and wide coverage for the next generation of wireless systems [1, 2]. Different types of relaying strategies, e.g., amplify-and-forward (AF), decode-and-forward (DF), and compress-and-forward (CF), were introduced in [3–5], respectively. DF and CF decode the data before retransmission and are thus also known as regenerative strategies; an AF relay only amplifies the received data and is known as a nonregenerative strategy. Its computational simplicity makes the AF relay highly attractive and a strong candidate for real-time applications. Meanwhile, the multiple-input multiple-output (MIMO) technique [6] can enhance the data transmission rate by introducing spatial diversity gain. Therefore, combining relaying and MIMO becomes a natural way to further advance future wireless communication systems.
Most of the existing works focus on AF MIMO relay networks, where a certain performance criterion is optimized subject to power constraints at both the source and the relay. For example, the mutual information and the total mean square error (MSE) criteria are selected as objective functions in [7] and [8], respectively. The authors in [9] investigated the diversity-multiplexing tradeoff for MIMO relays. Moreover, there are some works on beamforming design for special types of AF MIMO relays; for instance, the authors in [10] considered codebook design for half-duplex AF MIMO relays, while the author in [11] considered beamforming design for a broad class of MIMO relays. Applying majorization theory [12], the authors in [13] proposed a unified framework covering most performance criteria. The extension of [13] to the multiple-relay case was introduced in [14].
All of the methods mentioned above enhance the system performance by maximizing or minimizing a certain objective function subject to power constraints at some or all nodes. Although this model improves the system performance, it does not guarantee a certain quality of service (QoS) requirement for an individual user. The importance of considering QoS becomes more obvious in practical applications supporting several types of service, each with a different reliability requirement. One of the pioneering works considering QoS in MIMO point-to-point systems is [15], where the authors optimized the transmission power subject to predefined sets of QoS targets, e.g., individual MSE, signal-to-noise ratio (SNR), and bit error rate. In [16], the authors investigated the optimization of the source beamforming and relay weighting matrices in order to minimize the total power subject to a given set of QoS targets for the multiple-input single-output broadcast channel. On the relay network side, the AF MIMO relay network with QoS constraints has been investigated in [17]. Applying majorization theory, the author in [17] proposed a unified framework for the design of the optimal structure of the source precoding and relay amplifying matrices and applied a successive geometric programming method to obtain the optimal power loading among data streams. Unfortunately, the computational complexity of the solution in [17] compromises its suitability for practical implementation. With similar assumptions, the authors in [18] considered a simplified version of the problem in which only the relay power is minimized; the minimization is then executed over a convex lower bound of the objective function. In a more general setup, the authors in [19] studied the joint relay and source power minimization and applied majorization theory to reduce the problem to a scalar one. Then, using a convex relaxation of the QoS constraints, an upper bound and a lower bound on the optimal result were presented.
In this paper, building upon the results in [19], we take a specific look into a dual-hop^{a} AF MIMO relay network. We first jointly design the source and relay precoding matrices such that the overall transmission power is minimized subject to a given set of QoS constraints. Applying majorization theory, we reduce the original matrix-valued problem to a scalar-valued one and then propose two new convex optimization problems whose objective values serve as a lower bound and an upper bound of the original problem. While both new problems can be handled by existing convex optimization tools, e.g., CVX [20], we specifically design a multi-level water-filling (MLWF) algorithm to solve the upper bound problem, which further reduces the computational complexity. Compared with the successive geometric programming approach developed in [17], the MLWF algorithm does not require any optimization tool and is thus easier to implement in practical relay systems. Numerical results corroborate the proposed studies and clearly demonstrate the tightness of the proposed lower and upper bounds, especially in the low MSE region.
The rest of this paper is arranged as follows: Section 2 presents the system model and formulates the optimization problem in matrix form. In Section 3, the optimization is simplified to a scalar-valued problem using majorization theory. Two suboptimal problems whose objectives serve as the upper bound and the lower bound of the original problem are derived in the same section. In Section 4, the upper bound problem is solved using a multi-level water-filling algorithm coupled with a decomposition method. Simulation results are presented in Section 5, and conclusions are drawn in Section 6.
1.1 Notations
Vectors and matrices are denoted by boldface small and capital letters, respectively; the transpose, complex conjugate, Hermitian, inverse, and pseudo-inverse of A are denoted by A^{ T }, A^{∗}, A^{ H }, A^{−1}, and A^{ † }, respectively; ∥ a ∥ denotes the Euclidean norm of the vector a; diag{a} is the diagonal matrix with diagonal elements given by a, while diag{A} is the vector with entries taken from the diagonal elements of A; I is the identity matrix; and E {·} is the statistical expectation. Moreover, basic notations and definitions from majorization theory can be found in Appendix 1.
2 System model
where v_{ r } and v_{ d } are the additive white complex Gaussian noise at the relay and destination, respectively, i.e., ${\mathbf{v}}_{r}\sim \mathcal{C}\mathcal{N}(\mathbf{0},{\sigma}_{r}^{2}{\mathbf{I}}_{N})$ and ${\mathbf{v}}_{d}\sim \mathcal{C}\mathcal{N}(\mathbf{0},{\sigma}_{d}^{2}{\mathbf{I}}_{N})$. Without loss of generality, we set ${\sigma}_{d}^{2}={\sigma}_{r}^{2}=1$. Since the correlation and power can be designed through B, the data streams from the source can be assumed independent of each other, i.e., E{xx^{ H }} = I.
As in [15], the QoS measurement here is taken as the MSE of each individual data stream. Let us denote the QoS vector as ρ = [ρ_{1},…,ρ_{ L }]^{ T }, i.e., the MSE of the i th data stream is required to be smaller than ρ_{ i }. Note that ρ_{ i }<1 is necessary to avoid trivial solutions.
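For concreteness, the per-stream MSEs under a linear MMSE receiver can be computed numerically. The sketch below assumes the standard two-hop AF model y_d = H_r F (H_s B x + v_r) + v_d with unit noise variances; the dimensions and the placeholder precoders B and F are illustrative stand-ins, not the optimized designs derived later:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, P, L = 4, 4, 4, 3  # relay rx / source / destination antennas, data streams

Hs = rng.normal(size=(N, M)) + 1j * rng.normal(size=(N, M))  # source -> relay
Hr = rng.normal(size=(P, N)) + 1j * rng.normal(size=(P, N))  # relay -> destination
B = 0.5 * rng.normal(size=(M, L))   # placeholder source precoder
F = 0.5 * rng.normal(size=(N, N))   # placeholder relay amplifying matrix

H = Hr @ F @ Hs @ B                 # effective end-to-end channel
# total noise covariance at destination: relay noise forwarded through Hr F, plus local noise
C = Hr @ F @ F.conj().T @ Hr.conj().T + np.eye(P)

# MSE matrix of the linear MMSE receiver: E = (I + H^H C^{-1} H)^{-1}
E = np.linalg.inv(np.eye(L) + H.conj().T @ np.linalg.solve(C, H))
mse = np.real(np.diag(E))           # per-stream MSEs, each in (0, 1)
print(mse)
```

Each diagonal entry of E is the MSE of one data stream, so the QoS constraints read diag(E) ≤ ρ element-wise.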
where ‘≤’ denotes the element-wise inequality when used between vectors.
3 Optimization problem
The goal is to find the optimal B and F to minimize the overall power spent by the source and relay and at the same time meet the QoS requirements.
Unfortunately, the problem is nonconvex and cannot be solved in an efficient way.
3.1 Equivalent problem
where ≻^{ w } means weak majorization, and the details can be found in Appendix 1.
Theorem 1. Problems (P1) and (P2) are equivalent.
Proof. The idea is to prove that for each feasible point of (P1), there is a corresponding feasible point in (P2) that yields the same objective value and vice versa.
where $\mathbf{Z}\triangleq {(\mathbf{I}+{\mathbf{M}}^{H}\mathbf{R}\mathbf{M})}^{-1}$. From Z_{ i i }≤ ρ_{ i }, one can conclude that diag{Z}≻^{ w }ρ. We further know from Lemma 2 in Appendix 1 that λ{Z} ≻^{ w }diag{Z}. Therefore, λ{Z}≻^{ w } ρ, and we reach the conclusion that for any feasible point (B,F) in (P1), there is always a corresponding feasible point $(\stackrel{~}{\mathbf{B}},\mathbf{F})$ in (P2) that gives the same objective value.
(P2) → (P1): Define $\stackrel{~}{\mathbf{Z}}={(\mathbf{I}+{\stackrel{~}{\mathbf{M}}}^{H}\mathbf{R}\stackrel{~}{\mathbf{M}})}^{-1}$ and assume that $(\stackrel{~}{\mathbf{B}},\mathbf{F})$ is a feasible point of (P2). From (P2), we know that $\mathit{\text{diag}}\left\{\stackrel{~}{\mathbf{Z}}\right\}=\mathit{\lambda}\left\{\stackrel{~}{\mathbf{Z}}\right\}{\succ}^{w}\mathit{\rho}$ holds. From Lemma 3 in Appendix 1, we know that there exists a vector c satisfying both $\mathit{\lambda}\left\{\stackrel{~}{\mathbf{Z}}\right\}\succ \mathbf{c}$ and c ≤ ρ. From Lemma 2, we know that for each $\mathbf{c}\prec \mathit{\lambda}\left\{\stackrel{~}{\mathbf{Z}}\right\}$, there exists a matrix W with diag{W} = c and $\mathit{\lambda}\left\{\mathbf{W}\right\}=\mathit{\lambda}\left\{\stackrel{~}{\mathbf{Z}}\right\}$. Let $\mathbf{W}={\mathbf{Q}}^{H}\stackrel{~}{\mathbf{Z}}\mathbf{Q}$ and define $\mathbf{B}=\stackrel{~}{\mathbf{B}}\mathbf{Q}$. Then, diag{(I+M^{ H }R M)^{− 1}} = diag{W} ≤ ρ. Moreover, the objective function of (P1) with (B,F) is the same as the objective function of (P2). □
3.2 Processing matrix variables
Based on Theorem 1, we can solve (P2) instead of (P1).
where U_{ X } and U_{ Y } are N × L and P × L orthonormal matrices, respectively. Throughout this paper, we always sort the singular values and eigenvalues in increasing order. Note that it is still possible that Λ_{ X } or Λ_{ Y } contains some zero entries.
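Since most numerical libraries return singular values in decreasing order, a small helper is needed to match the increasing-order convention adopted here. A sketch (the function name is ours):

```python
import numpy as np

def svd_increasing(A):
    """SVD A = U diag(s) V^H with singular values sorted in increasing order,
    matching the ordering convention used in the paper."""
    U, s, Vh = np.linalg.svd(A, full_matrices=False)  # numpy sorts decreasingly
    idx = np.argsort(s)                               # permutation to increasing order
    return U[:, idx], s[idx], Vh[idx, :]              # permute all factors consistently

rng = np.random.default_rng(1)
A = rng.normal(size=(5, 3))
U, s, Vh = svd_increasing(A)
assert np.all(np.diff(s) >= 0)              # increasing order
assert np.allclose(U @ np.diag(s) @ Vh, A)  # still a valid factorization
```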
where V_{ X } can be any L × L unitary matrix, and V_{ Y } can be any N × L orthonormal matrix. These two matrices will be designed later to fulfill the optimality requirements.
be the SVDs of H_{ s } and H_{ r }, respectively. We further partition the singular matrices as ${\mathbf{U}}_{s}\triangleq [{\mathbf{U}}_{s,2},{\mathbf{U}}_{s,1}]$ and ${\mathbf{U}}_{r}\triangleq [{\mathbf{U}}_{r,2},{\mathbf{U}}_{r,1}]$, in which U_{s,1} and U_{r,1} each contain L columns.
There are three cases to be discussed:

If N = M = L, then $\stackrel{~}{\mathbf{B}}$ can be re-expressed from (18) as
$$\stackrel{~}{\mathbf{B}}={\mathbf{V}}_{s}{\mathit{\Sigma}}_{s}^{-1}{\mathbf{U}}_{s,1}^{H}{\mathbf{U}}_{X}{\mathbf{\Lambda}}_{X}^{\frac{1}{2}}{\mathbf{V}}_{X}^{H}.\qquad(19)$$

If M > N = L, (18) can be simplified as
$$\left[\begin{matrix}{\mathit{\Sigma}}_{s}^{-1}&{\mathbf{0}}_{L\times (M-L)}\end{matrix}\right]{\mathbf{V}}_{s}^{H}\stackrel{~}{\mathbf{B}}={\mathbf{U}}_{s,1}^{H}{\mathbf{U}}_{X}{\mathbf{\Lambda}}_{X}^{\frac{1}{2}}{\mathbf{V}}_{X}^{H}.\qquad(20)$$
The solution for $\stackrel{~}{\mathbf{B}}$ is not unique, and the one that minimizes the objective function should be chosen. Basically, we need to solve
$$\begin{aligned}\min_{\stackrel{~}{\mathbf{B}}}\quad&\text{tr}\left(\stackrel{~}{\mathbf{B}}{\stackrel{~}{\mathbf{B}}}^{H}\right)\\ \text{subject to}\quad&\left[\begin{matrix}{\mathit{\Sigma}}_{s}^{-1}&{\mathbf{0}}_{L\times (M-L)}\end{matrix}\right]{\mathbf{V}}_{s}^{H}\stackrel{~}{\mathbf{B}}={\mathbf{U}}_{s,1}^{H}{\mathbf{U}}_{X}{\mathbf{\Lambda}}_{X}^{\frac{1}{2}}{\mathbf{V}}_{X}^{H}.\end{aligned}\qquad(21)$$
Note that the second term of the objective function is not included in (21) since it will be taken care of by F. From Lemma 9 in [14], the optimal structure of $\stackrel{~}{\mathbf{B}}$ is as follows:
$$\stackrel{~}{\mathbf{B}}={\mathbf{V}}_{s}{\left[\begin{matrix}{\mathit{\Sigma}}_{s}^{-1}&{\mathbf{0}}_{L\times (M-L)}\end{matrix}\right]}^{T}{\mathbf{U}}_{s,1}^{H}{\mathbf{U}}_{X}{\mathbf{\Lambda}}_{X}^{\frac{1}{2}}{\mathbf{V}}_{X}^{H}.\qquad(22)$$

If M > L and N > L, then (20) holds if and only if ${\mathbf{U}}_{s,2}^{H}{\mathbf{U}}_{X}={\mathbf{0}}_{(N-L)\times L}$. In this case, (22) is still the optimal structure for $\stackrel{~}{\mathbf{B}}$. Please refer to [14] for the detailed derivation.
where the pseudo-inverses ${\mathbf{\Lambda}}_{s}^{\dagger}$ and ${\mathbf{\Lambda}}_{r}^{\dagger}$ are ${\left[{\mathit{\Sigma}}_{s}^{-1}\;{\mathbf{0}}_{L\times (N-L)}\right]}^{T}$ and ${\left[{\mathit{\Sigma}}_{r}^{-1}\;{\mathbf{0}}_{L\times (M-L)}\right]}^{T}$, respectively.
Remark 1. Representing the problem in terms of these new variables decreases the coupling between the objective function and the constraints and thus facilitates the optimization procedure. For example, U_{ Y } is only involved in the objective function, not in the constraints. Hence, we can adjust U_{ Y } to change the objective without affecting the constraints. Conversely, V_{ X } and V_{ Y } only affect the constraints, not the objective function.
Remark 2. On the other hand, the diagonality constraint in (P2) can be satisfied by adjusting only V_{ X }, so that the other constraints and the objective function are not affected.
3.3 Simplification
In the following subsections, we derive the optimum structures for V_{ X }, V_{ Y }, U_{ X }, and U_{ Y }, respectively.
3.3.1 Optimal V_{ X }and V_{ Y }
which indicates that $\mathit{\text{diag}}\{\mathbf{I}-\widehat{\mathbf{G}}\}$ is a simultaneous lower bound for all the elements in diag{I−G}. Therefore, $\mathbf{G}=\widehat{\mathbf{G}}$ must hold at the optimal point, and this can be achieved by taking V_{ Y } = U_{ X } and V_{ X } = I.
3.3.2 Optimal U_{ X }and U_{ Y }
and the minimum is achieved when U_{ Y } = U_{r,1}.
Replacing the corresponding term in (40) by the right-hand side (RHS) of (41) or (42) while keeping the same objective function, we obtain the upper bound or the lower bound problem of the original problem, respectively. Both the lower bound and the upper bound problems can be solved by existing convex optimization tools based on the interior point method, e.g., CVX [20, 23].
4 Algorithm for the upper bound problem
It is hard to obtain a closed-form solution even with the Karush-Kuhn-Tucker (KKT) conditions. Observing the symmetry, we can apply the primal decomposition method [23] to break the problem down into two simpler subproblems.
4.1 The decomposition method
In many cases, the subproblems do not have closed-form solutions. Therefore, the search for the optimum T in the master problem can be carried out iteratively using the subgradient method. Note that convergence of the decomposed problem to the global optimal point is guaranteed since (43) is convex.
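As an illustration of the primal decomposition idea (not the paper's exact subproblems, which carry weak-majorization constraints), consider a simplified per-stream variant: a coupling variable t splits the per-stream MSE budget ρ between the two hops via 1/(1+x) ≤ t and 1/(1+y) ≤ ρ − t, so the subproblems fix x and y in closed form and the master problem over t even admits a closed form in this toy case:

```python
import numpy as np

def master_split(a, b, rho):
    """Optimal per-stream split of the MSE budget rho between the two hops
    under the relaxed constraint 1/(1+x) + 1/(1+y) <= rho.
    Subproblems: x(t) = 1/t - 1, y(t) = 1/(rho - t) - 1; the master minimizes
    a*x(t) + b*y(t), with stationary point t* = rho / (1 + sqrt(b/a))."""
    t = rho / (1.0 + np.sqrt(b / a))
    return t, 1.0 / t - 1.0, 1.0 / (rho - t) - 1.0

a, b, rho = 2.0, 0.5, 0.4          # illustrative per-stream weights and QoS target
t_star, x, y = master_split(a, b, rho)

# sanity check the closed form against a fine grid over the coupling variable t
ts = np.linspace(1e-4, rho - 1e-4, 100000)
phi = a * (1 / ts - 1) + b * (1 / (rho - ts) - 1)
assert abs(ts[np.argmin(phi)] - t_star) < 1e-3
```

In the paper, the master variable T is instead updated with subgradient steps because the majorization-constrained subproblems lack such a closed form.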
4.2 Solving the subproblems
Next, we propose an efficient algorithm that finds a solution satisfying (53), (54), and (55) simultaneously.
Algorithm 1 Multi-level water-filling (MLWF) algorithm
We next need to prove the optimality of the above MLWF algorithm. Since the problem is convex, the output of the algorithm is the optimal solution if and only if all of the KKT conditions are satisfied. The conditions in (55) are satisfied by step 4 of the algorithm. Condition (53) is satisfied by the nature of the water-filling algorithm, that is, the water levels are always nonnegative. The MLWF algorithm satisfies (54) as well, as seen from the following lemma.
Lemma 1. Successive water levels achieved by the proposed MLWF algorithm are ordered decreasingly.
Proof. The algorithm has two loops: one is the inner loop ‘steps 3 → 4 → 5 → 3’ and the other is the outer loop ‘steps 3 → 4 → 5 → 6 → 3.’ Each time the inner loop finishes, one water level $\stackrel{~}{\mu}(L,H)$ is obtained. Then, we proceed to compute the other water levels. In the inner loop, we first adopt the water level given by the standard water-filling algorithm. Our hypothesis is that this water level satisfies all of the conditions in (55). We then check the hypothesis by searching for a k = l_{0} at which the corresponding condition in (55) is violated.
Therefore, using $\stackrel{~}{\mu}(L,l)$ in both (58) and (59) yields $\sum _{i={l}_{0}+1}^{l}\frac{1}{{x}_{i}+1}<\sum _{i={l}_{0}+1}^{l}{\rho}_{i}$. On the other hand, $\stackrel{~}{\mu}({l}_{0}+1,l)$ gives $\sum _{i={l}_{0}+1}^{l}\frac{1}{{x}_{i}+1}=\sum _{i={l}_{0}+1}^{l}{\rho}_{i}$. Therefore, we infer $\stackrel{~}{\mu}({l}_{0}+1,l)<\stackrel{~}{\mu}(L,l)$. Together with (57), we conclude $\stackrel{~}{\mu}({l}_{0}+1,l)<\stackrel{~}{\mu}(L,{l}_{0})$. □
Meanwhile, the subproblem Φ_{ y }(t) can be solved with the same MLWF, which will not be restated here.
Algorithm 2 The master algorithm
5 Simulation results
 1. Diagonalization method. We reduce the problem to a scalar one using the SVDs of the channel matrices. If we impose the structures B = U_{ s }Λ_{ B } and $\mathbf{F}={\mathbf{U}}_{r}{\mathbf{\Lambda}}_{F}{\mathbf{V}}_{s}^{H}$, the problem reduces to a simple scalar problem; in effect, we choose the precoding matrices to match the channel matrices. The optimization problem (10) then simplifies to
$$\begin{aligned}\text{(P4):}\quad\min_{x_{i},\,y_{i}}\quad&\sum_{i=1}^{L}a_{i}x_{i}+b_{i}y_{i}\\ \text{subject to}\quad&\frac{y_{i}+x_{i}+1}{y_{i}+x_{i}+y_{i}x_{i}+1}\le \rho_{i}\quad\forall i\\ &x_{i}\ge 0,\;y_{i}\ge 0,\quad\forall i,\end{aligned}\qquad(60)$$
where a_{ i }, b_{ i }, x_{ i }, and y_{ i } are the i th diagonal entries of ${\left({\mathbf{\Lambda}}_{s}^{\u2020}\right)}^{2}$, ${\left({\mathbf{\Lambda}}_{r}^{\u2020}\right)}^{2}$, Λ_{ B }, and Λ_{ F }, respectively. Note that the problem in (60) is a special case of (P3) in (39).
 2. Naive method. We consider a solution that satisfies all of the constraints in (43) with equality. In this method, x and y are given by
$$\mathbf{x}=\mathbf{y}=2/\mathit{\rho}-\mathbf{1},$$
where the division and subtraction are element-wise.
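Assuming the naive allocation reads x_i = y_i = 2/ρ_i − 1 for each stream, a quick numeric check confirms it is feasible for the exact per-stream MSE in (60), with a margin of ρ_i²/4:

```python
import numpy as np

rho = np.array([0.2, 0.5, 0.8])   # example QoS targets, each < 1
x = 2.0 / rho - 1.0               # naive equal split between the two hops
y = x

# exact per-stream MSE from the constraint in (60)
mse = (x + y + 1) / (x + y + x * y + 1)
print(mse)                        # algebraically equals rho - rho**2 / 4

assert np.all(mse <= rho)                     # all QoS targets are met
assert np.allclose(mse, rho - rho**2 / 4)     # with slack rho**2 / 4 per stream
```

The slack shows why the naive method overspends power: it enforces a stricter MSE than required on every stream.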
This is because the smallest QoS constraint drags down the overall performance and thus demands more power. Once again, we find that the upper bound and the lower bound are quite close to each other, especially at high SNR values. Meanwhile, the proposed method coincides with the upper bound obtained from CVX. Moreover, the advantage of the proposed method over the suboptimal diagonalization method and the naive method is evident.
Average time spent by each approach

Case         | Proposed decomposition (s) | CVX (s)
Equal MSE    | 0.4377                     | 3.2094
Unequal MSE  | 0.2237                     | 2.6747
Remark 3. Note that simulation timing alone cannot serve as a rigorous complexity analysis. Nevertheless, the large difference in running time clearly indicates the much higher efficiency of the proposed decomposition method over the general-purpose convex optimization tool.
6 Conclusion
In this paper, we considered the joint design of the source and relay precoding matrices in a two-hop AF MIMO relay network. We minimized the power consumption subject to a set of predefined QoS constraints on each data stream. Using matrix calculus and majorization theory, we simplified the original matrix-valued problem to a simpler scalar one and proposed two bounding problems that are convex and can be solved efficiently. We specifically designed a primal decomposition method to solve the upper bound problem with less complexity than directly applying the interior point method. Numerical examples were provided to corroborate the proposed studies.
Appendix 1
Basics of majorization theory
Here, we briefly introduce the basics of majorization theory; a more comprehensive discussion can be found in [12].
represents the elements of x in a decreasing order.
and is denoted by x≺y or y≻x.
We denote this with x≺ _{ w } y (or equivalently y≻ _{ w } x).
where $\overline{\mathit{\text{diag}}}{\left\{\mathbf{M}\right\}}_{i}=\text{mean}\left(\mathit{\text{diag}}\left\{\mathbf{M}\right\}\right)$.
Conversely, given vectors a and b with a ≺ b, there exists an n × n Hermitian matrix M whose diagonal elements are given by a and whose eigenvalues are given by b.
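This is the Schur-Horn relation between the diagonal and the eigenvalues of a Hermitian matrix, and the forward direction (eigenvalues majorize the diagonal) is easy to check numerically; the `majorizes` helper below is ours:

```python
import numpy as np

def majorizes(y, x):
    """True if y majorizes x: equal sums, and every partial sum of the
    decreasingly sorted y dominates the corresponding partial sum of x."""
    xs, ys = np.sort(x)[::-1], np.sort(y)[::-1]
    return np.isclose(xs.sum(), ys.sum()) and np.all(
        np.cumsum(ys) >= np.cumsum(xs) - 1e-10)

rng = np.random.default_rng(2)
A = rng.normal(size=(5, 5)) + 1j * rng.normal(size=(5, 5))
M = (A + A.conj().T) / 2              # random Hermitian matrix

eig = np.linalg.eigvalsh(M)           # real eigenvalues of a Hermitian matrix
diag = np.real(np.diag(M))            # its (real) diagonal
assert majorizes(eig, diag)           # lambda{M} majorizes diag{M}
```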
Appendix 2
Standard waterfilling algorithm
The waterfilling algorithm that yields the optimum x_{ i }’s is given by the following:

Input: The number of positive eigenvalues N, all inverse eigenvalues ${\left\{{a}_{i}\right\}}_{i=1}^{N}$, and a positive feasible initial vector for ρ_{ i }.

Output: Allocated powers ${\left\{{x}_{i}\right\}}_{i=1}^{N}$ and the optimum water level μ.
 1.
Sort all of the eigenvalues in increasing order and set a_{N + 1} = ∞. Set L = N.
 2.
If a _{ L } = a _{L+1}, then L = L − 1. Set μ = a _{ N }
 3.
If $\mu \le \frac{\sum _{i=1}^{N}{a}_{i}^{1/2}}{{(\rho -(N-L))}^{2}}$, then the optimum solution is $\mu =\frac{\sum _{i=1}^{N}{a}_{i}^{1/2}}{{(\rho -(N-L))}^{2}}$ and ${x}_{i}={\mu}^{1/2}{a}_{i}^{-1/2}-1$; otherwise, go to the next step.
 4.
Set L = L − 1 and μ = a_{ L }, and go back to step 3.
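The procedure above can be sketched as an active-set search. The version below assumes the problem takes the form min Σ a_i x_i subject to Σ 1/(1+x_i) ≤ ρ with x_i ≥ 0, whose KKT conditions give x_i = √(μ/a_i) − 1 on the active set and x_i = 0 elsewhere; treat it as a sketch under that assumption:

```python
import numpy as np

def waterfill(a, R):
    """Minimize sum_i a_i * x_i subject to sum_i 1/(1+x_i) <= R, x_i >= 0.
    KKT: active streams satisfy a_i (1+x_i)^2 = mu, i.e. x_i = sqrt(mu/a_i) - 1;
    streams whose a_i exceeds the water level mu stay at zero."""
    a = np.asarray(a, dtype=float)
    N = len(a)
    order = np.argsort(a)                 # cheapest streams (smallest a_i) first
    for L in range(N, 0, -1):             # try active sets of decreasing size
        S = order[:L]
        slack = R - (N - L)               # each inactive stream contributes 1/(1+0) = 1
        if slack <= 0:
            continue
        mu = (np.sqrt(a[S]).sum() / slack) ** 2   # water level making the constraint tight
        x = np.zeros(N)
        x[S] = np.sqrt(mu / a[S]) - 1.0
        if np.all(x[S] >= 0):             # water level covers the whole active set
            return x, mu
    return np.zeros(N), None              # constraint already met at x = 0

a = [1.0, 2.0, 9.0]                       # illustrative per-stream weights
x, mu = waterfill(a, R=1.5)
assert abs(np.sum(1.0 / (1.0 + x)) - 1.5) < 1e-9   # constraint met with equality
```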
Endnotes
^{a}The dual-hop relay network is of particular importance and has been the most frequently studied type in the past decades.
^{b}We assume perfect channel knowledge in this paper, while the channel estimation can be performed by the method in [24].
Declarations
Acknowledgements
This work was supported in part by the National Basic Research Program of China (973 Program) under grants 2013CB336600 and 2012CB316102, the National Natural Science Foundation of China under grant 61201187, and the Tsinghua University Initiative Scientific Research Program under grant 20121088074. Jafar Mohammadi’s work was also supported by Helmholtz Research School on Security Technologies. Y. Rong’s work was supported under Australian Research Council’s Discovery Projects funding scheme (project numbers DP110100736, DP110102076).
Authors’ Affiliations
References
 Laneman JN, Wornell GW: Distributed space-time block coded protocols for exploiting cooperative diversity in wireless networks. IEEE Trans. Inform. Theory 2003, 49: 2415-2425.
 Laneman JN, Tse DNC, Wornell GW: Cooperative diversity in wireless networks: efficient protocols and outage behavior. IEEE Trans. Inform. Theory 2004, 50: 3062-3080.
 Van Der Meulen EC: Three-terminal communication channels. Adv. Appl. Prob. 1971, 3: 120-154.
 Information transmission through a channel with relay. Technical Report B76-7, The Aloha System, University of Hawaii, Honolulu; 1976.
 Cover T, El Gamal A: Capacity theorems for the relay channel. IEEE Trans. Inform. Theory 1979, 25: 572-584.
 Telatar IE: Capacity of multi-antenna Gaussian channels. Eur. Trans. Telecom. 1999, 10: 585-595.
 Tang X, Hua Y: Optimal design of non-regenerative MIMO wireless relays. IEEE Trans. Wireless Commun. 2007, 6(4): 1398-1407.
 Hammerstrom I, Wittneben A: Power allocation schemes for amplify-and-forward MIMO-OFDM relay links. IEEE Trans. Wireless Commun. 2007, 6(8): 2798-2802.
 Yang S, Belfiore JC: Towards the optimal amplify-and-forward cooperative diversity scheme. IEEE Trans. Inform. Theory 2007, 53(9): 3114-3126.
 Khoshnevis B, Yu W, Adve R: Grassmannian beamforming for MIMO amplify-and-forward relaying. IEEE J. Sel. Areas Commun. 2008, 26: 1397-1407.
 Hua Y: An overview of beamforming and power allocation for MIMO relays. In Proceedings of the IEEE Military Communications Conference, San Jose, CA; 2010: 375-380.
 Marshall AW, Olkin I: Inequalities: Theory of Majorization and Its Applications. Academic Press, New York; 1979.
 Rong Y, Tang X, Hua Y: A unified framework for optimizing linear non-regenerative multicarrier MIMO relay communication systems. IEEE Trans. Signal Process. 2009, 57: 4837-4851.
 Rong Y, Hua Y: Optimality of diagonalization of multi-hop MIMO relays. IEEE Trans. Wireless Commun. 2009, 8: 6068-6077.
 Palomar DP, Lagunas MA, Cioffi JM: Optimum linear joint transmit-receive processing for MIMO channels with QoS constraints. IEEE Trans. Signal Process. 2004, 52(5): 1179-1197.
 Zhang R, Chai CC, Liang Y: Joint beamforming and power control for multiantenna relay broadcast channel with QoS constraints. IEEE Trans. Signal Process. 2009, 57(2): 726-737.
 Rong Y: Multi-hop non-regenerative MIMO relays: QoS considerations. IEEE Trans. Signal Process. 2011, 59(1): 290-303.
 Fu Y, Yang L, Zhu W, Liu C: Optimum linear design of two-hop MIMO relay network with QoS requirements. IEEE Trans. Signal Process. 2011, 59(5): 2257-2269.
 Mohammadi J, Gao F, Rong Y: Design of amplify and forward MIMO relay networks with QoS constraint. In Proceedings of the IEEE Global Telecommunications Conference (GLOBECOM 2010), Miami, USA; 2010: 610.
 Grant M, Boyd S: CVX: Matlab software for disciplined convex programming, version 2.0 beta. 2012. http://cvxr.com/cvx
 Kay SM: Fundamentals of Statistical Signal Processing: Estimation Theory. Prentice-Hall, Englewood Cliffs; 1993.
 Petersen KB, Pedersen MS: The Matrix Cookbook. 2007.
 Boyd S, Vandenberghe L: Convex Optimization. Cambridge University Press, New York; 2004. Available online at http://www.stanford.edu/~boyd/cvxbook.html
 Gao F, Cui T, Nallanathan A: On channel estimation and optimal training design for amplify and forward relay network. IEEE Trans. Wireless Commun. 2008, 7(5): 1907-1916.
Copyright
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.