
General description

This set of codes may be used to optimize a smooth (differentiable) cost function $ {\mathcal J}({\mathbf W})$ over the Lie group of $ n \times n$ unitary matrices. The optimization problem with unitary matrix constraint is stated as:

$\displaystyle \underset{{\mathbf W}}{\textrm{minimize (or maximize)}} \quad {\mathcal J}({\mathbf W}) \quad \textrm{subject to} \quad {\mathbf W}^H{\mathbf W} = {\mathbf W}{\mathbf W}^H = {\mathbf I}_n,$
where $ {\mathbf W}$ is an $ n \times n$ unitary matrix and $ {\mathbf I}_n$ is the $ n \times n$ identity matrix.

The codes we provide can be used to either minimize or maximize an arbitrary smooth cost function under unitary matrix constraint. The proposed Riemannian gradient algorithms and line search methods are implemented step-by-step, as in the Tables given in [1,2].

All combinations of the Riemannian gradient methods and line search methods proposed in [1,2] are tested. In practical applications, just one of the algorithms (SD/SA, CG-PR/CG-FR) together with one of the line search methods (Armijo, polynomial-based, or DFT-based) is sufficient to solve the problem at hand. The choice of algorithm and line search may be made based on experimental testing, since the difference in performance depends on the cost function. In general, based on our experience, we recommend first CG-PR with the polynomial-based line search, and second SD/SA with the DFT-based line search.
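For orientation, the algorithms of this type move along rotations (geodesics) of the unitary group; a representative steepest descent update, in the spirit of the algorithms tabulated in [1], is

$\displaystyle {\mathbf G}_k = \boldsymbol{\Gamma}_k{\mathbf W}_k^H - {\mathbf W}_k\boldsymbol{\Gamma}_k^H, \qquad {\mathbf W}_{k+1} = \exp(-\mu_k{\mathbf G}_k)\,{\mathbf W}_k,$

where $ \boldsymbol{\Gamma}_k$ is the Euclidean gradient of $ {\mathcal J}$ at $ {\mathbf W}_k$ , $ {\mathbf G}_k$ is the corresponding Riemannian gradient direction (a skew-Hermitian matrix), and $ \mu_k$ is the step size returned by the line search. The sign of the exponent is reversed for maximization (SA); see [1,2] for the exact algorithms and step size rules.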

As a test example, the Brockett cost function $ {\mathcal J}_\textrm{B}({\mathbf W})=\mathrm{trace}\{{\mathbf W}^H{\mathbf S}{\mathbf W}{\mathbf N}\}$ is minimized/maximized, where $ {\mathbf S}$ is an $ n \times n$ positive Hermitian matrix and $ {\mathbf N}$ is a diagonal matrix whose diagonal elements are $ 1,\ldots,n$ . The solution $ {\mathbf W}$ of this optimization problem is known: its columns are the eigenvectors of $ {\mathbf S}$ . Therefore, the matrix $ {\mathbf W}^H{\mathbf S}{\mathbf W}$ is a diagonal matrix that contains the eigenvalues of $ {\mathbf S}$ along its diagonal. The eigenvectors can be obtained either by minimizing or by maximizing the Brockett cost function; only the ordering of the eigenvalues along the diagonal of $ {\mathbf W}^H{\mathbf S}{\mathbf W}$ differs (increasing/decreasing).
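As a rough illustration of this test case (and not a substitute for the provided scripts, which implement the full algorithms of [1,2]), the following self-contained MATLAB sketch minimizes the Brockett cost with a fixed-step Riemannian steepest descent iteration and compares the result with eig. The step size and iteration count are illustrative only; the provided codes select the step size by line search.

n  = 4;
A  = randn(n) + 1i*randn(n);
S  = A*A';  S = S/norm(S);        % positive Hermitian test matrix (normalized)
N  = diag(1:n);                   % diagonal matrix with entries 1,...,n
W  = eye(n);                      % initial unitary matrix
mu = 0.05;                        % fixed step size (the codes use a line search instead)
for k = 1:1000
    Gamma = S*W*N;                % Euclidean gradient of J_B with respect to conj(W)
    G = Gamma*W' - W*Gamma';      % Riemannian gradient direction (skew-Hermitian)
    W = expm(-mu*G)*W;            % rotational update: W remains unitary
end
disp(sort(real(diag(W'*S*W))))    % approaches the eigenvalues of S ...
disp(sort(real(eig(S))))          % ... as computed by eig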

NOTE: This set of codes was designed such that the Riemannian optimization scripts are separated from the cost-function-specific scripts. Therefore, these codes can easily be used to optimize other smooth cost functions, simply by changing the cost-function-specific parameters/scripts. The cost-function-specific parts are: the cost function evaluation, the gradient computation, and the order of the cost function in the coefficients of $ {\mathbf W}$ (details are provided in the next section).
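For the Brockett example, these cost-function-specific pieces might look as sketched below. The function names used here (brockett_cost, brockett_egrad) are hypothetical; the actual script names and interfaces are described in the next section.

function J = brockett_cost(W, S, N)
% Cost function evaluation: J_B(W) = trace{W^H S W N}
J = real(trace(W'*S*W*N));
end

function Gamma = brockett_egrad(W, S, N)
% Euclidean gradient of J_B with respect to conj(W)
Gamma = S*W*N;
end

% Order of the cost function in the coefficients of W: the Brockett cost
% is quadratic in the entries of W (order 2); this order is the parameter
% needed by the polynomial-based line search.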


Traian Emanuel Abrudan 2014-03-06