Givens rotation


In numerical linear algebra, a Givens rotation is a rotation in the plane spanned by two coordinate axes. Givens rotations are named after Wallace Givens, who introduced them to numerical analysts in the 1950s while he was working at Argonne National Laboratory.

As action on matrices


A Givens rotation acting on a matrix from the left is a row operation, moving data between rows but always within the same column. Unlike the elementary operation of row-addition, a Givens rotation changes both of the rows addressed by it. To understand how it is a rotation, one may denote the elements of one target row by a_1 through a_n and the elements of the other target row by b_1 through b_n: then the effect of a Givens rotation is to rotate each subvector (a_k, b_k) by the same angle. As with row-addition, algorithms often choose this angle so that one specific element becomes zero, and whatever happens in the remaining columns is regarded as an acceptable side effect.
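
The following MATLAB/GNU Octave sketch (an illustration, not code from the article; the function name givens_left and the indices i, j, k are arbitrary) applies such a rotation to rows i and j of a matrix, choosing the angle so that the element in column k of row j becomes zero while only those two rows are touched.

function A = givens_left(A, i, j, k)
    % Rotate rows i and j of A so that A(j, k) becomes zero.
    a = A(i, k);
    b = A(j, k);
    r = hypot(a, b);            % length of the subvector (a, b)
    if r == 0
        return                  % nothing to rotate
    end
    c = a / r;
    s = b / r;
    % every column holds a subvector that is rotated by the same angle
    A([i j], :) = [c s; -s c] * A([i j], :);
end

% Example: B = givens_left(magic(4), 1, 2, 1) zeroes B(2, 1).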

A Givens rotation acting on a matrix from the right is instead a column operation, moving data between two columns but always within the same row. As with action from the left, it rotates each subvector by the same angle, but here the elements a_k and b_k occur in row k of the two target columns rather than in column k of the two target rows. Some algorithms, especially those concerned with preserving matrix similarity, apply Givens rotations as a conjugate action: both rotating by one angle between two rows, and rotating by the same angle between the corresponding columns. In this case the effect on the four elements affected by both rotations is more complicated; a Jacobi rotation is such a conjugate action chosen to zero the two off-diagonal elements among these four.
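
As a sketch of the conjugate action (illustrative MATLAB/GNU Octave code, not from the article; the matrix size, indices and variable names are arbitrary assumptions), the following lines apply a Jacobi rotation to a symmetric matrix, choosing the angle so that the two off-diagonal elements among the four doubly affected entries become zero:

n = 4; i = 2; j = 4;
A = randn(n); A = A + A';                        % symmetric test matrix
theta = 0.5 * atan2(2 * A(i, j), A(i, i) - A(j, j));
c = cos(theta); s = sin(theta);
G = eye(n);
G([i j], [i j]) = [c -s; s c];
B = G' * A * G;       % rotate between rows i, j and between columns i, j by the same angle
% B(i, j) and B(j, i) are now zero up to rounding; B stays symmetric and similar to A.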

The main use of Givens rotations in numerical linear algebra is to transform vectors or matrices into a special form with zeros in certain coefficients. This effect can, for example, be employed for computing the QR decomposition of a matrix. One advantage over Householder transformations is that they can easily be parallelised, and another is that often for very sparse matrices they have a lower operation count.

Matrix representation


A Givens rotation is represented by a matrix of the form

G(i, j, θ) = \begin{bmatrix}
  1 & \cdots & 0 & \cdots & 0 & \cdots & 0 \\
  \vdots & \ddots & \vdots & & \vdots & & \vdots \\
  0 & \cdots & c & \cdots & -s & \cdots & 0 \\
  \vdots & & \vdots & \ddots & \vdots & & \vdots \\
  0 & \cdots & s & \cdots & c & \cdots & 0 \\
  \vdots & & \vdots & & \vdots & \ddots & \vdots \\
  0 & \cdots & 0 & \cdots & 0 & \cdots & 1
\end{bmatrix}

where c = cos θ and s = sin θ appear at the intersections of the ith and jth rows and columns. That is, for fixed i > j, the non-zero elements of the Givens matrix are given by:

g_kk = 1 for k ≠ i, j
g_jj = c and g_ii = c
g_ji = −s and g_ij = s

The product G(i, j, θ)x represents a counterclockwise rotation of the vector x in the (i, j) plane of θ radians, hence the name Givens rotation.
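
A minimal MATLAB/GNU Octave sketch of this definition (illustrative; the function name givens_matrix is not from the article) builds the n × n matrix explicitly:

function G = givens_matrix(n, i, j, theta)
    % Givens matrix G(i, j, theta) for i > j, as defined above.
    G = eye(n);
    c = cos(theta); s = sin(theta);
    G(j, j) = c;  G(i, i) = c;
    G(j, i) = -s; G(i, j) = s;
end

% Example: G = givens_matrix(5, 4, 2, pi/6); G * x rotates the (x(2), x(4))
% subvector counterclockwise by pi/6 and leaves the other entries of x unchanged.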

Stable calculation


When a Givens rotation matrix, G(i, j, θ), multiplies another matrix, A, from the left, G A, only rows i and j of A are affected. Thus we can restrict attention to the following counterclockwise problem: given a and b, find c = cos θ and s = −sin θ such that

\begin{bmatrix} c & s \\ -s & c \end{bmatrix} \begin{bmatrix} a \\ b \end{bmatrix} = \begin{bmatrix} r \\ 0 \end{bmatrix}

where r = √(a² + b²) is the length of the vector (a, b). Explicit calculation of θ is rarely necessary or desirable. Instead we directly seek c and s. An obvious solution would be

c = a / r,  s = b / r.[1]

However, the computation for r may overflow or underflow. An alternative formulation avoiding this problem (Golub & Van Loan 1996, §5.1.8) is implemented as the hypot function in many programming languages.

The following Fortran code is a minimalistic implementation of a Givens rotation for real numbers. If the input values a or b are frequently zero, the code may be optimized to handle these cases, as is done here for b.

subroutine givens_rotation(a, b, c, s, r)

! Computes c and s such that [c s; -s c] * [a; b] = [r; 0].
real a, b, c, s, r
real h, d

if (b .ne. 0.0) then
    h = hypot(a, b)        ! avoids overflow/underflow in sqrt(a**2 + b**2)
    d = 1.0 / h
    c = abs(a) * d
    s = sign(d, a) * b
    r = sign(1.0, a) * h   ! r carries the sign of a
else
    ! b is zero: no rotation is needed
    c = 1.0
    s = 0.0
    r = a
end if

return
end


Furthermore, as Edward Anderson discovered while improving LAPACK, a previously overlooked numerical consideration is continuity. To achieve this, we require r to be positive.[2] The following MATLAB/GNU Octave code illustrates the algorithm.

function [c, s, r] = givens_rotation(a, b)
    % Returns c, s and r >= 0 such that [c -s; s c] * [a; b] = [r; 0].
    if b == 0
        c = sign(a);
        if c == 0
            c = 1.0; % Unlike other languages, MATLAB's sign function returns 0 on input 0.
        end
        s = 0;
        r = abs(a);
    elseif a == 0
        c = 0;
        s = -sign(b);
        r = abs(b);
    elseif abs(a) > abs(b)
        t = b / a;
        u = sign(a) * sqrt(1 + t * t);
        c = 1 / u;
        s = -c * t;
        r = a * u;
    else
        t = a / b;
        u = sign(b) * sqrt(1 + t * t);
        s = -1 / u;
        c = t / u;
        r = b * u;
    end
end

The IEEE 754 copysign(x, y) function provides a safe and cheap way to copy the sign of y to x. If that is not available, |x|⋅sgn(y), using the abs and sgn functions, is an alternative, as done above.

Triangularization


Given the following 3×3 matrix:

two iterations of the Givens rotation (note that the Givens rotation algorithm used here differs slightly from the one above) yield an upper triangular matrix, which can then be used to compute the QR decomposition.

In order to form the desired matrix, elements (2, 1) and (3, 2) must be zeroed; element (2, 1) is zeroed first, using a rotation matrix of the form:

G_1 = \begin{bmatrix} c & -s & 0 \\ s & c & 0 \\ 0 & 0 & 1 \end{bmatrix}

The following matrix multiplication results:

where

r = √(a_11² + a_21²),  c = a_11 / r,  s = −a_21 / r

are computed from the first-column entries a_11 and a_21 of A.

Using these values for c and s and performing the matrix multiplication above yields A2:

Zeroing element (3, 2) finishes off the process. Using the same idea as before, the rotation matrix is:

G_2 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & c & -s \\ 0 & s & c \end{bmatrix}

Afterwards, the following matrix multiplication is performed:

where r, c and s are computed in the same way as before, now from the (2, 2) and (3, 2) entries of A2.

Using these values for c and s and performing the multiplications results in A3:

This new matrix A3 is the upper triangular matrix needed to perform an iteration of the QR decomposition. Q is now formed using the transpose of the rotation matrices in the following manner:

Q = G_1^T G_2^T

Performing this matrix multiplication yields:

This completes two iterations of the Givens rotation; the QR decomposition can now be read off as A = Q A3, with A3 playing the role of R.
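
A compact MATLAB/GNU Octave sketch of this procedure (illustrative only: the test matrix is arbitrary rather than the article's example, and it reuses the givens_rotation function defined in the Stable calculation section):

A = [2 3 1; 1 4 5; 0 2 6];          % arbitrary 3x3 test matrix with A(3,1) = 0

[c, s, ~] = givens_rotation(A(1,1), A(2,1));
G1 = [c -s 0; s c 0; 0 0 1];        % zeroes element (2,1)
A2 = G1 * A;

[c, s, ~] = givens_rotation(A2(2,2), A2(3,2));
G2 = [1 0 0; 0 c -s; 0 s c];        % zeroes element (3,2)
A3 = G2 * A2;                       % upper triangular factor R

Q = G1' * G2';
% norm(Q * A3 - A) is now of the order of machine precision.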

QR iteration variant


If performing the above calculations as a step in the QR algorithm for finding the eigenvalues of a matrix, then one next wants to compute the matrix R Q, but one should not do so by first multiplying the rotation matrices together to form Q, but rather by multiplying R by each Givens transpose G_k^T in turn (on the right). The reason for this is that each multiplication by a Givens matrix on the right changes only two columns of R, thus requiring a mere O(n) arithmetic operations, which over the n − 1 Givens rotations of such a step sums up to O(n²) arithmetic operations; multiplying by the general n × n matrix Q would instead require O(n³) arithmetic operations. Likewise, storing the full matrix Q amounts to n² elements, but each Givens matrix is fully specified by its pair (c, s), so n − 1 of them can be stored in just 2(n − 1) elements.

In the example, R Q = A3 G_1^T G_2^T, which can be formed as two successive two-column updates of A3.
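
A sketch of this column-by-column approach in MATLAB/GNU Octave (illustrative; it assumes the k-th rotation acted on rows k and k + 1 and that its pair was stored as cs(k, :) = [c, s]; the function name apply_givens_right is not from the article):

function B = apply_givens_right(R, cs)
    % Form R * G_1' * ... * G_{n-1}' without ever building Q explicitly.
    B = R;
    n = size(R, 1);
    for k = 1:n-1
        c = cs(k, 1); s = cs(k, 2);
        % right-multiplication by G_k' touches only columns k and k+1
        B(:, [k k+1]) = B(:, [k k+1]) * [c s; -s c];
    end
end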

Complex matrices


Another method can extend Givens rotations to complex matrices. A diagonal matrix whose diagonal elements have unit magnitudes but arbitrary phases is unitary. Let A be a matrix for which the ji element is to be made zero using the rows and columns i and j > i. Let D be a diagonal matrix whose diagonal elements are one except for the ii and jj elements, which also have unit magnitude but have phases that are to be determined. The phases of the ii and jj elements of D can be chosen so as to make the ii and ji elements of the product matrix D A real. Then a Givens rotation G can be chosen using the i and j > i rows and columns so as to make the ji element of the product matrix G D A zero. Since a product of unitary matrices is unitary, the product matrix G D is unitary, and so is any product of such matrix pair products.
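
A MATLAB/GNU Octave sketch of this two-step construction (illustrative; the matrix, indices and variable names are arbitrary assumptions):

n = 3; i = 1; j = 2;                    % target: zero the (j, i) element, with j > i
A = randn(n) + 1i * randn(n);           % arbitrary complex test matrix

% D: unit-magnitude diagonal phases making the ii and ji elements of D*A real
D = eye(n);
D(i, i) = exp(-1i * angle(A(i, i)));
D(j, j) = exp(-1i * angle(A(j, i)));
B = D * A;

% real Givens rotation on rows i and j zeroing the (j, i) element of G*D*A
r = hypot(real(B(i, i)), real(B(j, i)));
c = real(B(i, i)) / r;
s = real(B(j, i)) / r;
G = eye(n);
G([i j], [i j]) = [c s; -s c];
C = G * B;                              % C(j, i) is zero up to rounding; G*D is unitary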

In Clifford algebra


In Clifford algebra and its child structures such as geometric algebra, rotations are represented by bivectors. Givens rotations are represented by the exterior product of the basis vectors. Given any pair of basis vectors e_i and e_j, the Givens rotation bivector is:

B_ij = e_i ∧ e_j

Their action on any vector v is written:

v' = e^{−(θ/2)(e_i ∧ e_j)} v e^{(θ/2)(e_i ∧ e_j)}

where θ is the angle of rotation.
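
As a worked check (a sketch using standard geometric-algebra identities, not taken from the article), the rotor built from the bivector e_1 ∧ e_2 = e_1 e_2 reproduces the plane rotation on e_1:

\begin{align}
R &= e^{-(\theta/2)\, e_1 e_2} = \cos\tfrac{\theta}{2} - \sin\tfrac{\theta}{2}\, e_1 e_2, \\
R\, e_1\, \tilde{R} &= \left(\cos\tfrac{\theta}{2}\, e_1 + \sin\tfrac{\theta}{2}\, e_2\right)
                       \left(\cos\tfrac{\theta}{2} + \sin\tfrac{\theta}{2}\, e_1 e_2\right) \\
  &= \left(\cos^2\tfrac{\theta}{2} - \sin^2\tfrac{\theta}{2}\right) e_1
     + 2 \sin\tfrac{\theta}{2} \cos\tfrac{\theta}{2}\, e_2
   = \cos\theta\, e_1 + \sin\theta\, e_2,
\end{align}

using e_2 e_1 = −e_1 e_2 and e_k e_k = 1, so the subvector in the e_1, e_2 plane is rotated by θ, while basis vectors orthogonal to that plane commute with the rotor and are left unchanged.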

Dimension 3


There are three Givens rotations in dimension 3:

R_X(θ) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta \\ 0 & \sin\theta & \cos\theta \end{bmatrix}

[note 1]

R_Y(θ) = \begin{bmatrix} \cos\theta & 0 & \sin\theta \\ 0 & 1 & 0 \\ -\sin\theta & 0 & \cos\theta \end{bmatrix}

R_Z(θ) = \begin{bmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix}

Given that they are endomorphisms, they can be composed with each other as many times as desired, keeping in mind that g ∘ f ≠ f ∘ g.

These three Givens rotations composed can generate any rotation matrix according to Davenport's chained rotation theorem. This means that they can transform the standard basis of the space to any other frame in the space.[clarification needed]

When rotations are performed in the right order, the values of the rotation angles of the final frame will be equal to the Euler angles of the final frame in the corresponding convention. For example, a suitably composed operator transforms the basis of the space into a frame with angles roll, pitch and yaw in the Tait–Bryan convention z-x-y (the convention in which the line of nodes is perpendicular to the z and Y axes, also named Y-X′-Z″).

For the same reason, any rotation matrix in 3D can be decomposed into a product of three of these rotation operators.

The meaning of the composition of two Givens rotations g ∘ f is an operator that transforms vectors first by f and then by g, where f and g are rotations about one of the axes of the basis of the space. This is similar to the extrinsic rotation equivalence for Euler angles.
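
A brief MATLAB/GNU Octave sketch of such compositions (illustrative; the angle values are arbitrary):

Rx = @(t) [1 0 0; 0 cos(t) -sin(t); 0 sin(t) cos(t)];
Ry = @(t) [cos(t) 0 sin(t); 0 1 0; -sin(t) 0 cos(t)];
Rz = @(t) [cos(t) -sin(t) 0; sin(t) cos(t) 0; 0 0 1];

f = Rx(0.3); g = Rz(0.7);
norm(g * f - f * g)                  % nonzero: composition is order-dependent
M = Rz(0.7) * Rx(0.5) * Ry(0.3);     % a composition about the basis axes
norm(M' * M - eye(3)), det(M)        % orthogonal with determinant 1: a rotation matrix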

Table of composed rotations


The following table shows the three Givens rotations equivalent to the different Euler angles conventions using extrinsic composition (composition of rotations about the basis axes) of active rotations and the right-handed rule for the positive sign of the angles.

The notation has been simplified in such a way that c1 means cos θ1 and s2 means sin θ2. The subscripts of the angles indicate the order in which they are applied using extrinsic composition (1 for intrinsic rotation, 2 for nutation, 3 for precession).

As rotations are applied in just the opposite order of the Euler angles table of rotations, this table is the same but with indexes 1 and 3 swapped in the angles associated with the corresponding entry. An entry like zxy means to apply first the y rotation, then x, and finally z, about the basis axes.

All the compositions assume the right-hand convention for the matrices that are multiplied, yielding the following results.

xzx xzy
xyx xyz
yxy yxz
yzy yzx
zyz zyx
zxz zxy
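
Any entry of this table can be generated numerically by the corresponding extrinsic composition; a MATLAB/GNU Octave sketch (illustrative; the sample angles are arbitrary):

Rx = @(t) [1 0 0; 0 cos(t) -sin(t); 0 sin(t) cos(t)];
Ry = @(t) [cos(t) 0 sin(t); 0 1 0; -sin(t) 0 cos(t)];
Rz = @(t) [cos(t) -sin(t) 0; sin(t) cos(t) 0; 0 0 1];
t1 = 0.1; t2 = 0.2; t3 = 0.3;        % angle subscripts give the order of application
M_zxy = Rz(t3) * Rx(t2) * Ry(t1);    % the zxy entry: y first, then x, then z
M_zxz = Rz(t3) * Rx(t2) * Rz(t1);    % the zxz entry, a proper Euler convention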


Notes

  1. ^ The rotation matrix immediately below is not a Givens rotation. The matrix immediately below respects the right-hand rule and is the usual matrix one sees in computer graphics; however, a Givens rotation is simply a matrix as defined in the Matrix representation section above and does not necessarily respect the right-hand rule. The matrix below is actually the Givens rotation through an angle of −θ.

Citations

  1. ^ Björck, Åke (1996). Numerical Methods for Least Squares Problems. United States: SIAM. p. 54. ISBN 9780898713602. Retrieved 16 August 2016.
  2. ^ Anderson, Edward (4 December 2000). "Discontinuous Plane Rotations and the Symmetric Eigenvalue Problem" (PDF). LAPACK Working Note. University of Tennessee at Knoxville and Oak Ridge National Laboratory. Retrieved 16 August 2016.

References

Golub, Gene H.; Van Loan, Charles F. (1996). Matrix Computations (3rd ed.). Baltimore: Johns Hopkins University Press.