
Deriving double angle trigonometric formulas using transformations. A really nice elementary proof.

Double Angle Trigonometric Formulas.

In the previous post, two proofs were given for the *Cauchy-Schwarz inequality*. We will now consider another proof.

**Definition 1** Let $M$ be an $(m+n)\times(m+n)$ matrix written as a block matrix

$$M=\begin{pmatrix} A & B \\ C & D \end{pmatrix},$$

where $A$ is an $m\times m$ matrix, $B$ is an $m\times n$ matrix, $C$ is an $n\times m$ matrix, and $D$ is an $n\times n$ matrix, so $M$ is $(m+n)\times(m+n)$. Assuming $A$ is nonsingular, then

$$M/A=D-CA^{-1}B$$

is called the *Schur complement* of $A$ in $M$; or the *Schur complement* of $M$ relative to $A$.

The Schur complement probably goes back to Carl Friedrich Gauss (1777-1855) (for Gaussian elimination). To solve the linear system

$$\begin{pmatrix} A & B \\ C & D \end{pmatrix}\begin{pmatrix} x \\ y \end{pmatrix}=\begin{pmatrix} u \\ v \end{pmatrix},$$

that is,

$$\begin{aligned} Ax+By&=u\\ Cx+Dy&=v, \end{aligned}$$

we mimic Gaussian elimination: if $A$ is square and nonsingular, then we can eliminate $x$ by multiplying the first equation by $CA^{-1}$ and subtracting it from the second equation, which gives

$$\left(D-CA^{-1}B\right)y=v-CA^{-1}u.$$

Note that the matrix $D-CA^{-1}B$ is the Schur complement of $A$ in $M$, and if it is square and nonsingular, then we can obtain the solution to our system.
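As a quick numerical sketch of this elimination (NumPy assumed; the names $A, B, C, D, u, v$ follow the block system above, and the random sizes are our choice):

```python
import numpy as np

# Block elimination via the Schur complement, checked against a direct solve.
rng = np.random.default_rng(0)
m, n = 3, 2
A = rng.standard_normal((m, m)) + m * np.eye(m)   # shifted to keep A nonsingular
B = rng.standard_normal((m, n))
C = rng.standard_normal((n, m))
D = rng.standard_normal((n, n)) + n * np.eye(n)
u = rng.standard_normal(m)
v = rng.standard_normal(n)

# Eliminate x: (D - C A^{-1} B) y = v - C A^{-1} u.
S = D - C @ np.linalg.solve(A, B)                 # Schur complement M/A
y = np.linalg.solve(S, v - C @ np.linalg.solve(A, u))
x = np.linalg.solve(A, u - B @ y)

# Compare with solving the full (m+n) x (m+n) system at once.
M = np.block([[A, B], [C, D]])
xy = np.linalg.solve(M, np.concatenate([u, v]))
assert np.allclose(np.concatenate([x, y]), xy)
```

Solving two small systems with $A$ and $M/A$ in place of one large system with $M$ is exactly the payoff of the elimination above.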

The Schur complement comes up in Issai Schur’s (1875-1941) seminal lemma published in 1917, in which the *Schur determinant formula* was introduced. By considering elementary operations of partitioned matrices, let

$$M=\begin{pmatrix} A & B \\ C & D \end{pmatrix},$$

where $A$ is square and nonsingular. We can reduce $M$ so that the lower-left and upper-right submatrices become $0$. More precisely, we make the lower-left submatrix $0$ by subtracting the first block row multiplied by $CA^{-1}$ from the second block row, and we make the upper-right submatrix $0$ by subtracting the first block column multiplied by $A^{-1}B$ from the second block column. In symbols,

$$\begin{pmatrix} A & B \\ C & D \end{pmatrix}\to\begin{pmatrix} A & B \\ 0 & D-CA^{-1}B \end{pmatrix}\to\begin{pmatrix} A & 0 \\ 0 & D-CA^{-1}B \end{pmatrix},$$

and in equation form,

$$\begin{pmatrix} I & 0 \\ -CA^{-1} & I \end{pmatrix}\begin{pmatrix} A & B \\ C & D \end{pmatrix}\begin{pmatrix} I & -A^{-1}B \\ 0 & I \end{pmatrix}=\begin{pmatrix} A & 0 \\ 0 & D-CA^{-1}B \end{pmatrix}.$$

Note that we have obtained the following factorization of $M$:

$$M=\begin{pmatrix} I & 0 \\ CA^{-1} & I \end{pmatrix}\begin{pmatrix} A & 0 \\ 0 & D-CA^{-1}B \end{pmatrix}\begin{pmatrix} I & A^{-1}B \\ 0 & I \end{pmatrix}.$$

By taking determinants,

$$\det M=\det\begin{pmatrix} I & 0 \\ CA^{-1} & I \end{pmatrix}\det\begin{pmatrix} A & 0 \\ 0 & D-CA^{-1}B \end{pmatrix}\det\begin{pmatrix} I & A^{-1}B \\ 0 & I \end{pmatrix},$$

and since the two triangular factors have determinant $1$, we obtain Schur’s determinant formula for block matrices,

$$\det M=\det A\,\det\left(D-CA^{-1}B\right).$$
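The determinant formula is easy to check numerically (a NumPy sketch with randomly chosen blocks; the sizes are our assumption):

```python
import numpy as np

# Numerical check of Schur's determinant formula:
#   det(M) = det(A) * det(D - C A^{-1} B), with A nonsingular.
rng = np.random.default_rng(1)
m, n = 3, 2
A = rng.standard_normal((m, m)) + m * np.eye(m)   # shifted to keep A nonsingular
B = rng.standard_normal((m, n))
C = rng.standard_normal((n, m))
D = rng.standard_normal((n, n))
M = np.block([[A, B], [C, D]])

S = D - C @ np.linalg.solve(A, B)                 # Schur complement M/A
assert np.isclose(np.linalg.det(M), np.linalg.det(A) * np.linalg.det(S))
```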

Mathematician Emilie Virginia Haynsworth (1916-1985) introduced a name and a notation for the Schur complement of a square nonsingular (or invertible) submatrix in a partitioned (two-way block) matrix. The term Schur complement first appeared in her 1968 paper **On the Schur Complement** in *Basel Mathematical Notes*, then in *Linear Algebra and its Applications Vol. 1* (1968), AMS Proceedings (1969), and in *Linear Algebra and its Applications Vol. 3* (1970).

We will now present a block matrix proof, focusing on complex matrices.

**Proof:** Let $A$ and $B$ be complex matrices with the same number of rows, and let $M=\begin{pmatrix} A & B \end{pmatrix}^*\begin{pmatrix} A & B \end{pmatrix}$. Then

$$M=\begin{pmatrix} A^*A & A^*B \\ B^*A & B^*B \end{pmatrix}\ge 0.$$

By taking the Schur complement of $A^*A$ (assumed nonsingular), we arrive at

$$M/(A^*A)=B^*B-B^*A(A^*A)^{-1}A^*B\ge 0,$$

and hence

$$\det(B^*B)\ge\det\left(B^*A(A^*A)^{-1}A^*B\right),$$

which ensures, when $A^*B$ and $B^*A$ are square, that

$$\det(A^*A)\det(B^*B)\ge\det(A^*B)\det(B^*A)=\left|\det(A^*B)\right|^2.$$

Equality occurs if and only if $\operatorname{rank}(M)=\operatorname{rank}(A^*A)$; that is, by the *Guttman rank additivity formula*, $\operatorname{rank}(M)=\operatorname{rank}(A^*A)+\operatorname{rank}(M/(A^*A))$, if and only if $M/(A^*A)=0$. When $A^*A$ is nonsingular, $M$ is nonsingular if and only if $M/(A^*A)$ is nonsingular.
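A numerical sketch of this determinant inequality and of the positive semidefiniteness of the Schur complement (NumPy assumed; the shapes are our choice, picked so that $A^*B$ is square):

```python
import numpy as np

# Check |det(A*B)|^2 <= det(A*A) det(B*B) for random complex A, B of
# the same shape, plus PSD-ness of the Schur complement M/(A*A).
rng = np.random.default_rng(2)
n, m = 5, 3
A = rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))
B = rng.standard_normal((n, m)) + 1j * rng.standard_normal((n, m))

lhs = abs(np.linalg.det(A.conj().T @ B)) ** 2
rhs = np.linalg.det(A.conj().T @ A).real * np.linalg.det(B.conj().T @ B).real
assert lhs <= rhs + 1e-9

# The Schur complement B*B - B*A (A*A)^{-1} A*B has no negative eigenvalues.
AhA = A.conj().T @ A
S = B.conj().T @ B - B.conj().T @ A @ np.linalg.solve(AhA, A.conj().T @ B)
assert np.all(np.linalg.eigvalsh(S) >= -1e-9)
```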

**References**

[1] Zhang, Fuzhen. *Matrix theory: basic results and techniques*. Springer Science & Business Media, 2011.

http://www.phdcomics.com/comics/archive.php?comicid=27

**Definition 1** A vector space $V$ over the number field $\mathbb{R}$ or $\mathbb{C}$ is called an *inner product space* if it is equipped with an *inner product* $\langle\cdot,\cdot\rangle$ satisfying, for all $x,y,z\in V$ and scalar $\lambda$,

- $\langle x,x\rangle\ge 0$, and $\langle x,x\rangle=0$ if and only if $x=0$,
- $\langle x+y,z\rangle=\langle x,z\rangle+\langle y,z\rangle$,
- $\langle\lambda x,y\rangle=\lambda\langle x,y\rangle$, and
- $\langle x,y\rangle=\overline{\langle y,x\rangle}$.

$\mathbb{C}^n$ is an inner product space over $\mathbb{C}$ with the inner product

$$\langle x,y\rangle=y^*x.$$

An inner product space over $\mathbb{R}$ is usually called a *Euclidean space.*
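For concreteness, the standard inner product on $\mathbb{C}^n$ and the four axioms can be sketched in NumPy (the helper name `inner` is ours, not from the text):

```python
import numpy as np

# The standard inner product on C^n: <x, y> = y* x.
def inner(x, y):
    return np.vdot(y, x)   # np.vdot conjugates its *first* argument

rng = np.random.default_rng(3)
x = rng.standard_normal(4) + 1j * rng.standard_normal(4)
y = rng.standard_normal(4) + 1j * rng.standard_normal(4)
z = rng.standard_normal(4) + 1j * rng.standard_normal(4)
lam = 2.0 - 1.5j

# The four axioms of Definition 1, checked numerically:
assert inner(x, x).real > 0 and abs(inner(x, x).imag) < 1e-12
assert np.isclose(inner(x + y, z), inner(x, z) + inner(y, z))
assert np.isclose(inner(lam * x, y), lam * inner(x, y))
assert np.isclose(inner(x, y), np.conj(inner(y, x)))
```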

The following properties of an inner product can be deduced from the four axioms in Definition 1:

- $\langle x,y+z\rangle=\langle x,y\rangle+\langle x,z\rangle$,
- $\langle x,\lambda y\rangle=\overline{\lambda}\langle x,y\rangle$,
- $\langle x,0\rangle=\langle 0,x\rangle=0$,
- $\langle x,y\rangle=0$ for all $y\in V$ if and only if $x=0$, and
- $\langle\lambda x,\lambda x\rangle=|\lambda|^2\langle x,x\rangle$.

An important property shared by all inner products is the *Cauchy-Schwarz inequality*, one of the most useful inequalities in mathematics.

**Theorem 1** (Cauchy-Schwarz Inequality) Let $V$ be an inner product space over the field $\mathbb{R}$ or $\mathbb{C}$. Then for all vectors $x$ and $y$ in $V$,

$$|\langle x,y\rangle|^2\le\langle x,x\rangle\langle y,y\rangle.$$

Equality holds if and only if $x$ and $y$ are linearly dependent.

The proof of this can be done in a number of different ways. The most common proof is to consider the quadratic polynomial in $t$

$$p(t)=\langle tx+y,tx+y\rangle$$

and derive the inequality from its non-positive discriminant. We will first present this proof.

**Proof:** Let $x,y\in V$ be given. If $x=0$, the assertion is trivial, so we may assume that $x\ne 0$. Let $t\in\mathbb{R}$ and consider

$$p(t)=\langle tx+y,tx+y\rangle=t^2\langle x,x\rangle+t\left(\langle x,y\rangle+\langle y,x\rangle\right)+\langle y,y\rangle,$$

which is a quadratic polynomial with real coefficients, since $\langle x,y\rangle+\langle y,x\rangle=2\operatorname{Re}\langle x,y\rangle$. Because of axiom (1.), we know that $p(t)\ge 0$ for all real $t$, and hence $p$ can have no real simple roots. The discriminant of $p$ must therefore be non-positive,

$$\left(2\operatorname{Re}\langle x,y\rangle\right)^2-4\langle x,x\rangle\langle y,y\rangle\le 0,$$

and hence

$$\left(\operatorname{Re}\langle x,y\rangle\right)^2\le\langle x,x\rangle\langle y,y\rangle.\qquad(1)$$

Since this inequality must hold for any pair of vectors, it must hold if $y$ is replaced by $\langle x,y\rangle y$, so we also have the inequality

$$\left(\operatorname{Re}\langle x,\langle x,y\rangle y\rangle\right)^2\le\langle x,x\rangle\langle\langle x,y\rangle y,\langle x,y\rangle y\rangle.$$

But $\langle x,\langle x,y\rangle y\rangle=\overline{\langle x,y\rangle}\langle x,y\rangle=|\langle x,y\rangle|^2$ and $\langle\langle x,y\rangle y,\langle x,y\rangle y\rangle=|\langle x,y\rangle|^2\langle y,y\rangle$, so

$$|\langle x,y\rangle|^4\le|\langle x,y\rangle|^2\,\langle x,x\rangle\langle y,y\rangle.\qquad(2)$$

If $\langle x,y\rangle=0$, then the statement of the theorem is trivial; if not, then we may divide equation (2) by the quantity $|\langle x,y\rangle|^2$ to obtain the desired inequality

$$|\langle x,y\rangle|^2\le\langle x,x\rangle\langle y,y\rangle.$$

Because of axiom (1.), $p$ can have a real (double) root only if $tx+y=0$ for some $t\in\mathbb{R}$. Thus, equality can occur in the discriminant condition in equation (1) if and only if $x$ and $y$ are linearly dependent.
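The discriminant argument can be illustrated numerically with the standard inner product on $\mathbb{C}^n$ (a sketch; the coefficient names follow the proof):

```python
import numpy as np

# The quadratic p(t) = <t x + y, t x + y> with <u, v> = v* u on C^n.
rng = np.random.default_rng(4)
x = rng.standard_normal(4) + 1j * rng.standard_normal(4)
y = rng.standard_normal(4) + 1j * rng.standard_normal(4)
ip = lambda u, v: np.vdot(v, u)   # np.vdot conjugates its first argument

a = ip(x, x).real        # coefficient of t^2
b = 2 * ip(x, y).real    # coefficient of t  (= <x,y> + <y,x>)
c = ip(y, y).real        # constant term

# p(t) >= 0 for all real t, which forces a non-positive discriminant.
t = rng.standard_normal(100)
assert np.all(a * t**2 + b * t + c >= -1e-9)
assert b**2 - 4 * a * c <= 1e-9
```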

We will now present a matrix proof, focusing on the complex vector space, which is perhaps the simplest proof of the Cauchy-Schwarz inequality.

**Proof:** For any vectors $x,y\in\mathbb{C}^n$, notice that

$$\begin{pmatrix} x^* \\ y^* \end{pmatrix}\begin{pmatrix} x & y \end{pmatrix}=\begin{pmatrix} x^*x & x^*y \\ y^*x & y^*y \end{pmatrix}\ge 0.$$

By taking the determinant of this $2\times 2$ matrix,

$$\det\begin{pmatrix} x^*x & x^*y \\ y^*x & y^*y \end{pmatrix}=(x^*x)(y^*y)-(x^*y)(y^*x)\ge 0,$$

the inequality follows at once,

$$|x^*y|^2\le(x^*x)(y^*y).$$

Equality occurs if and only if the matrix $\begin{pmatrix} x & y \end{pmatrix}$ has rank at most 1; that is, $x$ and $y$ are linearly dependent.
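The Gram-matrix argument above can be checked directly in NumPy (a sketch with random complex vectors):

```python
import numpy as np

# The 2x2 Gram matrix (x y)* (x y) is positive semidefinite, so its
# determinant (x*x)(y*y) - |x*y|^2 is nonnegative: Cauchy-Schwarz.
rng = np.random.default_rng(5)
x = rng.standard_normal(4) + 1j * rng.standard_normal(4)
y = rng.standard_normal(4) + 1j * rng.standard_normal(4)

V = np.column_stack([x, y])    # the n x 2 matrix (x y)
G = V.conj().T @ V             # [[x*x, x*y], [y*x, y*y]]
assert np.linalg.det(G).real >= -1e-9
assert abs(np.vdot(x, y)) ** 2 <= np.vdot(x, x).real * np.vdot(y, y).real + 1e-9
```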

**References**

[1] Zhang, Fuzhen. *Matrix theory: basic results and techniques*. Springer Science & Business Media, 2011.

Do you know these matrices described by Alan Rendall? If so, please point out a source where he may find more information about them. I am interested in knowing too!

I have come across a class of matrices with some interesting properties. I feel that they must be known but I have not been able to find anything written about them. This is probably just because I do not know the right place to look. I will describe these matrices here and I hope that somebody will be able to point out a source where I can find more information about them. Consider an $n\times n$ matrix $A$ with elements $a_{ij}$ having the following properties. The elements with $i=j$ (call them $b_i$) are negative. The elements with $j=i+1\ {\rm mod}\ n$ (call them $c_i$) are positive. All other elements are zero. The determinant of a matrix of this type is $\prod_i b_i+(-1)^{n+1}\prod_i c_i$. Notice that the two terms in this sum always have opposite signs. A property of these matrices which I…
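The determinant formula in the excerpt is easy to verify numerically (a sketch; the helper name `cyclic_matrix` is ours):

```python
import numpy as np

# The matrices from the excerpt: negative diagonal entries b_i, positive
# entries c_i in position (i, i+1 mod n), and zeros elsewhere.
def cyclic_matrix(b, c):
    n = len(b)
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = b[i]
        A[i, (i + 1) % n] = c[i]
    return A

# Check det(A) = prod(b_i) + (-1)^(n+1) prod(c_i) for several sizes.
rng = np.random.default_rng(6)
for n in range(2, 7):
    b = -rng.uniform(0.5, 2.0, n)   # negative diagonal
    c = rng.uniform(0.5, 2.0, n)    # positive cyclic superdiagonal
    A = cyclic_matrix(b, c)
    expected = np.prod(b) + (-1) ** (n + 1) * np.prod(c)
    assert np.isclose(np.linalg.det(A), expected)
```

Only two permutations contribute to this determinant, the identity (giving $\prod_i b_i$) and the full $n$-cycle (giving $(-1)^{n+1}\prod_i c_i$), which is why the formula has exactly two terms.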
