In order for `lambda^2` to be an eigenvalue of `A^2`, we know that the following equation must hold:
`(A^2 - lambda^2I)vec(x) = vec(0)`
where `vec(0)` is the zero vector of length n and `vec(x)` is the eigenvector of `A^2` corresponding to eigenvalue `lambda^2`.
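Before we do anything with this equation, a quick numeric sanity check may help (this is only an illustration with a made-up 2x2 matrix, not part of the proof). Here I take an eigenvector of `A` itself, which is automatically an eigenvector of `A^2`, and confirm that `(A^2 - lambda^2I)vec(x)` comes out to the zero vector:

```python
# Numeric sanity check with a made-up example: A = [[2, 1], [1, 2]] has
# eigenvalue lambda = 3 with eigenvector x = (1, 1), so (A^2 - 9I)x
# should be the zero vector.

def mat_mul(M, N):
    """Multiply two 2x2 matrices."""
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_vec(M, v):
    """Apply a 2x2 matrix to a length-2 vector."""
    return [sum(M[i][k] * v[k] for k in range(2)) for i in range(2)]

A = [[2, 1], [1, 2]]      # eigenvalues 3 and 1
lam = 3                   # eigenvalue of A with eigenvector x = (1, 1)
x = [1, 1]

A2 = mat_mul(A, A)        # A^2 = [[5, 4], [4, 5]]
# (A^2 - lambda^2 I) x should be the zero vector
residual = [mat_vec(A2, x)[i] - lam**2 * x[i] for i in range(2)]
print(residual)           # [0, 0]
```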
In much the same way as with scalars, the matrix expression above can be factored; the only subtlety is that expanding the factored form produces `I^2` where we had `I`, and since `I^2 = I`, that changes nothing:
Side Note: I don't feel comfortable factoring matrices unless I check that the multiplication works out the way I expect it to (because matrix multiplication is not, in general, commutative), so let's just check to make sure:
Expanding `(A+lambdaI)(A-lambdaI)` term by term gives:
`A^2 - AlambdaI + lambdaIA - lambda^2I^2`
I know that multiplication with scalars and the identity matrix is commutative, though, so `lambdaIA = AlambdaI` and `I^2 = I`, and I can do some rearranging:
`A^2 - AlambdaI + AlambdaI - lambda^2I = A^2 - lambda^2I`
Looks like we were OK to factor the original expression!
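The same check can be run numerically on a made-up 2x2 example (a sketch, not a proof): `(A + lambdaI)(A - lambdaI)` should equal `A^2 - lambda^2I` entry by entry.

```python
# Checking the factoring numerically: (A + lam*I)(A - lam*I) should
# equal A^2 - lam^2 * I exactly, for an arbitrary integer matrix A.

def mat_mul(M, N):
    """Multiply two 2x2 matrices."""
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_add(M, N):
    """Add two 2x2 matrices entrywise."""
    return [[M[i][j] + N[i][j] for j in range(2)] for i in range(2)]

def scal(c, M):
    """Multiply a 2x2 matrix by a scalar."""
    return [[c * M[i][j] for j in range(2)] for i in range(2)]

A = [[0, 1], [-2, 3]]           # arbitrary matrix
I = [[1, 0], [0, 1]]
lam = 5                         # arbitrary scalar

left = mat_mul(mat_add(A, scal(lam, I)), mat_add(A, scal(-lam, I)))
right = mat_add(mat_mul(A, A), scal(-lam**2, I))
print(left == right)            # True: the factoring checks out
```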
Now, we'll continue with our factored expression:
`(A+lambdaI)(A-lambdaI)vec(x) = vec(0)`
If we have a nontrivial solution for the eigenvector `vec(x)`, we know that `vec(x)` is not the zero vector. Note that this does not force `A+lambdaI` or `A-lambdaI` to be the zero matrix; a nonzero matrix can still send a particular nonzero vector to `vec(0)`. What the equation does force is the following: let `vec(y) = (A-lambdaI)vec(x)`. Either `vec(y)` is a nonzero vector and the equation above says `A+lambdaI` sends it to `vec(0)`, or `vec(y)` is the zero vector itself. We can then separate the above equation into two possible cases:
1) `(A+lambdaI)vec(y) = vec(0)`, with `vec(y) != vec(0)`
2) `(A-lambdaI)vec(x) = vec(0)`
If we simplify these two cases we get two possible equations:
1) `Avec(y) = -lambdavec(y)`
2) `Avec(x) = lambdavec(x)`
And there you have it: given that `lambda^2` is an eigenvalue of `A^2`, either `lambda` or `-lambda` must be an eigenvalue of `A`.
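To see that the `-lambda` case really can occur, here is a small made-up example: a diagonal `A` with a negative eigenvalue, where `lambda^2` is an eigenvalue of `A^2` but only `-lambda`, not `lambda`, is an eigenvalue of `A`.

```python
# Made-up diagonal example showing the minus-sign case: A has eigenvalues
# -3 and 1 (the diagonal entries), so A^2 has eigenvalues 9 and 1.
# lambda^2 = 9 is an eigenvalue of A^2, and it is -lambda = -3, not
# +lambda = 3, that is an eigenvalue of A.

def mat_mul(M, N):
    """Multiply two 2x2 matrices."""
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def diag_eigenvalues(M):
    """For a diagonal matrix, the eigenvalues are the diagonal entries."""
    return {M[0][0], M[1][1]}

A = [[-3, 0], [0, 1]]
A2 = mat_mul(A, A)                 # [[9, 0], [0, 1]]

print(9 in diag_eigenvalues(A2))   # True:  lambda^2 = 9 is an eigenvalue of A^2
print(3 in diag_eigenvalues(A))    # False: +lambda = 3 is not an eigenvalue of A
print(-3 in diag_eigenvalues(A))   # True:  -lambda = -3 is
```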
One caveat worth spelling out: `A+lambdaI` and `A-lambdaI` do not have to be matrices containing only zeros. The proof only needs the product to send a particular nonzero vector to `vec(0)`, which is exactly what the two cases capture.