The algorithm for finding the eigenvalues is:

`"det"(A - LI) = 0`

where A is your matrix, I is the identity matrix, and the values of L that satisfy this equation are the eigenvalues. Thus:

`"det"[[0-L,1],[1,0-L]]=0`

`L^2-1=0`

`L= +-1`

So, your eigenvalues are correct.
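As a quick sanity check (my own, not part of the original working), NumPy can compute the same eigenvalues numerically:

```python
import numpy as np

# The matrix from the problem: reflection over the line y = x
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])

# np.linalg.eigvals solves det(A - L*I) = 0 numerically
eigenvalues = np.sort(np.linalg.eigvals(A))
print(eigenvalues)  # [-1.  1.]
```

This agrees with the hand computation: L = 1 and L = -1.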

To find the eigenvectors, take each value of L, plug it into A - LI, and find the null space. That is, find a vector that gives the zero vector when multiplied by A - LI.

So:

Take L=1

`A-LI = [[-1,1],[1,-1]]`

You want a vector `v` such that `[[-1,1],[1,-1]]v=[[0],[0]]`

We can get this by trial and error:

`[[-1,1],[1,-1]] [[1],[z]] = [[-1+z],[1-z]] = [[0],[0]]`

This will be true if z = 1.

And we have:

The eigenvalue L = 1 has eigenvector `[[1],[1]]`.
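The trial-and-error step above is easy to confirm directly: multiplying A - LI by the candidate vector should give the zero vector. A small NumPy check (my addition, not from the answer):

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [1.0, 0.0]])
I = np.eye(2)

v = np.array([1.0, 1.0])  # candidate eigenvector for L = 1

# A vector in the null space satisfies (A - L*I) v = 0
result = (A - 1.0 * I) @ v
print(result)  # [0. 0.]
```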

Now, take L=-1

`A-LI = [[1,1],[1,1]]`

You want a vector `v` such that `[[1,1],[1,1]]v=[[0],[0]]`

Again, we can get this by trial and error:

`[[1,1],[1,1]] [[1],[z]] = [[1+z],[1+z]] = [[0],[0]]`

This will be true if z = -1.

And we have:

The eigenvalue L = -1 has eigenvector `[[1],[-1]]`.
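Both eigenpairs can be verified at once against the defining equation Av = Lv. Here is a quick NumPy check of the results above (again my own sketch):

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [1.0, 0.0]])

pairs = [(1.0, np.array([1.0, 1.0])),    # L = 1,  v = [1, 1]
         (-1.0, np.array([1.0, -1.0]))]  # L = -1, v = [1, -1]

for L, v in pairs:
    # An eigenpair satisfies A v = L v
    assert np.allclose(A @ v, L * v)
print("both eigenpairs check out")
```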

More intuitively, we can think about this geometrically:

An eigenvector is a vector that, after the transformation is applied, still points in the same direction (or possibly the exact opposite direction), though it may be stretched or shrunk.

So, we are looking for a vector that, if we reflect it over the line y=x, points in the same (or the exact opposite) direction as it did before reflection.

Consider:

The red lines are vectors before the transformation, and the blue lines are vectors after it. It may not be obvious from the picture, but the vectors that point in the same direction before and after reflection are exactly those that lie on the line y=x. That is, they are multiples of

`[[1],[1]]`

And the vectors that point in the exact opposite direction after reflection are those perpendicular to the line y=x. Perpendicular lines have negative reciprocal slopes, so we are looking for vectors that lie along the line y=-x. In other words, multiples of

`[[1],[-1]]`

Moreover, reflection doesn't change the length of a vector, which is why we don't have eigenvalues of, say, 2 or 3 or 1/2: every eigenvalue of a reflection must have absolute value 1.
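That length-preserving property is easy to demonstrate numerically. Reflecting any vector over y=x just swaps its coordinates, so its norm is unchanged; a short check (my own sketch):

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [1.0, 0.0]])  # reflection over the line y = x

rng = np.random.default_rng(0)
for _ in range(5):
    v = rng.standard_normal(2)
    # Reflection preserves length, so any eigenvalue L must have |L| = 1
    assert np.isclose(np.linalg.norm(A @ v), np.linalg.norm(v))
print("reflection preserves lengths")
```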