
## Linear Algebra

## Basics of Linear Algebra

There are a lot of systems that are linear, therefore it's nice to have a complete mathematical theory for systems of linear equations. That's it. It's not magical, it's not general enough (*cough* non-linear systems), and (looked at right) it's simple.

Linear algebra's objects of discussion are vectors and maps. What vectors are, everyone learned in school. Suffice to say vectors are any objects which can be added together and multiplied by a scalar; with respect to addition they form a commutative group; and scalar multiplication distributes over addition in the usual way (like for real numbers). Notably, there's no division! This is not a complete definition, but you get the idea. (It is also useful, but not necessary, to introduce a so-called "inner" product, which eats two vectors and spits out a scalar. Nevermind for now!)

In any case, you can then have equations with vectors in them. Like x v⃗ + y w⃗ = u⃗, where v⃗, w⃗, u⃗ are vectors. This works like any other system of equations. Here, there are not enough equations for those two unknowns x and y. So what about this system of equations:

    x v⃗ + y w⃗ = u⃗
    2x v⃗ + 2y w⃗ = 2u⃗

That's silly, you say? Ok then, what about this system of equations:

    x v⃗ + y w⃗ = u⃗
    3x v⃗ + 3y w⃗ − 2u⃗ = u⃗

Still doesn't work? So what's the rule here? Turns out it has to do with those upper triangular matrices. A system that has no interesting solution (that is, none besides x = y = 0):

    x v⃗ + y w⃗ = 0⃗   (with v⃗, w⃗ independent)

Booring. Anyway, let's introduce functions on vectors, so-called "maps": f(v⃗), g(w⃗) (here, f and g are maps). THAT is the basic philosophy of linear algebra.

Now for some more practical stuff: it's annoying to have those abstract v⃗ and w⃗ around. In a lot of cases, you are only interested in coordinates with respect to some basis (the basis is a set of axes that you choose--usually your axes of measurement, that is, the axes your measurement devices measure along). It can be shown that the number of basis vectors you need for that (to represent any abstract vector!) is equal to the dimension of the vector space, usually 3. Let the basis be b⃗_1, b⃗_2, b⃗_3.
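Whether a set of vectors actually qualifies as a basis can be checked numerically: the matrix with the candidate vectors as columns must have full rank. A minimal sketch with numpy (the concrete vectors here are made up for illustration):

```python
import numpy as np

# Three made-up candidate basis vectors (coordinates w.r.t. some
# reference axes), stacked as the columns of a matrix.
good = np.column_stack([[1.0, 0.0, 0.0],
                        [1.0, 1.0, 0.0],
                        [0.0, 1.0, 1.0]])

# A cheating set: the third "basis vector" is the sum of the first two.
bad = good.copy()
bad[:, 2] = good[:, 0] + good[:, 1]

# Rank 3 = independent = a usable basis; anything less is cheating.
print(np.linalg.matrix_rank(good))  # 3
print(np.linalg.matrix_rank(bad))   # 2
```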
These basis vectors obviously must be independent, so b⃗_2 = b⃗_1 is out because that's cheating. b⃗_3 = b⃗_1 + b⃗_2? Out. b⃗_3 = 5 b⃗_2? Stop it already. Now you can find numbers q, r, s such that

    v⃗ = q b⃗_1 + r b⃗_2 + s b⃗_3

If you want to be more conservative with your letter consumption, better call q and r and s: v^1, v^2, v^3 instead (it's just a notation, and the ^1 and ^2 and ^3 are part of the name for now). So:

    v⃗ = v^1 b⃗_1 + v^2 b⃗_2 + v^3 b⃗_3

Or in non-annoying notation:

    v⃗ = Σ_i v^i b⃗_i

Or in even less annoying notation, let the sum be implied when an index repeats in a term:

    v⃗ = v^i b⃗_i

Ahhh. Much better. If the f is a LINEAR map, then you can do this (by definition, which you have to look up):

    f(v⃗) = f(v^i b⃗_i) = v^i f(b⃗_i)

So in order to specify the linear map f completely, you only need f(b⃗_1), f(b⃗_2), f(b⃗_3), i.e. how f acts on the individual basis vectors each. That's it. So for f and g in non-annoying notation:

    f(b⃗_j) = c^i_j b⃗_i
    g(b⃗_j) = d^i_j b⃗_i

Also,

    w⃗ = f(v⃗) = v^j f(b⃗_j)

So:

    w⃗ = v^j c^i_j b⃗_i

So (comparing coefficients with w⃗ = w^i b⃗_i):

    w^i = c^i_j v^j

And NOW we can go to matrices. Those are annoying, and the b⃗_i are even more annoying. So let's get rid of them. Now we'd like all those abstract basis vectors to disappear. But (f(b⃗_i)) and (g(b⃗_i)) are different bases (if they are bases at all--neeevermind for now :P); we'd like these to be specified only with the b⃗_i.

    c := (_i (_j c^i_j))   That's the first matrix! (i counts rows, j counts columns)
    v := (_i v^i)          That's a column "vector"--a 3×1 matrix
    w := (_i w^i)          That's a column "vector"--a 3×1 matrix

Note:

    w = c v,   that is,   w^i = c^i_j v^j

Likewise for the g, with the c killed and replaced by the d.

Author: Danny (remove the ".nospam" to send)

Last modification on: Sat, 04 May 2024 in LA/.
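The index bookkeeping above can be sanity-checked in a few lines of numpy: the images of the basis vectors, written in coordinates, become the columns of the matrix c, and w^i = c^i_j v^j is exactly the matrix-vector product. The particular map and numbers here are made up for illustration:

```python
import numpy as np

# Work in coordinates w.r.t. some basis b_1, b_2, b_3, so vectors are
# just coordinate triples v = (v^1, v^2, v^3).
b = np.eye(3)   # column j holds the coordinates of the basis vector b_j

def f(v):
    # Some made-up linear map: a scaling plus a shear.
    return np.array([2 * v[0] + v[1], 3 * v[1], v[2]])

# Column j of the matrix c is f applied to the j-th basis vector,
# i.e. entry c^i_j is the i-th coordinate of f(b_j).
c = np.column_stack([f(b[:, j]) for j in range(3)])

v = np.array([1.0, 2.0, 3.0])
# w^i = c^i_j v^j is just the matrix-vector product:
assert np.allclose(f(v), c @ v)
print(c @ v)  # [4. 6. 3.]
```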