
Any books on how to read and write proofs? The statement above, that the zero vector is unique: I have no idea what that means.


Unique means there is only one of something.

A vector space is defined as having a zero vector, that is, a vector v such that for every vector w, v + w = w.

Saying the zero vector is unique means that only one vector has that property, which we can prove as follows. Assume that v and v’ are both zero vectors. Then v + v’ = v’ (because v is a zero vector). But also, v + v’ = v’ + v = v, where the first equality holds because addition in a vector space is commutative, and the second because v’ is a zero vector. Since v + v’ equals both v’ and v, we conclude v’ = v.

We have shown that any two zero vectors in a vector space are in fact the same, and therefore that there is exactly one zero vector per vector space.
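The argument above can even be formalized; here is a sketch in Lean 4 (the names and the axiom bundle are my own: only commutativity and the identity property are assumed, not the full vector-space structure):

```lean
-- Uniqueness of an additive identity, from commutativity alone.
theorem zero_unique {V : Type} [Add V]
    (comm : ∀ a b : V, a + b = b + a)
    (v v' : V)
    (hv  : ∀ w : V, v + w = w)    -- v is a "zero"
    (hv' : ∀ w : V, v' + w = w)   -- v' is a "zero"
    : v = v' := by
  calc v = v' + v := (hv' v).symm  -- v' is a zero
    _ = v + v'    := comm v' v     -- commutativity
    _ = v'        := hv v'         -- v is a zero
```

The calc block mirrors the prose proof step for step.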


I would recommend Book of Proof: https://www.people.vcu.edu/~rhammack/BookOfProof/

We used this in my Discrete Mathematics class (MATH 2001 @ CU Boulder) (it is a prerequisite for most math classes). The section about truth tables overlapped a bit with my philosophy class (PHIL 1440 Critical Thinking).


> The above statement of zero vector is unique, I have no idea what is that means.

In isolation, nothing. (Neither does the word “vector”, really.) In the context of that book, the idea is more or less as follows:

Suppose you are playing a game. That game involves things called “vectors”, which are completely opaque to you. (I’m being serious here. If you’ve encountered some other thing called “vectors”, forget about it—at least until you get to the examples section, where various ways to implement the game are discussed.)

There’s a way to make a new vector given two existing ones (denoted + and called “addition”, but not the same as real-number addition), and a way to make a new vector given an existing one and a real number (denoted by juxtaposition and called “multiplication”; once again that’s a pun whose usefulness will only become apparent later, and we won’t actually need it here). The inner workings of these operations are also completely opaque to you. However, the rules of the game tell you that

1. It doesn’t matter in which order you feed your two vectors into the “addition” operation (“add” them): whatever existing vectors v and w you’re holding, the new vector v+w will turn out to be the same as the other new vector w+v.

2. When you “add” two vectors and then “add” the third to the result, you’ll get the exact same thing as when you “add” the first to the “sum” of the second and third; that is, whatever the vectors u, v, and w are, (u+v)+w is equal to u+(v+w).

(Why three vectors and not four or five? It turns out that if you have the rule for three, you can prove it for four, five, and so on, even though there are going to be many more ways to place the parens. See Spivak’s “Calculus” for a nice explanation, or if you like compilers, look up “reassociation”.)

3. There is [at least one] vector, call it 0, such that adding it to anything else doesn’t make a difference: for this distinguished 0 and whatever v, v+0 is the same as v.
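To make the opaque game concrete, here is one hypothetical way to implement it in Python (pairs of reals under componentwise addition; every name here is made up for illustration, and this is just one of the many possible implementations the examples section would discuss):

```python
# One possible implementation of the "game": vectors as pairs of reals.

def add(v, w):
    """The 'addition' operation: componentwise sum of two pairs."""
    return (v[0] + w[0], v[1] + w[1])

def scale(c, v):
    """'Multiplication' by a real number (not needed for the zero argument)."""
    return (c * v[0], c * v[1])

zero = (0.0, 0.0)

# Spot-check the three rules on a few sample vectors:
u, v, w = (1.0, 2.0), (3.0, -1.0), (-2.0, 5.0)

assert add(v, w) == add(w, v)                  # rule 1: order doesn't matter
assert add(add(u, v), w) == add(u, add(v, w))  # rule 2: grouping doesn't matter
assert add(v, zero) == v                       # rule 3: a "zero" exists
```

Of course, a player of the game is not allowed to peek inside `add` like this; the rules are all they get.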

Let’s now pause for a moment and split the last item into two parts.

We’ll say a vector u deserves to be called a “zero” if, whatever other vector we take [including u itself!], we will get it back again if we add u to it; that is, for any v we’ll get v+u=v.

This is not an additional rule. It doesn’t actually tell us anything. It’s just a label we chose to use. We don’t even know if there are any of those “zeros” around! And now we can restate rule 3, which is a rule:

3. There is [at least one] “zero”.

What the remark says is that, given these definitions and the three rules, you can show, without assuming anything else, that there is exactly one “zero”.

(OK, what the remark actually says is that you can prove that from the full set of eight rules that the author gives.

But that is, frankly, sloppy, because the way rule 4 is phrased actually assumes that the zero is unique: either you need to say that there’s a distinguished zero such that for every v there’s a w with v + w = that zero, or you need to say that for every v there’s a w such that v + w is a zero, possibly a different one for each v. Of course, it doesn’t actually matter!—there can only be one zero even before we get to rule 4. But not making note of that is, again, sloppy.
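Spelled out in quantifier notation (my own notation, not the book’s), the two careful phrasings of rule 4 would be:

```latex
% Phrasing A: a distinguished zero z, and inverses relative to that z
\exists z\,\bigl[\,\forall v\,(v + z = v)\ \wedge\ \forall v\,\exists w\,(v + w = z)\,\bigr]

% Phrasing B: for each v, some w such that v + w is *a* zero
% (possibly a different zero for each v)
\forall v\,\exists w\,\forall u\,\bigl(u + (v + w) = u\bigr)
```

The two turn out to agree here, but only because uniqueness of the zero is already provable from the earlier rules.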

This kind of sloppiness is perfectly acceptable among people who have seen this sort of thing before, say, done finite groups or something like that. But if the book is supposed to give a first impression, this seems like a bad idea. Perhaps a precalculus course of some sort is assumed.

Read Spivak, seriously. He’s great. Not linear algebra, though.)


Nice explanation, thanks!

Did you mean Spivak, Michael [1]?

[1] https://en.wikipedia.org/wiki/Michael_Spivak


Yes, and this book: https://openlibrary.org/books/OL28292750M (no opinion about differences between the editions, this is the final one). Yes, it’s a doorstopper, but unlike most other people’s attempts at the kitchen sink that is the traditional calculus course, it actually affords proper respect to the half-dozen or so different subjects whose basics are crammed in there. The discussion of associativity I was referring to is in Chapter 1, so perhaps it can serve as a taster.



