Last time I promised pictures to illustrate what I mean by this imaginary number \(i\), and here they are. First of all, let's go all the way back to the ancient Greeks. Let's say all we have to work with are circles and straight edges. How do we represent numbers? Remember I gave an example of how they would deal with adding one to two (or was it two to one?) - they would draw a line and mark off equal sized sections to represent the whole numbers. Let's do that here. First we need a line:
Now, let's start with the very left hand side as zero. Why zero at the left? Well, this page is written in English and we read English left to right. There is no other particularly good reason to have zero at the left rather than the right. So I have just made a choice.
I am not going to bother drawing in the circles that would result if we actually used a circle drawing device to make sure all the numbers are marked off equally. Let's just say they are. I want to mark off numbers up to four. Why four? Well, \(\pi\) was between three and four, and \(e\) was between two and three, so if we go all the way to four, we can actually mark on all the numbers that we have talked about so far. The line will now look like this:
What we have marked out there are five "numbers". Or, four if you do not accept zero as a number. Or, actually, three if you do not accept one as a number. But that is really getting pretty semantic at this point. You can actually see what we are talking about. There is a dot marked two, which is twice as far from the zero as the dot marked one is. The dot marked four is four times as far away as the one, and twice as far away as the two. These are our whole, natural, numbers. These are the things that you count cows or apples in.
Our addition operation can also be visualised here. If we wanted to add two to one, we start at one and take two "hops" to the right ending up on three. It's the same thing with two plus two - you would end up at four.
When we looked at the different operations you could apply to these "natural" numbers we uncovered different kinds of numbers. For instance, we found fractions. How do these fit on the line? Easy, they just appear at some point between the natural numbers, like this:
In that picture the mark for a half is exactly half way between zero and one. What about irrational numbers? Can we show them? Yes, of course. Remember the square root of two? We cannot write it down as one whole number divided by another whole number. But it does have a size. If you use the random rooter spreadsheet that I made for the first blog post, you will find that the square root of two starts \(1.41421\). We could get more and more precision, but that is fine for our number line:
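(As an aside, if you have a computer handy rather than a compass and straight edge, here is a minimal sketch of one way to squeeze those digits out by repeatedly halving an interval - bisection. It is not the actual random rooter spreadsheet, and the name approx_sqrt is just something I have made up for the illustration.)

```python
# Rough sketch: home in on the square root of 'target' by repeatedly
# halving the interval that must contain it.
def approx_sqrt(target, iterations=50):
    low, high = 0.0, target if target > 1 else 1.0
    for _ in range(iterations):
        mid = (low + high) / 2
        if mid * mid < target:
            low = mid   # mid is too small, the root lies above it
        else:
            high = mid  # mid is too big, the root lies below it
    return (low + high) / 2

print(round(approx_sqrt(2), 5))  # 1.41421
```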
There you go, between one and two, but not quite halfway. Now, what about our more esoteric numbers represented by the symbols \(e\) and \(\pi\)? We can just slot 'em on there:
We are probably, to be honest, getting a bit beyond what the Greeks could do with their number lines, but bear with me. There's \(e\) just before the three, and \(\pi\) just after the three. Those are the numbers that we are talking about - that's what they "look" like on the number line. This is all perfectly real. You can happily say that \(e\) is smaller than \(\pi\) because it is nearer the zero than \(\pi\).
OK, we have two other types of numbers that we found when looking at the operations you can perform, negative numbers and imaginary numbers. Can we make a mark on this number line to show either of these? Nope. Negative numbers are less than zero, but our line stops at zero. And imaginary numbers are ... something else entirely. Let's fix the negative numbers first. All we have to do is extend our line to the left, beyond the zero:
And then we mark off our negative numbers:
Our operations to get negative numbers now make sense. To deduct three from two, we just start at two, and then take three "hops" left to get negative one. Addition works just as it did before: adding two to one is still two "hops" to the right, ending up on three.
So now we come to imaginary numbers. We know already that they are neither positive nor negative. How do we know that? Well, remember back to when we tried to work out if the square root of negative one was positive or negative. We came to the conclusion that it was neither. This was because both a positive and a negative number square to a positive number. We need something new that squares to a negative. So the number \(i\) cannot be drawn on our line to either the left or right of the zero. So where do we draw it? We draw it ABOVE the zero:
In fact we draw a whole new \(i\) number line perpendicular to our \(1\) number line:
I had to move the zero digit to the left a bit, but it is still supposed to be the point where the lines cross over.
This is all a bit much to take in. It is tempting to ask oneself "how on earth can there be a whole new number line perpendicular to the original one?", or "it doesn't make any sense", or "what does it all mean?". Do not feel bad about asking these questions, but understand that these are exactly the sort of questions people would have asked about adding more number line to the left of the zero to accommodate the negative numbers. From the ancient Egyptians right up to the century in which America won its independence, mathematicians pretended that negative solutions to problems did not exist. They would have stared in disbelief at the existence of a line to the left of the zero. So don't feel bad if that was also your first reaction to the up and down line.
Incidentally, just as there was no particularly 'right' end of the line to mark as zero, I could just as easily have drawn the line with the positive \(i\)'s underneath the real number line.
So what then does it mean to say that a number has real and imaginary parts? It basically means that instead of living in the one dimensional world of our original number line, numbers actually live in the two dimensional world of the plane delineated by the real number line and the imaginary number line. Say, what?
To find a real number, all you need is one piece of information: where it is positioned on the number line. However, to find a number with imaginary and real parts you need two pieces of information. First, where it lies on the real number line (the real part) and second where it lies on the imaginary number line (the imaginary part). If a number has a real part of two and an imaginary part of one it would be where the black dot is on this picture:
What you do is go along the real line to two, and then go up to \(i\). We write this number as \(2+i\). It has an imaginary part of one and a real part of two. All of the "ordinary" numbers we have met so far exist on this plane too, it's just that they are all in the form \(x+0i\). In other words their imaginary part is zero. But they still have one!
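(Incidentally, if you like computers, this "two pieces of information" idea is baked right into Python's built-in complex numbers - it writes \(i\) as j. This is just an aside to make the idea concrete, not anything the Greeks had:)

```python
z = 2 + 1j             # the number 2 + i (Python writes i as j)
print(z.real)          # 2.0 -> how far along the real number line
print(z.imag)          # 1.0 -> how far up the imaginary number line
print((3 + 0j).imag)   # 0.0 -> an "ordinary" number still has an imaginary part, it is just zero
```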
So how does all this fit into \(i\) being the square root of negative one? Simple. It turns out that multiplying by \(i\) is the same as rotating about the zero point by 90 degrees. Say, what? OK, OK. Let's look at what I mean. Take our number above, \(2+i\), and let's multiply it by \(i\). How do we do that? It's just the same as any other time we multiply two bracketed expressions together. Remember that \(i\) also has a real component, which is just zero. So we just need to follow the standard procedure for multiplying brackets together (blue and red baskets with apples and oranges):
\[(0+i)\cdot(2+i)\]
We need to multiply the first two terms of each bracket together:
\[0\cdot 2=0\]
Then the first from the first bracket and the second from the second bracket:
\[0\cdot i=0\]
Then the second and the first:
\[i\cdot 2=2i\]
And finally the second and the second:
\[i\cdot i=-1\]
Remember that \(i\) is the square root of negative one, so of course if you multiply \(i\) by \(i\) you get negative one. Now let's add all that up:
\[0+0+(2i)+(-1)\]
We need to rearrange that a bit, because tradition has the real part coming first in our number:
\[-1+2i\]
What does that look like on our number plane?
Now we can multiply that number (\(-1+2i\)) by \(0+i\) again to see what happens:
\[(0+i)\cdot (-1+2i)\]
\[(0\cdot -1)+(0\cdot 2i)+(i\cdot -1)+(i\cdot 2i)\]
\[0+0+(-1\cdot i)+(2\cdot i\cdot i)\]
\[0+0-i+(2\cdot -1)\]
\[0+0-i-2\]
\[(-2-i)\]
See, we have multiplied by \(i\) twice so we have come a total of 180 degrees from the start. We can get our last dot by running the multiplication again.
\[(0+i)\cdot (-2-i)\]
\[(0\cdot -2)+(0\cdot -i)+(i\cdot -2)+(i\cdot -i)\]
\[(0)+(0)+(-2i)+(i\cdot -i)\]
Hang on a minute. What is negative \(i\) multiplied by \(i\)? Well think of it like this. Negative \(i\) is the same as negative one times \(i\). So that sum in brackets at the end of the last line is the same as \(i\cdot -1\cdot i\). Which in turn is the same as \(-1\cdot i \cdot i\). We know what \(i\) multiplied by \(i\) is, so that gives us \(-1 \cdot -1\), which we already know to be the same as one. That gives us the next line:
\[(0)+(0)+(-2i)+(1)\]
\[(1-2i)\]
On our graph that is here:
We are nearly there, and we can check the seemingly inevitable destination by multiplying once more. This time we can dispense with multiplying by zero because we know that gets us nowhere. So instead we have: \[\begin{split}
i\cdot(1-2i)&=(i\cdot 1)+(i\cdot -2i)=(i)+(-2\cdot i\cdot i)\\
&=i+(-2\cdot -1)=i+2=2+i
\end{split}\]
And that, of course, is exactly where we started.
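(If you would rather not grind through the brackets by hand, here is a small Python check of the whole round trip - Python's built-in complex numbers do the same bracket multiplication for us, so this is just a sanity check of the arithmetic above:)

```python
# Multiply 2 + i by i four times: each multiplication is a quarter turn
# about zero, so after four turns we should be back where we started.
z = 2 + 1j
for _ in range(4):
    z = z * 1j
    print(z)
# prints (-1+2j), (-2-1j), (1-2j) and finally (2+1j) again
```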
Of course, we now know that if you start at \(i\) and multiply that number by \(i\) you rotate through 90 degrees counter clockwise, bringing you to ... drumroll ... negative one. That looks like this:
Right, brilliant, we can now see what it means to multiply something by \(i\), but is any of this actually real? What I mean is, is all this just an invention of mathematicians to fill a gap, or did imaginary numbers exist before mathematicians "found" them? You could ask this question of the whole of maths, and very earnest people with wild hair and ink spots on their shirt pockets often do. I am not going that far, but I am going to try and convince you that imaginary numbers do affect "normal" maths even when you are dealing purely with real numbers and not trying to find the square root of anything negative. To do that we are going to have to dive into some infinite sums and series.
Monday, 25 July 2011
Monday, 18 July 2011
Such an Imagination
OK. Let's have a go at imaginary numbers. These are NOT straightforward. The name is also very annoying because they are very real indeed, and I will try to satisfy you that that is the case in due course.
First of all what the hell are they? We have already seen four different types of numbers. We have seen whole numbers such as one, two, three, ten and so on. These are obvious in our everyday world. How many pens are on the desk? Three. How many cows are in that field? Twenty. They are used to count separate and distinct objects. Looked at formally, the \(x\) in the following equation is a whole number:
\[x+5=7\]
We can see without too much effort that \(x\) is two. Two is obviously a whole number so we are quite happy with that as a value of \(x\) that makes the equation work.
When we looked at the opposite of addition, we found a different type of number, a negative number. We agreed that saying you had negative five cows in a field did not make sense. The concept was so odd that for a long time mathematicians refused to accept that these numbers really existed. Nowadays we are perfectly happy with the concept (unless your bank account is very, very negative, in which case you will be perfectly unhappy with the concept).
So what about a slightly different equation which works with a negative value of x?
\[x+5=2\]
For the equation to work, we need to set \(x\) equal to negative three. \(x\) cannot be a positive number, because there are no positive numbers that are five smaller than two. Again, we are quite happy with this, but weirdly even just a few hundred years ago the best mathematicians in the world would have said that there was NO ANSWER to that problem.
OK, next we considered fractions. We agreed that while you could have half a cow, it would not be very pleasant to look at. More fundamentally we realised that fractions are ratios between two whole numbers. So if I have five apples, and my friend has ten apples then I have half as many apples as my friend. That is a statement about the ratio of our apple collections to each other. What would this look like stated as an algebra question?
\[3\cdot x=2\]
This is just a little bit trickier because it is asking: three of what makes two? Or, what is two divided by three? We are quite happy with the answer two thirds. And in general people have always been comfortable with this idea. After all, you are just comparing two whole numbers.
OK, moving on we then learned about irrational numbers. These are numbers that cannot be written as one whole number divided by another whole number. We satisfied ourselves that the number which you multiply by itself to get two is one of these numbers. The equation which has one of these as an answer looks like this:
\[x^2-2=0\]
For the hard of thinking, you add two to both sides and then take the square root of both sides, getting \(x\) equal to the square root of two. The ancient Greeks really did not like this. They felt that all numbers should be rational, and they were really disturbed to find out that was not the case. It's a bit more abstract today, but we are generally not bothered by the idea that there are some numbers which would just go on for ever if you tried to write them out.
Now then. What is the answer to the following puzzle? What number, if you multiply it by itself, and then add one, gives zero? Or algebraically:
\[x^2+1=0\]
It is very similar to the square root of two equation just above isn't it? So what do we do? We subtract one from each side, getting:
\[x^2=-1\]
And we then take the square root of each side:
\[x=\sqrt{-1}\]
So the answer is the number, that if you multiply it by itself, makes negative one. OK, so what would that be then? One multiplied by itself is one, so is negative one multiplied by itself negative one? We need to think about multiplying negative things to get an answer to that.
We said that multiplying is just a special type of addition. So that three multiplied by four is:
\[3+3+3+3=12\]
Notice that there are four threes there. So what would negative three multiplied by four look like? Well, we would just add together four negative threes. That would look like this:
\[(-3)+(-3)+(-3)+(-3)=-12\]
Remember that adding a negative is the same as subtracting a positive. And also remember that you do stuff in brackets first. So while you have plus signs in between each set of brackets, the fact that the numbers INSIDE the brackets are negative means that you end up subtracting. Didn't we say though that it doesn't matter which way round you multiply things? So what does four multiplied by negative three look like as addition? Well, sticking with our definitions, it is four added together negative three times. How do you add something a negative amount of times? Remember that a negative something is the same as the something subtracted from zero. So, for positive multiplication you add up a group of things, but for negative multiplication you subtract your number from zero the same amount of times you are supposed to negatively multiply it by. So it looks like this:
\[(0)-(4)-(4)-(4)=-12\]
(The zero looks a bit odd there, and I suppose it could be implied in the same way that positive one is implied to be zero plus one. So we could have written the four threes above as zero plus the four threes.)
As we would expect that is also negative twelve. So we have now looked at a positive multiplied by a positive (where both the number and the sign between the numbers are positive). That gives a positive result. We have looked at a negative multiplied by a positive (the numbers are negative but the sign between them is positive). And we just looked at a positive multiplied by a negative (the numbers are positive but the sign in between them changes to a subtraction instead of addition sign). What about the last option, multiplying a negative number, a negative amount of times? What does that look like as an addition?
Well, you will be multiplying a negative number, so the numbers IN the brackets are going to be negative. And we are going to be doing it a negative amount of times so the numbers BETWEEN the brackets will also be negative. The sum looks like this:
\[(0)-(-3)-(-3)-(-3)-(-3)=12\]
Because we are duplicating a negative number a negative amount of times we end up with two negative signs. Adding a negative is the same as subtracting the number, so subtracting a negative is the same as adding the number. Sound weird? Not really: if I lend you £10, then you owe me £10 (let's call that negative £10). If I then subtract, or cancel, the debt I have effectively gifted you £10. So I turn a negative obligation (you have to give me £10) into a positive benefit (I have given you £10).
So what you actually get when you subtract all those negative numbers is a positive number. In my mind's eye, I see the two minus signs combine into a plus sign, with one of them rotating through ninety degrees. So for every two minus signs you create a plus.
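(For anyone who likes to see this mechanically, here is a little sketch of "multiplication as repeated addition or subtraction" in Python. The function name multiply_by_adding is just one I have invented for the illustration, not anything standard:)

```python
def multiply_by_adding(a, times):
    # Multiplication as repeated addition: add 'a' to zero 'times' times.
    # A negative count means subtracting instead, which is where the
    # "two minuses make a plus" rule comes from.
    total = 0
    for _ in range(abs(times)):
        if times >= 0:
            total += a
        else:
            total -= a
    return total

print(multiply_by_adding(3, 4))    # 3+3+3+3 = 12
print(multiply_by_adding(-3, 4))   # (-3)+(-3)+(-3)+(-3) = -12
print(multiply_by_adding(4, -3))   # 0-4-4-4 = -12
print(multiply_by_adding(-3, -4))  # 0-(-3)-(-3)-(-3)-(-3) = 12
```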
So what about our hypothesis that if you multiply negative one by itself negative one times you get negative one? That would mean that you subtract (-) negative (-) one (1) from zero once (1). That would look like this:
\[(0)-(-1)\]
The two minus signs combine to form a plus, and you get positive one. So the square root of negative one cannot be negative one. It cannot be positive one either, because two positive numbers multiplied together always give a positive answer. So if the answer cannot be negative and cannot be positive, then what the hell is it? All the numbers that we know about - all of the numbers above - are either less than zero or greater than zero (or zero itself, I grant you). So how can a number be neither positive nor negative?
Oh dear. It appears that we are stumped. And indeed for a long time people treated equations which produced answers that were the square roots of negative numbers in the same way as they treated equations which gave negative numbers themselves as answers. In other words they ignored them.
We do not do that any more though. Instead we say that there IS a number which is the square root of negative one, or a number which if you multiply it by itself gives negative one. We have a symbol for that number, and the symbol is \(i\). What we say, and just work with me on this, is that each number has a 'real' part which is a multiple of one, and an 'imaginary' part which is a multiple of \(i\). The number that is actually the square root of negative one is a number with no real part (or technically a real part of zero), and an imaginary part which is \(i\) multiplied by one. We say that \(i\) is not positive or negative in the sense of being more or less than zero. Instead we say that it has a whole positive and negative spectrum all to itself. So the following equation makes perfect sense:
\[i-2i=-i\]
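(Plenty of programming languages take this on trust too. Python, for one, has these numbers built in, so you can check for yourself that \(i\) really does square to negative one and that \(i-2i\) really is \(-i\):)

```python
print(1j * 1j)       # (-1+0j): i multiplied by i is negative one
print(1j - 2 * 1j)   # -1j: i minus two i is negative i
```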
This all sounds a bit abstract. What would such a number look like, and how does it relate to the other numbers that we are familiar with? I'll try and make it more visual next time.
Monday, 11 July 2011
Are Infinite Sums Infinite?
So, infinite sums. We covered these. An infinite sum is an infinitely long list of numbers which you add up. If we want to add up all of the positive whole numbers we would write out:
\[\sum_{x=1}^\infty x\]
If we wanted to add up all the even numbers we would write out:
\[\sum_{x=1}^\infty 2\cdot x\]
Of course, both of these additions are pointless exercises, because the answers are themselves infinite. There are an infinite amount of whole numbers, and if you add them all up you get infinity. Hell, even if you just added one to itself an infinite amount of times you would also get infinity:
\[\sum_{x=1}^\infty \tfrac{x}{x}\]
(Any number divided by itself is automatically one).
Does this always hold true? Do infinite sums ALWAYS add up to infinity? What about this sum:
\[\sum_{x=0}^\infty \frac{1}{2^x}\]
Looks a bit more complicated doesn't it? First of all notice that I am going to start adding from the zeroth position in the series. So to begin I plug in 0 for \(x\). I get one divided by two to the power of zero. Remember that anything to the power of zero is one. So the first term in this series is one divided by one, or one.
The next term is one divided by two to the power of one. Anything to the power of one is just one copy of itself, so two to the power of one is just two. So the term is one divided by two, or a half. So far our sum is one plus a half.
The next term is one divided by two squared. Two squared is four, so this is one quarter. The next term is going to be one over two cubed, or an eighth and so on. Basically the series is one plus a whole long list of the inverses of the powers of two. Looks a bit like this:
\[1+\tfrac{1}{2}+\tfrac{1}{4}+\tfrac{1}{8}+\tfrac{1}{16}+\tfrac{1}{32}+\tfrac{1}{64}+\tfrac{1}{128}+\tfrac{1}{256}+\tfrac{1}{512}+\ldots\]
Does that add up to infinity as well? Hmm. Maybe not - look at each term, they all get smaller very very quickly. If they get smaller quickly enough, then adding them all up may not reach infinity.
Let's try some mathematical wizardry. Let's create a variable, which we will call \(x\). Actually, no, let's call it \(S\) for series instead. Now let us make the variable equal to the series that we have created from our sum. To keep things simple we'll cut down the number of terms on display:
\[S=1+\tfrac{1}{2}+\tfrac{1}{4}+\tfrac{1}{8}+\tfrac{1}{16}+\ldots\]
OK, now we will divide each side by two:
\[\frac{S}{2}=\frac{1+\tfrac{1}{2}+\tfrac{1}{4}+\tfrac{1}{8}+\tfrac{1}{16}+\ldots}{2}\]
If we divide a long series of additions by two, that is the same as dividing each individual number in the series by two. Starting with the one at the beginning, that will become a half, and then the half becomes a quarter and the quarter an eighth and so on. Can you see that we are effectively just throwing away the one and shifting every remaining entry in the series one place to the left? Because the series is infinitely long, it stays infinitely long: infinity minus one is still infinity. So once we have done all our divisions by two we get:
\[\frac{S}{2}=\tfrac{1}{2}+\tfrac{1}{4}+\tfrac{1}{8}+\tfrac{1}{16}+\ldots\]
OK. What can we do with that then? Look back two equations to where we first defined what \(S\) was going to be. You can see that it is one plus a string of fractions. Now look at the equation above. We have established that one half of \(S\) is a string of fractions. Now consider the string of fractions itself. It looks similar. In fact it looks identical, and it will remain identical no matter how many terms you write down for either one. So far, so good. We can now say that \(S\) is one plus that string of fractions, but we know that the string of fractions is actually \(\tfrac{S}{2}\) so we can actually write:
\[S=1+\frac{S}{2}\]
And if we subtract one half of \(S\) from both sides we get:
\[S-\frac{S}{2}=1\]
\(S\) minus half of \(S\) is obviously just the other half of \(S\), so:
\[\frac{S}{2}=1\]
\[S=1\cdot2\]
\[S=2\]
So we have proved that the whole series up there adds up to two, even though there are an infinite amount of terms in it.
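(If the algebra feels like a conjuring trick, you can watch the running total sneak up on two by just adding the terms with a computer. This is a quick sanity check rather than a proof - the proof is the algebra above:)

```python
# Add up 1 + 1/2 + 1/4 + ... term by term and watch the running total
# creep towards two without ever getting past it.
total = 0.0
for x in range(20):
    total += 1 / 2 ** x
print(total)  # 1.9999980926513672 after twenty terms
```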
Monday, 4 July 2011
Infinite Sums
We are moving on to the last symbol shortly, \(i\). Before we do so I just want to lay a little groundwork with infinite sums.
You do not need to actually know about infinite sums to understand about imaginary numbers. So why are we bothering to look at them now? Two reasons. Firstly because the notation for these sums looks frightening, but isn't, and it crops up from time to time in maths texts. So it helps to know what we are looking at. Secondly, and more importantly, imaginary numbers sound a bit, well, imaginary, and actually they are very, very real and can be demonstrated using infinite sums - which do not actually involve any imaginary numbers at all.
So what the hell are we talking about? We all know what sums are - right? Colloquially they are arithmetical exercises, and also perhaps algebraic exercises. You "do your sums" if you are carrying out these types of exercises. In a more formal sense "sum" can be used as a synonym of "add". So if I "sum" two numbers I add them together. So, infinite sums are not homework that never, never, ends, but instead an addition operation that never, never ends. By way of example this is a sum:
\[1+2+3+4+5+6\]
and this is an infinite sum:
\[1+2+3+4+5+6+\ldots\]
The dots at the end just mean 'and so on'. Incidentally, these dots are called an ellipsis. The pattern is obvious - the numbers being added increase by one each time. So instead of writing out all the natural numbers (which would take literally for ever), you just stick the dots on the end. The result of the first sum is twenty-one, the result of the second sum is infinity.
Even if we are just writing out the first sum above, it does take up quite a bit of space. We can use a notation to represent this sum in much less space, although it looks quite scary to start with. So, what we want to do is firstly work out how we describe each individual number to be added. We intuitively know that the numbers above are a pattern. There is an obvious logic behind the numbers. Each one is one larger than the last one. A list of numbers like this, with a logic behind how you get the next one, is called a series of numbers. So what we want to do is work out how to get any particular number in the series if we are just told which position it has in the series. How do we show that? We use functions! Remember a function just takes a variable (like the position in the series) and then does things to it to produce a result.
Once we have done that, we need to show that we are adding lots of things together. We need to have some sort of symbol to represent lots of addition. We also need to show which position in the series we are going to start counting from, and when to stop. That should do it.
In fact the symbol used in maths is this:
\[\sum\]
All that says is that what follows it is going to be added up. We put our formula for working out the numbers to the right of the symbol like this:
\[\sum f(x)\]
We then show what position we start in the series below the symbol:
\[\sum_{x=1} f(x)\]
Which for us in our example above is the first position, so we start with \(x\) equal to one. Remember, the one just means the first position in the series which goes into the function, not the outcome of the function. Lastly we need to show at which position in the series we want to stop adding entries. We do this by putting that number at the top of the symbol:
\[\sum_{x=1}^6 f(x)\]
So that complicated mess of stuff just means take a series where you generate each entry by putting the position of the entry in the series into the function, put the numbers one to six into the function in turn, and add up all the outcomes. To generate the series above the function \(f(x)\) is just \(x\) because the output of the function is the same as the input.
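(If it helps, the sigma symbol behaves exactly like a little loop. Here is a sketch in Python - the function name sigma is mine, picked for the illustration, not standard notation:)

```python
# A stand-in for the summation symbol: add up f(x) for every position x
# from 'start' to 'stop' inclusive.
def sigma(f, start, stop):
    total = 0
    for x in range(start, stop + 1):
        total += f(x)
    return total

print(sigma(lambda x: x, 1, 6))      # 1+2+3+4+5+6 = 21
print(sigma(lambda x: 2 * x, 1, 3))  # 2+4+6 = 12, the start of the even numbers
```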
To symbolise adding up the infinite series illustrated above, you would put infinity at the top of the summation symbol to show that you just kept adding and adding forever:
\[\sum_{x=1}^\infty f(x)\]
What if you wanted to add up all the even numbers? Remember in our proof of the irrationality of \(\sqrt{2}\) we said that any even number was a number that could be divided by two to give another whole number. So \(f(x)\) would be \(2\cdot x\). That would make the first entry in the series two multiplied by one, the second two multiplied by two and the third two multiplied by three, or 2, 4, 6 and so on.
It is traditional that instead of \(x\) the variable we use to denote the position in the sequence is \(n\). Typically we would also dispense with the \(f(n)\) stuff and we would write \(a_n\) instead. That's a bit bloody weird though, using two variables for one number. All that means is that \(a_n\) is the \(n^{th}\) number in the series. I prefer to stick with \(f(x)\) or at least set out the function, because that way you can be sure what is generating the series of numbers.