Point nine recurring[1] equals one.
I understand that this claim is intuitively displeasing to many intelligent people. "Point nine recurring doesn't equal one!", they say, and give various reasonable accounts of why the two should be distinguished. However, as an engineer, I submit to the reader that we should equate them, however displeasing it may appear, on the following practical grounds.
Decimal notation is an incredibly effective way of describing numbers. (Or, more accurately, positional notation is. We use decimal specifically because we have ten fingers. I think senary or quaternary notation would be a superior substitute, but I'm not in charge.) Using a small set of symbols and a few characters, we can represent, in a very close-grained fashion, numbers spanning a huge range of orders of magnitude - and an even larger range if we permit exponential notation (e.g. 6.022 * 10^23). Further, in decimal notation, comparison of the magnitudes of numbers is dead simple. (Is 14/25 larger than 9/16? 14/25 = .56 and 9/16 = .5625, so no. Compare that with performing the fraction subtraction!) However, it pays for these advantages with a corresponding flaw: there are many[2] numbers which cannot be written out in standard decimal notation. Included among these are the majority of fractions. A good example of this is 1/3.
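(A quick aside: here is that 14/25-versus-9/16 comparison as a small sketch, using Python's standard fractions module - purely an illustration, not part of the argument.)

    from fractions import Fraction

    a, b = Fraction(14, 25), Fraction(9, 16)
    print(float(a), float(b))  # 0.56 0.5625 - the decimal forms are easy to eyeball
    print(a < b)               # True: 14/25 is not larger than 9/16

Back to that example of 1/3: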
1/3 = 10/30 = 9/30 + 1/30 = 9/3 * 1/10 + 1/3 * 1/10 = 3 * 1/10 + 1/3 * 1/10 = 3 * 1/10 + 3 * 1/10 * 1/10 + 1/3 * 1/10 * 1/10 = 3 * 1/10 + 3 * 1/10^2 + 3 * 1/10^3 + ...
As is obvious, there's no way to write this completely out as a decimal, like you could with 14/25 and 9/16. So, what do you do? You make up a new notation. Instead of trying to hit the end, you go until you've got the pattern, and then...
1/3 = 3 * 1/10 + 3 * 1/10^2 + 3 * 1/10^3 + ... = .3 + .03 + .003 + ... = .333...
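To see how the notation behaves, here's another small sketch (again Python's fractions module, just as an illustration): the partial sums 3/10 + 3/10^2 + ... + 3/10^n creep toward 1/3 but never reach it at any finite n, which is exactly why the "..." is needed.

    from fractions import Fraction

    total = Fraction(0)
    for n in range(1, 6):
        total += Fraction(3, 10**n)  # add the next 3/10^n term
        print(n, total, float(total), float(Fraction(1, 3) - total))
    # the leftover after n terms is exactly 1/(3 * 10^n), shrinking toward zero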
Now, recall, there's no requirement that this notation exist. If, for example, you were programming a digital computer, you might choose not to bother with this notation, as you would never store an infinite decimal. However, defined this way, it is convenient. One especially convenient thing about it, for example, is that one can perform the regular arithmetic operations without trouble:
2/3 = 2 * 1/3 = 2 * .333... = .666...
4/3 = 4 * 1/3 = 4 * .333... = 1.2 + .12 + .012 + ... = 1.333...
Et cetera.
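For instance, the 4/3 case above can be checked by multiplying the series for 1/3 term by term, just as in the 1.2 + .12 + .012 + ... expansion (a sketch, nothing rigorous):

    from fractions import Fraction

    partial = Fraction(0)
    for n in range(1, 6):
        partial += 4 * Fraction(3, 10**n)  # 1.2, then + .12, then + .012, ...
        print(float(partial))              # 1.2, 1.32, 1.332, 1.3332, 1.33332
    print(float(Fraction(4, 3)))           # the value the partial sums are heading toward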
However, the other consequence is the following.
3/3 = 3 * 1/3 = 3 * .333... = .999..., and 3/3 = 1; thus 1 = .999...
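If you want to poke at that last step, the same kind of sketch applies: the gap between 1 and a finite string of n nines is exactly 1/10^n, and it can be made as small as you like, so there is no positive amount left over for .999... to fall short of 1 by.

    from fractions import Fraction

    for n in (1, 3, 6, 9):
        nines = Fraction(10**n - 1, 10**n)        # 0.9, 0.999, 0.999999, 0.999999999
        print(n, float(nines), float(1 - nines))  # the gap is exactly 1/10^n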
So, one is left with the dilemma. Preserve this useful notation, or throw it out to avoid this (to some) distasteful equality? And, looking at it like that, is it really any sort of choice?
Thus, point nine recurring equals one. Q.E.D.
1. Some may say, for example, "repeating", rather than "recurring". (Google suggests the terms are approximately equal in popularity.) Naturally, the specific term is irrelevant. ^
2. "Many", in this case, meaning "an infinite number of" - specifically, 2 raised to the (infinite) number of counting numbers. What aleph infinity this is depends on whether you accept the continuum hypothesis. ^
no subject
I fully realize that I'm stubbornly refusing to accept something regarded as rigorously proven canon by a very large number of people who are both smarter and better trained in arcane mathematics than I am.
Y'know, having written this proof, I'm not so sure the proofs are that rigorous. It's sorta definitional, in some ways. (Then again, all math is. :D )
no subject
Yep. I was very carefully going over the content of Sam Hughes' page that you linked to earlier. In any case where somebody is presenting a large volume of well-structured evidence supporting an idea that I have trouble accepting, I try to find the first point in their argument that I disagree with.
What I found was right near the top, where he says, "Different sets of numbers have different properties." (Which I don't disagree with; it's just that from there I begin to follow a different line of reasoning.) I think -- without the necessary background in mathematics to give this notion any kind of legitimacy whatsoever -- that real numbers which necessarily have an infinitely repeating decimal value must be treated as a separate set of numbers, with some of their own properties. So, I see the case of 1 being represented as 1.000000... as no different from saying 1 = 1 + 0 + 0 + ...; none of the decimals add any value at all to the number being operated on. However, in the case of something like .333..., each subsequent decimal adds some small value to the number in question.
Things begin to make some sense from there, although it's inconvenient. For example, that makes the precise value of the number somewhat difficult to deal with. It reminds me of the Koch snowflake; you have this thing with finite bounds but an infinitely long boundary.
So, let's say you wanted to add one Koch snowflake to another Koch snowflake. How would you do that? Well ... you sort of can't. You can get a very close approximation, close enough to do whatever it is you're trying to do. But you can't get the exact result, unless you first return the Koch snowflake you're working with to the original equation which described it. So, if I wanted to operate on .333..., the first thing I would do is return it to the equation that described it: 1/3. Then I could do useful things with it, like add another 1/3, and get the correct answer.
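Here's roughly what that "return it to the equation" idea looks like as a sketch in Python (just an illustration; the names are arbitrary), with Fraction standing in for the original equation and a truncated float standing in for the drawing:

    from fractions import Fraction

    drawing  = 0.333333333333        # a truncated "drawing" of 1/3
    equation = Fraction(1, 3)        # the "original equation" itself

    print(drawing + drawing == 2 / 3)             # False - the approximation drifts
    print(equation + equation == Fraction(2, 3))  # True - exact all the way through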
I think I'm making a distinction between the idea of a particular number, and its value, or its representation. I realize that that's something that would make a lot of mathematicians cringe. :-) But, again, there is a small but not insignificant difference between saying, "Koch snowflake of unit side four", and asking somebody to compare the drawing of such a snowflake to the drawing of another Koch snowflake. It's also better to say, "add a Koch snowflake of unit side 4 to a Koch snowflake of unit side 3 and tell me what you get", than it is to hand a person a drawing -- a representation -- of two Koch snowflakes, and say, "add these up for me".
grins, ducks, and runs
no subject
Incidentally: in a way, you're right, except that professional mathematicians are well-known to say ".999... equals 1". So amateurs should probably treat it the way they treat Gödel's incompleteness theorem (http://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_theorems) - with a humility born of acknowledging the (heh!) incompleteness of their understanding. :D