(Apologies for the repeated comments. Free LJ accounts apparently aren't cool enough to edit their comments, and LJ's comment preview feature bites hard, so I have to try a couple times to find the appropriate tags. Sorry.)
Exactly. If, at any point in working with a non-terminating decimal, you choose to operate on the number as though it did terminate, then you're working on an approximation.
Sorry, let me back up for a second.
I think we can agree that 1/3 != .333 (i.e., without the "..." notation), right? We can represent 1/3 as .333..., repeating infinitely. But then what happens when you perform an operation on such a number?
You might say that 3 * .333... must equal .999... repeating infinitely, but you've just introduced a rounding error: to get that result, you had to treat .333... as though it terminated at some point.
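To make the terminating-vs.-non-terminating distinction concrete, here's a quick sketch using Python's standard fractions and decimal modules (just an illustration, not anything formal): keep 1/3 as an exact fraction and multiplying by 3 gives exactly 1, but cut the decimal off at any finite number of 3s and the product stops short of 1.

from fractions import Fraction
from decimal import Decimal

# Exact rational arithmetic: no decimal expansion is ever written down,
# so nothing gets cut off.
exact = Fraction(1, 3) * 3
print(exact)                                 # 1

# Any *terminated* decimal form of 1/3 is an approximation, and the
# shortfall survives the multiplication by 3.
for digits in (3, 6, 12):
    truncated = Decimal('0.' + '3' * digits)   # .333, .333333, ...
    print(digits, truncated * 3)               # 0.999, 0.999999, ... never 1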
Or, let me try yet another tack:
The math I'm presenting here is as algebraically correct as what anybody else has presented, yet it arrives at a different conclusion.
I think you have to be really careful when you're dealing with infinitely repeating decimals and accuracy matters.
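For what it's worth, the same care is needed even at high working precision. Here's a small sketch with Python's decimal module (28 significant digits is just its default setting): divide 1 by 3, multiply back by 3, and you get a string of 9s rather than 1, because the division had to cut the expansion off somewhere.

from decimal import Decimal, getcontext

# Any finite working precision forces 1/3 to be cut off somewhere,
# and the shortfall is still there after multiplying back by 3.
getcontext().prec = 28                 # decimal's default precision
third = Decimal(1) / Decimal(3)        # 0.3333333333333333333333333333 (28 threes)
print(third * 3)                       # 0.9999999999999999999999999999, not 1
print(third * 3 == Decimal(1))         # False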