Computer Science's Math

There is nothing wrong with applying your algebra and saying (a+b)/2 = a + (b-a)/2. Computer science, however, says that (a+b)/2 is NOT always equal to a + (b-a)/2. But why? Think about it for a moment.

Here is the answer. To compute (a+b)/2, one has to add the two numbers first and then divide the sum by 2. Suppose we have a computer with 32-bit registers, and both a and b are 32-bit integers. If the sum a+b is larger than the largest value a 32-bit integer can represent, it overflows. Thus, even when the final result of (a+b)/2 fits in 32 bits, the intermediate sum may be too large for 32 bits.
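A quick way to see this is to try values near the top of the 32-bit signed range. The snippet below is a minimal sketch in Java (chosen here because int overflow wraps around predictably in Java); the particular values of a and b are just illustrative.

```java
public class OverflowDemo {
    public static void main(String[] args) {
        int a = 2_000_000_000;   // both values fit comfortably in a 32-bit int
        int b = 2_100_000_000;   // Integer.MAX_VALUE is 2_147_483_647

        // a + b = 4_100_000_000 does not fit in 32 bits, so it wraps around
        // to a negative number before the division ever happens.
        int mid = (a + b) / 2;

        System.out.println(mid); // prints -97483648, not the expected 2050000000
    }
}
```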

On the other hand, when you compute a + (b-a)/2, you are adding a and (b-a)/2. Assuming a ≤ b and both numbers have the same sign (the common case, e.g. array indices), b-a fits in 32 bits, and so does (b-a)/2. Since the midpoint itself lies between a and b, it is also a valid 32-bit integer, so a + (b-a)/2 never suffers from the overflow problem. Hence, on a computer, (a+b)/2 may not always be equal to a + (b-a)/2!
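For comparison, here is the same pair of numbers computed the second way. This form is commonly used for the midpoint in binary search precisely to avoid the overflow above; again, this is just a Java sketch with the same illustrative values.

```java
public class SafeMidpointDemo {
    public static void main(String[] args) {
        int a = 2_000_000_000;
        int b = 2_100_000_000;

        // b - a = 100_000_000 fits easily in 32 bits, and so does
        // a + (b - a) / 2, because the true midpoint lies between a and b.
        int mid = a + (b - a) / 2;

        System.out.println(mid); // prints 2050000000, the correct midpoint
    }
}
```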