> Do all your handling of whatever '2.00' using integers, with the number multiplied by 100. That means declaring integer variables in whatever programming language you're using, declaring an INTEGER column in the SQLite database, and writing code to print the number divided by 100 to two places when you want to display it. This is the normal way computers handle amounts of money with a one-hundredth base.

This is a fallacy: the integer-cents approach is useful only for addition and subtraction. Multiplication and division introduce other problems and need to be carried out using "exactly rounded" arithmetic, which means computing the result exactly and then rounding it to the required precision. This cannot be done with plain integer arithmetic; it requires scaled arithmetic. The most commonly available form of scaled arithmetic today is floating point, and the most common variant is base 2, usually called binary floating point, though other bases exist, such as base 10 and base 16; base 2 has specific advantages over the other bases. Some languages directly support fixed-precision scaled types, for example COBOL, which carries out "exactly rounded" arithmetic (with runtime library support) without requiring the programmer to understand the details.
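A short sketch of the difference, using Python's `decimal` module as a stand-in for base-10 scaled arithmetic. The 5.25% rate and the $10.01 amount are made-up values chosen to expose the problem; the point is that pure integer cents silently truncate the product, while the exactly-rounded route computes the exact result first and only then rounds to the cent:

```python
from decimal import Decimal, ROUND_HALF_EVEN

# Integer-cents approach: 5.25% interest on $10.01 (hypothetical figures).
# The rate is scaled by 10000 so it too can be held as an integer.
cents = 1001                                # $10.01 in cents
rate_scaled = 525                           # 5.25% scaled by 10000
truncated = cents * rate_scaled // 10000    # floor division drops the fraction

# "Exactly rounded" arithmetic: compute the exact product, then round once.
amount = Decimal("10.01")
rate = Decimal("0.0525")
exact = amount * rate                       # Decimal('0.525525'), held exactly
rounded = exact.quantize(Decimal("0.01"), rounding=ROUND_HALF_EVEN)

print(truncated)    # 52  (a fraction of a cent lost, and lost invisibly)
print(rounded)      # 0.53 (exact product, rounded to the cent at the end)
```

The same effect shows up in division: splitting $1.00 three ways as `100 // 3` gives 33 cents per share and quietly loses a cent, whereas exact arithmetic lets you decide explicitly how and when to round.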