"A discerning developer just declared that when dealing with dollars, doubles are a dodgy deal".
Don't use float or double types to handle money in your programs. Use BigDecimal instead, or something equivalent in non-JVM languages.
The 'float' type has only 6 or 7 significant decimal digits of accuracy, although it may appear to have more precision. Money values over about $150K start losing precision, roughly a cent at a time, just from raw display, and rounding errors can grow larger when doing computations such as sales tax. A float has a 24-bit significand (the "mantissa", the numerical part aside from the exponent), and that is where the accuracy limit comes from: 2^24 = 16,777,216, so if you count money in whole cents you start losing accuracy once you pass $167,772.16, and that is when the only thing you are doing is displaying numbers. You may have fewer significant digits once you start doing calculations.
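A minimal sketch of that cliff, counting money in whole cents: 16,777,216 is the last point where every consecutive integer fits in a float, so one cent past $167,772.16 simply vanishes.

```java
// Demonstrates the 24-bit significand limit of 'float'.
// 2^24 = 16,777,216 cents = $167,772.16; the next cent is not representable.
public class FloatCents {
    public static void main(String[] args) {
        float a = 16_777_216f; // $167,772.16 expressed in cents
        float b = 16_777_217f; // one cent more, rounded back down by the float
        System.out.println(a == b);  // true: the extra cent is silently lost
        System.out.println((int) b); // 16777216
    }
}
```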
Using the 'double' type is less dangerous, but it is still prone to errors, especially when doing arithmetic on money. Double values have about 15 to 16 significant decimal digits of accuracy, but during financial calculations such as sales tax or interest you might be losing fractions of a penny that eventually add up to millions.
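A small sketch of that drift: decimal fractions like 0.1 and 0.01 have no exact binary representation, so even trivial money math in doubles picks up tiny errors that accumulate.

```java
// Demonstrates double rounding drift in simple money arithmetic.
public class DoubleDrift {
    public static void main(String[] args) {
        // 0.1 and 0.2 are both slightly off in binary, and the sum shows it.
        System.out.println(0.1 + 0.2 == 0.3); // false

        // Adding a penny ten million times should give exactly $100,000.00,
        // but the per-addition error accumulates and the total drifts off.
        double total = 0.0;
        for (int i = 0; i < 10_000_000; i++) {
            total += 0.01;
        }
        System.out.println(total == 100_000.0); // false
    }
}
```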
This emphasizes the importance of testing not merely with big numbers, but with numbers that have many significant digits.
Here's a handy guide to the problem of using floats, in a variety of contexts.
BigDecimal, on the other hand, represents a decimal value as an integer plus an exponent. It can store arbitrary-precision values and compute with them accurately, while giving you explicit control over rounding.
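A brief sketch of BigDecimal in practice. The item price and 8.25% tax rate below are made-up example figures; the point is that the arithmetic is exact and the rounding rule is stated explicitly rather than left to binary floating point.

```java
import java.math.BigDecimal;
import java.math.RoundingMode;

public class ExactMoney {
    public static void main(String[] args) {
        // Construct from Strings: new BigDecimal(0.1) would inherit the
        // double's binary rounding error, while "0.10" is stored exactly.
        BigDecimal a = new BigDecimal("0.10");
        BigDecimal b = new BigDecimal("0.20");
        System.out.println(a.add(b)); // 0.30, exactly

        // Hypothetical sales-tax calculation: 8.25% on a $19.99 item.
        BigDecimal price = new BigDecimal("19.99");
        BigDecimal rate  = new BigDecimal("0.0825");
        BigDecimal tax   = price.multiply(rate)            // 1.649175, exact
                                .setScale(2, RoundingMode.HALF_UP);
        System.out.println(tax); // 1.65
    }
}
```

Note that BigDecimal forces the rounding decision into the open: 'setScale' with a 'RoundingMode' documents exactly how the fractional cent is handled, instead of letting the hardware round silently.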