Multiplication and division of floats create a rounding error of at most 0.5 ulp each time (IEEE 754 requires these operations to be correctly rounded). In most contexts you never need to worry about them.
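To make that bound concrete, here is a minimal Python sketch (using only the standard-library `fractions` and `math.ulp`) that measures the exact error of a single multiplication in ulps:

```python
import math
from fractions import Fraction

a, b = 0.1, 0.3                      # the stored (already-rounded) values
computed = a * b                     # one rounded multiplication
exact = Fraction(a) * Fraction(b)    # exact product of the stored values

err_ulps = abs(Fraction(computed) - exact) / Fraction(math.ulp(computed))
print(float(err_ulps))               # always <= 0.5 for a single mul or div
```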
The operations you need to watch out for are addition and subtraction, in cases where the result has much smaller magnitude than the inputs, causing loss of significance (catastrophic cancellation). Sometimes great care must be taken in implementing numerical algorithms to avoid this. But that is an inherent problem of numerical computing, not the fault of the floating point format per se.
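A short Python illustration of the effect: computing 1 - cos(x) directly for small x loses every significant digit, while the algebraically equivalent form 2*sin(x/2)**2 avoids the cancellation entirely:

```python
import math

x = 1e-8
naive = 1.0 - math.cos(x)            # cos(x) rounds to exactly 1.0 here,
                                     # so the subtraction cancels everything
stable = 2.0 * math.sin(x / 2) ** 2  # same quantity, no cancellation

print(naive)   # 0.0 -- all significant digits lost
print(stable)  # ~5e-17, close to the true value x**2 / 2
```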
Doing integer/rational arithmetic gives you a choice: either never do any rounding, and accept the exponentially growing precision that makes even simple iterative algorithms impractically expensive (not to mention giving up entirely on common computations like square roots, logarithms, and trigonometry, which no exact rational arithmetic system can represent at all), or allow rounding/approximation of some kind, and end up with roughly the same problems floats have.
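A sketch of the first option, using Python's exact `fractions.Fraction` to run Newton's iteration for sqrt(2): the numerator roughly doubles in length each step, and since sqrt(2) is irrational no finite fraction ever equals the answer anyway:

```python
from fractions import Fraction

x = Fraction(3, 2)
for step in range(1, 8):
    x = (x + 2 / x) / 2              # Newton step: x <- (x + 2/x) / 2
    digits = len(str(x.numerator))
    print(f"step {step}: numerator has {digits} digits")
# By step 7 the numerator is dozens of digits long and still growing
# roughly exponentially; a float would have converged to full machine
# precision after ~5 steps at constant cost per step.
```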