A normalized number is a number whose biased exponent is non-zero and whose mantissa has a non-zero leading bit. For such numbers, all the bits of the mantissa contribute to the precision of the representation.
The smallest normalized single precision floating-point number greater than zero is about 1.1754943e-38. Smaller numbers are possible, but they must be represented with a zero exponent and a mantissa whose leading bit(s) are zero, which leads to a loss of precision. These numbers are called denormalized numbers, or denormals (newer specifications refer to them as subnormal numbers).
Denormal computations require hardware and/or operating system resources to handle them, which can cost hundreds of clock cycles.
Denormal computations take much longer to calculate on IA-32 and Intel® 64 processors than normal computations.
Denormals are computed in software on processors based on the IA-64 architecture, and the computation usually requires hundreds of clock cycles, which results in excessive kernel time.
There are several ways to handle denormals and increase the performance of your application:
Scale the values into the normalized range.
Use a higher precision data type with a larger dynamic range.
Flush denormals to zero.
See Also
Intel® 64 and IA-32 Architectures Software Developer's Manual, Volume 1: Basic Architecture
Intel® Itanium® Architecture Software Developer’s Manual, Volume 1: Application Architecture
Institute of Electrical and Electronics Engineers, Inc.* (IEEE) web site for information about the current floating-point standards and recommendations.