Floating-Point Numbers Are Already Logarithms

IEEE 754 doubles store numbers as exponent plus mantissa, a decomposition that makes building math functions from scratch surprisingly tractable.

When implementing transcendental functions in a bare-metal assembly project, the IEEE 754 format turns out to be the most useful tool available. The exponent field is a coarse base-2 logarithm. The mantissa is the fractional refinement. Every 64-bit double is already stored as a decomposition that maps almost directly onto the algorithms for exp, ln, and power functions.

The Layout

A double splits into three fields: one sign bit, eleven exponent bits biased by 1023, and 52 mantissa bits encoding the fractional part of a normalised 1.something value. The number’s actual value is (-1)^sign × 2^(exponent - 1023) × 1.mantissa. That middle term is what makes things interesting. If you need ln(x), you already have the integer part of log₂(x) sitting in bits 52 through 62. Extract it with a shift and a mask, subtract 1023, and you have the coarse answer. The mantissa gives you the rest.

ln(x): Disassemble, Refine, Reassemble

Split x into its unbiased exponent e and mantissa m ∈ [1, 2). The exponent hands you the integer part of log₂(x) directly, so ln(x) = e·ln 2 + ln(m), and ln(m) needs only a short polynomial because m sits in a narrow, well-behaved interval. The format did most of the heavy lifting. The polynomial is just a refinement pass on a decomposition that was already there.

exp(x): The Same Trick in Reverse

Why It Generalises

The principle applies well beyond assembly. IEEE 754 is a logarithmic decomposition, and knowing that helps when reasoning about precision (the mantissa gives constant relative precision because it refines a logarithmic grid), when diagnosing instability (operations that cross exponent boundaries lose mantissa bits), and when understanding why “fast” approximations like the famous inverse square root hack work at all. They all exploit the same property: the exponent field is a coarse logarithm baked into every floating-point number.