A quick note that’s mostly regurgitating the paper *“Accelerating Correctly Rounded Floating-Point Division when the Divisor Is Known in Advance”* with some comments, code and extra numbers.

The problem is computing $\frac{x}{y}$ when $y$ is known in advance or is a compile-time constant. I’m going to ignore hardware reciprocal approximation instructions and related techniques, and I’ll use single precision throughout.

**NOTE:** The working assumption throughout is that the inputs are limited to floating-point values in the *normal* range and that no overflow or underflow occurs (no absolute value becomes smaller than the smallest normal number). Otherwise the error bounds are not as quoted.

**BEWARE:** Also, division latency and throughput are not as bad as in days of yore.

## The “naive” method: multiply by reciprocal

Let’s run through this well-known method. First we precompute the reciprocal:

$$z = \text{RN}\left(\frac{1}{y}\right)$$

where $\text{RN}\left(\cdot\right)$ is simply the hardware *round to nearest (ties to even)*. Then the approximate quotient $q$ is computed by:

$$q = \text{RN}\left(x\,z\right)$$

Performing the division directly (the correctly rounded result) has an error bound of $0.5~\text{ulp}$, while this approximation’s bound is $1.5~\text{ulp}$. A surprise from the paper is that this approximation returns a correctly rounded result more often than I would have expected: 72.9% of the time (this is a conjecture backed by some empirical testing and a partial proof).
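As a concrete sketch, the two steps in C (function names are mine, not from the paper):

```c
// Precompute once: z = RN(1/y).
static float recip_setup(float y) { return 1.0f / y; }

// Per division: q = RN(x*z). Error bound 1.5 ulp vs. 0.5 ulp for x/y.
static float div_naive(float x, float z) { return x * z; }
```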

Of course the authors dismiss this as too awful to contemplate, but for those of us willing to trade an increased error bound for a performance boost it’s a very useful tool. I’m including this method because:

- It’s seemingly common to run across code which performs some $n$ divides to avoid the increased error bound while, at the same time, $y$ and/or the $n$ values of $x$ are computed with a very wide error bound. Improving the latter and using this “naive” method will be faster and give lower total error in some cases.
- The other methods require hardware FMA.
- As noted in the intro, the latency of division isn’t the murder it used to be, and this might be the only viable option.
- And I might as well mention that computing the reciprocal as a double, promoting the $x$ values to doubles, multiplying and demoting is another potential option.
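The last bullet can be sketched as follows (a sketch only; I’m making no claim here about it being correctly rounded for all inputs):

```c
// Precompute once in double precision: double zd = 1.0 / (double)y;

// Per division: promote x, multiply in double, demote the result.
static float div_via_double(float x, double zd)
{
  return (float)((double)x * zd);
}
```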

## The product of unevaluated pair method: 1 FMA + 1 product

The main proposed method of the paper precomputes an unevaluated pair $\left(z_h,~z_l\right)$ approximation of the reciprocal by:

$$z_h = \text{RN}\left(\frac{1}{y}\right), \qquad z_l = \text{RN}\left(z_h \, \text{RN}\left(1 - y\,z_h\right)\right)$$

where the inner term $\text{RN}\left(1 - y\,z_h\right)$ is a single FMA.

One possible way to translate that into *C* code would be:

To approximate the division we simply multiply by the unevaluated pair:

$$q = \text{RN}\left(x\,z_h + \text{RN}\left(x\,z_l\right)\right)$$

As an aside, there’s a related paper *“Correctly rounded multiplication by arbitrary precision constants”* which describes how to determine if multiplication by some constant (stored as an unevaluated pair) is correctly rounded for all $x$. Some examples include $\pi,~ 1/\pi, ~\log\left(2\right), ~1/\log\left(2\right)$, etc.

So how good is this approximation? The short version is that it’s excellent:

- For $98.7273\%$ of choices of $y$ the result is exact (meaning correctly rounded, aka bit identical to performing the division) for all inputs.
- The remaining $1.2727\%$ of $y$ values only return an inexact result for exactly one mantissa bit pattern per power-of-two interval, which in single precision means $1$ in $2^{23}$, or approximately $1.19209\times 10^{-5}\%$, of inputs.
- The maximum error of these inexact results is $0.990934~\text{ulp}$, so they are all *faithfully rounded*. The average error is a much tighter $0.605071~\text{ulp}$ and the RMS is $0.611434~\text{ulp}$.

A minor note: the sign of a zero output is flushed to positive if the signs of the two elements of the unevaluated pair differ.

Aside: unevaluated pairs are a special case of (non-overlapping) floating-point expansions.

## Corrected “naive” method: 2 FMA + 1 product

Up front: this method isn’t very promising since it requires two FMAs and a product. Minimal sketch: we compute the approximate $q$ as in the “naive” method, then the approximate remainder $r$ and finally the correctly rounded quotient $q'$ by:

$$q = \text{RN}\left(x\,z\right), \qquad r = \text{RN}\left(x - q\,y\right), \qquad q' = \text{RN}\left(q + r\,z\right)$$

where $r$ and $q'$ are each a single FMA.

and a code version:

## Oh! For form’s sake!

For the single-FMA method the paper provides a recipe to determine if a given divisor $y$ is correctly rounded for all $x$ and, if not, the mantissa bit pattern $X$ for which it is not. The method is very expensive since it requires a fair number of branches which, to add insult to injury, are taken with near-equal probability (noted in code comments). It’s bordering on useless for runtime usage IMHO.

My code is slightly different from the method presented in the paper. First I computed the smallest $Y$ value which is not correctly rounded, which is `0x9f0237`. That’s folded with the paper’s initial test (all even $Y$ are correctly rounded) into a rotate-right operation. This increases the fraction of values accepted by the first test from $0.5$ to $\sim 0.62$. Secondly, I completely drop the test on the magnitude of $z_l$: it’s a fair number of operations to cut half of the remaining cases at that point. Next, the computation of the candidate $X$ value is slightly restructured to drop the number of branches. This block of code could be converted into branch-free form but I couldn’t be bothered. Lastly, if we have a candidate $X$ we test if it’s correctly rounded.

The above needs the following helper functions: