DISCLAIMER: This is a half-written blog post I started ages ago and got bored with. I’m tossing it out as-is since the topic has shown up on my timeline a couple of times recently. The TL;DR version is: worry about quat-to-matrix conversion and you don’t need to normalize unless feeding into some method that expects the inputs to be normalized. shrug.



Problem statement and the set-up


Some set-up bullet points:

  • All quaternions (other than zero) represent a rotation.
  • All quaternions that fall on a line through the origin (excluding zero) represent the same rotation, and the space of quaternions is that of infinitesimal rotations.
  • The quaternion product is composition of rotations and the similarity transform $QPQ^{-1}$ applies a rotation $Q$ to a coordinate $P$.
  • Assuming $Q$ is a unit quaternion we can reduce $QPQ^{-1}$ to $QPQ^{*}$, and if that assumption is false then the resulting transform is still the correct rotation composed with a uniform scaling of $Q \cdot Q$ (the magnitude squared).
  • The previous holds if converted into a matrix before application, provided the diagonal components are computed without assuming unit magnitude (see the sketch just below this list). This is worth considering in all cases (SEE: On quaternion/rotation matrix conversions and errors).
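To make that last bullet concrete, here’s a minimal sketch of such a conversion (my own hacked-up version of the idea, not code lifted from the post referenced above): the diagonal is built from all four components, so a non-unit input simply produces the rotation uniformly scaled by $Q \cdot Q$.

```c
// quaternion (w + x i + y j + z k) to 3x3 matrix WITHOUT assuming |Q| = 1.
// the diagonal uses all four components (ww+xx-yy-zz, etc.) instead of the
// unit-only 1-2(yy+zz) form, so a non-unit input yields Q.Q times the rotation.
typedef struct { float w,x,y,z; } quat;

void quat_to_mat33(quat q, float m[3][3])
{
  float ww=q.w*q.w, xx=q.x*q.x, yy=q.y*q.y, zz=q.z*q.z;
  float xy=q.x*q.y, xz=q.x*q.z, yz=q.y*q.z;
  float wx=q.w*q.x, wy=q.w*q.y, wz=q.w*q.z;

  m[0][0] = ww+xx-yy-zz;  m[0][1] = 2.f*(xy-wz);  m[0][2] = 2.f*(xz+wy);
  m[1][0] = 2.f*(xy+wz);  m[1][1] = ww-xx+yy-zz;  m[1][2] = 2.f*(yz-wx);
  m[2][0] = 2.f*(xz-wy);  m[2][1] = 2.f*(yz+wx);  m[2][2] = ww-xx-yy+zz;
}
```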

What I am going to do is a little empirical demo with some graphs. Toy code can be found: HERE



The test: quaternion style


The simple manufactured-solution test I’m going to use is as follows. Note this is not attempting to pound on worst-case input situations but to instead give a rough idea of what might actually occur in common cases.

  1. Initialize quaternion $Q_0 = 1$
  2. Iteratively compose $n$ times: $ Q_{i+1} = R_i~Q_i $, where $R_i$ is a uniform random rotation.
  3. Iteratively compose $n$ more times: $ Q_{i+1} = Q_i~\left(R_{i-n}\right)^* $ (the $R_j$ sequence is the same as in the previous step)


As an example (in painful detail) if $n=4$ we have the following composition sequence:

\[\begin{array}{llll} Q_0 & = 1 \\ Q_1 & = R_0 ~ Q_0 & = R_0 \\ Q_2 & = R_1 ~ Q_1 & = R_1 ~ R_0 \\ Q_3 & = R_2 ~ Q_2 & = R_2 ~ R_1 ~ R_0 \\ Q_4 & = R_3 ~ Q_3 & = R_3 ~ R_2 ~ R_1 ~ R_0 \\ Q_5 & = Q_4 ~ R_0^* & = R_3 ~ R_2 ~ R_1 ~ R_0 ~ R_0^* & = R_3 ~ R_2 ~ R_1\\ Q_6 & = Q_5 ~ R_1^* & = R_3 ~ R_2 ~ R_1 ~ R_0 ~ R_0^* ~ R_1^* &= R_3 ~ R_2\\ Q_7 & = Q_6 ~ R_2^* & = R_3 ~ R_2 ~ R_1 ~ R_0 ~ R_0^* ~ R_1^* ~ R_2^* &= R_3 \\ Q_8 & = Q_7 ~ R_3^* & = R_3 ~ R_2 ~ R_1 ~ R_0 ~ R_0^* ~ R_1^* ~ R_2^* ~ R_3^* &= 1\\ \end{array}\]


so we’ve composed $2n-1$ products (the first, $R_0 ~ Q_0 = R_0$, is trivial) and the exact result is $Q_{2n} = 1$. We can measure how far we’ve deviated from unity by the quantity below (a C sketch of the whole procedure follows it):

\[\left|~ 1 - \sqrt{Q_{2n} \cdot Q_{2n}} ~ \right|\]
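Here’s a minimal sketch of the test driver (the real toy code is linked above; `rng01()` is an assumed helper returning a uniform float on $[0,1)$, and the uniform random rotation uses Shoemake’s subgroup method):

```c
#include <math.h>
#include <stdlib.h>

typedef struct { float w,x,y,z; } quat;

// plain single precision Hamilton product: r = a b
quat quat_mul(quat a, quat b)
{
  return (quat){ a.w*b.w - a.x*b.x - a.y*b.y - a.z*b.z,
                 a.w*b.x + a.x*b.w + a.y*b.z - a.z*b.y,
                 a.w*b.y + a.y*b.w + a.z*b.x - a.x*b.z,
                 a.w*b.z + a.z*b.w + a.x*b.y - a.y*b.x };
}

quat quat_conj(quat q) { return (quat){q.w, -q.x, -q.y, -q.z}; }

extern float rng01(void);            // assumed: uniform float on [0,1)

// uniform random rotation (Shoemake's subgroup method)
quat uniform_quat(void)
{
  float u  = rng01();
  float a  = 6.28318531f*rng01(), b = 6.28318531f*rng01();
  float s0 = sqrtf(1.f-u),        s1 = sqrtf(u);
  return (quat){ s0*sinf(a), s0*cosf(a), s1*sinf(b), s1*cosf(b) };
}

// build up n random rotations, then peel them back off. the exact
// result is Q_{2n} = 1, so any deviation is accumulated round-off.
float magnitude_error(int n)
{
  quat* R = malloc((size_t)n*sizeof(quat));
  quat  Q = (quat){1.f, 0.f, 0.f, 0.f};                   // Q_0 = 1

  for(int i=0; i<n; i++) { R[i]=uniform_quat(); Q=quat_mul(R[i],Q); }
  for(int i=0; i<n; i++) { Q = quat_mul(Q, quat_conj(R[i])); }
  free(R);

  return fabsf(1.f - sqrtf(Q.w*Q.w + Q.x*Q.x + Q.y*Q.y + Q.z*Q.z));
}
```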


Setting $n = 2^x$, performing computation in single precision (so $2^{x+1}-1$ products):


So at the right end we have $x=16$, meaning we’ve composed $2\cdot2^{16}-1 = 131071$ products, and the magnitude of the result is wavering around $ 1 \pm 8\cdot10^{-5} $, which would result in a wonky uniform scale factor on $\left[0.999841, 1.00016\right]$ (the scale being the magnitude squared). Given the $x$ axis is a log scale, let’s look again as a log plot:


So the error in magnitude increases roughly linearly (WRT the number of products). Now let’s look at how the angle error (measured in degrees) behaves.

\[\frac{360}{\pi} \text{atan}\left(\frac{\sqrt{U \cdot U}}{w} \right)\]


where $U$ and $w$ are the bivector and scalar parts respectively (recall an extra factor of 2 is needed to convert the angle measure from quaternion space to 3D space).
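In code the measurement might look like this (a sketch; I’m using `atan2f` rather than a straight divide so it behaves when $w$ is tiny or negative):

```c
#include <math.h>

typedef struct { float w,x,y,z; } quat;   // as in the earlier sketches

// 3D rotation angle (degrees) represented by q = w + U. the 360/pi
// factor folds the quaternion-to-3D doubling into radians-to-degrees.
float quat_angle_deg(quat q)
{
  float u = sqrtf(q.x*q.x + q.y*q.y + q.z*q.z);   // |U|
  return (360.f/3.14159265f)*atan2f(u, q.w);
}
```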


First let me mention that using the standard methods for quaternion-to-matrix and matrix-to-quaternion introduces a peak round-trip error of ~0.000121 degrees. This is more than what we’re seeing at $x=5$, which corresponds to a chain of 63 products. OK, so like the magnitude, the angle error increases roughly linearly WRT the number of products. These plots also have some additional traces:

  • unity: as described for magnitude
  • tiny: starts with $Q_0 = 2^{-100}$
  • huge: starts with $Q_0 = 2^{100}$
  • norm: performs a normalization step after each product
  • no-fma: fp-contraction (auto FMA usage) is disabled
  • fma: the product is rewritten into eight $ab \pm cd$ terms in the style of Kahan’s determinant (uses a TwoProduct)
  • promote: same as unity except it promotes to doubles, computes the product and lowers back to singles
  • binary64: same as unity except double precision quaternions (for later)


The point of tiny, huge and norm is to show that the magnitude of the inputs (and post normalizing) has almost no effect on the angle error. Notice that allowing auto & explicit FMAs and promoting the computation to doubles do significantly decrease the error.
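For concreteness, here’s roughly what the fma trace means (my sketch of the idea, not necessarily the toy code’s exact formulation). Each $ab \pm cd$ pair is evaluated with Kahan’s FMA trick, which recovers the rounding error of one of the products exactly:

```c
#include <math.h>

typedef struct { float w,x,y,z; } quat;   // as in the earlier sketches

// Kahan's trick for a*b - c*d: e = fmaf(-c,d,w) is exact (it is the
// rounding error of w = c*d), so only (a*b - w) and the final sum
// each contribute one rounding.
static inline float mul_sub(float a, float b, float c, float d)
{
  float w = c*d;
  float e = fmaf(-c, d, w);    // w - c*d, exactly
  float f = fmaf( a, b, -w);   // a*b - w, one rounding
  return f + e;
}

// same idea for a*b + c*d
static inline float mul_add(float a, float b, float c, float d)
{
  float w = c*d;
  float e = fmaf(c, d, -w);    // c*d - w, exactly
  float f = fmaf(a, b,  w);    // a*b + w, one rounding
  return f + e;
}

// the quaternion product grouped as eight ab±cd terms
quat quat_mul_fma(quat a, quat b)
{
  return (quat){ mul_sub(a.w,b.w, a.x,b.x) - mul_add(a.y,b.y, a.z,b.z),
                 mul_add(a.w,b.x, a.x,b.w) + mul_sub(a.y,b.z, a.z,b.y),
                 mul_add(a.w,b.y, a.y,b.w) + mul_sub(a.z,b.x, a.x,b.z),
                 mul_add(a.w,b.z, a.z,b.w) + mul_sub(a.x,b.y, a.y,b.x) };
}
```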



The matrix test: I love LA


For matrices we can just repeat the same process with a couple of tweaks: swap the quaternion product for a matrix product, with each random quaternion converted to a matrix just prior to composing (any angle error introduced by this step doesn’t matter since the two terms which should cancel will be the same). To measure the angle error I should be using the fact that the trace is $1+2~\cos \theta$, but I’m being lazy and just converting back to a quaternion (we’re spitballing, and doing matrices as well came as an afterthought).
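For reference, the non-lazy trace-based measurement would look something like this sketch:

```c
#include <math.h>

// rotation angle (degrees) from a 3x3 rotation matrix using the
// identity tr(R) = 1 + 2 cos(theta)
float mat33_angle_deg(const float m[3][3])
{
  float c = 0.5f*(m[0][0] + m[1][1] + m[2][2] - 1.f);
  c = fminf(fmaxf(c, -1.f), 1.f);   // clamp: round-off can push |c| past 1
  return (180.f/3.14159265f)*acosf(c);
}
```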



Revised test


An obvious question would be: “But are uniform rotations a good choice?”, especially if you’ve thought about what that means. Quickly: as a space, very little of the total volume of rotations are those with a small rotation angle. Visualize rotating an object about all possible rotations that are approximately the identity (yeah.. it’s pretty static). Now attempt to visualize rotating about all possible rotations that are approximately the maximum angle (3D rotation of $\pi$). The average uniform random rotation angle works out to be $\frac{\pi}{2}+\frac{2}{\pi}$ radians $\approx 126.476$ degrees.
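As a quick check of that number: the rotation angle of a uniform random rotation has density $\frac{1-\cos\theta}{\pi}$ on $[0,\pi]$, so the mean is

\[\int_0^{\pi} \theta~\frac{1-\cos\theta}{\pi}~d\theta = \frac{1}{\pi}\left(\frac{\pi^2}{2}+2\right) = \frac{\pi}{2}+\frac{2}{\pi} \approx 2.2074 \text{ rad} \approx 126.476°\]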


So just about the same.



Some formal treatment


I’m aware of a couple of papers with formal proofs WRT quaternion error bounds. The most recent is “Algorithms for manipulating quaternions in floating-point arithmetic” which covers some bounds on the norm, product and a specific quaternion-to-matrix conversion method. The other is “Verification of the Functional Behavior of a Floating-Point Program: an Industrial Case Study” which seems to be more of a tutorial-oriented paper on using some tools (Frama-C, Coq & Gappa) and proves a (not tight) upper bound on the magnitude error. The example result: in double precision, after composing $10^{10} \left(\approx 2^{33.22}\right)$ rotations the magnitude is bounded by $|q| \leq 1.00003$.
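For a feel of what that bound implies per operation (my back-of-envelope arithmetic, not the paper’s argument): if each product can grow the magnitude by at most a relative $(1+\epsilon)$ then $N$ products give at most $(1+\epsilon)^N \approx e^{N\epsilon}$, so

\[\epsilon \lesssim \frac{\ln(1.00003)}{10^{10}} \approx 3\cdot10^{-15}\]

or on the order of ten ulps of binary64 per product.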








