WARNING: Some numerical analysis and mathematics will be mentioned in passing. Also, the number systems will not be treated like black boxes. Risk of physical injury, however, is negligible.
I like magic tricks. Watching the performer do the show and attempting to catch the “sleight-of-hand” that makes the impossible seem possible. It’s fun and makes me believe in magic for a few moments. So I’m going to take a shot at doing some floating point magic tricks.
Throughout my performance I’ll use two 32-bit formats: IEEE binary32 (aka singles, or float in a number of languages) and posits (with es=2).
NARRATOR: For general purpose computations 32-bit is about as small as we can push; otherwise bit size choice doesn’t matter. Gustafson uses es=3 to cover the range of binary32 but suggests that es=2 is a better choice, and it looks like that’s the value they’re running with for the specification. The choice doesn’t matter in the slightest for this discussion.
Originally this was going to be one big post but I’ve decided to break it into parts. Source code will be provided in a later version (for reasons). There are so many intertwined issues that it’s almost impossible to provide a nice linear presentation.
Goldberg’s long thin triangle performance LIVE on stage
NARRATOR: Kahan presented this problem in or prior to 1986, and the Goldberg paper (from 1991) was inspired by a presentation of Kahan’s.
We are totally going to compute the area $A$ of a thin triangle using the classic form of Heron’s equation. Life’s way too short to think about computation!
With a wave of my hand I declare the lengths to be:
and let’s pull out of my magic computationalizing box the exact, the super-mojo-exponent and tapering-precision-just-exactly-where-you-need-it posit, and poor old binary32 results (wrong digits in red):
Booyah! In your face ya old retired, don’t publish papers anymore, stuck in the 80s people! Posits are, like, exact and you got ZERO digits correct!
NARRATOR: Attempting a detailed comparison, in base 10, between two formats which are both base 2 is misleading.
Ahhh…okay. Let’s look again:
CATASTROPHIC! Only 3 out of 24 binary digits correct vs. 23!
NARRATOR: Yes. I see what you did there. Very clever, but then again Gustafson titled his Mathematica example “Thin triangle area with posits: an unfair fight”.
I’m going to do this trick again, but first I want to tell a story. Way back in 1985.. raise your hand if you were even born yet. Yeah? A couple of people. Okay, well, back in the day integer adders were super cheap and integer multipliers super expensive, SO the IEEE committee was all like “Okay.. let’s make the exponent part bigger (since that needs adders) and the fraction part smaller (that needs the multipliers)!”. I suppose that was fine then, but man-oh-man have things changed. Like we just saw… singles only have 24 bits of precision (AT MOST!) and totally fail at a simple computation. But at the same time they can represent really stupid big numbers! Get this: the largest finite number a single can represent is almost $2^{128}$. I think mathematicians call numbers like this googleperplexing… probably because even searching the web won’t help you understand them. See what I mean. Way too big. That’s why I’m using posits with es=2. With that we can still represent numbers up to $ {2^{120}} $ AND have more precision. Maybe a better choice would be to use es=3.. that would slightly hurt our precision, but we’d be able to drop-in replace IEEE doubles with 32-bit posits (how dope is that?)… but I’m going to stick with my choice for now.
NARRATOR: Knuth once said: “The idea that people knew a thing or two in the ’70s is strange to a lot of young programmers.”
Let’s do the same thing again, but this time with a reasonably big, but not googleperplexing, number: $a=3670016$.
NARRATOR: Fun fact: $2^{120}$ = 1329227995784915872903807060280344576
Yeah, yeah okay…millions is silly, let’s use a more realistic number: $a=917504$
NARRATOR: I was zoning out. What was that you said about replacing doubles again?
Shut! Up! $a=3584$
NARRATOR: That’s smaller than $2^{12}$. Are these termed googleunderwhelming numbers?
Okay, let’s not lose sight of what we’ve seen so far. Which is that posits can be much more accurate than binary32. What I’ve ignored is the common consensus that the most important interval is $\left[-1, 1\right]$. Now, for once, IEEE got this right and HALF of the values are on this interval. Of course posits get it right too, so let’s try values less than one…
NARRATOR: Don’t bother. We’ll see exactly the same thing. You started by turning the exponent knob up away from 0, and turning it down the same thing happens. I hinted that you should look at the binary, but did you? Here, I’ll delete the distractions; the exact answers properly rounded to 24 bits were:
100000000100111111100111
100000000100111111100111
100000000100111111100111
100000000100111111100111
And the IEEE bits were:
100101000010100101111111
100101000010100101111111
100101000010100101111111
100101000010100101111111
Hey, there, there. Let’s do a little math and look at some graphs. That always makes me feel better.
Intermission
I’m going to start with some full disclosure. The reason for this blog post is stupid. I made a shitpost poll on Twitter which I was expecting to be ignored, but it wasn’t. I offered to make a critique and my (very cruel) followers voted yes, so here we are. So here’s what I’m going to do. I’m going to keep it to the bare minimum.
 I’m only going to address general purpose computations. This eliminates bit sizes smaller than 32 and usage in embedded or otherwise special purpose hardware. Talking about application specific usages requires knowing and addressing the specific needs of that application.
 No meaningful analysis will be performed. Although it’ll still be more than some I could point to.
 If I perform a “trick” that shows posits in a negative light I will explain how it works.
 I am not an expert in scientific computing/NA nor hardware design, so I’ll only be pointing out “obvious” things. On this note, I’m not being a jerk to people for whom this stuff isn’t “obvious”. Hell, y’all are my target audience! I will be tossing out concepts without details, or which I’ll sketch later. At least you’ll have some keywords to search on… assuming you care.
 I’m also going to completely ignore some details. This is too long as it is and keywords are provided. Examples include mostly ignoring overflow/underflow issues and working with denormals (for the IEEE case).
 Yeah, this stuff is hard, but if you’re getting by without any of it then you’re officially doing it right. If you’re running into trouble then there’s no magic pill and you’re going to need to learn some basics. Luckily there’s a wealth of resources just a web search away and people you can hit up for questions (yeah, it’s not the 80s anymore). And of course there’s always the option of grabbing a library written by people that live-eat-and-breathe this material.
 Counting correct digits isn’t my thing. I think it’s a poor measure (more later). I’m partially running with it to keep the comparisons the same as theirs.
Another bookkeeping item is the tone that I’ve chosen for this post. I’m sure some readers will find it somewhere on the range of objectionable to “toxic”. My reasoning is simple: the authors allow themselves to be jerks and provide egregiously disingenuous examples and statements. In their defense I suppose I should point out the generous sprinkling of weasel words which keep many statements from being complete alt-facts.
WARNING: All of the following statements about posits are false:
 You can just treat them like reals and they will give good results.
 They are drop-in replacements for IEEE.
 All tried-and-true methods will work just as well or better.
Back to the future
Since I’ve broken my big post up into parts, we need Bill and Ted to write up some posits.. err.. post-its to skim some points that don’t exist yet.
 We have two different finite sets designed to model the reals (extended reals for IEEE, projectively extended reals for posits). For a fixed $b$-bit package we can represent up to $2^b$ distinct elements.
 They both have a sign bit and are therefore symmetric about zero.
 They both store a power-of-two exponent and some number of fraction bits and so are both binary floating point models. The fractional part means that both are piecewise linear (fixed point) on power-of-two intervals.
 IEEE uses a fixed number of bits for the exponent (8 for binary32) and the remaining are fraction bits. Those on the normal range have an additional implied bit and denormals do not.
 Posits choose to encode the exponent using a variant of variable-length Rice encoding and the remaining bits are the fraction. All nonzero finite posit values have an additional implied bit.
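To make those encoding bullets concrete, here’s a little decoder sketch for 32-bit posits (my own illustrative code, not from any reference implementation; it follows the layout just described: sign, a run-length regime, up to es exponent bits, then the fraction, with NaR handled by returning `None`):

```python
def decode_posit32(p, es=2):
    """Decode a 32-bit posit bit pattern to a Python float (sketch).
    Assumes the two's-complement posit encoding; returns None for NaR."""
    p &= 0xFFFFFFFF
    if p == 0:
        return 0.0
    if p == 0x80000000:
        return None                      # NaR ("not a real")
    sign = 1.0
    if p & 0x80000000:                   # negative: two's-complement negate
        sign = -1.0
        p = (-p) & 0xFFFFFFFF
    # regime: run of identical bits starting at bit 30
    first = (p >> 30) & 1
    i, run = 30, 0
    while i >= 0 and ((p >> i) & 1) == first:
        run += 1
        i -= 1
    k = (run - 1) if first else -run
    i -= 1                               # skip the regime terminator bit
    # exponent: up to es bits (missing low bits read as zero)
    e = 0
    for _ in range(es):
        e <<= 1
        if i >= 0:
            e |= (p >> i) & 1
            i -= 1
    # fraction: whatever remains, with an implied leading 1
    nf = max(i + 1, 0)
    frac = p & ((1 << nf) - 1)
    return sign * (1.0 + frac / (1 << nf)) * 2.0 ** (k * (1 << es) + e)
```

For example, `0x40000000` decodes to 1.0 and `0x40000001` to $1+2^{-27}$, which matches the ulp-at-one figure for 32-bit es=2 posits used elsewhere in this post.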
So ignoring the “devil’s in the details” parts, the only important difference between the two is how finite values are distributed. IEEE’s fixed precision on the normal range is based on the commonly accepted notion of “scale-invariance”. Posits are based on the notion that a tight neighborhood (in a log sense) of one needs the lion’s share of points. In this neighborhood posits have a few more bits of precision, and they “taper” off as we move away.
One possible argument for tapered precision might be that “gosh, there’s a bunch of important constants in this range!” I’ll quote someone that’s well respected and lives-eats-and-breathes this stuff:
Trefethen^{1}: At present, almost nothing in physics is known to more than 12 or 13 digits of accuracy. Thus IEEE numbers (doubles) are orders of magnitude more precise than any number in science. (Of course, purely mathematical quantities like $\pi$ are another matter.)
There are some specific examples, such as the gravitational constant $G$, which is known to 4 decimal digits. Note wikipedia lists 6 digits, but then again this 2018 paper says 4, so 4 it is. A second point raised is that doubles on a power-of-two interval can represent almost $10^{16}$ values, and in physics when we hit about ${10^8}$ discrete things we’re all.. this is a continuous system! He continues with:
In two senses, then, floatingpoint arithmetic is far closer to its ideal than is physics. It is a curious phenomenon that, nevertheless, it is floatingpoint arithmetic rather than the laws of physics that is widely regarded as an ugly and dangerous compromise.
And (partly for fun) the section I’m quoting ends with the Trefethen quote I see most frequently (bolded):
…leading to the current widespread impression that the main business of numerical analysis is coping with rounding errors. In fact, the main business of numerical analysis is designing algorithms that converge quickly; roundingerror analysis, while often a part of the discussion, is rarely the central issue. If rounding errors vanished, 90% of numerical analysis would remain.
To briefly run with the bolded quote: the biggest error for practitioners to worry about is modeling error. Not the floating point model, but the model used in the computation. A real world example here is the Millennium Bridge, or more simply, expecting high school textbook physics equations to hold. The next biggest problem is truncation error, which arises because we can only perform a finite number of steps. Notice that both of these are pure math problems and not computer math ones. (Some truncation error can be removed in pure math when we know a closed-form solution, but this isn’t generally the case.) Additionally there’s input data uncertainty, but this is neither pure math nor computation.
So one of my points in giving these Trefethen quotes is: don’t get lost in the numbers. By necessity we’re going to be looking at rounding errors, troublesome cases for floating point (and recall both models are floating point) and measuring/comparing errors. It’s all too easy to think: “Fuck! Floating point computations are totally broken!”
Triangle trick REVEALED or let’s not break the cardinal rule
Let’s break down the first example of the magician. It’s simply Gustafson’s 128-bit example converted to 32-bit. For both posits & IEEE we have the upper 3 bits set for all three inputs, with the exponent of $b$ and $c$ smaller by 1. For IEEE’s $b$ and $c$ values the bottom two bits are set. However posit values in this range have two more bits available, so the same numeric value has the bottom two clear. Let’s look at what happens when we start to compute $s$. First we perform $a+b$:
                   IEEE                                POSITS
t0=a+b:     111.000000000000000000000        111.00000000000000000000000
         +   11.1000000000000000000011    +   11.100000000000000000001100
         -----------------------------    -------------------------------
           1010.1000000000000000000011      1010.100000000000000000001100
round:     1010.10000000000000000001        1010.10000000000000000000110
The upper bits cause a carry (both increase the exponent by one) and that trailing bit of $b$ means we need one more bit (now 25) to represent the result exactly. IEEE has to round to 24 bits and the posit version still has one zero bit at the bottom. Let’s complete the computation of $s$.
t1=t0+c:   1010.10000000000000000001        1010.10000000000000000000110
         +   11.1000000000000000000011    +   11.10000000000000000000110
         -----------------------------    -------------------------------
           1110.0000000000000000000111      1110.00000000000000000001100
round:     1110.00000000000000000010        1110.00000000000000000001100
.5f*t1:     111.000000000000000000010        111.000000000000000000001100
s:            7.000000954                      7.000000715
This time we still need 25 bits to be exact since t0 had to adjust the exponent, and IEEE must round again. Posits are still good with the padding bits we gave them. The multiply by half introduces no error for either. Also shown are the decimal values of each to 10 digits; the binary32 relative error is a tiny $\sim 3.40598 \times 10^{-8}$.
Now for the $\left(s-a\right)$ and $\left(s-b\right)$ terms:
s-a:        111.000000000000000000010        111.000000000000000000001100
         -  111.000000000000000000000     -  111.000000000000000000000000
         -----------------------------    -------------------------------
              0.000000000000000000010          0.000000000000000000001100
s-b:        111.000000000000000000010        111.000000000000000000001100
         -   11.1000000000000000000011    -   11.100000000000000000001100
         -----------------------------    -------------------------------
             11.100000000000000000001         11.100000000000000000000000
Again posits don’t have any rounding error. The binary32 $\left(s-b\right)$ has a tiny relative error of $6.81196 \times 10^{-8}$ and the $\left(s-a\right)$ subtraction was performed exactly, but its total relative error is a massive $0.333\overline{3}$: the tiny error in $s$ was magnified by a subtraction of a nearby number. This is an example of catastrophic cancellation or loss of significance. John D. Cook’s version of a common rule of thumb:
Cardinal rule of floating point arithmetic:
If $x$ and $y$ agree to $n$ bits, then you can lose up to $n$ bits of precision computing $x-y$.
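A minimal binary32 illustration of the rule (the constants are mine; `f32` just round-trips through a 4-byte float to get round-to-nearest binary32):

```python
import struct

def f32(x):
    """Round a Python float to the nearest binary32 value."""
    return struct.unpack('f', struct.pack('f', x))[0]

x = f32(1.0 + 1e-7)    # 1e-7 is below ulp(1) = 2^-23, so x becomes 1 + 2^-23
y = 1.0                # x and y agree in the leading ~23 bits
diff = f32(x - y)      # the subtraction itself is exact...
rel_err = abs(diff - 1e-7) / 1e-7
print(diff, rel_err)   # ...but the tiny representation error in x is now
                       # a ~19% relative error in the result
```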
Nothing interesting happens in the remaining operations. All of the error comes from a contrived set of numbers where IEEE is in a catastrophic cancellation case and posits are not. So this example tells us nothing. Well, actually there is one noteworthy thing: the final relative error of the IEEE result is $\sim 0.154701$, smaller than the error of the intermediate $\left(s-a\right)$. That’s a counterexample to the incorrect notion that errors only grow without bound as a computation progresses.
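The whole walk-through can be reproduced with a binary32 simulation (my code, not the original source; `f32` rounds each intermediate to binary32, and binary64 serves as “exact” since these inputs are small dyadics):

```python
import math
import struct

def f32(x):
    """Round to nearest binary32; applied after every op this simulates
    single-precision arithmetic."""
    return struct.unpack('f', struct.pack('f', x))[0]

def heron_f32(a, b, c):
    """Classic Heron's formula with every operation rounded to binary32."""
    s = f32(0.5 * f32(f32(a + b) + c))
    t = f32(f32(f32(s - a) * f32(s - b)) * f32(f32(s - c) * s))
    return f32(math.sqrt(t))

# the rigged inputs: a = 7, b = c = 7/2 + 3*2^-22
a = 7.0
b = c = 3.5 + 3.0 * 2.0**-22

s = 0.5 * (a + b + c)                              # exact in binary64
area_exact = math.sqrt(s * (s-a) * (s-b) * (s-c))  # ~7.83155e-3
area_f32 = heron_f32(a, b, c)
rel = abs(area_f32 - area_exact) / area_exact
print(area_f32, rel)   # relative error ~0.1547 = sqrt(4/3) - 1
```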
Background references on the thin triangle problem: as previously mentioned this problem was originally introduced by Kahan^{2} (as early as 1986), followed by Goldberg^{3} performing a pen-and-paper analysis, and most recently Boldo^{4} provides a formal proof and a tighter error bound.
Posits thin triangle trick: the “fair” fight
As we saw, the reason the posit version appears to give a better answer is because the deck was stacked. The point of looking at the thin triangle problem is to see the impact of catastrophic cancellation in what appears to be a simple real-world problem. The stacking of the deck was to choose numbers where loss of significance occurs for IEEE and not for posits. Let’s look at what happens when we use the correct $\text{ulp}$ value for posits ($\text{ulp}$ stands for “unit in the last place”; a brief hand-waving def is the size of the right-most digit, so $\text{ulp}\left(1\right)$ is $2^{-23}$ for IEEE and $2^{-27}$ for posits).
The correct answer is $1.9578876 \times 10^{-3}$ and posits give $\color{red}{2.2607739} \times 10^{-3}$, and the binary digits are:
So posits don’t have any black magic to prevent loss of significance. There is no magic to be found. This is simply a property of modeling the reals with single elements of a finite set. Let’s look again at the binary32 digits in this configuration:
Authors: That is ample for protecting against this particular inaccuracyamplifying disaster
No. Tossing more bits at the problem doesn’t change a thing. The only choice is to change how the computation is performed. However this does include locally moving to a higher precision model or effectively working at a higher precision (more on these later).
Let’s just beat a dead horse with posits128 (es=4):
\[\begin{array}{l} 100\color{red}{101000010100101111101101001001100010011001110011001000000010011100101001101001011101001010110111100101100001010111} \end{array}\]

Authors: It is interesting to note that the posit answer would be more accurate than the float answer even if it were converted to a 16-bit posit at the end.
It is interesting to note that the 128-bit (or any length) posit answer would not lose any correct digits if converted to the 8-bit IEEE format of the authors’ design at the end.
In the real world, when we hit computational problems with inputs we care about, we must change how the computation is performed. Here our options are very limited and we have two choices:
 Promote to higher precision, perform the computation and lower back to the original precision. So with 32-bit input we could promote to a 64-bit format.
 Effectively work in a higher precision (more details in a later post. Floating point expansion is one keyword for the impatient).
The Boldo paper^{4} details Kahan’s solution (for double input), which is an example of using option two. This is going to be left as a black box for now, and it costs about one more issue vs. Heron’s (godbolt):
The list of requirements is simply: sorted largest first, valid triangle (degenerating to a line allowed). Taking the original set of inputs and using Kahan’s method with 32-bit operations gives:
An interesting question is then: does the error bound of Kahan’s method hold for posits? Well, we’ll have to de-black-box it at some point I guess. An aside here: the complexity of Kahan’s method as shown is about the same as Heron’s (godbolt). The real cost is the ordering requirement in the cases where it’s not already known nor otherwise required.
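For the impatient, a peek inside the black box: Kahan’s expression as given in the Goldberg and Boldo references, in the same binary32-simulation style (the `f32` round-tripping helper and the test values are mine):

```python
import math
import struct

def f32(x):
    """Round to nearest binary32."""
    return struct.unpack('f', struct.pack('f', x))[0]

def kahan_area_f32(a, b, c):
    """Kahan's stable triangle area; requires a >= b >= c and a valid
    (possibly degenerate) triangle. The parenthesization is load-bearing:
    do NOT let the compiler (or your fingers) reassociate it."""
    assert a >= b >= c
    t1 = f32(a + f32(b + c))
    t2 = f32(c - f32(a - b))
    t3 = f32(c + f32(a - b))
    t4 = f32(a + f32(b - c))
    return f32(0.25 * f32(math.sqrt(f32(f32(t1 * t2) * f32(t3 * t4)))))

# the original rigged thin-triangle inputs
area = kahan_area_f32(7.0, 3.5 + 3.0 * 2.0**-22, 3.5 + 3.0 * 2.0**-22)
print(area)    # ~7.83155e-3: correct to binary32 precision
```

The cancellation-prone term $s-a$ never appears: each factor is arranged so that the subtraction happens between values that are either exact or nearly so.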
Thin triangle: reality check
I’d guess that the authors might counter-argue the above by saying I’m attempting to meet the wrong “esthetic” by being “non-rigorous, but cheap and fast”. That instead of “guessing” I should be “rigorous and mathematical”. Let’s go back to the original rigged example for IEEE.
and let’s assume that $a$ is a given (taken to be exact) and that $b$ was computed with uber extreme care and we know that it’s within 1 ulp of correct:
performing exact arithmetic with these gives:
and once again the original IEEE result computed by Herons and hitting catastrophic cancellation was:
This is exactly the upper end of the interval rounded to IEEE. Blindly using interval arithmetic got us nowhere (we’d have to observe that a small input interval yielded a relatively wide output one), and didn’t they claim we could just copy equations from textbooks? Let’s do some of that (roughly) define-stuff shit:
 A well-conditioned problem is one where small changes in input result in small changes in output.
 An ill-conditioned problem is one where small changes in input result in large changes in output.
This is a purely mathematical property and not one of computation. A formal measure of conditioning is the condition number. The quick non-exact & non-wikipedia def is:
\[\frac{\text{change in output}}{\text{change in input}}\]

and given a function $f$ it can be approximated by:
\[\frac{x~f'\left(x\right)}{f\left(x\right)}\]
for more than one variable you gotta get all Jacobian.
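A quick sanity check of the single-variable approximation (example functions chosen by me):

```python
import math

def cond(f, fprime, x):
    """Relative condition number estimate |x * f'(x) / f(x)|."""
    return abs(x * fprime(x) / f(x))

# square root is well-conditioned everywhere: condition number is 1/2
print(cond(math.sqrt, lambda x: 0.5 / math.sqrt(x), 4.0))      # 0.5
# subtracting a nearby constant is ill-conditioned near the crossing:
# a ~1e-6 relative wiggle in x moves the output by ~100%
print(cond(lambda x: x - 1.0, lambda x: 1.0, 1.0 + 2.0**-20))  # ~2^20
```

Note the second example is exactly the thin triangle situation: the problem itself, not any particular floating point format, is the source of the blow-up.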
We also have the solely numeric property of stability, which is frequently defined in a way that sounds identical to conditioning. Let me give two quick & dirty defs of stable:
 An algorithm is stable if the result(s) are relatively insensitive to errors made during the computation.
 An algorithm is stable if the result(s) are the exact solution of a nearby problem.
The second definition is in terms of backward error analysis, which I may mention in a future post. (Actually there are two flavors: forward and backward stability.)
Both the conditioning of a problem and the stability of an algorithm can depend on the range (or domain) of the inputs. And, to beat a dead horse, all four combinations of well-conditioned/ill-conditioned and stable/unstable are possible.
This brings us back to the authors’ arguments of just copying equations from textbooks (since algebra is hard) and, if you run into trouble, using intervals. Great. We find an interval that looks too big. What’s the problem? Are we doing something unstable? Is our problem ill-conditioned? Point being: intervals are a tool whose usefulness depends on user knowledge, the very thing the authors are showing disdain for.
NOTE: This section was the hard reality check; now for the soft version. You probably don’t care. Assuming you’re computing triangle areas, it’s likely you’re interested in the total area of many of them, and individual ill-conditioned results won’t end up a significant part of the total. The entire point of computation is to get usable results.
Rah rah ahahah! I want the log! logloglog! A loglog plot!
The authors propose a warm-up exercise to compare the two formats, which I’ll address in the next section. For my warm-up comparison I’ll run with triangle areas. Since the authors have some interesting ideas about error measures (more in a later post), I’ll quickly stub out forward relative error:
\[\text{err}\left(e,a\right) = \frac{\left|e-a\right|}{\left|e\right|}\]

where $e$ is exact and $a$ is the approximation (computed result). Care must be taken as $e$ approaches zero, and the right-hand side is useless for computing unless using extended or arbitrary precision. We can rewrite $a$ in a Wilkinson style:
\[a = e\left(1 + n~2^{-p}\right)\]

where $p$ is the bits of precision of $a$, which is fixed on the normal range of IEEE and varies for posits. Then the error can be expressed as:
\[\text{err}\left(e,a\right) = \left|n\right|~2^{-p}\]

where $n$ is the number of ULPs, which can indeed be any real value. As an example, a function which is “correctly rounded” has error within one half ULP $\left(\left|n\right| \le \frac{1}{2}\right)$, and $p$ for binary32 is fixed at $24$ on the normal range, so there we’d see a relative error of less than $2^{-25} \approx 2.9802322 \times 10^{-8}$.
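Stubbed out in code (helper names are mine; binary32 rounding via `struct` round-tripping, and `ulps_of_error` follows the $n = \text{err} \cdot 2^{p}$ convention above, so it matches a textbook ULP count only up to the usual factor-of-two wobble within a binade):

```python
import struct

def f32(x):
    """Round to nearest binary32."""
    return struct.unpack('f', struct.pack('f', x))[0]

def rel_err(e, a):
    """Forward relative error |e - a| / |e|, e = exact, a = computed.
    Only trustworthy when evaluated in higher precision than a lives in."""
    return abs(e - a) / abs(e)

def ulps_of_error(e, a, p=24):
    """n such that a = e*(1 + n*2^-p); p = 24 on the binary32 normal
    range, varies for posits."""
    return rel_err(e, a) * 2.0**p

# a correctly rounded binary32 result satisfies n <= 1/2:
print(ulps_of_error(1.0/3.0, f32(1.0/3.0)))   # ~0.5
```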
Let’s take the magician’s stage show and look at a plot of relative errors:
The IEEE results are garbage, but they’re dead consistent across the range with a fixed relative error of $\sim 0.154701$. It only changes when values start hitting denormals or overflow. For this section I want to show how the error plots behave, so I’m going to show a log plot instead (actually manually performed, and base 2 of course). Taking the log of the relative error gives us:
If the error is zero then this shoots off to negative infinity. Repeating the previous we have:
There are a couple of things to note. First, the little hiccup posits had around $x=22$ is less noticeable. We don’t really care here since we want to examine behavior when the error is reasonably low. The second thing is the posit plot looks nice and flat on $x \in \left[-15,~8\right]$ in the linear plot but is rather a mess when we microscope in.
Now let’s change to using Kahan’s method:
The IEEE result is still flat and the error has radically dropped. Posits have generally improved, but their error is even more erratic. However, up to this point the comparison isn’t fair. So I’m going to change the method. We have IEEE with the lower 3 bits of $b$ set and posits have two extra bits clear (as in the trick-revealed section). And once posits have less than 24 bits of precision they can no longer even represent the input, so they’re screwed. Each model will now have its own set of inputs in the same configuration. Showing the above will help make it clear that this isn’t cheating.
where $\text{RN}$ and $\text{ulp}$ are the rounding and ulp-size functions in each model. IEEE is the same as before and posits now always have representable inputs. Additionally there’s a minor tweak to $a$ (divided by four from previous) so the graphs are centered around posits’ maximal precision range. Also the plots now have two more traces, which are the exact area $A$ correctly rounded in each model. Ignore these for the moment.
In addition I’ll add RMS and average error numbers for each on the range $x \in \left[-20,~20\right]$. In linear space this is the “please don’t hurt ‘em” range of $\left[\frac{1}{1048576},~1048576\right]$.
|     | IEEE       | IEEE RN(A) | posit       | posit RN(A) |
|-----|------------|------------|-------------|-------------|
| RMS | 1.99108e-8 | 1.99108e-8 | 0.003084440 | 1.62931e-6  |
| AVE | 1.99108e-8 | 1.99108e-8 | 0.000714024 | 6.21741e-7  |
Let’s jump to the opposite end of the conditioning spectrum and look at a well-conditioned triangle:
|     | IEEE       | IEEE RN(A) | posit      | posit RN(A) |
|-----|------------|------------|------------|-------------|
| RMS | 1.79482e-8 | 1.79482e-8 | 6.09276e-7 | 3.50695e-7  |
| AVE | 1.79482e-8 | 1.79482e-8 | 2.61015e-7 | 1.64948e-7  |
It’s gotta be admitted: posits are kicking IEEE’s ass for equilateral triangles with side measures on $\left[\frac{1}{32}, 32\right]$. Recall this is a base-2 log-log plot, so a decrease in Y by one is a halving and an increase by one a doubling of relative error.
However, significands with a population count of one are very improbable, so let’s change to the very probable half-filled case using $\alpha = \left(\sqrt{5}-1\right)/2$:
\[a = b = c = \text{RN}\left(\alpha~2^x\right)\]

|     | IEEE       | IEEE RN(A) | posit        | posit RN(A) |
|-----|------------|------------|--------------|-------------|
| RMS | 2.61353e-8 | 2.61353e-8 | 0.0000794538 | 5.79264e-7  |
| AVE | 2.61353e-8 | 2.61353e-8 | 0.0000342251 | 2.40965e-7  |
How about a rescaled Pythagorean triple:
Just a side thought here: I’m getting the impression that when working with posits the choice of unit of measure might be a bit important.
|     | IEEE       | IEEE RN(A) | posit       | posit RN(A) |
|-----|------------|------------|-------------|-------------|
| RMS | 2.06133e-8 | 2.06133e-8 | 0.000206543 | 4.55979e-7  |
| AVE | 2.06133e-8 | 2.06133e-8 | 0.0000562866| 1.75246e-7  |
Okay, that’s enough log-log plot examples. Going back to equilateral triangles, let’s look at relative error on $\left[1,~1000\right]$:
and a static image on the wider range $\left[1,~2^{24}\right]$. The range choice doesn’t matter since (probably counter to expectations) I don’t have much to say that directly compares the IEEE and posit results.
What we’re seeing above is the effect of tapered precision. Here’s a quick sketch:
 The sawtooth sections are caused by the ULP size on the power-of-two intervals.
 For both models, when we go up a POT interval the absolute size of a ULP doubles, which keeps the relative size fixed.
 Additionally, each time posits pass a $2^{\text{es}}$ boundary we lose a bit of precision, which results in an additional doubling of ULP size; the relative error is now doubled.
 Our example model is $\text{es}=2$ so we have groups of $2^{2} = 4$.
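The sketch above can be captured in a little fraction-bit counter for our 32-bit es=2 model (my own helper; it ignores the no-terminator edge case at the extremes of the regime range):

```python
def posit32_fraction_bits(scale, es=2):
    """Fraction bits available to a 32-bit posit whose power-of-two
    exponent is `scale`. The regime costs k+2 bits for k >= 0 and
    -k+1 bits for k < 0, the exponent costs es bits, rest is fraction."""
    k = scale >> es                      # regime value (floor division)
    regime_bits = (k + 2) if k >= 0 else (-k + 1)
    return max(31 - regime_bits - es, 0)

# 27 fraction bits around 1.0 (so ulp(1) = 2^-27), dropping one bit
# each time the exponent crosses a multiple of 2^es = 4:
print([posit32_fraction_bits(s) for s in (0, 3, 4, 8, -4, -5)])
```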
Let’s note that the mixture of operations that forms the area computation isn’t very involved: 7 additions/subtractions, 4 multiplications and a square root. A plot (pun intended) theme of posit presentations is that IEEE dropped the ball and the dynamic range is too large. An example comment is roughly “even astrophysicists don’t know what to do with it”. Well, our simple little sequence of operations on some well-conditioned problems, using a stable method, divided our usable range by a factor of ~4 in the IEEE case. For posits it depends on how much error you’re willing to tolerate.
In the IEEE case the relative error is flat and the pair of plots (Kahan’s method and exact correctly rounded) are virtually identical excluding the ends of the usable range.
For posits the relative error (if you flip it upside down) looks like rolling a ball down a ziggurat, and worse, there’s generally a large gap (remember: log scale) between the exact correctly rounded and Kahan’s method results.
Lies, damned lies and benchmarks or if a trick worked once…
The title here is more a general comment on the usefulness of drawing conclusions from benchmarks than on the specific example presented in the posit paper. For the authors’ “warm-up exercise” comparison they propose a bang-for-your-bit-budget measure by computing the following meaningless expression:

\[\left(\frac{27/10 - e}{\pi - \left(\sqrt{2}+\sqrt{3}\right)}\right)^{67/16}\]
where the rules are: we start with the integers and correctly rounded versions (in each model) of $\pi$ and $e$, perform the computations, and then look at how accurate the results are. In other words, it isn’t representative of a meaningful computation. If it were, and we needed it to be accurate, then we’d simply precompute the correctly rounded value and use that. Let’s run with it anyway. First note that the values do give an air of credibility: we have some small integers and a couple of common mathematical constants. Let’s take a quick peek at what’s going on, starting with the numerator:
We’re jumping-up-and-down on the cardinal rule and subtracting two numbers that agree in the first seven digits. The correctly rounded posit version of $e$ has 27 bits of precision, so it comes out less affected. Gosh, what’s gonna happen in the denominator:
Nine! They agree in the first nine digits, and again the posit version will be less affected since it has more digits in this range. Let’s run with the farce and give the results:
But really the raise to a power doesn’t do anything interesting. The exponent computation is exact for both and we’re taking the power operation to be correctly rounded, so without it we’d get:
Notice how good the “count correct decimal digits” measure is here. Now instead of counting correct digits let’s count how many are wrong, since the point of precise computation is to prevent errors from creeping into the result. Both binary32 and posits (es=3) have lost 11 digits, the better choice of es=2 has lost 12, and we got there using seven basic, correctly rounded operations. In the Mathematica notebook he adds insult to injury and uses a wide accumulator, oh sorry, “quire”, for the denominator (yeah.. that’s in a later post as well).
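To see the cancellations without the notebook, here’s a binary32 simulation of the whole thing (assuming the paper’s warm-up expression $\left(27/10 - e\right)/\left(\pi - \left(\sqrt{2}+\sqrt{3}\right)\right)$ raised to $67/16$; binary64 plays the reference role and the final power is taken as correctly rounded):

```python
import math
import struct

def f32(x):
    """Round to nearest binary32."""
    return struct.unpack('f', struct.pack('f', x))[0]

# numerator: 27/10 - e, operands agreeing in the leading ~7 bits
num32 = f32(f32(27.0 / 10.0) - f32(math.e))
# denominator: pi - (sqrt(2) + sqrt(3)), agreeing in the leading ~9 bits
den32 = f32(f32(math.pi) - f32(f32(math.sqrt(2.0)) + f32(math.sqrt(3.0))))
result32 = f32(f32(num32 / den32) ** f32(67.0 / 16.0))

# binary64 reference
num = 27.0 / 10.0 - math.e
den = math.pi - (math.sqrt(2.0) + math.sqrt(3.0))
result = (num / den) ** (67.0 / 16.0)

print(result, result32, abs(result32 - result) / result)
# the two cancellations cost the binary32 run roughly 1e-4 relative error
```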
I’d imagine that running across something like this in production code might cause some to start yelling “Everyone’s fired!” in frustration.
Authors: Therefore, the extra accuracy in the result was not attained at the expense of dynamic range.
No, it was attained by a stacked deck, which we can stack for either model, and it doesn’t tell us anything about either one. Besides, why are you mentioning dynamic range? You don’t visit any of it. I kindly direct the reader to my warm-up exercise.
Oops…He Did It Again! Yeah, yeah, yeah, yeah, yeah, yeah I think he did it again
The notebook presents a solve-a-linear-system example using a near-singular matrix problem presented by D.H. Bailey:
Which Gustafson transforms into values which can be exactly represented in IEEE doubles:
Gustafson: Since the system is only 2by2, we can write the solution in compact form using Cramer’s rule
Seriously? Cramer’s rule?
wikipedia: Cramer’s rule can also be numerically unstable even for $2\times2$ systems.
Okay..no prob, I can roll with that. Let’s rewrite the system using symbols:
For Cramer’s rule we compute the determinant $s$:
and scaled versions of the result:
and remove the scale factor (divide by the determinant):
Both IEEE and posits compute the determinant exactly ($s = 2^{-52}$), posits (64-bit, es=3) and (59-bit, es=3) get the correct result $\left(1,~2\right)^T$, and IEEE returns $\left(0,~2\right)^T$. Well, since $s$ is correct, the totally mysterious $0$ result has to have come from the computation of $x_0$. Let’s attempt to deduce what could have gone wrong:
Bullet points!
 Yep, that little lonely underlined bit at the end gets rounded away in IEEE.
 And yes this is in the small interval where posits have more bits.
 And yes this is yet another example of catastrophic cancellation.
 And yes Gustafson claims these extra bits somehow give you some protection.
 And yes I’m getting really bored of typing these things.
No matter. The solution is just a web search away, but I’ll save you the trouble and give a detailed reference^{5} and a quote from the introduction:
Expressions of the form $ad \pm bc$ with $a,b,c,d$ some floating-point numbers arise naturally in many numerical computations. Examples include complex multiplication and division; discriminant of quadratic equations; cross-products and 2D determinants. … Unfortunately, the naive way of computing $ad \pm bc$ may lead to very inaccurate results, due to catastrophic cancellations. Hence it is of interest to compute $ad \pm bc$ accurately.
Awesome. So Gustafson is encouraging people to use a method that evaluates this troublesome expression not once, or twice, but three times. Nice. Well, the reference provides us with Kahan’s method (wow, his name is showing up a lot) to evaluate said expression (code on godbolt):
Huh? I wonder if the error bound on this expression holds for posits? We’ll have to de-blackbox it at some point, I guess.
There’s no point in running any numbers; we’ve effectively looked at this with the thin triangle. (NOTE: It’s actually generally worse here because the posit point distribution is better suited to square roots than IEEE’s…details in a later post.)
The Catastrophic Cancellation Kid STRIKES AGAIN!
Seriously people. YOU CANNOT MAKE THIS SHIT UP. Solving the real quadratic equation $ax^2+bx+c = 0$ is the (wanted) poster child example used to demonstrate catastrophic cancellation. The “textbook” equation is:
and they tell us the equation can be rewritten using a “classic” trick:
Authors: Like most numerical tricks, it is far from obvious, has to be worked out with tedious algebra, and is prone to errors in coding it.
John D. Cook has a blog post^{6} with the derivation of this second equation, but that doesn’t matter, since the second equation is no better than the first. The bottom line here: if you want to improve the performance and/or accuracy of some computation, then unless someone has already done the work for you, some math (algebra or otherwise, which you may or may not find tedious) is required, and writing any code is an error-prone process (but somebody’s gotta do it).
The first problem is when $b^2$ is significantly greater than $\left|4ac\right|$, since then $\sqrt{b^2-4ac} \approx \left|b\right|$ and for one choice of the $\pm$ the computation goes boom. The second problem is computing the discriminant $b^2-4ac$ when $b^2 \approx 4ac$ (nearly equal roots), but we just talked about that solution. So this is an interesting problem since we need two different techniques to solve it: algebraic manipulation for the first (see below) and effectively extending the precision for the second. (Recall I’m ignoring overflow/underflow issues.)
To find the larger magnitude root we can side step the problem:
\[r_0 = \frac{-\left(b + \text{sgn}\left(b\right)\sqrt{b^2-4ac}\right)}{2a}\]
where $\text{sgn}$ is the signum-like function:

\[\text{sgn}\left(b\right) = \begin{cases} \hphantom{-}1 & b \geq 0 \\ -1 & b < 0 \end{cases}\]
which in “C” can be implemented as:
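One possible implementation (a sketch; using `copysign` from `math.h` avoids a branch, at the cost of returning $-1$ for negative zero):

```c
#include <math.h>

/* signum-like: returns +1.0 or -1.0, never 0 */
static double sgn(double b)
{
    return copysign(1.0, b);
}
```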
The second root can be computed via the product of the roots ($r_0 r_1 = c/a$):

\[r_1 = \frac{c}{a\,r_0}\]
An additional advantage of the solution I’m showing is that you know which is the larger- and which the smaller-magnitude root without inspection.
Authors: Instead of expecting programmers to memorize a panoply of arcane tricks for every formula, perhaps posits can make it a bit safer to simply apply the textbook form of the formula.
Nope, still no magic protection, and there’s still no need to memorize anything. They can repeat stuff like this as much as they want. What I really should be doing is showing IEEE using stable methods and posits with their suggestions. But that’d be cruel. Sigh. Another missed opportunity.
For form’s sake, the given example was $\left(a,b,c\right) = \left(3,100,2\right)$ and computing as I suggest gives:
| model | $r_0$ | $r_1$ | $r_0$ (binary) | $r_1$ (binary) |
| --- | --- | --- | --- | --- |
| exact | 33.313321 | 0.020012014 | $100001.010100000011010111010$ | $1.010001111110000001111000111 \times 2^{-6}$ |
| IEEE  | 33.313320 | 0.020012015 | $100001.010100000011010111$ | $1.01000111111000000111101 \times 2^{-6}$ |
| posit | 33.313322 | 0.020012014 | $100001.010100000011010111011$ | $1.01000111111000000111100011 \times 2^{-6}$ |
Surely you’re smoking, Mr. Gustafson!
Gustafson, in a talk^{7} presented at Stanford, runs a theme with the following dot product:
and in binary:
Note the form of the vectors is $a = \left(2x,~1,~-1,~2y\right)$ and $b = \left(y,~1,~-1,~-x\right)$, so the outer elements cancel, we’re left with the sum of the inner elements, and the result is $2$. We’re informed that both binary32 and binary64 results are zero, binary64 computed via binary sum collapse returns $1$, and (of course) posits get the correct answer.
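Any $x,y$ that push $2xy$ past binary32’s 24-bit significand will reproduce this. A sketch using my own choice $x = y = 2^{25}$ (not Gustafson’s actual values):

```c
/* a = (2x, 1, -1, 2y), b = (y, 1, -1, -x) with x = y = 2^25
   (my choice, not Gustafson's values): the exact dot product is 2. */
static const float  af[4] = {67108864.0f, 1.0f, -1.0f,  67108864.0f};
static const float  bf[4] = {33554432.0f, 1.0f, -1.0f, -33554432.0f};
static const double ad[4] = {67108864.0,  1.0,  -1.0,   67108864.0};
static const double bd[4] = {33554432.0,  1.0,  -1.0,  -33554432.0};

/* naive left-to-right dot products */
static float dot4f(const float *a, const float *b)
{
    float s = 0.0f;
    for (int i = 0; i < 4; i++) s += a[i] * b[i];
    return s;   /* 0: the +-2^51 partials swamp the +-1 terms */
}
static double dot4d(const double *a, const double *b)
{
    double s = 0.0;
    for (int i = 0; i < 4; i++) s += a[i] * b[i];
    return s;   /* 2: 2^51 + 2 fits exactly in binary64's 53 bits */
}
```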
Gustafson: Most linear algebra is unstable with floats!
Let’s step back and consider how bad this error is. If we measure using either absolute or relative error then, yeah, it’s pretty bad. However, the dot product is a scalar projection which can be defined geometrically (it doesn’t matter if the vectors don’t have explicit geometric meaning) as:

\[a \cdot b = \left\lVert a \right\rVert \left\lVert b \right\rVert \cos\left(\theta\right)\]

and we can compute the relative projection:

\[\frac{a \cdot b}{\left\lVert a \right\rVert \left\lVert b \right\rVert} = \cos\left(\theta\right)\]

and solving for $\theta$:

\[\theta = \cos^{-1}\left(\frac{a \cdot b}{\left\lVert a \right\rVert \left\lVert b \right\rVert}\right)\]
So if a model returns zero instead of the exact answer, we’re off by only about $2^{-51}$, whichever way you want to look at it.
Gustafson: IEEE floats require 80-bit precision to get it right. Posits (es=3) need only 25-bit precision to get it right.
Nope, nope, nope:
- 32-bit IEEE & posits return zero.
- 64-bit posits (es=3) return the correct answer.
- A 25-bit posit (es=3) can represent the INPUT. It cannot compute the correct answer without a wide accumulator..wait..sorry: “quire”.
- 64-bit IEEE returns the CORRECT answer.
There’s some more crazy talk about the quire with respect to dot products, but I’ll leave that to the next post.
Look ma I can count! or much ado about a ULP
EXTRA BONUS EXAMPLE
Here is an example that has disappeared from the stage act. This is a mockup of a slide from the debate^{8}:
From my book, to show why round-to-nearest might not be random and how unums can self-manage accuracy:

In trying to count to a billion, IEEE floats (32-bit) produce 16777216.
Let’s walk through this:
- We start $sum$ at zero and add one each iteration.
- Each time $sum$ hits a power-of-two integer the exponent is increased by one and the size of a ULP is doubled.
- When we reach $sum = 2^{23} = 8388608$ the size of a ULP is one. (no rounding yet)
- When we reach $sum = 2^{24} = 16777216$ the size of a ULP is two. (still no rounding)
- All remaining iterations attempt to add 1, which is half a ULP, and $sum$ is left unchanged (no rounding ever occurs).
- IEEE-754 doesn’t specify a random rounding mode (although researchers have played with the concept).
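The walk-through can be checked directly; a minimal sketch (20 million iterations already suffices, since the sum stalls at $2^{24}$):

```c
/* Add 1.0f to a binary32 sum n times; the sum stops growing at 2^24. */
static float count_up(long n)
{
    float sum = 0.0f;
    for (long i = 0; i < n; i++)
        sum += 1.0f;   /* past 2^24, 1 is half a ULP and round-to-even drops it */
    return sum;
}
```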
If computed with a 32-bit posit (es=2) the result is $8388608$, and es=3 would be the same as IEEE. So this absurd example had to go.
With a bit of imagination we can rework the example into a much more impressive one anyway. Taking a post^{9} by Nick Higham, derived from a note^{10} by David Malone, let’s look at computing the $n^{th}$ harmonic number directly from the series:

\[H_n = \sum_{k=1}^{n} \frac{1}{k}\]
If we directly convert the summation into a loop that terminates when the value doesn’t change then we get:
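One way that loop might look in binary32 (a sketch, my code and naming):

```c
/* Sum 1/k in binary32 until the partial sum stops changing.
   Returns the stalled sum; *n_out receives the last k that contributed. */
static float harmonic_stall(long *n_out)
{
    float sum = 0.0f;
    long  k = 0;
    for (;;) {
        float next = sum + 1.0f / (float)(k + 1);
        if (next == sum) break;   /* 1/(k+1) fell below half a ULP of sum */
        sum = next;
        k++;
    }
    *n_out = k;
    return sum;
}
```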
| model | $n$ | computed $H_n$ | posit advantage multiplier! |
| --- | --- | --- | --- |
| bfloat16 | $65$ | 5.0625 | 0.126706x |
| binary16 | $513$ | 7.0859 | |
| posit16 es=1 | $1024$ | 7.7773 | 1.9961x |
| posit16 es=2 | $1025$ | 7.7813 | 1.99805x |
| binary32 | $2097152$ | 15.404 | |
| posit32 es=2 | $8388609$ | 16.813 | 4x |
Look how much better posits are than IEEE: the 32-bit version can compute 4x as many terms! This is the same mechanism as counting by one: we have a very slowly growing sum, and in IEEE we hit a value too small to contribute earlier than in posits. It’s trivial to contrive an example where IEEE would win, but simple or convoluted examples which boil down to measuring the size of a ULP in the two formats don’t tell us anything we shouldn’t already know.
Summary
The provided examples are the computational equivalent of playing three-card monte on a street corner. The “trick” is obvious to those that know it, and hopefully I’ve done a sufficient job of explaining it if loss-of-significance and ULP-size issues were unfamiliar to you. These examples don’t measure anything about either model beyond what is already known: specifically, the size of a ULP at a given number.
We’ve already covered the majority of the “evidence” provided to back up the claim that posits are a “uniformly superior” model to IEEE. What else is there?
- A “scaled model” comparison, where they compare how a made-up 8-bit IEEE format performs vs. posits on basic operations.
- The LINPACK benchmark used as a “method of manufactured solutions”.
In the next post we’ll look at more absurd things and I’ll mention some uber nerdy math and CS shit!
Postscriptum 2022-09-30
Well, obviously I lost all motivation to complete the follow-up. Since originally writing this there have been some formal papers on the topic written by well-respected researchers (to put it mildly) who never “game” their findings. In fact they bend over backwards attempting to be as positive about posits as possible. Some examples:
- “Comparing posit and IEEE-754 hardware cost”, 2021. A TL;DR snippet from the abstract: “This library improves the state of the art of posit hardware arithmetic, and still, IEEE-754 operators remain between 30% and 60% faster and smaller than their posit counterparts”. One of the coauthors (Florent de Dinechin) has even written a paper with Gustafson, even though Gustafson has previously attacked his integrity as a researcher. Like I said: “bending-over-backwards”.
- “Posits: the good, the bad and the ugly”, 2019. Looks at some of the issues of using posits. Note this paper was written for a posit conference, so that fact certainly affected its content.
Also, this write-up was discussed in a post. The TL;DR version is that Gustafson was very polite about it and doesn’t really deny my claims about the posit examples being contrived.
And FWIW the format now has a specification.
References and Footnotes

1. “Branches of Mathematics: Numerical Analysis”, N. Trefethen (PDF)
2. “Miscalculating Area and Angles of a Needle-like Triangle”, W. Kahan, 2014 (PDF)
3. “What Every Computer Scientist Should Know About Floating-Point Arithmetic”, David Goldberg, 1991 (page)
4. “How to Compute the Area of a Triangle: a Formal Revisit”, Sylvie Boldo, 2013 (PDF)
5. “Further analysis of Kahan’s algorithm for the accurate computation of 2×2 determinants”, Jeannerod, Louvet & Muller, 2013 (page)
6. “The quadratic formula and low-precision arithmetic”, John D. Cook (page)
7. “Beyond Floating Point: Next-Generation Computer Arithmetic”, J. Gustafson, 2017 (PDF)
8. “THE GREAT DEBATE: UNUM ARITHMETIC POSITION STATEMENT”, J. Gustafson, 2016 (PDF)
9. “Half Precision Arithmetic: fp16 Versus bfloat16”, Nick Higham, 2018 (page)
10. “To what does the harmonic series converge?”, David Malone, 2013 (PDF)