""" To Do:
a) note that instead of using a floating point base and fp error,
why not try breaking it up into IEEE-style float + error?
1) My feeling here is that the "error" should be small since
it is only a bounding limit, and therefore the mantissa
only needs about 7 binary digits (but typically 32 makes
sense) but an arbitrary order since the order is relative
to the primary number.
2) Main problem here is that a full ordinate for the error is overkill, so either:
i) dx.ord = ord - err.ord
ii) dx stores relative error
I like (i) best since I believe most constants will be
entered with an absolute error.
b) Preserving and extending s+o+m (sign-bit, order [ordinate] and
mantissa, however here we put back the leading "1" of the
mantissa, so we have order = o & mantissa = s1m. Order is
still power of 2 except when mantissa is 0.
c) Because 0 does not have an order (0*2**o == 0), we will use
the order to also serve as a bit-field storing the IEEE
exception state of the number IFF mantissa == 0
d) Mantissa is now infinite precision, or more correctly is
as precise as it should be given its error, and no more. The
best storage therefore is a bit-list or a Python long. The
problem here is that a long is not truly a mantissa: a long
is an integer, whereas the IEEE mantissa is a decimal of the
form +/-1.ddd, where +/- is the IEEE sign bit and o is the
IEEE exponent. 2 approaches for M * 2**O:
1) Let abs(M) >= 1 or 0; then O = o - ln2(M); i.e. the total
integer part of ln2(x) is O + ln2(M) rather than the IEEE
int(ln2(x)) == o. Thus, for most long->float conversions
O is 0 and M is the original number but does this make
the algorithms slower (add, mul, str, pow, log); OTOH, then
dx always has the same order as x so it need only store
the specified decimal digits of error.
2) Use the IEEE method of 0.5 <= M/2 < 1.0. But then how to
represent M? O becomes o, but M is now a binary fraction
whose MSB == 1. Now, construction becomes hard (compute
ln2(M) and add to o to get O) but other float ops act as normal.
e) The int->float and long->float conversions should have
flexible precision?! My idea: the direct conversion will
have precision of max(ln2(x), 52) binary digits, where 52 is
the number of digits in IEEE floating point Mantissa (the first
digit is implied).
f) Further, I think a direct conversion may be "flex-precision" or
basically a special type of float that has 0 precision because
integers are absolutely precise.
g) That way, when one does integer division, the "smaller" term
does not dominate the precision because both are equally accurate.
The problem here is that we are following non-propagated error
detection. What's more, since multiplication can be done with
absolute precision, shouldn't float(x) * float(y) == float(x*y)?
It seems that in integral float should hold ZERO precision (100%
accuracy) UNTIL a floating point operation is performed.
Division could then operate like described, with max(ln2(x), ln2(y),
52) being the result. Any cross-operation (involving an error
and non-error) will be set by the value with error and in fact
most algorithms reduce to the defined one already since the dn == 0 term
will not contribute to the final error. But then, we could make
those conversions faster by not doing the squaring / square root.
h) No "correlation factors" between values: if a is derived from b
the error of Op(a, b) should actually include a cross term:
twice the product of the partial derivatives dOp/da and dOp/db
times the covariance of a and b. As this is difficult
(impossible) to determine, let's assume no covariance.
i) Constants like "pi" should be generators: pi will be precomputed
but more digits will be calculated as needed based on the error
of the other operand. But what about Unary functions??
1) The implication is that this would be extended to other known
irrationals, such as log(2), but then we have to represent
log(2) symbolically!!
2) The problem with this, and the extended concept of deferred
computation is that it implies we'd have to "hold onto" infinite
precision values until an "imprecise" operation was applied, which could
be many cycles hence.
3) The problem might be solved with a type of "Units" support, but
still is a problem for repeated high-order calculation, such as
sin(cos(ln(exp(2))))! What's more, the correct precision to
calculate to is NOT simply ln2(y) digits, as after all those
operations we will likely lose more digits of precision.
Therefore the correct "error" to use in calculation is not
simply ln2(y) but ln2(y) + sin.error + cos.error + ...
j) With this, would a new Units library be appropriate? Given a
dimensional analysis, would it be possible to use special type
specifiers to force dimensional analysis on all calculations?
Granted such libraries already exist, but I doubt they all
correctly propagate scientific errors like I do.
1) With Units, shouldn't you have constants, like c and h-bar?
2) With constants, shouldn't you have conversion factors such
as between inches and meters and coulombs and electrons?
k) Use my old C++ code to write the unary functions with error
propagation, such as for sin, cos, log, exp, ...
l) What happened to your C++ library??
"""
import math, re
from string import join
float_precision = 0.0
float_error_digits = 2
def set_fpu_precision_decimal():
global float_precision, float_error_digits
one = 1.0
for i in range(100):
near_one = float("1." + "0"*i + "1")
if near_one == one:
float_precision = 10**(i-float_error_digits)
break
def set_fpu_precision_binary():
global float_precision, float_error_digits
one = 1.0
for i in range(100):
near_one = one + one/2**i
if near_one == one:
float_precision = 2**i / 10**float_error_digits
break
set_fpu_precision_binary()
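# What set_fpu_precision_binary() above measures is the number of
# mantissa bits in the FPU's doubles.  A standalone sketch of the same
# probe (assuming IEEE-754 doubles, where 1.0 + 2**-53 rounds back to
# 1.0, so 52 explicit mantissa bits survive):
def _count_mantissa_bits():
    bits = 0
    while 1.0 + 1.0 / 2.0**(bits + 1) != 1.0:
        bits += 1
    return bits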
class Sci(float):
__slots__ = ['dx', '_digits']
dispx = re.compile("([eE])")
def __new__(cls, x, dx=None):
"""Because we are overriding float, we need to init in new."""
if isinstance(x, Sci):
return super(Sci, cls).__new__(cls, float(x))
else:
return super(Sci, cls).__new__(cls, x)
def __init__(self, x, dx=None):
"""Create an error object with Standard Deviation dx (defaults to 0)"""
if isinstance(x, Sci):
self.dx = x.dx
self._digits = x._digits
if dx is not None:
raise AttributeError, "Copy Constructor did not expect second parameter."
else:
if dx is None:
global float_precision
if type(x) in [int, long] or float_precision == 0.0:
dx = 0.0
                elif float(self) != 0.0:
                    dx = abs(float(self)) / float_precision
                else:
                    dx = 1.0 / float_precision
self.dx = dx
self._update_signif_digits()
def _update_signif_digits(self):
global float_precision
if self.dx == 0.0:
self._digits = float_precision
else:
            self._digits = int(math.log(abs(float(self)) / self.dx) / math.log(10)) + 1
    def relative_error(self):
        # To Do: Make this a property with __get__ and __set__
        return self.dx / abs(float(self))
def __str__(self):
# We will use the short-hand form by default, but maybe use the +/-
# notation later with an option flag float_error_digits
if self.dx == 0.0:
            # No Uncertainty
return str(float(self))
global float_error_digits
# PROBLEM!!! DOES NOT PRESERVE TRAILING ZEROS
# (and width doesn't work well on FP-exponents)
sx = "%.*g" % (self._digits + float_error_digits, float(self))
sdx = "%.*g" % (float_error_digits, self.dx)
sx = self.dispx.split(sx)
sdx = self.dispx.split(sdx)
sdx = sdx[0]
sdx = sdx.lstrip("0.")
if len(sdx) == 1:
sdx += "0"
elif sdx[1] == ".":
sdx = sdx[0] + sdx[2:]
sx = sx[:1] + ["(" + sdx + ")"] + sx[1:]
        return "".join(sx)
def __repr__(self):
        # The usual contract: eval(repr(x)) yields x.
return "Sci(%s, %s)" % (repr(float(self)), repr(self.dx))
def __abs__(self):
z = abs(float(self))
return Sci(z, self.dx)
def __cmp__(self, other):
other = Sci(other)
result = cmp(float(self), float(other))
if result == 0:
result = cmp(self.dx, other.dx)
return result
def __coerce__(self, other):
if isinstance(other, Sci):
return None
else:
return Sci(other)
def __nonzero__(self):
        return float(self) != 0.0 or self.dx != 0.0
def __add__(self, other):
other = Sci(other)
z = float(self) + float(other)
dz2 = self.dx**2 + other.dx**2
return Sci(z, dz2**.5)
def __radd__(self, other):
other = Sci(other)
z = float(other) + float(self)
dz2 = other.dx**2 + self.dx**2
return Sci(z, dz2**.5)
def __sub__(self, other):
other = Sci(other)
z = float(self) - float(other)
dz2 = self.dx**2 + other.dx**2
return Sci(z, dz2**.5)
def __rsub__(self, other):
other = Sci(other)
z = float(other) - float(self)
dz2 = other.dx**2 + self.dx**2
return Sci(z, dz2**.5)
def __mul__(self, other):
other = Sci(other)
z = float(self) * float(other)
dzr2 = (self.dx / float(self))**2 + (other.dx / float(other))**2
        return Sci(z, dzr2**.5 * abs(z))
def __rmul__(self, other):
other = Sci(other)
z = float(other) * float(self)
dzr2 = (other.dx / float(other))**2 + (self.dx / float(self))**2
        return Sci(z, dzr2**.5 * abs(z))
def __div__(self, other):
other = Sci(other)
z = float(self) / float(other)
        dz2 = (self.dx / float(other))**2 + (z * other.dx / float(other))**2
return Sci(z, dz2**.5)
def __rdiv__(self, other):
other = Sci(other)
z = float(other) / float(self)
        dz2 = (other.dx / float(self))**2 + (z * self.dx / float(self))**2
return Sci(z, dz2**.5)
    # True division and classic division coincide for floats, so the
    # same implementations serve for both.
__truediv__ = __div__
__rtruediv__ = __rdiv__
def __floordiv__(self, other):
other = Sci(other)
z = float(self) // float(other)
dz2 = (self.dx / float(other))**2 + (z * other.dx / float(other))**2
return Sci(z, dz2**.5)
def __rfloordiv__(self, other):
other = Sci(other)
z = float(other) // float(self)
dz2 = (other.dx / float(self))**2 + (z * self.dx / float(self))**2
return Sci(z, dz2**.5)
def __mod__(self, other):
other = Sci(other)
r = float(self) % float(other)
dr2 = self.dx**2 + (other.dx * (float(self) - r) / float(other))**2
        return Sci(r, dr2**.5)
def __rmod__(self, other):
other = Sci(other)
r = float(other) % float(self)
dr2 = other.dx**2 + (self.dx * (float(other) - r) / float(self))**2
        return Sci(r, dr2**.5)
def __divmod__(self, other):
other = Sci(other)
z, r = divmod(float(self), float(other))
dzr2 = (self.dx / (float(self) - r))**2 + (other.dx / float(other))**2
dr2 = self.dx**2 + (other.dx * z)**2
        return (Sci(z, dzr2**.5 * abs(z)), Sci(r, dr2**.5))
def __rdivmod__(self, other):
other = Sci(other)
z, r = divmod(float(other), float(self))
dzr2 = (other.dx / (float(other) - r))**2 + (self.dx / float(self))**2
dr2 = other.dx**2 + (self.dx * z)**2
        return (Sci(z, dzr2**.5 * abs(z)), Sci(r, dr2**.5))
def __pow__(self, other, modulo=None):
other = Sci(other)
if modulo is None:
z = float(self)**float(other)
dz2 = (float(other) * z * self.dx / float(self))**2 + (math.log(float(self)) * z * other.dx)**2
return Sci(z, dz2**.5)
else:
raise TypeError, "pow() 3rd argument not allowed unless all arguments are integers"
def __rpow__(self, other, modulo=None):
other = Sci(other)
if modulo is None:
z = float(other)**float(self)
dz2 = (float(self) * z * other.dx / float(other))**2 + (math.log(float(other)) * z * self.dx)**2
return Sci(z, dz2**.5)
else:
raise TypeError, "pow() 3rd argument not allowed unless all arguments are integers"
# constants
pi = Sci(math.pi)
c = Sci(299792458)
h_bar = Sci(1.054571596e-34, 0.000000082e-34)
print pi**2 * h_bar * c / 240
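# A standalone check of the quadrature rules Sci uses above: absolute
# errors add in quadrature for sums and differences, while relative
# errors add in quadrature for products (a sketch, independent of the
# Sci class itself):
import math

def _add_err(x, dx, y, dy):
    # z = x + y; dz**2 = dx**2 + dy**2
    return x + y, math.hypot(dx, dy)

def _mul_err(x, dx, y, dy):
    # z = x * y; (dz/z)**2 = (dx/x)**2 + (dy/y)**2
    z = x * y
    return z, abs(z) * math.hypot(dx / x, dy / y)

# (3, 4, 5) triple: (2.0 +/- 0.3) + (1.0 +/- 0.4) carries error 0.5;
# (2.0 +/- 0.2) * (5.0 +/- 0.5) has 10% relative error in each factor,
# so the product 10.0 carries error 10 * sqrt(0.02).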