Re: Type theory vs floating-point arithmetic
Dear Matthias,
The view of a subtype as a subset of a type
(and thus your argument)
_THERE ARE NO VALUES THAT THE TWO TYPES HAVE IN COMMON_
is, as far as I know, a bit out of date. It is now understood
that in general every subtyping relation comes equipped
with a COERCION (a special embedding of the set of elements of
the subtype into the set of elements of the type).
In some cases it may be the identity on elements, but in general it
changes the representation (and maybe more). Think of lists
and vectors of fixed length. The principal
condition is that if you have A <_c B (A is a subtype of B via the
coercion c), f is an operation on A, and f' is an operation
on B that you want to regard as "the same as f", then
c(f(x)) = f'(c(x)), i.e. the coercion commutes with the operations.
In the case of binary operations you pass to AxA <_cxc BxB.
The possibility to 'convert back' is, in general, not
required at all.
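Here is a minimal sketch of this condition in Python (the names coerce_AB,
op_A, op_B are mine, purely for illustration); it only tests the commuting
equation on a finite set of sample values, which is of course much weaker
than proving it:

    def respects_coercion(coerce_AB, op_A, op_B, samples):
        """Check c(f(x)) == f'(c(x)) on the given samples (unary case)."""
        return all(coerce_AB(op_A(x)) == op_B(coerce_AB(x)) for x in samples)

    def respects_coercion_binary(coerce_AB, op_A, op_B, samples):
        """Binary case: the condition is stated over AxA <_cxc BxB."""
        return all(coerce_AB(op_A(x, y)) == op_B(coerce_AB(x), coerce_AB(y))
                   for x in samples for y in samples)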
There are also some 'global' requirements, e.g. coherence of the
whole set of coercions (when we consider many types and subtypes):
coherence means that there is at most one coercion between any two given types.
So, when you consider the relationship between 32-bit and 64-bit
floating point arithmetic, I would ask whether there exists a coercion c
such that c(f(x)) = f'(c(x)) holds for all the operations f and their
"extensions" f' that you are interested in.
The answer will depend on the choice of operations. Usually these
operations (+, x, ...) were implemented without any conscious attention to
harmonizing the 16-bit, 32-bit, and 64-bit cases, so I guess you'll
find many bizarre things.
But it is worth studying. Does there exist a COERCION?
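For example, a quick experiment in Python/NumPy (just a sketch, taking the
exact widening float32 -> float64 as the candidate coercion) already shows
the condition failing for ordinary addition:

    import numpy as np

    def c(x):                      # candidate coercion: exact, lossless widening
        return np.float64(x)

    x = np.float32(0.1)
    y = np.float32(0.2)

    lhs = c(x) + c(y)              # f'(c(x), c(y)): add at 64-bit precision
    rhs = c(x + y)                 # c(f(x, y)):     add at 32-bit, then widen

    print(lhs == rhs)              # False: the two roundings differ, so this
                                   # coercion does not respect + on these inputs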
About coercions, see 'coercive subtyping' on the web. I can also
recommend a concrete site: http://www.dur.ac.uk/~dcs0zl/ [Zhaohui Luo]
All the best,
Sergei Soloviev
>Date: Thu, 28 Aug 2003 13:30:30 -0500
>Subject: Re: Type theory vs floating-point arithmetic
>From: Matthias Blume <blume@tti-c.org>
>To: types@cis.upenn.edu
>
>[----- The Types Forum, http://www.cis.upenn.edu/~bcpierce/types -----]
>
>On Saturday, August 23, 2003, at 01:57 AM, Tim Sweeney wrote:
>
> There is a further problem regarding subtyping. It might first appear
> that the type of 32-bit floating point values is a subtype of the type
> of 64-bit floating point values. Though one can losslessly convert from
> 32-bit to 64-bit and back, performing a 32-bit arithmetic operation on
> such 32-bit numbers can produce a different result than performing a
> 64-bit arithmetic operation on their 64-bit extensions and then
> converting back. So from a type theoretic point of view, the types of
> 32-bit and 64-bit floating point numbers must be disjoint.
>
>Here is another way of seeing that 32-bit IEEE cannot be a subtype of
>64-bit IEEE because
>
> _THERE ARE NO VALUES THAT THE TWO TYPES HAVE IN COMMON_
>
>Why? Well, n-bit IEEE numbers are n-bit approximations of
>infinite-precision reals. In other words, the number
>
> SIGN . DIG1 DIG2 ... DIGn E EXP
>
>should be interpreted as
>
> SIGN . DIG1 DIG2 ... DIGn ? ? ? ? ... E EXP
>
>where each "?" in the infinite sequence of "?"s stands for "I don't know
>the value of this digit". With this, it is clear that +.1011e7 in a
>four-bit-mantissa representation is different from +.10110000e7 in an
>eight-bit-mantissa representation, since the former really is
>+.1011????????... e7 while the latter is +.10110000????... e7.
>
>Matthias