
Author Topic: Decimal Floating Point and Abstraction Layer  (Read 4437 times)


Offline tonyd (Topic starter)

Decimal Floating Point and Abstraction Layer
« on: March 23, 2016, 11:58:56 AM »
Planning to try OS4.1FE Classic with UAE shortly....
Forgive me if they already have this....

What is the state of decimal floating-point support on the Amiga?  I think this is important for future development, and if we don't have it, we could have an abstraction layer, such as the one IBM developed for PPC (http://speleotrove.com/decimal/dfpal/dfpalugaio.html).  Might this very library work, or be easy to adapt, for the Amiga?

The abstraction layer (DFPAL) allows you to use the decimal types whether the hardware natively supports it or not.  If the hardware supports it, the abstraction software will use that, and it will be much faster, but if the hardware does not support it, the code still executes and gives the same results, though slower.
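
For illustration, here is a minimal C sketch of that dispatch idea. The names are hypothetical and are not DFPAL's actual API; it only shows the pattern of probing the hardware once and then routing every operation through a function pointer.

Code:
#include <stdio.h>

typedef long long dec64;   /* stand-in for a real decimal64 type */

static int cpu_has_dfp(void) { return 0; }                /* platform-specific probe (stubbed here) */
static dec64 add_hw(dec64 a, dec64 b) { return a + b; }   /* would use native DFP instructions */
static dec64 add_sw(dec64 a, dec64 b) { return a + b; }   /* pure software fallback */

static dec64 (*dec_add)(dec64, dec64);

int main(void)
{
    dec_add = cpu_has_dfp() ? add_hw : add_sw;   /* decided once at start-up */
    printf("%lld\n", dec_add(10, 10));           /* callers never care which path ran */
    return 0;
}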

Also, I hope they'll follow the IEEE interchange format.  The interchange format not only specifies the parts, but also their order, so that it's easy for anyone to read and write the same format (https://en.wikipedia.org/wiki/IEEE_floating_point#Basic_and_interchange_formats).
« Last Edit: March 23, 2016, 12:56:51 PM by tonyd »
 

Offline olsen

Re: Decimal Floating Point and Abstraction Layer
« Reply #1 on: March 23, 2016, 03:04:39 PM »
Quote from: tonyd;806229
Planning to try OS4.1FE Classic with UAE shortly....
Forgive me if they already have this....
Just curious: which application do you have in mind for decimal floating point numbers?

I recall that this format is useful for storing "human-readable" numbers, such as prices, and for performing simple arithmetic operations on them (add, subtract, multiply, divide). Beyond that, the IEEE 754 format allows you to keep errors much better in check.

But it's sometimes difficult to explain to the end-user why the numbers that tumble out of supposedly simple calculations don't seem to add up if IEEE 754 format is being used.
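
A concrete illustration of that effect, as a small standard C program using binary doubles: 0.1 and 0.2 have no exact binary representation, so their sum is not the double closest to 0.3.

Code:
#include <stdio.h>

int main(void)
{
    double a = 0.1, b = 0.2;
    printf("%.17g\n", a + b);       /* prints 0.30000000000000004 */
    printf("%d\n", a + b == 0.3);   /* prints 0: the sum does not compare equal to the literal 0.3 */
    return 0;
}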
 

Offline psxphill

Re: Decimal Floating Point and Abstraction Layer
« Reply #2 on: March 23, 2016, 06:47:42 PM »
Quote from: olsen;806245
But it's sometimes difficult to explain to the end-user why the numbers that tumble out of supposedly simple calculations don't seem to add up if IEEE 754 format is being used.

If you are writing financial software and use floating point numbers then instead of wasting time explaining to the end-user why none of the figures add up right, you should invest the time in something more beneficial to society like brushing up your cv and looking for a job that doesn't require things to add up right. Staying in the job would be immoral.

Floating point is a lossy compression algorithm. If end users buy the explanation then they shouldn't be doing that job either.
« Last Edit: March 23, 2016, 06:51:14 PM by psxphill »
 

Offline tonyd (Topic starter)

Not "Lossy" -- Re: Decimal Floating Point and Abstraction Layer
« Reply #3 on: March 24, 2016, 12:12:53 AM »
Quote from: psxphill;806253
Floating point is a lossy compression algorithm.

@psxphill: Decimal floating point is not a lossy compression algorithm.  Binary floating point can be lossy, because some decimal numbers can't be accurately represented.  That's why we need the decimal floating point type.

@olsen: Yeah, it's used where accuracy is most important.  I can't tell for sure, but I think you're referring to IEEE 754 as only defining a binary floating point standard.  Actually, it defines both binary and decimal types (https://en.wikipedia.org/wiki/IEEE_floating_point).

I'm a big fan of standardization when the standard makes sense, and it seems to me that this one does, though I'm still learning about it.  Binary floating point has been in the hardware for some time now, and decimal floating point is being added to modern processors.  I need to do more research, but it's probably according to this standard.

It seems that the IEEE standard for decimal is better than the Java BigDecimal (http://www.cs.berkeley.edu/~wkahan/JAVAhurt.pdf), and probably better than the C# decimal (but I haven't verified that one).  There is a move to add decimal to C (http://gcc.gnu.org/onlinedocs/gcc/Decimal-Float.html) and to C++ (http://open-std.org/JTC1/SC22/WG21/docs/papers/2014/n3871.html).
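
As an example of where the C support is heading, here is a small program using GCC's _Decimal64 extension. This assumes a compiler and C library built with decimal floating point support; the dd literal suffix is GCC-specific, and since printf cannot portably format _Decimal64, the value is cast to double just for display.

Code:
#include <stdio.h>

int main(void)
{
    _Decimal64 a = 0.10dd, b = 0.10dd;   /* both exactly representable in decimal */
    _Decimal64 sum = a + b;

    printf("0.10 + 0.10 == 0.20? %s\n",
           sum == 0.20dd ? "yes" : "no");        /* yes: no rounding happened */
    printf("as a double: %.17g\n", (double)sum); /* converting to binary may round */
    return 0;
}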

Mike Cowlishaw seems to have a very good and informative site on the subject (http://speleotrove.com/decimal/).
« Last Edit: March 24, 2016, 03:30:25 AM by tonyd »
 

guest11527

Re: Not "Lossy" -- Re: Decimal Floating Point and Abstraction Layer
« Reply #4 on: March 24, 2016, 07:03:24 AM »
Quote from: tonyd;806266
@psxphill: Decimal floating point is not a lossy compression algorithm.  Binary floating point can be lossy, because some decimal numbers can't be accurately represented.  
Every floating point format is necessarily lossy, no matter whether decimal or binary. Neither binary nor decimal floating point can represent 1/3 or 1/7 precisely. The reason why decimal is used for banking is that it can represent decimal fractions (as used by humans) precisely, but that's a rather arbitrary choice. As soon as you need to calculate interest rates and the like, both formats generate loss, necessarily. Even worse, the decimal format necessarily generates a larger loss compared to binary.

Quote from: tonyd;806266
@olsen: Yeah, it's used where accuracy is most important.  I can't tell for sure, but I think you're referring to IEEE 754 as only defining a binary floating point standard.  Actually, it defines both binary and decimal types (https://en.wikipedia.org/wiki/IEEE_floating_point).
In terms of accuracy, i.e. average precision loss, binary is actually the better format. The average rounding loss of a floating point format grows as the base grows. While this can be shown rigorously (see the link at the end), there is a simple hand-waving argument: if you lose a digit in a binary floating point format, you only lose one bit and hence one binary decision. If you lose a digit in a decimal format, you lose one decimal decision, or more than three bits. So it's quite the reverse of what you say: binary is more precise than decimal.

The only advantage of decimal is that it rounds the way humans round, but that's a rather arbitrary choice and, as far as the mathematics is concerned, even a bad one.


Quote from: tonyd;806266
I'm a big fan of standardization when the standard makes sense, and it seems to me that this one does, though I'm still learning about it.  Binary floating point has been in the hardware for some time now, and decimal floating point is being added to modern processors.  I need to do more research, but it's probably according to this standard.
Is anyone really using decimal in hardware these days? The old 68881/82 FPUs had a decimal format, but no decimal implementation in the FPU. I've never seen it "in silicon" today, neither the format nor a complete FPU. *If* you need to do decimal to satisfy some standards of the financial industry (no matter how well motivated these are), then this is usually done in software.

Here's a good wrap-up of floating point rounding and the precision of various formats:

https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html
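
For reference, the relevant bound from that paper: rounding a real number to p significand digits in base β produces a relative error of at most half a unit in the last place,

\[
  \left| \frac{\mathrm{fl}(x) - x}{x} \right| \;\le\; \tfrac{1}{2}\,\beta^{\,1-p},
\]

and the relative error corresponding to half an ulp can "wobble" by a factor of β, which is why a larger base (10 rather than 2) gives a larger worst-case relative error for a comparable significand size.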
 

Offline psxphill

Re: Not "Lossy" -- Re: Decimal Floating Point and Abstraction Layer
« Reply #5 on: March 24, 2016, 09:44:06 AM »
Quote from: Thomas Richter;806274
The reason why decimal is used for banking is that it can represent decimal fractions (as used by humans) precisely, but that's a rather arbitrary choice. As soon as you need to calculate interest rates and the like, both formats generate loss, necessarily.

Russia had the first decimal currency in 1704, the UK switched in 1971. It's not an arbitrary choice.

Obviously if you are calculating a percentage of 0.01 then you cannot represent that as money as there are no fractions of a penny. That isn't lossy as you aren't losing anything that could be represented by real money.

Binary floating point, on the other hand, cannot represent decimal currency at all. It is impossible to store most two-decimal-place amounts exactly in a binary floating point number. All you can do is store the closest representable number and then round it when displaying it. However, this causes problems when you do something as mundane as adding two numbers together.

You are right that you can't represent 1/3 in decimal, but that isn't a problem at all for accountants. Not being able to add 0.10 and 0.10 and get 0.20 is. (Caveat: I don't know if that particular pair is a real example, but there are situations where two trivial values cannot be added up correctly; I was once given the task of fixing a system that used floating point, and I replaced it with 64-bit integer maths, on a Z80 without a 64-bit maths package.)
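
A minimal C sketch of that integer approach (the type and helper names are made up for illustration): amounts are kept as a 64-bit count of the smallest currency unit, so 0.10 + 0.10 is just 10 + 10, and a decimal point only appears when formatting for display.

Code:
#include <stdio.h>
#include <inttypes.h>

typedef int64_t money_t;   /* amount in pence (smallest currency unit) */

static void print_money(money_t p)   /* display helper: pence as pounds.pence */
{
    printf("%" PRId64 ".%02" PRId64 "\n", p / 100, p % 100);
}

int main(void)
{
    money_t a = 10, b = 10;   /* 0.10 and 0.10, stored as 10 and 10 pence */
    print_money(a + b);       /* prints 0.20, exactly */
    return 0;
}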

I hope you're not trying to justify using floating point for financial calculations as that would be worrying.
 

Offline cunnpole

Re: Not "Lossy" -- Re: Decimal Floating Point and Abstraction Layer
« Reply #6 on: March 24, 2016, 11:38:31 AM »
Quote from: psxphill;806278
but that isn't a problem at all for accountants
Oh yes it is. Fractions of pennies are very important on a scale of millions of accounts/transactions.
 

Offline tonyd (Topic starter)

Re: Not "Lossy" -- Re: Decimal Floating Point and Abstraction Layer
« Reply #7 on: March 24, 2016, 11:47:03 AM »
@Thomas: I believe there's a place for binary as well.  I wasn't saying there isn't.  It depends upon your purpose.

Quote
Neither binary nor decimal floating point can represent 1/3 or 1/7 precisely.
Fractions aren't the issue.

Quote
Is anyone really using decimal in hardware these days?
It's been in IBM POWER processors since the POWER6.  I imagine it's in the Amiga's new processors, but I need to confirm.
(Correcting myself: in my zeal I overstated the case.  Its current hardware implementations include POWER, SparcX, z10, and some kind of processor from SilMinds.)

Quote
then this is usually in software
That's because it wasn't available in the hardware.  It's being done in software because it's needed.

Quote
Binary is more precise than decimal.
Not if you're trying to represent decimal numbers.  For example, there's no precise representation for decimal 0.1 in binary.  Within a certain amount of precision, decimal floating point can perfectly represent a decimal number.  That can't be said about binary floating point numbers.
« Last Edit: March 24, 2016, 10:04:13 PM by tonyd »
 

Offline jj

Re: Not "Lossy" -- Re: Decimal Floating Point and Abstraction Layer
« Reply #8 on: March 24, 2016, 01:56:38 PM »
Quote from: psxphill;806278
Binary floating point, on the other hand, cannot represent decimal currency at all. It is impossible to store most two-decimal-place amounts exactly in a binary floating point number.

I will admit that I do not seem to have the deep knowledge you have on this subject, but this statement confuses me.
 

guest11527

Re: Not "Lossy" -- Re: Decimal Floating Point and Abstraction Layer
« Reply #9 on: March 24, 2016, 01:57:34 PM »
Quote from: tonyd;806285
@Thomas: I believe there's a place for binary as well.  I wasn't saying there isn't.  It depends upon your purpose.
Exactly. It's a matter of the requirements, which is pretty much what Olsen was asking for. I understand that the financial industry (if you call that an industry) has requirements for decimal, but not because of its precision (which is, as I said, lower than binary), but due to legacy reasons, namely that the whole system evolved around the decimal system, and it is hard to switch without introducing additional rounding steps when going from one system to another. These rounding steps are not the problem of the binary system. They are the problem of "backwards compatibility" to a legacy.

Concerning requirements: if the requirements are really financial applications, I highly doubt that there is any serious requirement to run that on Amiga hardware. There certainly is for PCs.
Quote from: tonyd;806285
It's been in IBM POWER processors since the POWER6.  I imagine it's in the Amiga's new processors, but need to confirm.
I cannot tell you about the POWER architecture, but PowerPC (which is related, but not identical, to POWER) does not have it. There is neither an integer BCD instruction as far as I can tell (the 68K does have some elementary BCD operations), nor specific floating point instructions for decimal, nor a decimal datatype (the 68881/82 have one). So it's all done in software, which is actually the standard way these days, probably with the exception of some specialized hardware.

Not saying that there is no need for it, but I don't quite understand why Amiga land needs a hardware-based solution for it, let alone a software-based one.
Quote from: tonyd;806285
That's because it wasn't available in the hardware.  It's being done in software because it's needed.
In Amiga land? By whom? What's the application?  
Quote from: tonyd;806285
Not if you're trying to represent decimal numbers.  For example, there's no precise representation for decimal 0.1 in binary.  Within a certain amount of precision, decimal floating point can perfectly represent a decimal number.  That can't be said about binary floating point numbers.

And your point is? Sorry, but the base of ten is a rather arbitrary choice. Neither can decimal represent the ternary fraction 0.1 (that is, 1/3) precisely, so this is hardly a criterion. Otherwise, we should probably use a ternary system, or a system that has a larger set of prime divisors than ten. What about a base of 12 or 60 (both used historically)? If representation of fractions is your goal, those systems are much better than the decimal system.

The problem with decimal is that rounding errors accumulate faster than in binary; that's precisely why I posted the link above. It's worth reading. For scientific applications, binary is really better. For financial ones, decimal is only better due to its legacy, not because "the math is easier". It is not.
 

Offline olsen

Re: Not "Lossy" -- Re: Decimal Floating Point and Abstraction Layer
« Reply #10 on: March 24, 2016, 03:34:49 PM »
Quote from: JJ;806287
I will admit that I do not seem to have the deep knowledge you have on this subject, but this statement confuses me.
You are in good company in this respect. It is puzzling, but nevertheless true as far as the fractional part of a floating point number is concerned, i.e. the part which follows the decimal point.

Numbers such as 1234.5 can be written down as a series of digits, each multiplied by a power of ten, e.g. 1 * 10^3 + 2 * 10^2 +  3 * 10^1 + 4 * 10^0 + 5 * 10^(-1). But this is not how floating point numbers are represented by the computer. Instead of powers of ten it uses powers of two.

Simplifying things for the sake of this example (there are other rules which apply to floating point numbers which are not quickly explained): it means that you have to rewrite those sums above like so: 1234.5 = 1 * 2^10 + 1 * 2^7 + 1 * 2^6 + 1 * 2^4 + 1 * 2^1 + 1 * 2^(-1)

That works well because you can write 0.5 as 2^(-1) = 1/2. But what do you do with a number such as 0.1? You would have to write 0.1 as a sum of powers of two, which doesn't work out neatly. The best you can do is come up with a sum that's an approximation of 0.1, which is either a bit larger or smaller than 0.1, but not exactly the same value (please don't ask me to come up with such a sum, I can't do this off the cuff).
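
You can, however, let the machine produce the approximation for you: printing the double value nearest to 0.1 with enough digits shows that it is not exactly 0.1.

Code:
#include <stdio.h>

int main(void)
{
    printf("%.20f\n", 0.1);   /* prints 0.10000000000000000555, the nearest double, not 0.1 */
    return 0;
}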

This is what makes floating point numbers very unwieldy when you want to present exact decimal figures to users. There will invariably be "roundoff" errors which are very hard to explain. If you learned arithmetic at school, you will calculate using the powers-of-ten sums implicit in how you write down numbers. Binary numbers are at best an oddity, and floating point numbers are an oddity on top of an oddity.

In the end you get complaints that "numbers don't add up", there's a tenth of a cent missing from a compound sum, and so on, when those numbers are in fact sufficiently accurate for calculations, but converting them into a displayable form introduces visible rounding errors.
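
A short C demonstration of such accumulated roundoff, using the classic case of adding 0.1 ten times with binary doubles: each addition rounds, and the roundings add up to a small but visible discrepancy.

Code:
#include <stdio.h>

int main(void)
{
    double total = 0.0;
    for (int i = 0; i < 10; i++)
        total += 0.1;

    printf("%.17g\n", total);       /* prints 0.99999999999999989 */
    printf("%d\n", total == 1.0);   /* prints 0 */
    return 0;
}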