
Java and Changes to its Mathematics

 
Ranch Hand
Posts: 44
There are a bunch of things to do with Java and mathematics that actually do need to be changed.
However, I can't obtain very much information about when these changes will be made,
or find confirmation that these changes must and will go ahead.  Can someone
more in the know reply specifically to me here, please?

-In the Java language, the types float, Float, double, and Double are not range accurate.
It is possible to generate results, via arithmetic operations, that underflow and overflow.


double a = 0.1;
double b = 0.1;
double x = a * b;
System.out.println(x);   // prints 0.010000000000000002, not 0.01


When are they going to include a compiler switch or a keyword to enable
proper, range accurate arithmetic for the float and the double, and their objects?

Java's own library code can produce errors in this area.
StrictMath, Vector3f and Vector3D are examples, as is calculating the norm of a vector.
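For illustration, here is what actual overflow and underflow look like with double, as distinct from the rounding error in the 0.1 * 0.1 example (a minimal sketch):

```java
public class FloatRange {
    public static void main(String[] args) {
        // Overflow: the result exceeds double's range and becomes Infinity
        double big = Double.MAX_VALUE * 2;     // Infinity
        // Underflow: the result is smaller than the smallest subnormal and collapses to zero
        double tiny = Double.MIN_VALUE / 2;    // 0.0
        // Rounding error, by contrast, stays finite but inexact
        double x = 0.1 * 0.1;                  // 0.010000000000000002

        System.out.println(big);
        System.out.println(tiny);
        System.out.println(x);
    }
}
```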

-BigDecimal and BigInteger actually are not arbitrary precision.  Arbitrary precision
arithmetic, in terms of digit places, is infinitely extensible to more integer or decimal places,
limited only by available virtual machine memory.  This needs to be provided.

-The largest scale arithmetic that the Java language has also does not support operator syntax
at the source code level, which it has to.  +, -, *, /, %, +=, -=, *=, /=, %=, >=, <=, ==, !=
all need to work for arbitrary precision arithmetic code.
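To make the comparison concrete, here is the same expression written with primitive operators and with BigDecimal method calls (a minimal sketch; the variable names are made up for illustration):

```java
import java.math.BigDecimal;

public class OperatorContrast {
    public static void main(String[] args) {
        // With primitives, the arithmetic reads as mathematics:
        double a = 1.5, b = 2.5, c = 4.0;
        double primitiveResult = (a + b) * c / 2;

        // The same expression with BigDecimal must be spelled out as method calls:
        BigDecimal ba = new BigDecimal("1.5");
        BigDecimal bb = new BigDecimal("2.5");
        BigDecimal bc = new BigDecimal("4.0");
        BigDecimal bigResult = ba.add(bb).multiply(bc).divide(new BigDecimal("2"));

        System.out.println(primitiveResult);   // 8.0
        System.out.println(bigResult);         // 8.00
    }
}
```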

-There needs to be a transcendental functions library for arbitrary precision arithmetic types.
Trigonometry must include sin, cos, tan, asin, acos and atan.  There should be support for
power, nth root, square root, logarithms in base 10, base 2 and Euler's e, a method
for calculating pi to any number of decimal places, and a method for calculating e to any number of decimal places.
This applies particularly to a decimal type, operated on by another value of that same decimal type,
like the philosophy in the StrictMath class.
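For what it's worth, the JDK does cover one of these since Java 9: BigDecimal.sqrt(MathContext) computes a square root to a caller-chosen precision, while the trigonometric and logarithmic functions still need third-party code. A minimal sketch:

```java
import java.math.BigDecimal;
import java.math.MathContext;

public class BigSqrt {
    public static void main(String[] args) {
        // Since Java 9, BigDecimal has sqrt(MathContext); the precision is
        // chosen by the caller rather than fixed at 53 bits.
        MathContext mc = new MathContext(50);   // 50 significant digits
        BigDecimal two = new BigDecimal(2);
        System.out.println(two.sqrt(mc));
        // sin, cos, log, pi etc. still require a third-party library
        // or a hand-rolled series expansion.
    }
}
```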

Can someone tell me what the story really is?
 
Tim Moores
Saloon Keeper
Posts: 5504
For many of the things that you say must change or need to be provided, I don't think there is a consensus that that is actually so. There is no shortage of mathematical libraries that address many of these points, for anyone who needs them. That they all need to be provided by the JRE is not clear to me. While it would make some people's work easier, my guess is that it is actually a small niche; I wouldn't hold my breath waiting for any of this.
 
Zachary Mitchell
Ranch Hand
Posts: 44
Well, things like range accurate floating point mode surely need to happen.

If things like Vector.cross, Vector.dot, or normalising a vector might
produce a convoluted floating point overflow or underflow,
all those methods are a waste of time, can't really be called,
waste memory on each object, and increase the boilerplate
code that people need to re-implement via BigDecimal
or BigInteger.

Combined with Java JEP 306, to start with.
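For context, JEP 306 (delivered in Java 17) made all float and double arithmetic strict by default, turning the old strictfp modifier into a no-op. A minimal sketch of what strict semantics do and do not buy; note that overflow is still permitted:

```java
// strictfp forced IEEE 754 semantics on every intermediate result; since
// Java 17 (JEP 306) all float/double math is strict anyway and the modifier
// is retained only for source compatibility.
strictfp class StrictCalc {
    static double roundTrip(double v) {
        // Strict semantics forbid extended-precision intermediates, but they
        // do not prevent overflow: 2.0 * 1e308 exceeds Double.MAX_VALUE.
        return v * 1e308 / 1e308;
    }

    public static void main(String[] args) {
        System.out.println(roundTrip(2.0));   // Infinity
    }
}
```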

Is there anyone who knows more in this area?
 
Tim Moores
Saloon Keeper
Posts: 5504

Well, things like range accurate floating point mode surely need to happen.


Why? IEEE 754, which float and double implement, allows underflow and overflow, and for the vast majority of applications that appears to be adequate. I'm not convinced that the rest need to be accommodated via functionality built into the JRE.
 
Zachary Mitchell
Ranch Hand
Posts: 44
Only in the sense that you can push and pull using values of those types, and that they can be worked around
via BigDecimal and BigInteger.

The problem is that floating point overflow or underflow can occur in a convoluted fashion,
and that you can't simply use one line of "floating point" code to give you your range accurate answer.

The other libraries on this issue for Java are irrelevant, because they rely on internal string data
or on methods which can't use the operator syntax at the source code level.

If I want no overflow or underflow, within a range, and I need to use operators, which necessarily
avoid the additional brackets that methods introduce, along with the silliness of having to spell out
divide, multiply etc in English, I don't have any broader, inside-Java, source code option.

Combined with the fact that a whole bunch of language libraries that return float or double
are prone to overflow and underflow, given how they are implemented right now; methods
which cannot be called because of the accuracy problem, and which otherwise waste their imprint in memory.

The solution to all this is what every other major language does here, which is either to introduce
a new keyword or similar that can enforce floating point arithmetic range accuracy, or to introduce a compiler switch.

Even then, this all only applies to range limited arithmetic.  Properly, Java should have true arbitrary precision arithmetic,
which is infinitely extensible, limited only by available VM memory.

Surely there are others who see what I mean?
 
Campbell Ritchie
Marshal
Posts: 64496

Peter Baumarchais wrote:. . . these changes must and will go ahead. . . .

As Tim has said, Java® is here following the rules of IEEE754; no such change will ever be made. What's more, the Java® Language Specification (=JLS) says that floating‑point operations follow IEEE754, so it would not be possible to change that without breaking old code.

. . . arithmetic operations, that underflow and overflow.
. . .

You haven't shown overflow or underflow, but the simple imprecision inherent in floating‑point arithmetic. There is a page on the Princeton University website showing how those imprecisions can cause wildly inaccurate results in some instances. I can't remember where it is, sorry.

When are they going to include a compiler switch or a keyword to enable proper, range accurate arithmetic for the float and the double, and their objects?

Never. If you need more precision, it will be necessary to use a different platform.

-BigDecimal and BigInteger actually are not arbitrary precision.  . . .

They are indeed limited only by the memory capacity of the platform, but only support a limited number of operations, e.g. multiplicative and additive. There is no way to calculate irrational numbers to a greater precision than the 53 bits you get with double. If you need that, you will have to use a different platform. I don't believe there ever will be such greater precision for other operations. BigInteger's arithmetic is indeed precise, as is int's, but I don't think integer arithmetic is relevant to your question. Yes, there are major problems with the design of BigDecimal; you haven't mentioned the worst problem. Yes, maybe it would have been better to have a decimal primitive datatype, as C# does. But there isn't. Yes, maybe BigDecimal and BigInteger should support operator overloading for +-*/ and %. But they don't. And I do not believe that will ever change.
The design of BigDecimal to be immutable is inconsistent with its use of operators like *=.
I think none of the things you are complaining about will ever change.
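Campbell's point that BigDecimal's additive and multiplicative operations are exact can be checked against the thread's opening example (a minimal sketch):

```java
import java.math.BigDecimal;

public class ExactDecimal {
    public static void main(String[] args) {
        // In double, 0.1 has no exact binary representation:
        System.out.println(0.1 * 0.1);   // 0.010000000000000002

        // BigDecimal stores decimal digits, so + - * are exact:
        BigDecimal tenth = new BigDecimal("0.1");
        System.out.println(tenth.multiply(tenth));   // 0.01

        // Division is exact only when the quotient terminates; 1/3 throws
        // ArithmeticException unless a rounding mode or precision is supplied.
    }
}
```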
 
Stephan van Hulst
Saloon Keeper
Posts: 10251

Peter Baumarchais wrote:When are they going to include a compiler switch or a keyword to enable proper, range accurate arithmetic for the float and the double, and their objects?


They're not going to. Floating point values in Java use the IEEE-754 standard. If you want more precision, then you must not use floating point values.

Their own in place library code can produce errors in this area.  
StrictMath, Vector3f and Vector3D are examples.  Or calculating the norm of a vector.


It's YOUR job as the programmer to take into account that floating point operations are inherently imprecise. Again, if you need more precision, don't use floating point values.

-BigDecimal and BigInteger actually are not arbitrary precision. Arbitrary precision arithmetic, in terms of digit places, is infinitely extensible to more integer or decimal places, limited only by available virtual machine memory.


How do you figure that BigDecimal isn't arbitrary precision?

-The largest scale arithmetic that the java language has also does not support operator syntax at the source code level, which it has to.


Why?

Trigonometry must include sin, cos, tan, asin, acos and atan. Support for power, nth root, square root, logarithms in base 10, 2 and Euler's e, a method for calculating pi to any decimal places, and a method for calculating e to any decimal places.


I don't know why they didn't add these methods to BigDecimal. Probably because BigDecimal represents a rational number, and the functions you mentioned yield irrational numbers. They could have added versions that approximate the real numbers to a chosen precision, but maybe they wanted to focus their efforts on other issues. There are plenty of third party libraries that provide these functions using BigDecimal.
 
Campbell Ritchie
Marshal
Posts: 64496

Earlier today, I wrote:. . . The design of BigDecimal to be immutable is inconsistent with its use of operators like *=. . . . .

Actually, maybe I was mistaken on that point.
 
Zachary Mitchell
Ranch Hand
Posts: 44

Floating point values in Java use the IEEE-754 standard


This standard has been changed at least once by now, and in fact Java does not entirely implement
IEEE-754 for float and double.

It's YOUR job as the programmer


No, it's not.  Not in this area, and maybe not with any one language of the nature that Java has at the moment.

How do you figure that BigDecimal isn't arbitrary precision?


Because its precision will run out before memory space does, at a value of around 2.1 billion decimal places.
It is limited to that maximum precision, and is not infinitely extensible up to the VM memory limit alone.

Operator syntax source code? Why?


Because it's easier to read and program, and quicker to change.  Using add, subtract, multiply, divide, modulus
method names is a waste of text, takes too long even with autocompletion, introduces an extra set of brackets (),
and is a waste of time for mathematics, engineering and physics people, but really for anyone.  All along with
the fact that every other major rival PC programming language supports operators on all their numeric types.

 
Henry Wong
author
Posts: 23832

Peter Baumarchais wrote:
The solution to all this is what every other major language does here, which is either to introduce
a new keyword or such that can enforce floating point arithmetic range accuracy, or to introduce a compiler switch.



IEEE 754 is the standard used by practically every modern day processor. It is also the standard used by most modern languages. So, can you tell us what "other major language" does here?

The main thing that I can think of is that some compilers detect interim variables and use the extended precision format for them. I guess that can be done, except that it may have to be done in software, as the extended precision format isn't supported on all processors.

... also, I guess quadruple or octuple precision could be supported, but that is not supported in many languages. The reason is that it is not supported in many processors (or actually any that I know of), which means support would have to be emulated in software, and hence is relatively slow.

Henry
 
Stephan van Hulst
Saloon Keeper
Posts: 10251
I'm pretty sure that IEEE-754 is used by most major languages, so I'm curious if you can give us examples.

I'm also interested in a convincing example why you need more than 2 billion decimal places.

While I would personally prefer operators over methods like add(), your arguments are that it wastes space and takes too long. Do you have a budget on the number of characters that you may use? Have you weighed the amount of time you have to take out of your schedule to write out the method names of BigDecimal against the amount of time you have to spend on other things? I'm certain you've already spent more time on this discussion than you would have on writing out the method names.

You've posited that the designers NEED to change all this, as priority number one, over everything else they could be doing. What impending doom do you know about that we don't?
 
Henry Wong
author
Posts: 23832

Stephan van Hulst wrote:
I'm also interested in a convincing example why you need more than 2 billion decimal places.



As a side note, the IEEE 754 standard has been around for about 30 to 40 years, and it has always had (a set of) fixed precision formats. So, I do empathize with the OP, as this annoyance has been around for a long time.

As for having operators (instead of methods), I guess "operator overloading" may be the feature being touted by the OP.... However, if that is the case, then the argument becomes trying to fix an issue with a bigger issue -- as while I have a love-hate relationship with operator overloading, most everyone else just hates it.

EDIT: BTW, I quoted Stephan for a reason. I just wished I remembered why...

Henry
 
Zachary Mitchell
Ranch Hand
Posts: 44

Why would you need more than 2.1 billion decimal places?


Because you might.  Because of reasons that people don't speculate on, because
of a situation which is outside of people's minds.  That's why.

I am in favour of increased operator support.  That must happen.

Although if they chose to put in operator overloading, that obviously
is their choice and addresses that issue.

?
 
Stephan van Hulst
Saloon Keeper
Posts: 10251
Why don't you write a JSR and see what the community does with your demands?
 
Tim Moores
Saloon Keeper
Posts: 5504

Peter Baumarchais wrote:I am in favour of increased operator support.  That must happen.


Better not to say "it must", because it won't. Guy Steele, one of the principal language designers behind Java, noted that the idea had been discussed and dismissed a long time ago.
 
Dave Tolls
Master Rancher
Posts: 4087

Peter Baumarchais wrote:

Why would you need more than 2.1 billion decimal places?


Because you might.  Because of reasons that people don't speculate on, because
of a situation which is outside of people's minds.  That's why.



There's lots of things that some people might want to do with a language, but that doesn't mean a language has to cater to that.

Java is very much a general purpose language, so expecting it to handle things that are more specialised is probably asking more than it is likely to ever provide.
 
Campbell Ritchie
Marshal
Posts: 64496

Dave Tolls wrote:. . . There's lots of things that some people might want to do with a language, but that doesn't mean a language has to cater to that. . . . .

But you are more likely to be successful with that sort of request if you can present a concrete example of where > 2³¹ digits is likely to be needed.
 
Zachary Mitchell
Ranch Hand
Posts: 44
I would have thought that arithmetic and functions mathematics were so grass
roots that they should have arbitrary support anyway.

Besides which, their own documentation does use the term arbitrary precision,
at any rate.  Hence, along with grass roots concerns, the interest in it.
 
Saloon Keeper
Posts: 3256
Now that you've come to mention it....

In the insurance business, reporting and pricing are based on what we call market consistent valuation. The regimes, like IFRS4 and beyond (especially the current IFRS17) and Solvency II, are extremely strict. Every simplification that you make in your models, every approximation, has to be clearly explained, and the least you must do is establish what the financial impact of such a simplification is (even in the cases where you make a simplification because the real thing is far too complex to model, or would require too much time to calculate). And the reviewers (be it the accountant offices or the insurance government institutes) are very, very strict.

For the calculations: think about a large insurance portfolio for which you calculate the cashflows for the next 80 years, using tens of thousands of scenarios and assumptions for the coming 80 years.

But so far, no reviewer has ever come up with the question of what the impact is of using 64 bit doubles, compared to using infinite precision floating point (no idea how we would be supposed to do that). And how could we? At the moment the deadlines are very strict. To gather all the data, make the many millions of calculations every month and do the reporting, we only have about 5 working days available. So speed is of the utmost importance. But an interesting question, though.

I'm content with the current longs, doubles and (hardly ever use them) BigDecimals. If anything, getting rid of the primitives, operator overloading, Scala's types and being able to write a.method(b) as a method b, and a Tuple class would be my priorities.
 
Stephan van Hulst
Saloon Keeper
Posts: 10251

Peter Baumarchais wrote:I would have thought that arithmetic and functions mathematics were so grass roots that they should have arbitrary support, anyway.


It isn't trivial. Because Java doesn't have operator overloading (which is not as widespread as you seem to think it is), defining these operators for the BigDecimal class is a change to the Java language itself, and all compilers need to be changed as well. The only reference type for which an operator was overloaded (and only ONE operator) was the String class. Arguably, even that was a mistake because string concatenation in loops is expensive compared to using StringBuilder, and outside of loops string interpolation would have been a more elegant solution.
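A minimal sketch of the concatenation point: each String += in a loop discards and rebuilds the accumulated string, while a StringBuilder appends into one buffer; both produce the same text here.

```java
public class ConcatLoop {
    public static void main(String[] args) {
        // Each += creates a fresh String from the old contents plus the new
        // piece, so a loop of n iterations does O(n^2) copying overall:
        String s = "";
        for (int i = 0; i < 5; i++) {
            s += i;
        }

        // StringBuilder mutates one internal buffer instead:
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 5; i++) {
            sb.append(i);
        }

        System.out.println(s);    // 01234
        System.out.println(sb);   // 01234
    }
}
```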

Besides which, their own documentation does use the term arbitrary precision, at any rate. Hence, along with grass roots concerns, the interest in it.


For all intents and purposes, it is. Arbitrary here means that the user can decide on the precision, rather than that the precision is fixed to 32 or 64 bits. The double values 1.0 and 1.0000 have the same precision: 64 bits. The BigDecimal values "1.0" and "1.0000" have a precision of 2 and 5 respectively.

What seems arbitrary (hehehe) to me is that you balk at the limit imposed by a 32 bit scale, but that it's fine if memory runs out. Note that its precision IS limited by the maximum amount of memory available, as long as the scale fits inside 32 bits.
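Stephan's reading of precision and scale can be checked directly (a minimal sketch):

```java
import java.math.BigDecimal;

public class PrecisionDemo {
    public static void main(String[] args) {
        // precision() counts significant decimal digits; scale() counts the
        // digits to the right of the point. Both are ints, hence the 2^31 limit.
        System.out.println(new BigDecimal("1.0").precision());     // 2
        System.out.println(new BigDecimal("1.0000").precision());  // 5
        System.out.println(new BigDecimal("1.0000").scale());      // 4
    }
}
```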
 
Zachary Mitchell
Ranch Hand
Posts: 44
Not quite.  I think about things from the point of view of 64 bit Java, which doesn't have
the memory limit that 32 bit Java has.  In my view, 32 bit Java should be the same,
although I don't know how that affects applets, and it might be a bit of a "sailed ship".  Or even not.

However the changes that I am suggesting are not ultimately as complicated as
all the other changes that they have been prepared to make.

They introduced a do while loop, an enhanced for loop, enums as primitives and objects,
particularly the var keyword, static imports, ... variable length method parameters, and so on.

A quality maths system, with operators, with the option to disable floating point overflow and underflow
(for code that's already compiled) is not a particularly greater problem than any of these changes, and
is an even smaller change.  All of the virtual machines other than HotSpot have to update
as new changes come along, at any rate.

?
 
Rancher
Posts: 89
I think it boils down to this: what gets implemented is what the community wants. There are many common issues (the var keyword, too much boilerplate, etc.) that the Java community often expresses a desire to change. In the ~3 years I've been programming, this is the first time I've seen a call for a better mathematics system. In my personal opinion, if you need something so precise and accurate that Java's mathematics system won't be able to handle the problem, there are more mathematics-specialized languages to solve it.

-Zach
 
Sheriff
Posts: 6789
@OP

I think such concerns, suggestions and even claims about the simplicity of implementing them, what was done or not, and why, should be raised with Oracle Corporation directly, and in particular with those who are building the language, so you could argue with the right audience.

Currently it seems it all goes in a semi-philosophical manner. Is that the purpose of the discussion? If yes, then it is OK to continue.
But if you are expecting to bring about some changes in the language, then you are in the wrong place, I'm afraid.
 
Stephan van Hulst
Saloon Keeper
Posts: 10251

Peter Baumarchais wrote:However the changes that I am suggesting are not ultimately as complicated as all the other changes that they have been prepared to make.
...
A quality maths system, with operators, with the option to disable floating point overflow and underflow (for code thats already compiled) is not a particularly greater problem than any of these changes, and is even smaller a change.


First of all, I think you underestimate the complexity of some of your requests in comparison to the features you've mentioned. For instance, writing the enhanced for loop is pretty straightforward, because you only have to tweak the grammar of the language a little bit, and then lift the syntax to regular calls to Iterator. Overflow checking for types that follow an IEEE standard? Heh.

Secondly, many features you've mentioned were already part of the language from the start. It's much easier to implement a feature with a clean slate than when you have to tack it onto an existing language, if there's no straightforward way to lift the syntax. While operator overloading for BigDecimal may be lifted to calls to add() etc., overflow checking for floating point values is new.

Finally, the ease of implementing a feature is not the only factor in deciding whether it will be implemented. The actual formula is more like (chance of implementation) = (community demand) / (difficulty of implementation). There was a MUCH greater demand for the advanced for loop, and yes, even the var keyword.

In short, you will probably have to wait for a long time because you're in the minority that really wants this change, especially so if you can't produce scenarios where you absolutely need this behavior.
 
Henry Wong
author
Posts: 23832

Stephan van Hulst wrote:
In short, you will probably have to wait for a long time because you're in the minority that really wants this change, especially so if you can't produce scenarios where you absolutely need this behavior.



Regardless, I hope the OP is *not* actually waiting for this, as the chances of it happening are pretty slim -- in our opinions, that is. Furthermore, they are our opinions only, because this forum is the wrong place to effect the changes. We are users, just like the OP, and are not part of the JSR process.

Henry
 
Tim Holloway
Bartender
Posts: 20775
The imprecision of Java floating-point is because it's binary floating-point. It's not hardware, as such, it's mathematics.

The only major computer I know of that did arithmetic in Base 10 was the IBM 1620 back around 1960. It had the option of either 5000 or 10000 words of RAM (magnetic core memory) and it stored all numbers in binary-coded decimal.

It also had no hardware support for multiplication and division. If you wanted that, your program had to load actual multiplication tables into memory.

I haven't heard a whole lot of demand to bring it back.

As I mentioned recently in another thread, the value 1/10 is a repeating binary number, similar to the repeating decimals that we all learned in our early school years, except in base 2 instead of base 10. You can only represent it precisely if you allow an infinite number of fractional bit positions. In the real world, however, we constrain ourselves to 4 bytes for exponent+mantissa in the "float" type, and 8 bytes for exponent+mantissa in the "double" type. The imprecision or granularity is technically known as epsilon.
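A minimal sketch of that granularity; Math.ulp reports the gap to the next representable double at a given magnitude:

```java
public class EpsilonDemo {
    public static void main(String[] args) {
        // 0.1 and 0.2 are repeating binary fractions, so their rounded sum
        // does not equal the rounded representation of 0.3:
        System.out.println(0.1 + 0.2 == 0.3);   // false
        System.out.println(0.1 + 0.2);          // 0.30000000000000004

        // Math.ulp(1.0) is 2^-52, the machine epsilon of double
        // at magnitude 1 -- the granularity Tim describes:
        System.out.println(Math.ulp(1.0));      // 2.220446049250313E-16
    }
}
```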

If you want true precision, use BigDecimal. It can provide decimal accuracy, but at a massive cost in computing power. There's a reason binary is the computational radix of choice in computers.

One of the few things I miss about C++ is operator overloading. Without it, you cannot express BigDecimal math operations using mathematical notation and have to make do with explicit method calls. But a wise person once stated, "Java is C++ without the mace and knives". Almost every feature that Java lacks from C++ has a reason. And thrice over for multiple inheritance.
 
Zachary Mitchell
Ranch Hand
Posts: 44
People do tend to over-explain the binary nature of floating point a lot.
Folk aren't interested in learning about that.  What they do want,
for float and double, is something referred to as range accuracy,
for the arithmetic operations of those types.  Present
hardware does have support for such.

The issues are not about how it works, but about how it doesn't.
Nowadays, there are SSE options and other things in hardware
that mean that general floating point isn't strictly necessary, or
some kind of theoretical desirability.

C++ and the .NET languages all include a way to ask the compiler
to do away with overflow and underflow.

BigDecimal, the needed alternative, however, doesn't support
operator syntax and isn't truly extensible up to the
memory limit (certainly in 64 bit Java).

?
 
Stephan van Hulst
Saloon Keeper
Posts: 10251
So why don't you use different languages instead?
 
Tim Holloway
Bartender
Posts: 20775
124
Android Eclipse IDE Java Linux Redhat Tomcat Server
  • Mark post as helpful
  • send pies
  • Quote
  • Report post to moderator
Strange. Google doesn't know anything about "range accuracy". And I have no idea how Streaming SIMD Extensions (SSE) are supposed to magically cure the precision problems inherent in binary floating-point. SSE is a performance feature, and only on certain hardware. A fundamental precept of Java is "Write Once, Run Anywhere", which negates hardware-specific solutions.

Floating-point numbers have a machine epsilon. That is the indicator of how precise a value can be for the machine quantity in question. At the epsilon level, you have an upper and lower bound on the possible actual value, but the true actual value is pure fuzz.

Computers cannot work with mathematical abstractions, only machine quantities. So live with it. There's a whole mathematical discipline of error analysis and it's fundamental to engineering and has been since slide rules and analog measuring instruments. Engineering specs are not considered complete without tolerances. And in many cases, those tolerances are far less than the epsilon uncertainty of common floating point implementations.

Computer output routines can and often do allow for the fuzziness of the data they are dealing with. That's what gives us a printed output of "6.1" when the actual bit-for-bit value of a float is more like 6.0000000000009976 or 6.0000000000010032.

But in the Real World everything has its limits and only a fool expects precision where there is none. Only in pure mathematics can you be 100% precise. And then only until you apply the mathematics to something tangible. You can complain that floating-point pennies don't balance a checkbook, but the minute you go from bookkeeping to accounting even pennies get fuzzy as interest-rate functions, currency conversions and the like chew on them. On the bookkeeping side, you have 2 choices: use BigDecimal (or something equivalent such as a hand-rolled Currency class) or scale floating-point up 2 orders of magnitude so that integral numbers of pennies actually register as integral values. People have done both.
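The scale-up option mentioned above can be sketched as follows; the amounts and the tax rate are made up for illustration:

```java
public class Cents {
    public static void main(String[] args) {
        // Keeping money as an integral number of cents in a long avoids
        // binary fractions entirely; only display formatting divides by 100.
        long priceCents = 1999;                          // $19.99
        long taxCents = Math.round(priceCents * 0.07);   // 7% tax, rounded to a cent
        long totalCents = priceCents + taxCents;

        System.out.printf("$%d.%02d%n", totalCents / 100, totalCents % 100);
    }
}
```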

Yes, I'd like to be able to use a currency class that allowed mathematical operators, but so far no one has persuaded the architects of Java to permit operator overloading, and mathematical operators in Java apply only to primitives, not classes.

If your real gripe is that you cannot add and subtract money, consider COBOL. It has no such problems. It was designed for it. But here also, consider that not every country on Earth has only 2 decimal places to their units of specie.

 
Zachary Mitchell
Ranch Hand
Posts: 44

Yes, I'd like to be able to use a currency class that allowed mathematical operators, but so far no one has persuaded the architects of Java to permit operator overloading, and mathematical operators in Java apply only to primitives, not classes.


Which is my present critique. There are about 4 changes that need to be put into Java.

-Floating point arithmetic's pronormal and denormal values must have a keyword or compiler switch to turn
them off. Probably both, so as to correct code that has been compiled already, without changing it:
float and Float, double and Double.

-There needs to be a BigDecimal and BigInteger equivalent or improvement that is truly extensible in
digit place precision, limited only by available virtual machine memory (think 64 bit Java).

-There needs to be operator syntax support on all arithmetic types for their operations and uses.

+, -, *, /, %, +=, -=, *=, /=, %=, ==, !=, >=, <=.

-There needs to be transcendental function library support, on types by means of those types.
Like the method signature philosophy in the StrictMath class.
 
Campbell Ritchie
Marshal
Posts: 64496

Peter Baumarchais wrote:. . . 64 bit Java . . . doesn't have the memory limit that 32 bit java has. . . .

Side‑issue. Java® was “laid down” and “launched” in the days of 32‑bit computing, when there were still some 16‑bit machines around, but 64‑bit computing was but a distant vision. If we follow that sort of argument to its logical conclusion, would that mean you would like to use longs as the default integer type for arithmetic instead of ints, would you want a very long integer primitive (128 bits), would you want arrays to use longs as their indices and have capacities up to 2⁶³ elements, etc.?
 