Mike Simmons wrote:No, for several reasons.
As for setting unknClass to null, that's pointless. It's a local variable, about to go out of scope, so there's no point in setting it to null - once it's out of scope, no one can access the reference anyway, and it can't prevent garbage collection. The reference effectively does not exist after that.
As for calling the clear() method, that's also pointless, for similar reasons. The list should be eligible for garbage collection right after this anyway, so why waste time clearing it first? The memory will be recycled soon regardless.
It may really depend on what happens in that "......" part of the code. Without something happening there, nothing is done with the list anyway. You might as well not build the list at all... unless something interesting happens in the "......". So, what happens there? Most likely, some code reads each element in the list and does something with it. In which case, after you're done, there's no use for the list anyway, and you could clear it, but there's no need: it will be garbage collected soon anyway.
But... what if you call some other method, and that other method stores a reference to the list somewhere? What if that method starts other threads that aren't done with the list yet? In that case, calling clear() is not just pointless, but it could actively interfere with whatever the other method was supposed to do.
So, calling clear() here is either harmless but pointless, or possibly, it's harmful. Either way, why do it?
Best practice would be to eliminate both the = null and clear() call. There's nothing else to do here. If you call some other method, let the other method decide when it's done with the data. You don't need to worry about it in this method.
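To make the point concrete, here's a minimal sketch (the method and list names are made up for illustration, not taken from the original code):

```java
import java.util.ArrayList;
import java.util.List;

public class ScopeDemo {
    static int sumExamScores() {
        List<Integer> scores = new ArrayList<>();   // hypothetical local list
        scores.add(90);
        scores.add(85);
        int total = 0;
        for (int s : scores) {
            total += s;
        }
        // No need for scores.clear() or scores = null here:
        // the reference goes out of scope when the method returns,
        // and the list becomes eligible for garbage collection on its own.
        return total;
    }

    public static void main(String[] args) {
        System.out.println(sumExamScores()); // prints 175
    }
}
```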
Campbell Ritchie wrote:Yes, but only a slight difference. Line 12 can, in theory, call the method repeatedly. It is likely however that an optimising compiler or runtime will elide the repeated calls into one call, providing there is no change to the state of the object returned.
* * * * * * * * *
Please remember that names mean things. I would have thought that a method called getExamList() will return a List and something called examMap will be a Map, which behaves differently. Maps don't usually have any way of using an index to access a pair, but Lists do.
[Addition] If you change variable names like that, you might be lucky and get a compiler error. If you really have a Map, your code will behave differently from what you expect.
Jesse Silverman wrote:
Do you really mean examMap there or was that just a typo?
You wouldn't normally see stuff like what you showed very often.
In terms of "Why do I make data private, and should I have getters and setters as part of the public interface for my class?": there is not only tons of stuff "out there, somewhere" covering this question, but it has recently been discussed extensively right here.
The first thing that came up when I searched was this recent popular thread:
Jesse Silverman wrote:Java still does not allow you to declare the type of a primitive as unsigned, nor are there UnsignedInteger and associated other wrapper types defined.
There are a great number of methods (far too many to list here, in fact) in both the Integer and Long classes, mostly added in Java 8 and 9, that make working with unsigned values much easier; please refer to:
So you can't declare a Data Type to be unsigned, but for operations, you sure can specify to treat the operands and results as unsigned.
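For example, the unsigned helper methods on Integer (added in Java 8) let you reinterpret a signed int's bits as an unsigned value for printing, widening, division, comparison, and parsing:

```java
public class UnsignedOps {
    public static void main(String[] args) {
        // -1 has all 32 bits set; reinterpreted as unsigned it is 4294967295
        int bits = -1;
        System.out.println(Integer.toUnsignedString(bits));       // "4294967295"
        System.out.println(Integer.toUnsignedLong(bits));         // 4294967295

        // Unsigned division and comparison (Java 8+)
        System.out.println(Integer.divideUnsigned(bits, 2));      // 2147483647
        System.out.println(Integer.compareUnsigned(bits, 1) > 0); // true: 4294967295 > 1

        // Parsing a value too large for a signed int
        int parsed = Integer.parseUnsignedInt("4294967295");
        System.out.println(parsed == -1);                         // true: same bit pattern
    }
}
```

Note that the variable is still a plain signed int; only the operations treat its bits as unsigned.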
I spent 11+ years working on a giant multi-lingual product (really more of a platform, actually) that had the same values represented in C/C++, SQL, C# and Java.
There was more than a little pain around this (okay, sometimes a LOT) and these methods that are now part of the standard wrapper classes could probably have helped us some.
The lack of any way to actually declare a Java VALUE unsigned would still cause us a lot of headaches even today, and I think we would still try to avoid exposing unsigned types in user APIs. Sadly, because we were working with many native and .NET APIs, that was not always possible. Hilarity sometimes ensued.
I will add that actually needing the full RANGE of unsigned long is spectacularly uncommon. I feel like 100% of the use cases where I would use the methods found in the provided links would be for compatibility with non-Java APIs, but I may be forgetting something.
When working in pure Java (a luxury I rarely have had), I would generally avoid unsigned types wherever possible; it is not a good state of affairs to have values that need to be reminded that they are indeed unsigned every time one operates on or compares them.
Campbell Ritchie wrote:Most languages support unsigned integer types. It is often possible to interconvert signed and unsigned numbers, which can cause overflow errors.
Tan Quang wrote:. . . unsigned is a data type . . . on MySQL. . . . since java doesn't have an unsigned data type, something must be done to avoid overflow.
I think you want to convert them to some datatype that supports the full range, maybe even BigInteger.
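A sketch of what that conversion can look like, assuming the values come from MySQL's INT UNSIGNED and BIGINT UNSIGNED columns (the method names here are made up for illustration):

```java
import java.math.BigInteger;

public class UnsignedStorage {
    // An INT UNSIGNED column (0 .. 4294967295) fits losslessly in a Java long.
    static long intUnsignedToLong(int raw) {
        return Integer.toUnsignedLong(raw);
    }

    // A BIGINT UNSIGNED column (0 .. 2^64-1) needs BigInteger for the full range.
    static BigInteger longUnsignedToBigInteger(long raw) {
        // Interprets the 64 raw bits as an unsigned decimal value.
        return new BigInteger(Long.toUnsignedString(raw));
    }

    public static void main(String[] args) {
        System.out.println(intUnsignedToLong(-1));          // 4294967295
        System.out.println(longUnsignedToBigInteger(-1L));  // 18446744073709551615
    }
}
```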
Jesse Silverman wrote:
Tan Quang wrote:
As for your problem, it is very practical, but I at least check that b must be greater than 0 before performing the calculation (because most values of b are modules that are usually > 0; there are only a few exceptions). As for currency, it's always 0 <= money <= Integer.MAX_VALUE (partly because I use the data type int unsigned, and also because I don't want the player's money to be negative; I don't want a purchase of goods to add money instead of subtracting it).
I also currently think that checking a is equal to Integer.MAX_VALUE here seems really unnecessary, I think it should be checked before doing the calculation.
I am going to guess that your primary implementation language is NOT Java, given the highlighted data type above?
People were offering various Java-specific solutions to your issue, probably based on the forum title they found the question in...
I will agree that it is generally good that programmers are learning to or forcing themselves to think more about arithmetic overflow.
When controlled and expected it can be fine, but the same casual approach can maim, kill or cost billions of dollars when it pops up someplace that matters enough...
Jesse Silverman wrote:Hi Tan Quang:
So we aren't "fighting straw men" here, can you state for what domain of values you expect the solution to work? i.e. which values of a and b?
I am nearly sure that you don't want to return a result of Integer.MAX_VALUE for
but have lost track of what values for a and b you are trying to return a "best product possible without overflow" for.
Going back to what Mike said, it seems that returning a value of Integer.MAX_VALUE for any of these is missing the spirit of the page you linked to.
There is currently no way for your solution to signal the difference in state of affairs between a HUGE OVERFLOW, likely to cause the reactor to melt down, the plane to crash or the patient to die, and a result that just happens to evaluate to Integer.MAX_VALUE.
The topic is endlessly fascinating, but I am not sure which use cases the solution you are moving towards is the best approach for.
Jesse Silverman wrote:The line that implements the check-for-max condition in line two is so long that it is effectively unreadable, so I'm not sure if it is right...
I'd need to study it some more and question whether there isn't an easier-to-read/easier-to-check phrasing of the same check that would work just as fast, or only undetectably slower...
You may not be planning to call these in the way I am seeing they could be called and return meaningless results, but looking at both of these:
If a is Integer.MAX_VALUE and -1.0 < b < 1.0 you are returning Integer.MAX_VALUE instead of a well-defined, meaningful and computable finite result, including the case where b == 0.0 (!)
Is that what you actually intend to do??
The main purpose is to prevent possible overflow.
Mike Simmons wrote:Well, those numbers look very close to me, and the time seems too short to mean much... Java performance times vary substantially in different circumstances, and you generally need to repeat the operation many times to have a meaningful result. And if you're doing IO with a method like System.out.println(), that will be far slower than the calculation. So I don't think those numbers are very meaningful. Try doing millions of calculations, with no IO, to get a better idea of the performance.
Moreover, you seem to think it's very important to be as accurate as possible, and also to be as fast as possible. The thing is, those two things can be in conflict, to some extent. And you haven't really given any clear idea of which one is more important. As an example, I pointed out the difference between casting to int, and using the round() method to reduce roundoff error. Well, calling the round method may be a little slower than simply casting to int - is it worth it? Personally I think it is, but that's really your decision; we don't know what you're planning to use this method for. You need to decide how much error is acceptable to you.
Generally, it would be best to first, make sure your code is as accurate as possible, ignoring the performance. Then make sure you have a good set of unit tests to verify that your code is achieving the necessary accuracy. Then write a performance test that gives you a good measure of how fast it is. Only after you have those, then try changing the code to make it faster. Otherwise you don't know when your changes are introducing unacceptable problems with accuracy.
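A crude sketch of the kind of measurement Mike describes: repeat the operation millions of times, keep IO out of the timed loop, and warm up first so the JIT has compiled the hot path. (For serious measurements a harness like JMH is the right tool; this is only an illustration, and the operation being timed here is a made-up stand-in.)

```java
public class CrudeBenchmark {
    // Runs the operation under test n times with no IO inside the loop,
    // returning an accumulated "sink" so the JIT cannot eliminate the work.
    static double timeIt(int n) {
        double sink = 0;
        for (int i = 0; i < n; i++) {
            sink += Math.round(i * 1.8);   // hypothetical operation under test
        }
        return sink;
    }

    public static void main(String[] args) {
        int n = 10_000_000;
        timeIt(n);                          // warm-up pass, not timed
        long start = System.nanoTime();
        double sink = timeIt(n);
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("elapsed ms: " + elapsedMs + " (sink=" + sink + ")");
    }
}
```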
Mike Simmons wrote:The only reason I would use float is if I'm dealing with large arrays of numbers, much like Jesse wrote above. In those cases you may indeed benefit from reduced storage space compared to double (in tradeoff with precision, of course). Even then, most of the actual calculations might as well be done with doubles, even if you're storing results back in float arrays.
You may well disagree, and that's fine. But, you would be well advised to test the performance of your code to see if it really makes any difference. Chances are very good that it won't. And, given your concerns about accuracy, using double rather than float seems a really good idea.
Given your performance concerns about something as minor as using double rather than float, I expect you really won't want to use BigDecimal, unless there's no other way to get the precision you need. OK.
I would also question why you would want to convert the result back to an int, given that it can introduce additional errors. If you were using long rather than int, it might make more sense... long can have more precision than double, but with a reduced range. Except that given that the multiplication needs to be done with double (assuming BigDecimal is not involved), that extra precision in long isn't really accessible. Your result will still have the effective precision of double, with added roundoff errors as you convert it to an integer value. Not sure what the point of that would be...
But if you really need the result to be an int, and can't afford the slowness of BigDecimal, here is what I would use:
It might be worthwhile to create an OverflowException which extends ArithmeticException, to make it easier to catch that specific exception if you need to.
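That idea could be sketched like this; multiplyChecked is a hypothetical helper built on Math.multiplyExact, one possible approach rather than the exact code Mike posted:

```java
public class CheckedMath {
    // Hypothetical exception type, as suggested above: a subclass of
    // ArithmeticException so callers can catch overflow specifically.
    static class OverflowException extends ArithmeticException {
        OverflowException(String message) { super(message); }
    }

    // Overflow-checked multiply: returns the exact product, or throws.
    static int multiplyChecked(int a, int b) {
        try {
            return Math.multiplyExact(a, b);   // throws ArithmeticException on int overflow
        } catch (ArithmeticException e) {
            throw new OverflowException("overflow: " + a + " * " + b);
        }
    }

    public static void main(String[] args) {
        System.out.println(multiplyChecked(40_000, 50_000));   // 2000000000, fits in int
        try {
            multiplyChecked(Integer.MAX_VALUE, 2);
        } catch (OverflowException e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```

The advantage over returning a sentinel like Integer.MAX_VALUE is exactly the point raised earlier: an overflow is signalled distinctly, instead of being indistinguishable from a legitimate result that happens to equal the sentinel.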
Jesse Silverman wrote:
I also researched double; it will keep b from losing accuracy (since it doesn't convert 1.8 to 1.79999...), but double uses 8 bytes, twice as much as float, so in some cases I fear the calculation will take a long time.
To my knowledge, virtually all hardware that supports Java SE does both long and double maths in hardware, which can even be faster than float or int.
I had memorized years ago that "You should just forget about float, it won't be any faster than double."
When I started working in AI for a while, that advice went out the window, because we a) often didn't need the extra precision anyway and b) had mind-bogglingly enormous data sets.
The reason float was often preferred to double was to save space for the data, not time per se.
On CPUs there is no sense in using float16 instead of float32 or float64 (not Java types), because most CPUs do not have special float16 instructions, so the time to convert back and forth is prohibitive and it mostly just isn't done.
What I didn't realize is that the GPU's that people tend to use nowadays for Serious Number Crunching support float16 natively, so that if you have absolutely tremendous data sets, and cache misses are a big part of what is slowing you down, they will happily use float16 when precision isn't needed.
Is doing "normal Java maths" on Java long and double types actually slower than using int and float?
I didn't think so and was thinking you would only do that because you needed neither the extended range nor precision and wanted to save space, but I will watch this thread for a reply by someone who knows for sure, or has recently measured.
memory: 27792 time: 0.15 exit code: 0
memory: 28596 time: 0.17 exit code: 0
Mike Simmons wrote:One other point, we should be careful about converting a double to an int by casting. Remember that this is effectively rounding down, for positive numbers... and that may not be what you want. Consider, if the double version of the result is 5.9, do you want to return 5, or 6? And, what if it's 5.9999? What if it's -5.9 or -5.9999? Decide what you think the best answer is for cases like this, and make sure your code returns the appropriate result.
Personally, I would use Math.round() in most cases like this. But maybe you need different results...
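The difference between the two is easy to demonstrate; note in particular that casting truncates toward zero (it is not a floor), while Math.round() rounds halves toward positive infinity:

```java
public class RoundingDemo {
    public static void main(String[] args) {
        // Casting truncates toward zero; Math.round rounds to nearest.
        System.out.println((int) 5.9);          // 5
        System.out.println(Math.round(5.9));    // 6
        System.out.println((int) 5.9999);       // 5
        System.out.println(Math.round(5.9999)); // 6
        System.out.println((int) -5.9);         // -5 (truncation toward zero, not floor)
        System.out.println(Math.round(-5.9));   // -6
        System.out.println(Math.round(-5.5));   // -5 (halves round toward positive infinity)
    }
}
```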