Fred Kleinschmidt wrote: If you sort the elements, the maximum sum will be the sum of the last two elements of the sorted array.
There are only two hard things in computer science: cache invalidation, naming things, and off-by-one errors
fred rosenberger wrote:They could be the first two...
There are three kinds of actuaries: those who can count, and those who can't.
Fred Kleinschmidt wrote:
fred rosenberger wrote:They could be the first two...
If the array is sorted in ascending order...
fred rosenberger wrote:You didn't say "sorted in ascending order". You just said "If you sort the elements".
First, your term "MaxSum" is misleading - it implies you are trying to find the maximum sum of two elements. If you sort the elements, the maximum sum will be the sum of the last two elements of the sorted array.
Don't run a small loop and use nanoTime on it. The optimisations in the JVM will make the timings inaccurate. Run the loop through several tens of thousands of iterations before you do your timings, then run the same loop 1,000,000 times.
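That warm-up advice might look something like the sketch below; `maxPairSum` is an invented stand-in for whatever method is being timed, and the iteration counts are only illustrative:

```java
import java.util.Arrays;

public class WarmupBench {
    // Invented stand-in for the method under test: copies and sorts the
    // array (so that work is inside the timing) and sums the top two values.
    static int maxPairSum(int[] a) {
        int[] copy = Arrays.copyOf(a, a.length);
        Arrays.sort(copy);
        return copy[copy.length - 1] + copy[copy.length - 2];
    }

    public static void main(String[] args) {
        int[] data = {3, 9, 1, 7, 5};

        // Warm-up: tens of thousands of untimed runs so the JIT has
        // compiled and optimised the code before we measure anything.
        for (int i = 0; i < 50_000; i++) {
            maxPairSum(data);
        }

        // Timed run: time a large loop and report the average per call,
        // rather than wrapping nanoTime around a single small loop.
        final int iterations = 1_000_000;
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            maxPairSum(data);
        }
        long elapsed = System.nanoTime() - start;
        System.out.println("average ns per call: " + (double) elapsed / iterations);
    }
}
```

For anything serious, a micro-benchmark harness such as JMH handles warm-up, dead-code elimination, and statistics automatically.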
Besides what everyone has stated so far, this is also not a fair comparison. In the straw man case, it is measuring the full time. In the "improved" case, it is ignoring the time to copy and sort the array.
Please have a look at this FAQ. If you show us something different from what you are actually using, you will mislead us into giving advice which doesn't help you.
s ravi chandran wrote:. . . In my actual code, both the array copy and sort are included in the time calculation. I just moved it out here. . . .
Besides what everyone has stated so far, this is also not a fair comparison. In the straw man case, it is measuring the full time. In the "improved" case, it is ignoring the time to copy and sort the array.
A bad way to solve problems. You will spend much more time and effort on that sort of process than you would on thinking about the problem first. Thought and planning would lead you to an elegant solution.
I am not that good with planning; I always solve problems when I am actually running it. For me, it's an incremental process, solving it step by step.
Piet Souris wrote:Don't use a HashSet. And if you go the HashMap way, make a frequency table (see my earlier reply).
"Leadership is nature's way of removing morons from the productive flow" - Dogbert
Articles by Winston can be found here
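Piet's frequency-table suggestion might be sketched like this; the method name `hasPairSum` and the exact semantics (a pair must use two positions of the array, so duplicates matter) are assumptions:

```java
import java.util.HashMap;
import java.util.Map;

public class PairSumFreq {
    // True if two (distinct) positions in the array sum to target.
    // The frequency table handles duplicates: with target 14, a single 7
    // is not a match, but {7, 7} is.
    static boolean hasPairSum(int[] a, int target) {
        Map<Integer, Integer> freq = new HashMap<>();
        for (int x : a) {
            freq.merge(x, 1, Integer::sum);
        }
        for (int x : a) {
            int complement = target - x;
            Integer count = freq.get(complement);
            if (count == null) {
                continue; // complement not present at all
            }
            if (complement != x || count >= 2) {
                return true; // different value, or the same value twice
            }
        }
        return false;
    }
}
```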
We have to find whether two numbers exist in the array whose sum is equal to the MaxSum.
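Assuming the question means "do two positions in the array sum to the given value?", a common one-pass approach uses a HashSet of the values seen so far. A minimal sketch, with an invented method name:

```java
import java.util.HashSet;
import java.util.Set;

public class PairSum {
    // For each element, check whether its complement appeared earlier.
    // Because only earlier elements are consulted, a pair always uses two
    // positions, so {7, 7} with target 14 matches but {7} alone does not.
    static boolean hasPairSum(int[] a, int target) {
        Set<Integer> seen = new HashSet<>();
        for (int x : a) {
            if (seen.contains(target - x)) {
                return true; // bail out as soon as a pair is found
            }
            seen.add(x);
        }
        return false;
    }
}
```

A HashSet's contains() is O(1) on average, whereas a List's contains() is a linear scan, which is why a set is usually preferred here.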
Piet Souris wrote:Not sure about BitSets.
The array might contain Integer.MAX_VALUE; the BitSet would then be quite large, and what about Integer.MIN_VALUE?
And I'm not sure I understand your reasoning why duplicates are not an issue. Although OP's intention is indeed not quite clear, the question is whether there exist two numbers in the array such that ...
Well, let's wait and see whether OP can give a definition that removes all remaining doubts.
s ravi chandran wrote:Okay, I see your point with the set. How about a list? It would also have the contains() method, right?
MaxSum is definitely the wrong term. I haven't found a suitable name so far. Maybe I can call it MatchPairSum.
Piet Souris wrote:Well, say we have int[] array = {7}, and that we give the sum as 14. Does your question have a solution here? And when array = {7,7}, does that make any difference?
Carey Brown wrote:
Scott Shipp wrote:
Carey Brown wrote:
Exactly what I had in mind . . .
Carey Brown wrote:
Piet Souris wrote:To create a HashMap only once is irrelevant here...
Winston Gutkowski wrote:
Piet Souris wrote:To create a HashMap only once is irrelevant here...
We'll have to agree to disagree there, because it's entirely dependent on HOW the method is likely to be used. Carey's solution is beautifully elegant (cow from me too), but essentially procedural; whereas I think yours may be more "objective".
You could, for example, create a SumFinder object for an array, and then run findSum(value) on it as many times as you like with no overhead ... and perhaps even "enhance" it to deal with sums of more than 2 numbers without changing the API.
Now you could also do that with Carey's solution, but there wouldn't be much point since all its "working information" is provided as parameters.
Winston
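A sketch of that SumFinder idea; only the class and method names come from the post, and the rest (a frequency map built once in the constructor, so duplicates are handled) is an assumption:

```java
import java.util.HashMap;
import java.util.Map;

public class SumFinder {
    // Built once per array, so repeated findSum(value) calls do no setup.
    private final Map<Integer, Integer> freq = new HashMap<>();

    public SumFinder(int[] array) {
        for (int x : array) {
            freq.merge(x, 1, Integer::sum);
        }
    }

    // True if two positions in the original array sum to value.
    public boolean findSum(int value) {
        for (Map.Entry<Integer, Integer> e : freq.entrySet()) {
            int complement = value - e.getKey();
            Integer count = freq.get(complement);
            if (count == null) {
                continue;
            }
            if (complement != e.getKey() || count >= 2) {
                return true;
            }
        }
        return false;
    }
}
```

Sums of three or more numbers could later be supported behind the same findSum API without changing callers.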
Winston Gutkowski wrote:
Carey Brown wrote:
Very nice. However, it means rebuilding the Set for every call to findSum().
Carey Brown wrote:True but you wouldn't have to build the entire set (except for worst case), you'd bail out as soon as you found an answer.
Winston Gutkowski wrote:O(3n/4)
O(n/2)
Stephan van Hulst wrote:Both of these are O(n). Constant multipliers have no effect on complexity classes.