Ivan Jozsef Balazs wrote:Whether this is better than lumping together all the elements and sorting them as such, I do not know.
Martin Vajsar wrote:
Ivan Jozsef Balazs wrote:Whether this is better than lumping together all the elements and sorting them as such, I do not know.
I'd say it isn't.
Myke Enriq wrote:I am thinking about a modified merge sort. Assume the input has size N, a power of 5.
You split the input into 5 subarrays, sort those, then merge them in O(N). This gives a recurrence relation for the running time.
Anyhow, this gets an O(N log(base 5) N) complexity.
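A minimal Java sketch of this 5-way scheme (the class name, the base-case handling, and the pairwise merging strategy are my own assumptions, not from the post above):

```java
import java.util.Arrays;

public class FiveWayMergeSort {
    // Sort a[lo..hi) by splitting into 5 parts, sorting each recursively,
    // then merging the 5 sorted runs with repeated two-way merges.
    static void sort(int[] a, int lo, int hi) {
        int n = hi - lo;
        if (n < 5) {               // tiny runs: sort directly
            Arrays.sort(a, lo, hi);
            return;
        }
        int part = n / 5;
        int[] cuts = new int[6];   // boundaries of the 5 sublists
        for (int i = 0; i < 5; i++) cuts[i] = lo + i * part;
        cuts[5] = hi;              // last sublist absorbs the remainder
        for (int i = 0; i < 5; i++) sort(a, cuts[i], cuts[i + 1]);
        // merge runs one after another; still linear work per level,
        // since the number of runs is a constant (5)
        for (int i = 1; i < 5; i++) merge(a, lo, cuts[i], cuts[i + 1]);
    }

    // Standard two-way merge of sorted a[lo..mid) and a[mid..hi)
    static void merge(int[] a, int lo, int mid, int hi) {
        int[] tmp = new int[hi - lo];
        int i = lo, j = mid, k = 0;
        while (i < mid && j < hi) tmp[k++] = a[i] <= a[j] ? a[i++] : a[j++];
        while (i < mid) tmp[k++] = a[i++];
        while (j < hi) tmp[k++] = a[j++];
        System.arraycopy(tmp, 0, a, lo, tmp.length);
    }

    public static void main(String[] args) {
        int[] a = {9, 3, 7, 1, 8, 2, 6, 5, 4, 0};
        sort(a, 0, a.length);
        System.out.println(Arrays.toString(a)); // [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
    }
}
```

Note that merging the five runs sequentially costs more comparisons than a single 5-way merge with a small heap would, but it keeps the per-level work linear either way.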
Ryan McGuire wrote:but I don't think that big-O captures everything we're concerned about.
We're not just concerned about our algorithm being O(N log N), but also what the actual multiplier is.
Any other ideas? In particular, do we have any implementations to actually test?
Martin Vajsar wrote:
Ryan McGuire wrote:but I don't think that big-O captures everything we're concerned about.
Was it specified somewhere what we are actually concerned about?
We're not just concerned about our algorithm being O(N log N), but also what the actual multiplier is.
No, we aren't. The actual multiplier doesn't give any additional information. As has already been mentioned recently elsewhere in this forum, big-O just describes how the runtime of the algorithm changes when the size of its input changes. And really it is just an approximation. You cannot use it to compute the actual runtime, since lower-order terms of the complexity are simply omitted.
Any other ideas? In particular, do we have any implementations to actually test?
The answer to similar proposals around here usually is: show us your idea or implementation first.
Myke's idea actually sounds pretty good to me. If I understand it right, it's a merge sort that starts with sublists of size 5 instead of 1, so it shaves off some of the time of the original merge sort.
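This reading of the proposal, an ordinary merge sort with the recursion cut off at sublists of size 5, could look like the sketch below. Using insertion sort on the size-5 leaves is my assumption about how the smallest runs would be handled; it is not stated in the posts above.

```java
import java.util.Arrays;

public class CutoffMergeSort {
    static final int RUN = 5; // sublists this small are sorted directly

    static void sort(int[] a, int lo, int hi) {
        if (hi - lo <= RUN) {
            insertionSort(a, lo, hi); // skips the bottom levels of recursion
            return;
        }
        int mid = (lo + hi) >>> 1;
        sort(a, lo, mid);
        sort(a, mid, hi);
        merge(a, lo, mid, hi);
    }

    // Simple insertion sort for a[lo..hi); cheap on tiny runs
    static void insertionSort(int[] a, int lo, int hi) {
        for (int i = lo + 1; i < hi; i++) {
            int v = a[i], j = i - 1;
            while (j >= lo && a[j] > v) { a[j + 1] = a[j]; j--; }
            a[j + 1] = v;
        }
    }

    // Standard two-way merge of sorted a[lo..mid) and a[mid..hi)
    static void merge(int[] a, int lo, int mid, int hi) {
        int[] tmp = new int[hi - lo];
        int i = lo, j = mid, k = 0;
        while (i < mid && j < hi) tmp[k++] = a[i] <= a[j] ? a[i++] : a[j++];
        while (i < mid) tmp[k++] = a[i++];
        while (j < hi) tmp[k++] = a[j++];
        System.arraycopy(tmp, 0, a, lo, tmp.length);
    }

    public static void main(String[] args) {
        int[] a = {5, 1, 4, 2, 8, 0, 3, 9, 7, 6};
        sort(a, 0, a.length);
        System.out.println(Arrays.toString(a)); // [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
    }
}
```

This cutoff trick is a well-known practical optimization of merge sort; it changes the constant factor, not the O(N log N) growth.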
Ryan McGuire wrote:
Myke's modified merge sort idea from yesterday is one candidate. Any other ideas? In particular, do we have any implementations to actually test?
Martin Vajsar wrote:
Myke's idea actually sounds pretty good to me. If I understand it right, it's a merge sort that starts with sublists of size 5 instead of 1, so it shaves off some of the time of the original merge sort.
Ryan McGuire wrote:
It just seemed that in this case we were going off on a tangent about how log(base 5) N is proportional to log(base 10) N, and I wanted us to get back to the topic of actual algorithms that might be faster than the simple-minded one, without dismissing candidates just because they execute in O(N log N) time.
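The proportionality mentioned here is just the change-of-base identity log(base 5) N = log(base 2) N / log(base 2) 5, so any two log bases differ only by a constant factor. A quick numerical check (the class name is illustrative):

```java
public class LogBases {
    public static void main(String[] args) {
        // log_5(N) / log_2(N) = log(2)/log(5) ~ 0.4307 for every N,
        // which is why the base disappears inside O(N log N)
        for (int n : new int[]{1_000, 1_000_000, 1_000_000_000}) {
            double log2 = Math.log(n) / Math.log(2);
            double log5 = Math.log(n) / Math.log(5);
            System.out.printf("N=%d  log2=%.2f  log5=%.2f  ratio=%.4f%n",
                              n, log2, log5, log5 / log2);
        }
    }
}
```

The ratio column prints the same constant for every N, which is exactly the point being debated: the base of the logarithm affects only the multiplier.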
Henry Wong wrote:
There is a reason why constants are ignored, and a reason why lower-order terms of the expressions are ignored too: the number of elements is taken to approach infinity.
...
If you are arguing that N should not approach infinity, then you have to consider the extra passes, and that it may no longer be O(N log N) due to the large number of operations at each recursive step.
Myke Enriq wrote:Now I wonder, what if the starting subarrays of size 5 are merged one by one in O(n) into starting subarrays of size 10?
Myke Enriq wrote:Guys, the O(n) notation is a CURSE.
I do not care, and have never cared, EVER, if an algorithm is O(whatever). I only care about the costs, as 2 algorithms with the same O() could have very different costs.
Judging algorithms in terms of O() is a huge handicap to any programmer. The only thing that ever matters is the actual cost, not how that cost behaves when the input data goes to infinity (lol).
Having said that, it makes a huge difference to me whether an algorithm has the cost N*log(base 2)N or the cost N*log(base 5)N.