John Lockheart

Ranch Hand

Posts: 115

posted 10 years ago

Give the order of growth: (what does that mean?)

1. 10n + 3n2 + log n

2. n(n3 + 8)

Give the worst case order of growth: (what does that mean?)

3.

for (int i = 0; i < a.length; i++) {
    for (int j = i - 1; j > 1; j = j / 2) {
        // Do something with a[i], a[j]
    }
}

4.

for (int i = 0; i < a.length; i = i + 2) {
    for (int j = 0; j < 5; j++) {
        for (int k = i; k > i / 2; k--) {
            // Do something with a[i], a[j], a[k]
        }
    }
}


pete stein

Bartender

Posts: 1561

Christy John

Greenhorn

Posts: 7

posted 10 years ago

Big O notation is a theoretical measure of the complexity of an algorithm. It usually depicts the amount of resources needed for its completion, either in terms of memory or computer time needed.

The n in the equation stands for the input size, which determines the complexity.

So an equation containing only n is said to have a complexity of n, denoted O(n); n^2 is O(n^2), and log n is O(log n).

So O(n^2) is more complex than O(n).

You need to study analysis and design of algorithms to understand this more deeply. However, I hope my reply can help you for now.

Thank you.


posted 10 years ago


Originally posted by Christy John:

Big O notation is a theoretical measure of the complexity of an algorithm. It usually depicts the amount of resources needed for its completion, either in terms of memory or computer time needed.

The n in the equation depicts the units that determine the complexity.

So an equation containing only n is said to have a complexity of n or denoted as O(n), n square has O(n square), log n has O(log n).

So O(n square ) is more complex than O(n).

Christy, I am not sure if I agree with this.

I always thought that the "big O", or the "order of", is the measure of the growth in time to complete the task in relation to the number of items in the task.

For example, if I have to run a task with 1000 items, and it takes 5 minutes to do it, then running the task with 2000 items, should take 10 minutes to do it, if it has an order of N.

You can have a really complex algorithm, with an order of N. And you can have a really simple algorithm, with an order of N squared.
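Henry's doubling example can be checked by counting steps instead of wall-clock minutes. A minimal sketch of that idea (the class and method names are mine, not from the thread):

```java
public class LinearGrowth {
    // Count the steps a simple linear scan performs for n items.
    static long steps(int n) {
        long count = 0;
        for (int i = 0; i < n; i++) {
            count++; // one unit of work per item
        }
        return count;
    }

    public static void main(String[] args) {
        // Doubling n exactly doubles the step count for an O(n) loop.
        System.out.println(steps(1000)); // 1000
        System.out.println(steps(2000)); // 2000
    }
}
```

For an O(n) loop the step count scales exactly with n, which is the 5-minutes-to-10-minutes intuition with the hardware factored out.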

Henry

Christy John

Greenhorn

Posts: 7

David McCombs

Ranch Hand

Posts: 212

posted 10 years ago

Big O measures the number of steps in an algorithm; in many cases that is the number of comparisons. It is completely neutral in regards to hardware, language, operating system, etc.

I don't think you can say that doubling the number of items doubles the time to complete it. For one thing, the operating environment is not considered in Big-O. There are many variables in program performance that are outside the control of the program being tested. Also, I don't think many recursive algorithms double in time as the number doubles - recursive Fibonacci comes to mind.
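David's Fibonacci point is easy to demonstrate by counting calls in the naive recursive version (a sketch of mine, not code from the thread):

```java
public class FibCalls {
    static long calls;

    // Naive recursive Fibonacci; every invocation is counted.
    static long fib(int n) {
        calls++;
        if (n < 2) return n;
        return fib(n - 1) + fib(n - 2);
    }

    // Reset the counter and report how many calls fib(n) triggers.
    static long callsFor(int n) {
        calls = 0;
        fib(n);
        return calls;
    }

    public static void main(String[] args) {
        // Doubling n far more than doubles the work: growth is exponential.
        System.out.println(callsFor(10)); // 177
        System.out.println(callsFor(20)); // 21891
    }
}
```

Going from n = 10 to n = 20 multiplies the call count by roughly 124, not by 2, so this algorithm is nowhere near O(n).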


"Should array indices start at 0 or 1? My compromise of 0.5 was rejected without, I thought, proper consideration."- Stan Kelly-Bootle

Ulf Dittmer

Rancher

Posts: 42972

73

posted 10 years ago


Originally posted by Henry Wong:

... the "big O", or the "order of", is the measure of the growth in time to complete the task in relation to the number of items in the task.

Most often it's used for time complexity, but as Christy notes, sometimes it's applied to memory complexity.

John Lockheart

Ranch Hand

Posts: 115

posted 10 years ago

Thanks, but I still have no idea how to answer these questions, even after all my reading on the subject. I pulled them straight from a textbook; unfortunately, the textbook doesn't contain the answers. I am trying to answer those questions in preparation for an upcoming test. I might understand better if I saw the solution to the problem; then I can figure out how I was supposed to get there.

John Lockheart

Ranch Hand

Posts: 115

Jim Yingst

Wanderer

Sheriff

Posts: 18671

posted 10 years ago

[written without seeing the last post]


John - in math, have you studied sequences and series? That is helpful background for this discussion.

Generally, with big-O problems, the process to solve the problem formally would be:

1. get a formula for the amount of time required to complete a given algorithm or code snippet. Generally, this means a formula for the number of times that the innermost loop of the code will execute.

2. simplify that formula by removing all but the most significant term, and remove any constant multiplier as well. So a formula like 10n^2 + 7n + 113 would simplify to 10n^2 and then to n^2 (where n^2 means n squared - not the Java meaning).

Knowing that the final formula will be simplified allows you to take many shortcuts in deriving the formula. E.g. you didn't need to worry about the 10, or the 7n, or the 113. You could skip those details in the first place if you know what you're doing. With practice, big-O problems can usually be solved with some quick mathematical intuition rather than formal proof. But this is difficult to develop except through practice.
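One way to see why the 10, the 7n, and the 113 can be ignored: the whole formula stays within a constant factor of the n^2 term alone as n grows. A quick sketch (my own illustration, not from the book):

```java
public class DominantTerm {
    // Compare 10n^2 + 7n + 113 against its dominant term n^2.
    static double ratio(long n) {
        double full = 10.0 * n * n + 7.0 * n + 113.0;
        return full / (n * (double) n);
    }

    public static void main(String[] args) {
        // The ratio settles near the constant 10, so only n^2 drives growth.
        System.out.println(ratio(10));      // 11.83
        System.out.println(ratio(1000000)); // ~10.000007
    }
}
```

Since the ratio approaches a constant, the lower-order terms and the multiplier change nothing about the *order* of growth, which is why the formula simplifies to O(n^2).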

I suspect that questions 1 and 2 were *supposed* to relate only to the second step above - how to simplify the formula. That is, if we assume that 10n + 3n2 + log n and n(n3 + 8) are formulas for the time to execute, rather than code to be executed, then you just need to simplify them. However this assumption was not stated in the problem. Also I'm guessing 3n2 is supposed to be 3n^2, is that right?

[ March 28, 2007: Message edited by: Jim Yingst ]

"I'm not back." - Bill Harding, *Twister*

Jim Yingst

Wanderer

Sheriff

Posts: 18671

posted 10 years ago

Ah, good, looks like you actually do have some idea how to do these.

Your answers look correct except for the third one. Specifically, look more carefully at the inner loop. How many times will it execute if i = 10? What if i = 100? Or i = 1000? How can you generalize this behavior?

[ March 28, 2007: Message edited by: Jim Yingst ]


"I'm not back." - Bill Harding, *Twister*

Jim Yingst

Wanderer

Sheriff

Posts: 18671

posted 10 years ago


**[David McCombs]: I don't think you can say that doubling the number of items doubles the time to complete it. For one thing, the operating environment is not considered in Big-O. There are many variables in program performance that are outside the control of the program being tested.**

I think there's an implied "approximately" and/or "for large values of N" in most big-O discussions. If algorithmic complexity is not the most significant factor in determining execution time, then it makes little sense to worry about the big-O performance of the algorithm.

**[David McCombs]: Also, I don't think many recursive algorithms double in time as the number doubles-recursive Fibonacci comes to mind.**

Note that Henry did say "if it has an order of N."

"I'm not back." - Bill Harding, *Twister*

John Lockheart

Ranch Hand

Posts: 115

Campbell Ritchie

Sheriff

Posts: 55330

157

Jim Yingst

Wanderer

Sheriff

Posts: 18671

posted 10 years ago

It doesn't execute in n/2. It may help to step through the code one line at a time to see what happens on each iteration. Or write a test program if you prefer, and print the value of j each time through the inner loop. It's pretty easy to establish that the behavior is not n/2 (or anything directly proportional to n). Figuring out what it *is* may depend on your mathematical background.

"I'm not back." - Bill Harding, *Twister*
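Jim's experiment can be run directly. A small sketch (my own, not from the thread) that collects the j values the inner loop from question 3 visits for a single i:

```java
public class HalvingSteps {
    // Collect the j values question 3's inner loop visits for a given i.
    static java.util.List<Integer> jValues(int i) {
        java.util.List<Integer> js = new java.util.ArrayList<>();
        for (int j = i - 1; j > 1; j = j / 2) {
            js.add(j); // j halves on every pass
        }
        return js;
    }

    public static void main(String[] args) {
        // For i = 100, only six values appear - not anywhere near i/2 of them.
        System.out.println(jValues(100)); // [99, 49, 24, 12, 6, 3]
    }
}
```

Because j is halved each pass, the number of values printed grows very slowly as i grows, which is why the inner loop's cost is not proportional to n.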

John Lockheart

Ranch Hand

Posts: 115

posted 10 years ago

It has to execute in proportion to n though. It doesn't start off at j = 0, 1, 90, 100; it starts off at j = i - 1, so it's dependent on i, which is dependent on the size of the array (meaning both execute n times). Now the equation of n for the inner loop might not be n/2, but it should still be an O(n^2) algorithm.

Jim Yingst

Wanderer

Sheriff

Sheriff

Posts: 18671

posted 10 years ago


**[John]: it has to execute in proportion to n though. It doesn't start off at j = 0, 1, 90, 100. it starts off as j = i-1, so it's dependentant of i, which is dependendent on the size of an array**

All true so far...

**[John] (meaning both execute n times).**

No. Merely being *dependent* on n does not automatically mean executing n times. Sure, that's the most common case (for a loop from 0 to n-1) but it's not what we have here.

**[John]: Now the equation of n for the inner loop might not be n/2. But it should still be an O(n^2) algorithm.**

And yet, it's not. Again, I recommend that you write some code to see what happens. It doesn't matter what you think the answer *should* be - you need to discover what it *is*.

Note: it will be simpler if you focus on the inner loop only, at first:

How many times would this execute for i = 10? i = 100? Or i = 1000? Don't guess. Find out.
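In that spirit, here is a sketch (mine, not from the thread) that counts the inner-loop passes for exactly the values of i Jim suggests:

```java
public class InnerLoopCount {
    // Count how many times the body of question 3's inner loop runs for one i.
    static int count(int i) {
        int c = 0;
        for (int j = i - 1; j > 1; j = j / 2) {
            c++;
        }
        return c;
    }

    public static void main(String[] args) {
        // A tenfold increase in i adds roughly a constant number of passes:
        // the count grows like log2(i), not like i/2.
        System.out.println(count(10));   // 3
        System.out.println(count(100));  // 6
        System.out.println(count(1000)); // 9
    }
}
```

The counts 3, 6, 9 for i = 10, 100, 1000 grow by a constant per tenfold increase in i - the signature of logarithmic growth. With the outer loop running n times, that makes the nested pair O(n log n), not O(n^2).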

"I'm not back." - Bill Harding, *Twister*