I'm currently reading about algorithms and have come across Big-O and calculating the time it takes various algorithms to run. I'm getting to the point where I'm seeing problems like the one below and genuinely have no idea what I'm looking at or how to solve them. I know the answer to the problem below is O((4N³ + 27N² + 53N + 12)/6), but I have no idea why. Does anyone recommend any books, websites, or whatnot that will explain stuff like this step by step? It's really bothering me, but I just don't see the pattern or whatever will help me solve these better.

Thanks,

Brian

Determine the precise (i.e., not just the order of magnitude) Big-Oh value for the following code sample, based on the number of statement executions, as described in the Module 1 lecture. Choose the answer that best agrees with the value you have determined. Keep the following considerations in mind:

• Remember to consider each statement in compound statements separately.

• Pay close attention to the initial and end values of loop variables!

• Loops such as "for" and "while" contain an implicit "jump" instruction that causes execution to proceed from the end of the loop back to the beginning of the loop.

Code sample:
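(The code sample itself did not survive in this post. Judging from the replies below — a middle loop that starts at `i`, an inner loop that starts at `j`, and a body of `sum += j;` — a hypothetical reconstruction might look like this. This is my own guess at the shape being analyzed, not the original sample:)

```java
// Hypothetical reconstruction of the missing code sample, inferred from
// the replies: three nested loops, each starting where the one above it is.
public class LoopSample {
    public static long run(int n) {
        long sum = 0;
        for (int i = 0; i < n; i++) {
            for (int j = i; j < n; j++) {
                for (int k = j; k < n; k++) {
                    sum += j;   // the innermost statement the replies refer to
                }
            }
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(run(10));
    }
}
```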


Brian King wrote: . . . reading about algorithms and have come across Big-O and calculating the time it takes various algorithms to run. . . .

I'm not too sure if this helps but there are some resources listed here

https://coderanch.com/c/books

Edit: Grammar Fixed

Two choices: A) Learn from mistakes. B) Don't make mistakes.


Brian King wrote: . . . the answer to the below problem is O((4N³ + 27N² + 53N + 12)/6) but I have no idea why. . . .

But that isn't the correct Big-O value. The correct value is *O*(*n*³).

The outer loop runs *n* times. The middle loop runs *n*, *n* − 1, *n* − 2, *n* − 3, *n* − 4, *n* − 5, … times. The fewest number of times it can run is 0; for some peculiar reason, it has never been possible to make a loop run a negative number of times. What you have above is an arithmetic progression (= AP), and you know the sum of an arithmetic progression is (*n*₀ + *n*ₙ) × *m* ÷ 2, where *m* is the number of terms. Now, in an AP starting at 0 whose numbers differ by 1, you multiply *n* by (*n* + 1) and halve it. Note that is the same as (*n*² + *n*) ÷ 2. Now multiply that by *n* for the outer loop. For the inner loop, you are going to have the same rigmarole for each of the loops. The bit about `sum += j;` is really easy by comparison: addition runs in constant time = *O*(1).

Please always tell us where such questions come from, to avoid copyright problems and, in this case, so we know to avoid it.

Also, is there any *n* such that 4*n*³ + 27*n*² + 53*n* + 12 doesn't divide by six? No, I think it will always divide by six.

I am not convinced you will understand the Wikipedia page. I did a search, and this looks like a good link.
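A quick way to convince yourself of the arithmetic-progression sum above is to count the iterations empirically. This sketch (my own illustration, not from the thread) counts how many times a triangular loop of the shape `for (j = i; j < n; j++)` runs across all values of `i`, and compares it with n(n + 1)/2:

```java
// Empirically verify that the total iteration count of a triangular loop
// matches the arithmetic-progression sum n(n + 1)/2.
public class ApSum {
    public static long countIterations(int n) {
        long count = 0;
        for (int i = 0; i < n; i++) {
            // This loop runs n - i times: n, n - 1, n - 2, ... down to 1.
            for (int j = i; j < n; j++) {
                count++;
            }
        }
        return count;
    }

    public static long formula(int n) {
        // AP sum: (first term + last term) * number of terms / 2
        return (long) n * (n + 1) / 2;
    }

    public static void main(String[] args) {
        for (int n : new int[] {1, 5, 10, 100}) {
            System.out.println(n + ": " + countIterations(n) + " = " + formula(n));
        }
    }
}
```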


Brian King wrote: Determine the precise (i.e., not just the order of magnitude) Big-Oh value

This doesn't make sense. The very definition of Big-Oh relates only to order of magnitude.

`O((4*n^3 + 27*n^2 + 53*n + 12)/6)` doesn't actually mean anything more than `O(n^3)`. Whoever asked the question just wants a formula for the number of times the statement `sum += j;` is executed.

*The mind is a strange and wonderful thing. I'm not sure that it will ever be able to figure itself out, everything else, maybe. From the atom to the universe, everything, except itself.*

Here is part of the explanation given for arriving at O((4N³ + 27N² + 53N + 12)/6):

| Statement | # of times executed (general) | Actual # of times executed (N = 10) | Comments |
|---|---|---|---|
| `k < n;` | N(N + 1)(N + 2)/6 + N(N + 1)/2 | 275 | This conditional statement is nested at the 3rd level. Since the value of k depends on the current value of j, and the value of j depends on the current value of i, the base number of executions is given by the 3rd figurate (tetrahedral) number formula: N(N + 1)(N + 2)/6. However, since this is a loop termination condition, it must also be executed an additional time for every iteration of the enclosing j loop, so the total number of executions is N(N + 1)(N + 2)/6 + N(N + 1)/2. |
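Assuming the loop shape sketched earlier in the thread (`for (j = i; ...)` inside `for (i = 0; ...)`, with `for (k = j; k < n; ...)` innermost — an assumption, since the original sample is missing), you can count the evaluations of the `k < n` condition directly and check them against the formula. At N = 10 both come out to 275, matching the table:

```java
// Count how often the condition "k < n" is evaluated in the assumed
// triple-nested loop, and compare with N(N + 1)(N + 2)/6 + N(N + 1)/2.
public class ConditionCount {
    public static long countKChecks(int n) {
        long checks = 0;
        for (int i = 0; i < n; i++) {
            for (int j = i; j < n; j++) {
                int k = j;
                while (true) {
                    checks++;             // every evaluation of "k < n", true or false
                    if (!(k < n)) break;  // the final, failing check also counts
                    k++;
                }
            }
        }
        return checks;
    }

    public static long formula(int n) {
        long m = n;
        return m * (m + 1) * (m + 2) / 6 + m * (m + 1) / 2;
    }

    public static void main(String[] args) {
        System.out.println(countKChecks(10) + " = " + formula(10));   // both 275
    }
}
```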


https://www.khanacademy.org/computing/computer-science/algorithms wrote: Asymptotic notation

Learn how to use asymptotic analysis to describe the efficiency of an algorithm, and how to use asymptotic notation (Big O, Big-Theta, and Big-Omega) to more precisely describe the efficiency.



*"Cracking the Coding Interview: 189 Programming Questions and Solutions."*

Also, this video: https://www.youtube.com/watch?v=v4cd1O4zkGw&t=154s

It is sorta covered in the JavaRanch Style Guide.