Nice, I had never heard about Big O before; I am still reading about it right now.

Your logic might be valid; I could probably confirm later. But you would need to use java.math.BigDecimal for an infinite (or OutOfMemoryError) representation.

Your code will stop as soon as n is smaller than 10, which will be after at most (Integer.MAX_VALUE / 10) loops, so pretty quickly.
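The OP's code isn't quoted in this reply, but from the discussion it is presumably a loop along these lines. This is a sketch of the digit-counting idea being analyzed, not necessarily the original code:

```java
public class DigitCount {
    // Counts the decimal digits of n by repeatedly dividing by 10.
    // Each division shrinks the problem by a constant factor, so the
    // loop body runs about log10(n) times.
    static int countDigits(int n) {
        int digits = 1;
        while (n >= 10) {
            n /= 10;
            digits++;
        }
        return digits;
    }

    public static void main(String[] args) {
        System.out.println(countDigits(7));                  // 1
        System.out.println(countDigits(12345));              // 5
        System.out.println(countDigits(Integer.MAX_VALUE));  // 10
    }
}
```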

I hadn't been here for fifteen years

Conrado Sanchez wrote: Is the Big O expression for the code below equal to O(number of digits in the number n)? If so, how?

Well, because of what I stated before, if you used BigDecimal then the Big O notation would be different. Since you are using int, the notation you stated is probably correct.



Wow, what confusion. I'm not quite sure how `BigDecimal` fits into this question.

Big O notation is an **indication** of the **upper bound** of the running time of an algorithm. The part within the parentheses is a function, in which the size of the problem is substituted for `n`. You may multiply by and add to this function using any scalar.

Essentially, `O(n)` says: "I can think of a function `f(n) = k*n + c`, where `k` and `c` are constants, which expresses the **maximum** running time of the algorithm for `n` elements."

Because multiplication and addition are already implied, it makes no sense to use Big O notations like `O(4*n)` or `O(n+2)`. Instead, there are a couple of common classes of algorithms:

`O(1)`: Constant time - how fast the algorithm runs is unrelated to the size of the problem. For instance, returning the size of a collection happens in constant time, if the collection uses an internal counter to keep track of the size.
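As an illustration of that internal-counter idea, here is a minimal, hypothetical collection wrapper (not a real library type) whose `size()` returns in constant time no matter how many elements it holds:

```java
import java.util.LinkedList;

public class CountedBag<T> {
    private final LinkedList<T> items = new LinkedList<>();
    private int size = 0; // internal counter, updated on every mutation

    public void add(T item) {
        items.add(item);
        size++;
    }

    // O(1): just returns the counter, no traversal of the elements.
    public int size() {
        return size;
    }

    public static void main(String[] args) {
        CountedBag<String> bag = new CountedBag<>();
        bag.add("a");
        bag.add("b");
        System.out.println(bag.size()); // 2
    }
}
```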

`O(log n)`: Logarithmic time - the algorithm can be expressed as a path through an execution tree. For instance, searching for an element in a balanced binary search tree is in this class.
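A concrete logarithmic-time example from the standard library: binary search discards half of the remaining range on every comparison, so a sorted array of n elements is searched in about log₂(n) steps:

```java
import java.util.Arrays;

public class BinarySearchDemo {
    public static void main(String[] args) {
        int[] sorted = {2, 3, 5, 7, 11, 13, 17, 19};

        // Arrays.binarySearch runs in O(log n) on a sorted array:
        // each comparison halves the remaining search range.
        int index = Arrays.binarySearch(sorted, 11);
        System.out.println(index); // 4
    }
}
```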

`O(n)`: Linear time - the algorithm processes **some elements** or **each element** at most once.

`O(n log n)`: Quasilinear time - slower than linear, but faster than polynomial time. A lot of sorting algorithms are in this class.

`O(n^2)`: Quadratic time - usually when every element of the problem is paired with every element.

`O(n^k)`: Polynomial time - same as quadratic time, but for higher exponents.

`O(k^n)`: Exponential time - as the problem grows, the running time increases exponentially. Polynomial problems are considered "easy", while exponential problems are considered "hard". Solving the traveling salesman problem by brute force takes exponential time. In general, brute-force algorithms often fall into this category.
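To make the quadratic case from the list above concrete, here is the classic O(n²) shape: a loop nest that pairs every element with every element:

```java
public class PairCount {
    // Counts ordered pairs (i, j) with i != j: the inner loop runs n times
    // for each of the n outer iterations, giving O(n^2) total work.
    static int countPairs(int n) {
        int pairs = 0;
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {
                if (i != j) {
                    pairs++;
                }
            }
        }
        return pairs; // n * (n - 1)
    }

    public static void main(String[] args) {
        System.out.println(countPairs(4)); // 12
    }
}
```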

Big O notation is **not** an indication of worst running time. It's an indication of the **upper bound**. You may express the running time for the worst case scenario in Big O, but you may do so for the best case scenario as well.

For instance, sorting a deck of cards by throwing all cards in the air, and picking them up and hoping that they're sorted, has a best case running time in `O(1)`, if you get it done on the first try. However, you may never finish, so it has a worst case running time in `O(∞)`.

*The mind is a strange and wonderful thing. I'm not sure that it will ever be able to figure itself out, everything else, maybe. From the atom to the universe, everything, except itself.*

Your expression takes a time proportional to the number of digits in the number, and that depends on the logarithm of the number. So you say it runs in logarithmic time, or O(log *n*).

Stephan van Hulst wrote: Wow, what confusion. I'm not quite sure how `BigDecimal` fits into this question.

It doesn't, but it did make a good example of a different Big O notation.

I have known about Big O notation for a long time, I just didn't know its name ;-)

I usually pick the best function for the job and benchmark it against its rivals when in doubt. Of course, sometimes looking at the Big O notation can give you a hint. It's that simple.

Thanks Stephan,


Stephan van Hulst wrote: . . . For instance, sorting a deck of cards by throwing all cards in the air, and picking them up and hoping that they're sorted, has a best case running time in O(1), if you get it done on the first try. However, you may never finish, so it has a worst case running time in O(∞).

Even Bogosort will finish in a finite time. I think it runs in O(*n*!) however. The number of attempts depends on the factorial of the number of cards.
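For reference, a sketch of the Bogosort being discussed (shuffle until sorted). Its expected number of attempts grows with n!, its best case is a single sortedness check, and in the worst case it never terminates:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

public class Bogosort {
    static boolean isSorted(List<Integer> list) {
        for (int i = 1; i < list.size(); i++) {
            if (list.get(i - 1) > list.get(i)) {
                return false;
            }
        }
        return true;
    }

    // Keep shuffling until the list happens to be sorted.
    // Expected running time is O(n * n!); worst case is unbounded.
    static void bogosort(List<Integer> list) {
        while (!isSorted(list)) {
            Collections.shuffle(list);
        }
    }

    public static void main(String[] args) {
        List<Integer> cards = new ArrayList<>(Arrays.asList(3, 1, 2));
        bogosort(cards);
        System.out.println(cards); // [1, 2, 3]
    }
}
```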

Quiz question for Stephan and the O notation experts in here: what is the O notation for the OP function if BigDecimal were used, pretending you have infinite memory to store the BigDecimal?

In other words, what is the O notation for a function that will never return as n tends to zero?

Thanks in advance,



Here, at each step you divide the problem by a constant, i.e., the problem gets exponentially smaller. This is equivalent to logarithmic time: `O(log n)`.


Campbell Ritchie wrote: Even Bogosort will finish in a finite time. I think it runs in O(*n*!) however. The number of attempts depends on the factorial of the number of cards.

On average, yes. But the worst case is infinite.


A.J. Côté wrote: Quiz question for Stephan and the O notation experts in here:

What is the O notation for the OP function if BigDecimal were used, pretending you have infinite memory to store the BigDecimal?

In other words, what is the O notation for a function that will never return as n tends to zero?

Thanks in advance,

This is a contrived example, but you already answered it. If the function never returns, it's in `O(∞)`. `f(n) = k*∞ + c` is a valid class of functions, so it can be used to describe the upper bound of an algorithm.

I looked it up, and this is just referred to as "unbounded performance".


Stephan van Hulst wrote: Big O notation is **not** an indication of worst running time. It's an indication of the **upper bound**...

Great post, but to clarify one important fact: all these notations and "bounds" are indications of how *scalable* a solution is.

@Conrado: And these indications are __very__ broad. They don't include ANY indication, for example, of how long a particular internal process (such as a comparison) might take; and they are really only applicable to functions that take multiple "units" of input.

So, when you re-read Stephan's excellent post, be sure to stick "broadly speaking" in front of all his explanations.

One classic example is O(log n): all it says is that the time is proportional to the log of n to "some base". And for a mathematician that might well be fine, but for someone trying to work out how long (or how much space) a function might *actually* take, there is a big difference between log**₂** n, log**₁₀** n, and log**₃₂** n (and there are many algorithms that work on all those bases) when it comes to *realistic* limits of time/volume scalability.
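The point about bases can be made concrete: all logarithms differ only by a constant factor (log_b n = ln n / ln b), which Big O ignores, but the factor is very real when estimating actual cost. A quick check, assuming n = 1,000,000:

```java
public class LogBases {
    public static void main(String[] args) {
        double n = 1_000_000;

        // Java has no general log-base function, so use the
        // change-of-base identity: log_b(n) = ln(n) / ln(b).
        double log2  = Math.log(n) / Math.log(2);
        double log10 = Math.log10(n);
        double log32 = Math.log(n) / Math.log(32);

        System.out.printf("log2(n)  = %.2f%n", log2);   // ~19.93
        System.out.printf("log10(n) = %.2f%n", log10);  // 6.00
        System.out.printf("log32(n) = %.2f%n", log32);  // ~3.99
    }
}
```

All three are O(log n), yet the constant factor between log₂ and log₃₂ is a 5x difference in actual steps.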

So another way to think of them (if you're an 'optimization' fanatic) might be:

HIH

Winston

"Leadership is nature's way of removing morons from the productive flow" - Dogbert

Articles by Winston can be found here


`n log n` grows faster than `n`.

An example: retrieving an element from a balanced binary search tree is an `O(log n)` operation, iterating the elements of a tree is an `O(n)` operation, and getting every element without an iterator is an `O(n log n)` operation.
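In the Java standard library, `java.util.TreeMap` is a balanced (red-black) tree, so it shows both costs directly: `get` is O(log n), while iterating all entries is O(n):

```java
import java.util.Map;
import java.util.TreeMap;

public class TreeDemo {
    public static void main(String[] args) {
        TreeMap<Integer, String> tree = new TreeMap<>();
        tree.put(2, "two");
        tree.put(1, "one");
        tree.put(3, "three");

        // O(log n): walks a single root-to-leaf path of the balanced tree.
        System.out.println(tree.get(2)); // two

        // O(n): the iterator visits each entry exactly once, in key order.
        for (Map.Entry<Integer, String> e : tree.entrySet()) {
            System.out.println(e.getKey() + " -> " + e.getValue());
        }
    }
}
```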

Don't get me started about those stupid light bulbs. |