The Big O expression

 
Aron Silvester
Ranch Hand
Posts: 63
Is the Big O expression for the code below equal to O(number of digits in the number n)? If so, how?


Analyze the code below for integer n:
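The original snippet did not survive; judging from the replies (repeated division by 10 until n is down to a single digit), the loop was presumably something like the following hypothetical reconstruction (class and method names are invented for illustration):

```java
// Hypothetical reconstruction of the loop under discussion:
// repeatedly divide n by 10 until it is a single digit.
public class DigitLoop {
    public static int countIterations(int n) {
        int iterations = 0;
        while (n >= 10) {   // stops once n is a single digit
            n = n / 10;
            iterations++;
        }
        return iterations;
    }
}
```

Each pass strips one decimal digit, so the iteration count tracks the number of digits in n.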


 
Tushar Goel
Ranch Hand
Posts: 934
4
Big O notation is a worst-case representation. We generally consider large inputs, not small ones. So for larger inputs (millions, billions, or more), the division by 10 doesn't make any difference; hence for larger inputs this loop takes O(n) time to execute.
 
A.J. Côté
Ranch Hand
Posts: 417
Java
Hello,

Nice, I had never read about Big O before; I am still reading up on it right now.

Your logic might be valid, I could probably confirm later, but you would need to use java.math.BigDecimal for an infinite (or OutOfMemoryError) representation.

Your code will stop as soon as n is smaller than 10 which will be after (Integer.MAX_VALUE/10) loops, so pretty quickly.


 
Tushar Goel
Ranch Hand
Posts: 934
4
A.J. Côté wrote: java.math.BigDecimal for an infinite (or OutOfMemoryError) representation.

Even BigDecimal can't represent anything infinitely.
 
A.J. Côté
Ranch Hand
Posts: 417
Java
Conrado Sanchez wrote: Is the Big O expression for the code below equal to O(number of digits in the number n)? If so, how?



Well, because of what I stated before, if you used BigDecimal then the Big O notation would be different. Since you are using int, the notation you stated is probably correct.
 
Stephan van Hulst
Saloon Keeper
Posts: 7807
142
Wow, what confusion. I'm not quite sure how BigDecimal fits into this question.

Big O notation is an indication of the upper bound of the running time of an algorithm. The part within the parentheses is a function, in which the size of the problem is substituted for n. You may multiply this function by any scalar and add any constant to it.

Essentially O(n) says: "I can think of a function f(n) = k*n + c, where k and c are constants, which expresses the maximum running time of the algorithm for n elements."

Because multiplication and addition are already implied, it makes no sense to use Big O notations like O(4*n) or O(n+2). Instead, there are a couple of common classes of algorithms:

O(1):       Constant time - how fast the algorithm runs is unrelated to the size of the problem. For instance, returning the size of a collection happens in constant time, if the collection uses an internal counter to keep track of the size.
O(log n):   Logarithmic time - the algorithm can be expressed as a path through an execution tree. For instance, searching for an element in a balanced binary search tree is in this class.
O(n):       Linear time - the algorithm processes some elements or each element at most once.
O(n log n): Quasilinear time - slower than linear, but faster than polynomial time. A lot of sorting algorithms are in this class.
O(n^2):     Quadratic time - usually when every element of the problem is paired with every element.
O(n^k):     Polynomial time - same as quadratic time, but for higher exponents.
O(k^n):     Exponential time - as the problem increases, the running time increases exponentially. Polynomial problems are considered "easy", while exponential problems are considered "hard". Solving the traveling salesman problem by brute force takes exponential time. In general, brute-force algorithms often fall into this category.
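To make these classes concrete, here are a few toy Java methods (illustrative sketches, not taken from the original post) showing constant, linear, and quadratic time:

```java
import java.util.List;

public class ComplexityExamples {
    // O(1): constant time - just reads the stored size field.
    public static int size(List<Integer> list) {
        return list.size();
    }

    // O(n): linear time - visits every element exactly once.
    public static int sum(List<Integer> list) {
        int total = 0;
        for (int x : list) {
            total += x;
        }
        return total;
    }

    // O(n^2): quadratic time - pairs every element with every element.
    public static int countEqualPairs(List<Integer> list) {
        int pairs = 0;
        for (int a : list) {
            for (int b : list) {
                if (a == b) {
                    pairs++;
                }
            }
        }
        return pairs;
    }
}
```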

Big O notation is not an indication of worst running time. It's an indication of the upper bound. You may express the running time for the worst case scenario in Big O, but you may do so for the best case scenario as well.

For instance, sorting a deck of cards by throwing all cards in the air, and picking them up and hoping that they're sorted, has a best case running time in O(1), if you get it done on the first try. However, you may never finish, so it has a worst case running time in O(∞).
 
Campbell Ritchie
Marshal
Posts: 55711
163
The O (I read somewhere, possibly Wikipedia) is short for order. So an algorithm which runs in a time proportional to the square of the number of items runs in quadratic order, and you write O(n²).
Your expression takes a time proportional to the number of digits in the number, and that depends on the logarithm of the number. So you say it runs in logarithmic time, or O(log n).
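That relationship is easy to verify numerically: the digit count of a positive int is floor(log₁₀ n) + 1, which is exactly the number of passes a divide-by-10 loop makes. A quick sketch (class and method names are mine):

```java
public class DigitsVsLog {
    // Count decimal digits by repeated division: one loop pass per digit.
    public static int digitsByDivision(int n) {
        int digits = 0;
        while (n > 0) {
            n /= 10;
            digits++;
        }
        return digits;
    }

    // Same value obtained directly from the logarithm.
    public static int digitsByLog(int n) {
        return (int) Math.floor(Math.log10(n)) + 1;
    }
}
```

Both methods agree for any positive int, which is why the loop's running time is O(log n).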
 
A.J. Côté
Ranch Hand
Posts: 417
Java
Stephan van Hulst wrote:Wow, what confusion. I'm not quite sure how BigDecimal fits into this question.


It doesn't, but it did make a good example of a different Big O notation.

I had known about the concept for a long time; I just didn't know its name ;-)

I usually pick the best function for the job and benchmark it against its rivals when in doubt. Of course, sometimes looking at the Big O notation can give you a hint. It's quite simple.

Thanks Stephan,
 
Campbell Ritchie
Marshal
Posts: 55711
163
Stephan van Hulst wrote: . . . For instance, sorting a deck of cards by throwing all cards in the air, and picking them up and hoping that they're sorted, has a best case running time in O(1), if you get it done on the first try. However, you may never finish, so it has a worst case running time in O(∞).
Even bogosort will finish in a finite time. I think it runs in O(n!), however. The number of attempts depends on the factorial of the number of cards.
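For reference, bogosort as discussed here is just "shuffle until sorted"; a minimal sketch (class and method names are mine):

```java
import java.util.Collections;
import java.util.List;

public class Bogosort {
    public static boolean isSorted(List<Integer> list) {
        for (int i = 1; i < list.size(); i++) {
            if (list.get(i - 1) > list.get(i)) {
                return false;
            }
        }
        return true;
    }

    // Average case: about n! shuffles before a sorted order appears.
    // The worst case never terminates, hence the O(∞) discussion above.
    public static void sort(List<Integer> list) {
        while (!isSorted(list)) {
            Collections.shuffle(list);
        }
    }
}
```

Each attempt also costs an O(n) sortedness check, so the expected total work is O(n · n!).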
 
A.J. Côté
Ranch Hand
Posts: 417
Java
Quiz question for Stephan and the O notation experts in here:

What is the O notation for the OP function if BigDecimal were used, pretending you have infinite memory to store the BigDecimal?

In other words, what is the O notation for a function that will never return as n tends to zero?

Thanks in advance,

 
Stephan van Hulst
Saloon Keeper
Posts: 7807
142

Here, at each step you divide the problem by a constant, i.e., the problem gets exponentially smaller. This is equivalent to logarithmic time: O(log n).
 
Stephan van Hulst
Saloon Keeper
Posts: 7807
142
Campbell Ritchie wrote: Even bogosort will finish in a finite time. I think it runs in O(n!), however. The number of attempts depends on the factorial of the number of cards.

On average, yes. But the worst case is infinite.
 
Stephan van Hulst
Saloon Keeper
Posts: 7807
142
A.J. Côté wrote:Quiz question for Stephan and the O notation experts in here:

What is the O notation for the OP function if BigDecimal were used, pretending you have infinite memory to store the BigDecimal?

In other words, what is the O notation for a function that will never return as n tends to zero?

Thanks in advance,



This is a contrived example, but you already answered it. If the function never returns, it's in O(∞).
 
A.J. Côté
Ranch Hand
Posts: 417
Java
Stephan van Hulst wrote:

This is a contrived example, but you already answered it. If the function never returns, it's in O(∞).


Haa... O(∞) makes sense. Thanks!
 
Campbell Ritchie
Marshal
Posts: 55711
163
Does O(∞) actually exist? Well, outside the MD forum, at least?
 
Stephan van Hulst
Saloon Keeper
Posts: 7807
142
I don't see why not. f(n) = k*∞+c is a valid class of functions, so it can be used to describe the upper bound of an algorithm.

I looked it up, and this is just referred to as "unbounded performance".
 
Campbell Ritchie
Marshal
Posts: 55711
163
Unbounded? Bad term; that suggests the performance is fast without limits. Unbounded complexity might be a better term.
 
Stephan van Hulst
Saloon Keeper
Posts: 7807
142
I agree, and what I found was not official or conclusive. Probably an oversight on the author's part.
 
Campbell Ritchie
Marshal
Posts: 55711
163
Agreed that, if you work on the worst case, bogosort has unbounded complexity.
 
Les Morgan
Rancher
Posts: 768
19
C++ Java MySQL Database Netbeans IDE Oracle Tomcat Server
You have a number that is divided by its base; that produces log(n) behavior in the loop. Big O notation, as already mentioned, shows this to be O(log n).
 
Winston Gutkowski
Bartender
Posts: 10573
65
Eclipse IDE Hibernate Ubuntu
Stephan van Hulst wrote:Big O notation is not an indication of worst running time. It's an indication of the upper bound...

Great post, but to clarify one important fact: all these notations and "bounds" are indications of how scalable a solution is.

@Conrado: And these indications are very broad. They don't include ANY indication, for example, of how long a particular internal process (such as a comparison) might take; and they are really only applicable to functions that take multiple "units" of input.

So, when you re-read Stephan's excellent post, be sure to stick "broadly speaking" in front of all his explanations.

One classic example is O(log n): All it says is that the time is proportional to the log of n to "some base".
And for a mathematician that might well be fine, but for someone trying to work out how long (or how much space) a function might actually take, there is a big difference between log n, log₁₀ n, and log₃₂ n (and there are many algorithms that work on all those bases) when it comes to realistic limits of time/volume scalability.
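The base difference is easy to see by counting loop iterations. Both loops below are O(log n), but the constant factor differs by a factor of five (a toy sketch; names are mine):

```java
public class LogBase {
    // Number of divisions by 'base' needed to reduce n to zero,
    // i.e. roughly log_base(n) + 1 iterations.
    public static int divisionsToZero(int n, int base) {
        int steps = 0;
        while (n > 0) {
            n /= base;
            steps++;
        }
        return steps;
    }
}
```

For n = 1,000,000, dividing by 2 takes 20 steps while dividing by 32 takes only 4, even though both are "O(log n)".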

So another way to think of them (if you're an 'optimization' fanatic) might be:
  • O(n²) - Awful. Oddly enough though, the first "solution" that springs to our puny human brains for many problems (e.g., Bubble sort). Avoid if possible.
  • O(n) - Bad, but often unavoidable. And don't forget that O(68 * n) is classified as O(n).
  • O(n log n) - OK and often unavoidable, but can sometimes be improved.
  • O(log n) - Good, and often optimal. Don't forget that "base" though.
  • O(1) - Nirvana.

HIH,

Winston
 
Stephan van Hulst
Saloon Keeper
Posts: 7807
142
One remark Winston: n log n grows faster than n.

An example: retrieving an element from a balanced binary search tree is an O(log n) operation, iterating the elements of a tree is an O(n) operation, and getting every element without an iterator is an O(n log n) operation.