• Post Reply Bookmark Topic Watch Topic
  • New Topic

Asymptotic Notations

 
Pankaj Shet
Ranch Hand
Posts: 306
Java Scala Spring
  • Mark post as helpful
  • send pies
  • Quote
  • Report post to moderator
Hi guys,

I am new to data structures and algorithms.
I am trying to learn them, but I am not able to understand how to do complexity analysis,
and I am not familiar with the mathematical expressions and explanations used for it in various books.

I tried to understand asymptotic notations, but could not follow them.

Can anyone help me understand them?

Help is appreciated.

Regards,
-Pankaj.




 
fred rosenberger
lowercase baba
Bartender
Posts: 12341
38
Chrome Java Linux
Are you talking about what's commonly called "Big-O notation"? Can you give an example of one? Others around here may be smarter than me, but I'm not sure what exactly you are talking about.
 
Pankaj Shet
Ranch Hand
Posts: 306
Java Scala Spring
Hi Fred,

I am talking about all the asymptotic notations: the Big O, Theta and Omega notations.
I tried to understand them, but I am not able to follow them mathematically, or see how they apply in computer science.

 
Ulf Dittmer
Rancher
Posts: 42970
73
Fully understanding this requires some non-trivial mathematical knowledge. I don't think forum posts could do justice to the subject (or that people would be willing to write up that much :-), but most introductory algorithms textbooks have at least a chapter on it.
 
ganesh swamypillai
Greenhorn
Posts: 4
  • Likes 1
Have a coffee and read the statements below.

In the beginning, you have a problem.
There is a problem you are trying to solve.
Objective 1: your objective is to solve that problem.

You have written a program to solve that problem.
That program performs a few logical steps to solve it.
A set of logical steps forms an 'algorithm'.

Any problem contains data to be worked on to solve it.
Call it 'input'.
The data will have a size.
Call it 'input size'.

Objective 1: achieved. You are happy.
Now, curiosity is awakened, and you move to the next objective.

Objective 2: I want to know how much time my program takes to run for that problem.
Basic solution: take a timestamp at the start of the program and another at the end, and compute the difference.
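A minimal sketch of that basic solution in Java (the summing loop is just a stand-in for "your program"; the method name is illustrative):

```java
public class TimeIt {
    // Basic solution: timestamp at the start, timestamp at the end,
    // and the difference is the running time.
    static long timeSum(int n) {
        long start = System.nanoTime();
        long sum = 0;
        for (int i = 0; i < n; i++) {
            sum += i; // stand-in work: sum 0..n-1
        }
        long end = System.nanoTime();
        System.out.println("n = " + n + ", elapsed ns = " + (end - start));
        return end - start;
    }

    public static void main(String[] args) {
        timeSum(1_000_000);
    }
}
```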

Objective 2: achieved. You are happy.
Now, curiosity is awakened, and you move to the next objective.

The program acts on the data provided. What if I keep changing the size of the data and run the same program?
Will it take the same time?

Objective 3: I want to know how much time my program takes for that problem as I vary the size of the data fed to it.
Basic solution: take timestamps at the start and end of the program as before, and keep increasing the size of the data fed to it.

Objective 3: achieved. You are happy.
Now, curiosity is awakened, and you move to the next objective.

What if you are curious to find out how your program behaves for a large input, without running it?
Objective 4: I want to know how much time my program takes for that problem, for a large input size, without running it.

'Asymptotic' means exactly that: what happens for a large input size?

You want to analyze your program (a concrete representation of the algorithm) and find out how it behaves.
Basically, your objective is an asymptotic analysis of your algorithm.
If you are not running your program, how will you know how much time it will take for a large input?

By studying your code:
by finding out how many loops are present in it,
by finding out how many operations you perform in each loop,
and by finding out how much extra space your code creates/uses to solve the problem.

How do you study your code? Try analyzing it while varying n: give n some value and list the number of operations the code does to finish the loop.

Take an example. Suppose your program is as follows, with n being the input size:

int j = 0;
for (int i = 0; i < n; i++)
{
    j += i;
}
System.out.println(j);

Looking at the code above, which operations does it perform? Addition, assignment, looping, incrementing (incrementing is addition).
Restrict attention to additions.
Suppose n = 20.
How many additions happen?
For i, 20 additions.
For j, 20 additions.
So, 40 additions in total.

In other words, 40 operations are performed by the code above.
If n = 60, how many operations? It will be 120.
Now keep increasing n. You can arrive at a mathematical formula and say: the program will perform 2n operations.
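This counting can be checked mechanically. A small sketch (the counter names are made up for illustration):

```java
public class OpCounter {
    // Counts the additions in the loop above: each iteration does
    // one "j += i" and one "i++" (incrementing is addition), so 2n in total.
    static long countAdditions(int n) {
        long additions = 0;
        long j = 0;
        for (int i = 0; i < n; i++) {
            j += i;         // addition number one...
            additions += 2; // ...plus the i++ makes two per iteration
        }
        return additions;
    }

    public static void main(String[] args) {
        System.out.println(countAdditions(20)); // 40, as worked out above
        System.out.println(countAdditions(60)); // 120
    }
}
```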

Assume your machine takes 1 second per operation; then your program will take 2n seconds for an input of size n.
You can also state that your program will take at most 2n seconds, or that it will perform at most 2n operations.
When you say 'at most' 2n operations, that is where you get Big O (jargon).
When you say 'at most', you are stating an upper bound.
When you say 'at least', you are stating a lower bound:
you are saying that your program will do at least 2n operations to solve the problem.
When you say 'at least', or 'lower bound', that is where you get Omega (jargon).

If you say that your program will perform exactly 2n operations, you are dealing with Theta (jargon).
When you can say 'exactly', it is a tight bound, which is Theta.

Basically, Theta, Omega and Big O are all theoretical notations saying that
your algorithm will do exactly that many operations, at least that many operations, or at most that many operations, respectively.

OR

Theta --> exact number of operations --> tight bound
Omega --> at least that many operations --> lower bound
Big O --> at most that many operations --> upper bound
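For reference, the textbook definitions behind this informal "at most / at least / exact" wording can be written as:

```latex
f(n) \in O(g(n))      \iff \exists\, c > 0,\ n_0 : f(n) \le c \cdot g(n) \ \text{for all}\ n \ge n_0
f(n) \in \Omega(g(n)) \iff \exists\, c > 0,\ n_0 : f(n) \ge c \cdot g(n) \ \text{for all}\ n \ge n_0
f(n) \in \Theta(g(n)) \iff f(n) \in O(g(n)) \ \text{and}\ f(n) \in \Omega(g(n))
```

The constant c in these definitions is why a constant factor like the 2 in 2n does not change the class.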

Objective 4: achieved. You are happy. Without running your program, you came to know how much time it will take as you keep increasing the input.
Now, curiosity is awakened, and you move to the next objective.

I have solved many problems and written programs to solve them.
Other people had many problems to solve, and wrote many programs to solve them.
Now, for the same problem, two people each wrote their own idea, their own algorithm, their own program, their own set of operations.
I want to know, for the same input given to both, how much time each program will take as the input size keeps increasing.

Objective 5: you want to compare the behavior of two different algorithms on the same problem as the input size increases.
Comparing behavior means seeing how many operations each algorithm does to solve the same problem.
And by knowing how many operations each algorithm does, you can tell how much time it will take. In other words: its runtime complexity (jargon).

Obviously, anyone will want to run the algorithm whose runtime complexity is lower.

Solution: analyze how many operations are performed by the two algorithms and compare their behavior.

Objective 5: achieved. You are happy. You can compare the runtime complexity of algorithms and select the efficient one.
Now, curiosity is awakened, and you move to the next objective.

I have written a program to solve a problem. My friend also wrote a program to solve the same problem.
We compared our programs' behavior and found that my program does more operations than his.

Objective 6: optimize. Once you find the runtime complexity of an algorithm, try to reduce it. If it is O(n^2), try making it O(n); if it is O(n), try making it O(1); and so on.
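As a rough illustration of Objective 6 (the method names are invented): summing 0..n-1 with the loop from earlier is O(n), but the same answer comes from a closed formula in O(1).

```java
public class SumOptimize {
    // O(n): the loop from earlier in the thread, touching every i once.
    static long sumLoop(long n) {
        long j = 0;
        for (long i = 0; i < n; i++) {
            j += i;
        }
        return j;
    }

    // O(1): Gauss's closed form 0 + 1 + ... + (n-1) = n * (n - 1) / 2.
    static long sumFormula(long n) {
        return n * (n - 1) / 2;
    }

    public static void main(String[] args) {
        System.out.println(sumLoop(100));    // 4950
        System.out.println(sumFormula(100)); // 4950, without any loop
    }
}
```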

At each objective level, different people have taken up different objectives, and the theory of algorithms has grown.

Hope this was helpful. Please correct me if I have gone wrong somewhere.

This can be a starting point for you to dive deeper.

Hope your coffee is still there. You can sip some more.

All the best









 
Pankaj Shet
Ranch Hand
Posts: 306
Java Scala Spring
Hi Ganesh,

Thanks for your fantastic explanation. I really liked the way you explained it in detail.

And yes... the coffee is still there. Willing to have more.

Ganesh, one thing I wanted to understand: if the complexity of my algorithm is 2n, how is it represented?
And what does O(n^2) represent? n raised to the power of 2? If that is the case, can it also be represented as O(log n) to the base 2?

I am currently referring to the book Data Structures and Algorithms Made Easy from CareerMonk Publications. It has different programs with different algorithms.

As I am not an engineer, I am facing difficulty in understanding the mathematical representations of algorithms.
So how, and from where, should I learn this?

Once again, I really appreciate your effort in writing so much here.

Thank you very much.

Regards,
-Pankaj.


 
Pankaj Shet
Ranch Hand
Posts: 306
Java Scala Spring
In your example:

for( i = 0; i< n; i++)
{
j+=i;
}

If the complexity is 2n, what will be the Big O, Omega and Theta representations?



 
ganesh swamypillai
Greenhorn
Posts: 4
For this particular example, Big O = Omega = Theta = 2n
 
fred rosenberger
lowercase baba
Bartender
Posts: 12341
38
Chrome Java Linux
It's been a while since I've done any of this stuff, but I thought that you ignore coefficients, so Big-O(2n) simplifies to Big-O(n).

And Big-O(n^2) means n squared, or n*n. The caret is a way to indicate that the '2' should be a superscript - raised up half a line - but most programs don't make that easy to do.

This is NOT the same as Big-O(log n).

You can go to something like this and graph the equations, to see how they grow at different rates.
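If graphing isn't handy, a rough sketch like this prints the growth side by side; note that 2n stays a constant factor away from n, while n² pulls away:

```java
public class GrowthTable {
    // log base 2, used for the table below
    static double log2(int n) {
        return Math.log(n) / Math.log(2);
    }

    public static void main(String[] args) {
        // Compare how log2(n), 2n and n^2 grow as n increases.
        System.out.printf("%8s %10s %10s %12s%n", "n", "log2(n)", "2n", "n^2");
        for (int n = 2; n <= 1024; n *= 4) {
            System.out.printf("%8d %10.1f %10d %12d%n",
                    n, log2(n), 2L * n, (long) n * n);
        }
    }
}
```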
 
ganesh swamypillai
Greenhorn
Posts: 4
I was saying that for the example code:

for(i=0;i<n;i++)
{
j += i;
}

For the above code, the time complexity is O(n) = Omega(n) = Theta(n).
In other words, the running time will be of the order of n.

And definitely O(n^2) != O(log(n)): a quadratic running time is not the same as a logarithmic one.
 
fred rosenberger
lowercase baba
Bartender
Posts: 12341
38
Chrome Java Linux
Sorry - I wasn't clear... I was responding to two different things.

My first paragraph was in response to this:
For this particular example, Big O = Omega = Theta = 2n

The rest of my post was in response to this:
like what does O(n^2) represent? n raised to the power of 2? If that is the case, can it also be represented as O(log n) to the base 2?
 
Pankaj Shet
Ranch Hand
Posts: 306
Java Scala Spring
Hi,

Here, in the above example,

is the complexity "Big O = Theta = Omega = 2n" the space complexity, I guess?

How do I derive the time complexity and the space complexity?

Regards,
-Pankaj.
 
Matthew Brown
Bartender
Posts: 4568
9
Pankaj Shet wrote:Here, in the above example,

is the complexity "Big O = Theta = Omega = 2n" the space complexity, I guess?

How do I derive the time complexity and the space complexity?

That example is looking at time complexity - we're counting the number of operations. The space complexity is actually constant: you've just got two variables holding integer values. If you increase n, you still don't take up any more space.
 
Pankaj Shet
Ranch Hand
Posts: 306
Java Scala Spring
Thanks for the reply, Matthew.

So what will be the space complexity in that case?
 
Paul Clapham
Sheriff
Posts: 21882
36
Eclipse IDE Firefox Browser MySQL Database
Pankaj Shet wrote:So what will be the space complexity in that case?


"Constant" is denoted by "O(1)". In other words, proportional to 1 as it goes to infinity.
 
Winston Gutkowski
Bartender
Posts: 10571
64
Eclipse IDE Hibernate Ubuntu
Pankaj Shet wrote:So what will be the space complexity in that case?

It'll depend entirely on what you're doing. In the last example posted it'll be constant, because you only ever create two variables: i and j; but in others it could be 'n'-based or even quadratic - although true quadratic-space algorithms are rare if you're doing things properly. I can't even think of one off the top of my head.
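To make the contrast concrete, here is a small sketch (the method names are invented): the first version uses constant extra space, like the loop in the thread; the second stores every partial sum, so its extra space grows linearly with n.

```java
public class SpaceDemo {
    // O(1) extra space: only two scalar variables, no matter how large n is.
    static long sumConstantSpace(int n) {
        long j = 0;
        for (int i = 0; i < n; i++) {
            j += i;
        }
        return j;
    }

    // O(n) extra space: allocates an n-cell array of partial sums.
    static long sumLinearSpace(int n) {
        long[] partials = new long[n]; // n cells -> linear space
        long j = 0;
        for (int i = 0; i < n; i++) {
            j += i;
            partials[i] = j;
        }
        return partials[n - 1];
    }

    public static void main(String[] args) {
        System.out.println(sumConstantSpace(100)); // 4950
        System.out.println(sumLinearSpace(100));   // 4950, but uses O(n) memory
    }
}
```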

Basically, in O notation, you're only interested in four things:
  • 1 - ie, constant time. The time is basically the same no matter how large n gets.
  • log n - time that increases proportionally to the log of n to some base - eg, it increases by a constant amount each time n doubles (in which case base = 2).
  • n - time that increases proportionally to n. This is often called "linear" time.
  • n^x - time that increases proportionally to some power of n - the most common of which is n²: ie, the count increases from 4 to 9 as n increases from 2 to 3 - but x can be any power (although again, values > 3 ('cubic') are fairly rare). The collective term for this type is "quadratic".
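A rough sketch of the four cases in Java (illustrative methods, not a definitive catalogue):

```java
import java.util.Arrays;

public class Complexities {
    // O(1): one array access, independent of the array's length.
    static int first(int[] a) {
        return a[0];
    }

    // O(log n): binary search halves the remaining range each step.
    static int find(int[] sorted, int key) {
        return Arrays.binarySearch(sorted, key);
    }

    // O(n): look at every element once.
    static long sum(int[] a) {
        long s = 0;
        for (int x : a) s += x;
        return s;
    }

    // O(n^2): compare every pair of elements.
    static int countEqualPairs(int[] a) {
        int count = 0;
        for (int i = 0; i < a.length; i++)
            for (int k = i + 1; k < a.length; k++)
                if (a[i] == a[k]) count++;
        return count;
    }

    public static void main(String[] args) {
        int[] a = {1, 2, 2, 3};
        System.out.println(first(a));           // 1
        System.out.println(find(a, 3));         // 3 (index of the key)
        System.out.println(sum(a));             // 8
        System.out.println(countEqualPairs(a)); // 1 (the pair of 2s)
    }
}
```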


HIH

Winston
     
Piet Souris
Rancher
Posts: 1634
36
Winston Gutkowski wrote:(...) The collective term for this type is "quadratic"

Actually: polynomial.

But it doesn't stop here. We have the dreaded
• 2^n, for instance when trying to get the subsets of a given set
• or even worse than that. An example is the Ackermann function (Google it).

But keep in mind that all these things describe long-term behavior. It means nothing for the everyday/one-off/home job.

If anyone's interested, follow the 'Algorithmic Thinking' course at Coursera. You get plenty of Big-O questions.

Greetz,
Piet
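The subsets example can be sketched as follows: each new element either joins a subset or it doesn't, so adding one element doubles the count, and n elements give 2^n subsets.

```java
import java.util.ArrayList;
import java.util.List;

public class Subsets {
    // Builds all subsets of the given list. Each element doubles the
    // number of subsets so far, hence 2^n subsets for n elements.
    static <T> List<List<T>> subsets(List<T> items) {
        List<List<T>> result = new ArrayList<>();
        result.add(new ArrayList<>()); // start with the empty subset
        for (T item : items) {
            int size = result.size(); // snapshot before we start appending
            for (int i = 0; i < size; i++) {
                List<T> withItem = new ArrayList<>(result.get(i));
                withItem.add(item);
                result.add(withItem);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(subsets(List.of(1, 2, 3)).size()); // 8 = 2^3
    }
}
```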
     
Pankaj Shet
Ranch Hand
Posts: 306
Java Scala Spring
Hi All,

I am just a beginner, so I want to know why logarithms are involved in complexity analysis.
Can we calculate without using logarithms?
Can anyone help me regarding this?

Regards,
-Pankaj.
     
Winston Gutkowski
Bartender
Posts: 10571
64
Eclipse IDE Hibernate Ubuntu
Piet Souris wrote:Actually: polynomial.

Possibly, but I've rarely seen the term "polynomial time", whereas you'll run into "quadratic time" a lot.

But it doesn't stop here. We have the dreaded
• 2^n, for instance when trying to get the subsets of a given set
• or even worse than that. An example is the Ackermann function (Google it).

True, but such problems are very rare, especially in everyday computing. Explosive functions are used more in theory, often as examples of things that can't be done - or are impractical to try - even with a computer.

But keep in mind that all these things describe long-term behavior. It means nothing for the everyday/one-off/home job.

Very true.
@Pankaj: Another thing worth noting is that many algorithms have two O values: "normal" and "worst-case". Many sorts, for example, normally run in O(n log n) time, but can be "defeated" by a particularly nasty initial order of the items to be sorted, so that they take O(n²).

I should add that I missed out one important type above: O(n log n), which is what most "fast" sorts take. You can't get rid of that first 'n', because a sort has to deal with every item to be sorted, so the "speed" comes from how it handles each item.

HIH

Winston
     
Winston Gutkowski
Bartender
Posts: 10571
64
Eclipse IDE Hibernate Ubuntu
Pankaj Shet wrote:I am just a beginner, so I want to know why logarithms are involved in complexity analysis.
Can we calculate without using logarithms?

What do you mean: calculate the expected time something will take, given a value of n? Probably not, unless you have that sort of brain.

However, that's rarely why we use O notation. More often than not, it's used to gauge the "scalability" of a solution.

For example: if I have an algorithm that works in O(n) time, then I know that it's likely to take 10 times as long to deal with 1,000 "things" as it does with 100.

However, if it works in log₂n time (by far the most common type of logarithmic time you're likely to run into), it'll take ≈1.5 times as long to deal with 1,000 "things" as it does with 100.

Explanation:
2⁷ = 128 and 2¹⁰ = 1024, so log₂(100) will be a bit under 7 and log₂(1,000) a bit under 10; the time difference will therefore be something close to 10/7 ≈ 1.43 - and probably slightly more, since 1,000 is proportionally much closer to 1,024 than 100 is to 128.
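That halving arithmetic can be sketched by counting how many times a binary-search-style loop halves a range of size n (a rough illustration; it only counts halvings, it doesn't search for anything):

```java
public class StepCount {
    // Counts how many times the range [0, n) can be halved before
    // only one candidate is left -- the step count of a binary search.
    static int steps(int n) {
        int count = 0;
        int lo = 0, hi = n;
        while (hi - lo > 1) {
            int mid = (lo + hi) / 2;
            hi = mid; // always keep one half; only the count matters here
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(steps(100));  // 6 halvings (log₂ 100 ≈ 6.6)
        System.out.println(steps(1000)); // 9 halvings (log₂ 1000 ≈ 9.97)
        // 9 / 6 = 1.5 -- the ratio described above.
    }
}
```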

And if it works in constant (O(1)) time, it will take roughly the same amount of time regardless of n. For example, retrieving a particular item from a HashMap with 1,000 keys in it will probably take about the same amount of time as if it had 100... or 1.

All these things should be taken very roughly though, as there are many other factors that affect how long something takes to run. O notation simply gives you a rough idea of the behaviour of an algorithm as the size increases.

HIH

Winston
     