
Textual tutorial in asymptotic analysis

 
Franklin Wormwood
Greenhorn
Posts: 17
Ok guys, I really don't know which forum to post this in. It's not very Java related, so please move it to a suitable forum. Thanks.

For a long time I've been trying to self-study algorithms and data structures, because I've read that they are very important in programming and that we should know how they work and how they perform in applications. It was very hard for me because I can't see (I'm using a screen reader), and most books and tutorials I found have images, graphics, graphs or unknown notations that my screen reader can't identify. Then someone recommended this book by Robert Sedgewick:
http://algs4.cs.princeton.edu/home/
This book is very good and has lots of very clear explanations, for example of binary search trees, which I thought I would never understand on my own. It also has real code implementations that really helped me understand lots of algorithms. However, I still can't understand asymptotic analysis, even with this book. Whenever I read a part where Sedgewick analyses the performance of an algorithm, I get lost because of the graphs and notations with unknown symbols. I've searched for a long time and I still can't find a tutorial that explains asymptotic analysis in a textual way. Sure, I can memorise the big O notations and understand their meaning, but I feel it would be more fun and exciting if I were able to calculate the running time of my methods on my own. Can you guys help me with this? I'm currently studying Java EE at school, and I can't help thinking that maybe I could write better code if I could find out whether my code is too slow or consumes lots of resources. Thanks for your answers.
 
Stephan van Hulst
Saloon Keeper
Posts: 7992
Complexity analysis just isn't that easy. I'm not really aware of any tutorials that describe it in an intuitive way without the use of diagrams.

Do you have an excerpt from the book which doesn't make sense to you? Maybe we can try to explain it.
 
Franklin Wormwood
Greenhorn
Posts: 17
Stephan van Hulst wrote: Do you have an excerpt from the book which doesn't make sense to you? Maybe we can try to explain it.

An example would be:
http://algs4.cs.princeton.edu/14analysis/
"Most often, we work with tilde approximations of the form g(N) ~ a f(N) where f(N) = N^b log^c N and refer to f(N) as the order of growth of g(N)."
What is this? I totally don't understand it. And above this statement there is, I think, a log-log plot.
 
Stephan van Hulst
Saloon Keeper
Posts: 7992
The image is a table. I copied the text below:

function                 tilde approximation    order of growth
N^3/6 - N^2/2 + N/3      ~N^3/6                 N^3
N^2/2 - N/2              ~N^2/2                 N^2
lg N + 1                 ~lg N                  lg N
3                        ~3                     1


If you have a function that describes how much time it takes for an algorithm to complete, we're not really interested in the smaller terms. After all, if you have a cubic function, then no matter how small its coefficients, it will eventually take more time than any quadratic function as N increases. So we drop the lower-order terms: N^2/2 - N/2 becomes ~N^2/2. The order of growth then also ignores the constant factors of the approximation, so ~N^2/2 becomes N^2. We write the order of growth as O(N^2) for an upper bound on the growth rate, Omega(N^2) for a lower bound on the growth rate, and Theta(N^2) for the exact growth rate (when the O and Omega bounds coincide).
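To see what the tilde approximation claims, here's a small sketch (my own, not from the book) that evaluates g(N) = N^3/6 - N^2/2 + N/3 from the first table row next to its approximation f(N) = N^3/6. As N grows, the ratio g(N)/f(N) approaches 1, because the dropped terms -N^2/2 + N/3 become negligible next to the cubic term.

```java
// My own sketch: compare a full cost function against its tilde approximation.
public class TildeDemo {

    // g(N) = N^3/6 - N^2/2 + N/3, the exact function from the table.
    static double g(double n) { return n * n * n / 6 - n * n / 2 + n / 3; }

    // f(N) = N^3/6, its tilde approximation (lower-order terms dropped).
    static double f(double n) { return n * n * n / 6; }

    public static void main(String[] args) {
        // The ratio g/f approaches 1 as N grows: the dropped terms vanish.
        for (int n = 8; n <= 8192; n *= 4) {
            System.out.printf("N=%5d  g(N)/f(N) = %.5f%n", n, g(n) / f(n));
        }
    }
}
```

For small N the dropped terms still matter (at N = 8 the ratio is well below 1), but by N = 8192 the approximation is accurate to a fraction of a percent.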
 
Franklin Wormwood
Greenhorn
Posts: 17
Stephan van Hulst wrote:The image is a table. I copied the text below:

Thanks very much for this! I really appreciate it!
Now, I only have a vague idea of the description at the end of your post, because I forgot what cubic and quadratic functions and lower-order terms mean, but I guess I can just search for them on Google.
Another problem I have is, for example, that I always read that binary search runs in O(log n) time, but I don't really know how to calculate that myself as described in the link I posted. I know what O(log n) means, and I can see that binary search indeed runs in O(log n) time because the code is really short and easy to understand. However, when it comes to mergesort, I can't understand how it runs in O(n log n) time just by reading the code, because of all the recursive calls. So for more complex code, I feel I need to understand how to calculate the performance as described in the link, but I really can't understand any of it. Thanks again!
 
Ulf Dittmer
Rancher
Posts: 42972
Wirth's classic book Algorithms and Data Structures goes into a lot of detail analyzing sorting algorithms in this way, starting with the simple and then progressing from there.
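In the meantime, for the mergesort question above: since the recursion makes the analysis hard to follow by reading alone, one purely textual way in is to instrument the code with a counter and look at the numbers. Here's my own sketch (a standard top-down mergesort, not code from either book) that counts comparisons and prints them next to N lg N. The recursion halves the array about lg N times, and each "level" of halves does at most N comparisons in total, which is where N lg N comes from; the counter lets you check that without any graphs.

```java
import java.util.Random;

// My own sketch: top-down mergesort instrumented with a comparison counter,
// so the ~N lg N comparison count can be observed rather than derived.
public class MergeCount {
    static long compares;

    static boolean less(int v, int w) {
        compares++;                      // every comparison goes through here
        return v < w;
    }

    static void sort(int[] a, int[] aux, int lo, int hi) {
        if (hi <= lo) return;
        int mid = lo + (hi - lo) / 2;
        sort(a, aux, lo, mid);           // each recursion halves the range,
        sort(a, aux, mid + 1, hi);       // so there are about lg N levels...
        merge(a, aux, lo, mid, hi);      // ...and each level does <= N compares.
    }

    static void merge(int[] a, int[] aux, int lo, int mid, int hi) {
        System.arraycopy(a, lo, aux, lo, hi - lo + 1);
        int i = lo, j = mid + 1;
        for (int k = lo; k <= hi; k++) {
            if (i > mid)                      a[k] = aux[j++];
            else if (j > hi)                  a[k] = aux[i++];
            else if (less(aux[j], aux[i]))    a[k] = aux[j++];
            else                              a[k] = aux[i++];
        }
    }

    // Sort N random ints and report how many comparisons it took.
    static long countFor(int n) {
        int[] a = new Random(42).ints(n).toArray();
        compares = 0;
        sort(a, new int[n], 0, n - 1);
        return compares;
    }

    public static void main(String[] args) {
        for (int n = 1024; n <= 65536; n *= 4) {
            double nlgn = n * (Math.log(n) / Math.log(2));
            System.out.printf("N=%6d  compares=%9d  N lg N=%9.0f%n",
                              n, countFor(n), nlgn);
        }
    }
}
```

Running this shows the comparison count staying just below N lg N at every size, and when N quadruples the count grows by a bit more than a factor of four; that slightly-faster-than-linear growth is the log factor showing up in the data.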
 