
Stephan van Hulst wrote:The growth rate of control() depends linearly on vertices. control() is called a constant number of times from a single non-recursive invocation of find(). find() calls itself until n matches vertices, so control() is called vertices times. We conclude that the growth rate of find() is in O(vertices * vertices), because the number of calls to control() grows linearly in vertices, and the growth of control() itself is also linear in vertices.
I don't know where you get the other complexity class from. I can't see any exponential growth.
The control method has a loop that runs vertices times, so it is O(vertices).
find is a recursive function that stops when n reaches vertices. (Assuming n starts from 1.) It calls control for n = 1, 2, 3, ..., vertices.
find is called from a loop three times. Each recursive call repeats this loop, and that makes it exponential: O(3^vertices).
So, finally, it is O(3^vertices) * O(vertices), which is O(vertices * 3^vertices), i.e. exponential.
EDIT:
Since you have throw new Exception("Solution found");, the recursion is cut short once n reaches vertices. So it is O(vertices^2) in that case.
Stephan van Hulst wrote:Right. The growth would have been exponential in vertices if you didn't check that n is smaller than vertices, but since you do, the growth is polynomial.
Stephan van Hulst wrote:I apologize, I'm wrong.
Growth is indeed exponential. Every call of find() in the recursion tree spawns three new calls to find(), a level lower. vertices determines the height of the recursion tree, so the complexity is O(3^vertices).
My confusion arose because I treated the loop in which find() is called as adding a constant factor that could be ignored. However, it determines the base of the exponent. If the loop ran once instead of three times, the recursion tree would degenerate into a single stem, which would lead to the quadratic complexity we calculated earlier.
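That observation about the loop width setting the base can be written as a recurrence. This is a sketch under the structure described in the thread: each find() at height h makes three recursive calls and one O(vertices) call to control() per loop iteration.

```latex
% V = vertices, h = remaining height of the recursion tree
T(h) = 3\,T(h-1) + O(V), \qquad T(0) = O(1)
% the tree has 3^d nodes at depth d, hence
T(V) = \sum_{d=0}^{V-1} 3^d \cdot O(V) = O(V \cdot 3^V)
% with a loop of width 1 the tree degenerates into a chain:
T(h) = T(h-1) + O(V) \quad\Longrightarrow\quad T(V) = O(V^2)
```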
Stephan van Hulst wrote:Because the exception isn't caught, no further calls to find() happen. That means the initial call to find() terminates after the first recursive call in which n == vertices holds, pruning all other branches of the recursion tree.
In short, when you use a throw statement to terminate your method call, the complexity will be O(vertices^2). If you use a return statement, the complexity will be O(3^vertices).
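The difference can be demonstrated with a toy recursion that mirrors the structure described in the thread (the real find() and control() from the original post are assumed, not reproduced): one version returns and keeps searching, the other throws at the first solution, and we count the stand-in control() calls.

```java
// Toy model of the recursion under discussion (hypothetical names).
public class RecursionCount {
    static int controlCalls = 0;

    // Return-based version: explores the full ternary recursion tree.
    static void findAll(int n, int vertices) {
        if (n == vertices) return;          // a "solution" leaf; keep searching
        for (int color = 0; color < 3; color++) {
            controlCalls++;                 // stands in for one control() call
            findAll(n + 1, vertices);
        }
    }

    // Throw-based version: the uncaught exception unwinds the whole search
    // at the first solution, pruning every other branch.
    static void findFirst(int n, int vertices) throws Exception {
        if (n == vertices) throw new Exception("Solution found");
        for (int color = 0; color < 3; color++) {
            controlCalls++;
            findFirst(n + 1, vertices);
        }
    }

    public static void main(String[] args) {
        int vertices = 3;

        controlCalls = 0;
        findAll(0, vertices);
        // 3 + 9 + 27 = 39 calls: geometric growth, O(3^vertices)
        System.out.println("return-based: " + controlCalls);

        controlCalls = 0;
        try { findFirst(0, vertices); } catch (Exception solutionFound) { }
        // one call per level: 3 calls, so only O(vertices) calls to control()
        System.out.println("throw-based:  " + controlCalls);
    }
}
```

Since each control() call itself costs O(vertices), the throw-based count of O(vertices) calls gives the O(vertices^2) total from earlier in the thread.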
Tim Cooke wrote:Can we take into consideration the cognitive complexity of writing code like this? Somebody smart once said...
Martin Fowler in the book Refactoring wrote:Any fool can write code that a computer can understand. Good programmers write code that humans can understand.
I would become very unpopular very quickly with the rest of my team if I presented code like this as part of a solution.
Tim Driven Development
Tim Cooke wrote:It wasn't that. It was the mixing of iterative and recursive looping structures that would have offended my peers and embarrassed myself.
Elif Sahin wrote:
Stephan van Hulst wrote:Because the exception isn't caught, no further calls to find() happen. That means the initial call to find() terminates after the first recursive call in which n == vertices holds, pruning all other branches of the recursion tree.
In short, when you use a throw statement to terminate your method call, the complexity will be O(vertices^2). If you use a return statement, the complexity will be O(3^vertices).
What if I were to call find() in a try block and then catch the exception? Would the complexity be O(3^vertices)?
Elif Sahin wrote:What if I were to call find() in a try block and then catch the exception? Would the complexity be O(3^vertices)?
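One way to explore that question, under the structure assumed earlier in the thread: it depends on where the catch sits. Catching only around the outermost call keeps the pruning, because the throw still unwinds the entire recursion, so the O(vertices^2) bound holds. Catching around each recursive call inside the loop is different, as this toy sketch (hypothetical names) shows:

```java
// Toy sketch: the exception is caught around EACH recursive call, so it
// never unwinds more than one level, and the loop's remaining iterations
// still run -- the full ternary tree is explored again.
public class CatchInsideLoop {
    static int calls = 0;

    static void find(int n, int vertices) {
        if (n == vertices) throw new RuntimeException("Solution found");
        for (int color = 0; color < 3; color++) {
            calls++;  // stands in for one control() invocation
            try {
                find(n + 1, vertices);
            } catch (RuntimeException solutionFound) {
                // swallowed here, so the next color is still tried
            }
        }
    }

    public static void main(String[] args) {
        find(0, 3);
        // 3 + 9 + 27 = 39 calls: the exponential growth is back
        System.out.println(calls);
    }
}
```

So catching the exception one level up restores O(3^vertices), while catching it only at the top-level call site does not.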
Stephan van Hulst wrote:
Most importantly, use descriptive names. I think my example is much easier to read because I use fully written phrases for identifiers. Their meaning is clear without having to analyze the context in which they're used. Use verbs for method names, and nouns for variable names.

Use appropriate types. An integer is a poor representation of a color. Use enums for types that have a small finite number of instances.

Don't use exceptions for flow control. Exceptions are for when something goes wrong. Finding a solution is a normal outcome of your algorithm, not an exceptional one.

Prefer collection types over arrays. I think arrays are really great for representing fixed coordinate systems, but otherwise I stick to lists, sets, maps, queues, etc.

Consider all edge cases that an algorithm might encounter. For instance, your algorithm doesn't take into account that nodes might already be colored before it starts running.

Don't hardcode values. Your algorithm is hardwired to use three colors. Instead, you can pass the data it needs through method parameters.
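As a rough illustration of these points taken together, here is a sketch of what such a backtracking colorer might look like. It is not the original poster's code; the class name, graph representation, and method names are all assumptions. It uses an enum for colors, collections instead of arrays, a boolean return instead of a thrown exception, tolerates pre-colored vertices, and takes the palette as a parameter rather than hardwiring three colors.

```java
import java.util.*;

// Hypothetical refactoring sketch, not the original code from the thread.
public class GraphColoring {
    enum Color { RED, GREEN, BLUE }

    private final List<Set<Integer>> neighbors;   // adjacency sets per vertex
    private final List<Color> palette;            // colors passed in, not hardcoded
    private final Map<Integer, Color> coloring = new HashMap<>();

    GraphColoring(List<Set<Integer>> neighbors, List<Color> palette) {
        this.neighbors = neighbors;
        this.palette = palette;
    }

    /** True if no neighbor of the vertex already carries the given color. */
    private boolean isColorAllowed(int vertex, Color color) {
        for (int neighbor : neighbors.get(vertex)) {
            if (coloring.get(neighbor) == color) {
                return false;
            }
        }
        return true;
    }

    /** Tries to color every vertex from the given index on; true on success. */
    boolean colorFrom(int vertex) {
        if (vertex == neighbors.size()) {
            return true;                          // every vertex is colored
        }
        if (coloring.containsKey(vertex)) {
            return colorFrom(vertex + 1);         // vertex was pre-colored
        }
        for (Color color : palette) {
            if (isColorAllowed(vertex, color)) {
                coloring.put(vertex, color);
                if (colorFrom(vertex + 1)) {
                    return true;                  // first solution: stop searching
                }
                coloring.remove(vertex);          // backtrack and try the next color
            }
        }
        return false;
    }

    public static void main(String[] args) {
        // A triangle (vertices 0-1-2, all adjacent) is 3-colorable.
        List<Set<Integer>> triangle = List.of(
                Set.of(1, 2), Set.of(0, 2), Set.of(0, 1));
        GraphColoring solver = new GraphColoring(
                triangle, List.of(Color.RED, Color.GREEN, Color.BLUE));
        System.out.println(solver.colorFrom(0));  // prints true
    }
}
```

Note that the boolean return replaces the thrown exception while keeping the pruning: the search still stops at the first solution, matching the O(vertices^2) behavior discussed above.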