# 2 small algorithm questions that require optimized code

Myke Enriq

Ranch Hand

Posts: 115

posted 3 years ago

1) Write a Java (J2SE) function that takes a String as input and outputs a String that encodes the input, by replacing a char that appears multiple times with that char followed by the int number of times it appears.

Meaning for input = "aazcdddd" output is "a2zcd4"

2) Given an array of int, compute the sum of the squares of its elements. Meaning for input int[] x = {1,2,3} the output is 14 = 1*1 + 2*2 + 3*3.

These questions do not sound hard, but I guess they are both about optimizing code and about treating/anticipating exceptions.

For example, in question 1, how long is the String? What do you do when it is very long? The same goes for question 2.

Anyway, they were given in a programming contest. I already solved them (the classic way), but I am wondering if there is a better way/trick for them.
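For reference, the "classic way" for both questions might look something like the sketch below. The class and method names are my own; the `StringBuilder` and the `long` accumulator (to dodge int overflow of the sum) are assumptions about what "optimized" means here.

```java
public class Classic {

    // Question 1: run-length encode; a count is written only when a run is longer than 1.
    static String encode(String input) {
        StringBuilder sb = new StringBuilder();
        int i = 0;
        while (i < input.length()) {
            char c = input.charAt(i);
            int run = 1;
            // count how many times c repeats from position i
            while (i + run < input.length() && input.charAt(i + run) == c) {
                run++;
            }
            sb.append(c);
            if (run > 1) {
                sb.append(run);
            }
            i += run;
        }
        return sb.toString();
    }

    // Question 2: sum of squares; the long accumulator guards against overflow of the sum.
    static long sumOfSquares(int[] x) {
        long sum = 0;
        for (int v : x) {
            sum += (long) v * v;
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(encode("aazcdddd"));               // a2zcd4
        System.out.println(sumOfSquares(new int[]{1, 2, 3})); // 14
    }
}
```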


Ryan McGuire

Ranch Hand

Posts: 1072

4

posted 3 years ago

For #2: The sum of squares one...

I think the point here is to reduce the number of multiplies you do. (...assuming you're optimizing for time as opposed to space.)

You could keep a hash of previously computed squares and use those if you see the same array element again.

e.g.

int[] x = {1, 2, 3, 2, 1}

CachedSquares["1"] = 1*1 = 1

SumSoFar = 0 + CachedSquares["1"] = 1

CachedSquares["2"] = 2*2 = 4

SumSoFar = 1 + CachedSquares["2"] = 5

CachedSquares["3"] = 3*3 = 9

SumSoFar = 5 + CachedSquares["3"] = 14

CachedSquares["2"] exists so...

SumSoFar = 14 + CachedSquares["2"] = 18

CachedSquares["1"] exists so...

SumSoFar = 18 + CachedSquares["1"] = 19

5 array elements, only three multiplies, O(n) time.
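In Java, the walkthrough above might be sketched with a `HashMap` (keying on the int value itself rather than a string; whether this actually wins over a plain multiply is exactly the question debated below):

```java
import java.util.HashMap;
import java.util.Map;

public class CachedSquaresDemo {

    // Sum of squares with a cache of previously computed squares,
    // mirroring the CachedSquares walkthrough above.
    static long sumOfSquares(int[] x) {
        Map<Integer, Long> cachedSquares = new HashMap<>();
        long sum = 0;
        for (int v : x) {
            // computeIfAbsent multiplies only on a cache miss
            sum += cachedSquares.computeIfAbsent(v, k -> (long) k * k);
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(sumOfSquares(new int[]{1, 2, 3, 2, 1})); // 19
    }
}
```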


posted 3 years ago

I'd say that multiplication is so fast nowadays that a lookup even in an array of `int`s (or `Integer`s) might take more time than the multiplication itself. A lookup in a `HashMap` will be __much more expensive__ than an integer multiplication (have a look at the `HashMap` source code). (Things would probably be different if we were speaking about multiplication of `BigInteger`s.)
Ryan McGuire

Ranch Hand

Posts: 1072

4

posted 3 years ago

Integer multiplies are pretty darn fast, but I bet an array lookup is at least a little faster, as long as the max value of an element is small enough that we can keep an array that large. I mean we are talking optimization, not goodenoughimization.

What else is there to optimize? The "obvious" solution is O(n). I don't see how you can sum the squares of the array elements without looking at every single one, so unless I'm mistaken, any solution would have to be at least O(n).

I can see a lot of things that could go wrong, such as integer overflow with either the multiplication or the addition, but that falls more under exception handling than optimization.
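A sketch of that array-cache variant (the bound parameter and the use of 0 as the "not yet computed" sentinel are my own choices; it assumes non-negative elements with a known, small upper bound, and whether it actually beats the plain multiply would need measuring):

```java
public class ArrayCacheDemo {

    // Cache squares in a plain array indexed by the element's value.
    // Assumes every element v satisfies 0 <= v <= maxValue.
    static long sumOfSquares(int[] x, int maxValue) {
        // 0 doubles as "not computed"; that is harmless because 0*0 == 0 anyway
        long[] cache = new long[maxValue + 1];
        long sum = 0;
        for (int v : x) {
            if (cache[v] == 0) {
                cache[v] = (long) v * v; // multiply only on a cache miss
            }
            sum += cache[v];
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(sumOfSquares(new int[]{1, 2, 3, 2, 1}, 3)); // 19
    }
}
```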


Mike Simmons

Ranch Hand

Posts: 3090

14

posted 3 years ago


Ryan McGuire wrote:Integer multiplies are pretty darn fast, but I bet an array lookup is at least a little faster.

Well, you seemed to be showing a hash lookup in some non-Java language, not an array lookup. I don't know how to do an array lookup with a string as key/index. Even if we use a proper array lookup by int index, we need to consider the cost of

*failed* lookups. Like all caching strategies, the effectiveness depends on the ratio of uncached to previously cached lookups.

Ryan McGuire

Ranch Hand

Posts: 1072

4

posted 3 years ago


Mike Simmons wrote:

Ryan McGuire wrote: Integer multiplies are pretty darn fast, but I bet an array lookup is at least a little faster.

Well, you seemed to be showing a hash lookup in some non-Java language, not an array lookup. I don't know how to do an array lookup with a string as key/index. Even if we use a proper array lookup by int index, we need to consider the cost of *failed* lookups. Like all caching strategies, the effectiveness depends on the ratio of uncached to previously cached lookups.

A bit of a misunderstanding here.

Yes, my first post used a hash lookup. Once Martin pointed it out, I implicitly agreed that a hash lookup is pretty bad. I don't know of any "good" hash functions that A) don't include at least one multiply and B) don't degenerate into just an array lookup. e.g. For integer hash indexes, you could make the hash function return the index itself.

My second post backed away from the hash idea and tried to see if we could use a plain old array lookup to possibly do better than the straightforward obvious solution.

Again, the solution to this question seems so obvious that I don't see much room for optimization. The only possibilities rely on an assumption or two. e.g. You could use an array to cache squares iff the maximum value is <= the max array size. There are other tricks we could use if we knew that the input array was sorted ahead of time. Also, there may be an O(n*n) algorithm that's faster than the obvious linear algorithm for some inputs, if we can be sure n is small enough and the input data is repetitive enough: e.g. [56,34,34,45,56,45,56,67,34,45,34,67,78,45,78,56,56,78,34,23,67,23,67,78,56,78,23] might be calculable with an O(n*n) or O(n log n) algorithm faster than with the obvious O(n) way.

Ivan Jozsef Balazs

Rancher

Posts: 981

5

posted 3 years ago

> There are other tricks we could use if we knew that the input array was sorted ahead of time.

If you know the array is sorted, you can remember the previous element and its square and reuse the square if possible.

So you have to do extra comparisons for the sake of saving the squaring of the repeating elements.

The number of operations increases, but an int comparison might be cheaper than an integer multiplication.

I doubt it's buying much, though.
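The sorted-array idea above might be sketched like this (method name is mine; it only pays off when the sorted input contains long runs of repeated values):

```java
public class SortedSumDemo {

    // Sum of squares over a sorted array, reusing the previous square
    // whenever the current element repeats the previous one.
    static long sumOfSquaresSorted(int[] sorted) {
        long sum = 0;
        long prevSquare = 0;
        for (int i = 0; i < sorted.length; i++) {
            if (i == 0 || sorted[i] != sorted[i - 1]) {
                // multiply only when the value changes
                prevSquare = (long) sorted[i] * sorted[i];
            }
            sum += prevSquare;
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(sumOfSquaresSorted(new int[]{1, 1, 2, 2, 3})); // 19
    }
}
```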
