dennis deems

Ranch Hand

Posts: 808

posted 5 years ago

The problem reads:

2^15 = 32768 and the sum of its digits is 3 + 2 + 7 + 6 + 8 = 26.

What is the sum of the digits of the number 2^1000?

**Please do NOT post a solution**, as that is not remotely what I am seeking. I am just a little stuck and hoping someone can give me a non-spoilery nudge in the right direction. I have discovered a periodic pattern in the powers of 2 and the sums of their digits, but knowing this pattern is not helping me determine what the next sum will be.

The first 6 powers of 2 are 2, 4, 8, 16, 32, 64. Their digits sum to 2, 4, 8, 7, 5, 10. If we adopt the rule that we must continue to sum the numerals until we have a single-digit result, then this becomes 2, 4, 8, 7, 5, 1. For want of a better term, I am calling these numbers the "reduced" sums. This 6-element pattern of reduced sums recurs predictably through the series of powers of 2 (at least as far as 2^42).

Unfortunately, the reduced sum is not what the problem asks for. Somehow, from that pattern of reduced sums, I have to figure out what the *actual* sum would be. Looking at the next six powers of two, we find the sums 11, 13, 8, 7, 14, 19. Two fit the pattern without modification, but the other four must be reduced in order to fit. Hereafter, all sums fit the pattern only after reduction. But I don't see a way to predictably work backward from the reduced sum.

In general the sum of the digits tends to be larger than the one that came before, but there are plenty of local exceptions to this trend. The first seven sums that reduce to 4 are: 4, 13, 22, 31, 40, 58, 67. The first five elements in this series are promising, as they increase without omission. The sixth, however, leaves a gap: 49 also reduces to 4, but it is skipped. The sums that reduce to 7 are even more perplexing: 7, 7, 25, 25, 43, 61, 61. Only four elements are distinct; three are repeated, seemingly unpredictably. However, here at least no sum is smaller than the one that came before. That's not the case with the 5's: 5, 14, 14, 41, 41, 59, 50. Huh?

Please, can someone help me stop spinning my wheels?
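A non-spoilery way to see why the 6-element pattern exists: the "reduced" sum is the digital root, which equals the number mod 9 (with 9 standing in for 0), and 2^n mod 9 cycles through 2, 4, 8, 7, 5, 1 with period 6. A minimal Java sketch (class and method names are illustrative) checking the congruence against brute-force reduction:

```java
import java.math.BigInteger;

public class ReducedSums {
    // Digital root: repeatedly sum decimal digits until one digit remains.
    static int digitalRoot(BigInteger n) {
        while (n.compareTo(BigInteger.TEN) >= 0) {
            int sum = 0;
            for (char c : n.toString().toCharArray()) sum += c - '0';
            n = BigInteger.valueOf(sum);
        }
        return n.intValue();
    }

    public static void main(String[] args) {
        BigInteger p = BigInteger.ONE;
        for (int n = 1; n <= 42; n++) {
            p = p.shiftLeft(1); // p = 2^n
            // Digital root equals p mod 9, with 0 mapped to 9 (never hit for 2^n).
            int mod9 = p.mod(BigInteger.valueOf(9)).intValue();
            int expected = (mod9 == 0) ? 9 : mod9;
            System.out.println(n + ": " + digitalRoot(p) + " vs " + expected);
        }
    }
}
```

This explains the period-6 recurrence the poster observed, though not, unfortunately, how to recover the actual digit sum from it.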

posted 5 years ago

Here are my thoughts... I believe many of these problems have been around a while. Computers have gotten much faster, and languages have gotten much better at dealing with large numbers.

I have a program that solves this. I do it by brute force: I calculate 2^1000, then simply parse the characters and add them up. I'm sure that when this problem was created, that would have taken a long time, if the languages of the day could even handle a 300+ digit number.

But today, it takes my code longer to compile than it does to run. And I would bet that much of the "run time" is simply the JVM starting up.

So... I don't have a trick. I have a solution that runs in under 2 seconds, but it ain't pretty.

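The brute-force approach described above fits in a few lines of Java with `BigInteger`. A sketch of the "calculate 2^1000, then parse the characters and add them up" idea (names are illustrative, not anyone's actual code):

```java
import java.math.BigInteger;

public class Euler16 {
    // Sum the decimal digits of n by computing the full number
    // and walking its string representation.
    static int digitSum(BigInteger n) {
        int sum = 0;
        for (char c : n.toString().toCharArray()) {
            sum += c - '0'; // each char is a decimal digit
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(digitSum(BigInteger.valueOf(2).pow(1000)));
    }
}
```

The answer itself is omitted here to respect the no-spoilers request; on the worked example from the problem statement, `digitSum(BigInteger.valueOf(32768))` gives 26.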

There are only two hard things in computer science: cache invalidation, naming things, and off-by-one errors

Matthew Brown

Bartender

Posts: 4568

9

posted 5 years ago

That problem dates from 2002, according to the site, so I'm pretty sure the brute force approach would have been fine then. Though, obviously, it's easier in some languages than others.

However, it's one of the early Euler problems. The early ones aren't meant to be hard, but later on you need to get cleverer. Take, for example, problems 18 and 67. They're the same problem, but 67 is on a larger scale and the "search all possibilities" approach isn't feasible.

(Back to the original question - I'm not sure there's a pattern that would be helpful here).

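For the curious, the usual way past the "search all possibilities" wall in problems 18 and 67 is bottom-up dynamic programming. A sketch in Java, run on the small sample triangle from the Problem 18 statement:

```java
public class TrianglePath {
    // Bottom-up DP for the maximum top-to-bottom path sum in a triangle.
    // Each pass folds a row into the best sums achievable below it,
    // so the work is linear in the number of entries, not exponential.
    static int maxPathSum(int[][] tri) {
        int[] best = tri[tri.length - 1].clone(); // start from the bottom row
        for (int row = tri.length - 2; row >= 0; row--) {
            for (int i = 0; i <= row; i++) {
                best[i] = tri[row][i] + Math.max(best[i], best[i + 1]);
            }
        }
        return best[0];
    }

    public static void main(String[] args) {
        // Sample triangle from the Problem 18 statement; its stated answer is 23.
        int[][] example = { {3}, {7, 4}, {2, 4, 6}, {8, 5, 9, 3} };
        System.out.println(maxPathSum(example)); // prints 23
    }
}
```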

dennis deems

Ranch Hand

Posts: 808

posted 5 years ago

I skimmed through the ProjectEuler forum on #16. I didn't see anything BUT brute force.

There was an unofficial contest to see who could write the shortest program... Some languages, like Haskell or J, seem to let you do it in one line, but ultimately every solution I saw used brute force.


Stephan van Hulst

Saloon Keeper

Posts: 6966

109

posted 5 years ago

Yep, I just computed the value and printed the substring. Personally I think that's as elegant as it gets.

*The mind is a strange and wonderful thing. I'm not sure that it will ever be able to figure itself out, everything else, maybe. From the atom to the universe, everything, except itself.*

posted 5 years ago

I did it the same way: I used BigInteger to get the number, converted it to a String so I could parse it, then changed the chars to ints to do the math. Then I changed the result back to a String, which is what all my Euler classes return. I am thinking there might be a better way, but I don't see it.
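The chars-to-ints conversion described above can also be written as a one-liner with streams; a sketch of the same steps (class name is illustrative):

```java
import java.math.BigInteger;

public class DigitSumStream {
    // toString, then map each character to its numeric value and sum,
    // exactly the BigInteger -> String -> chars -> ints pipeline.
    static int digitSum(BigInteger n) {
        return n.toString().chars()              // stream of char codes
                .map(Character::getNumericValue) // '7' -> 7
                .sum();
    }

    public static void main(String[] args) {
        System.out.println(digitSum(BigInteger.valueOf(2).pow(15))); // prints 26
    }
}
```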

SCJP

Visit my download page

Matthew Brown

Bartender

Posts: 4568

9

posted 5 years ago

Stephan van Hulst wrote:Yep, I just computed the value and printed the substring. Personally I think that's as elegant as it gets.

*If* you define elegance as efficiency of computation rather than brevity or simplicity of code, I think you could improve it by not making the unnecessary calculations for digits beyond those you need. I didn't bother in this case, but I did in another, later problem that was about the last digits of a simple-but-big calculation.
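For those "last digits only" problems, modular arithmetic avoids materializing the big number at all. An illustrative sketch (using the last ten digits of 2^1000 as a stand-in example, not whatever problem Matthew has in mind):

```java
import java.math.BigInteger;

public class LastDigits {
    // Last k decimal digits of base^exp, computed mod 10^k so the
    // full (possibly huge) power is never built in memory.
    static BigInteger lastDigits(long base, int exp, int k) {
        BigInteger mod = BigInteger.TEN.pow(k);
        return BigInteger.valueOf(base).modPow(BigInteger.valueOf(exp), mod);
    }

    public static void main(String[] args) {
        System.out.println(lastDigits(2, 1000, 10));
    }
}
```

`modPow` reduces mod 10^k after every internal multiplication, so the intermediate values never exceed k digits.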

Stephan van Hulst

Saloon Keeper

Posts: 6966

109

posted 5 years ago

I think both Matthew and I were confusing this with another question, where you only need to print the first couple of digits of a large result. Problem 16 is of course a different matter.

Matthew Brown

Bartender

Posts: 4568

9

posted 5 years ago

Dennis, I too thought that Problem 16 had a solution other than the "compute the result and sum the digits" one. I've actually pondered over it here before.

Since that time, I've concluded that unless you apply some really clever math tricks (and math has many tricks to pull; it is even possible to compute the last few hundred digits of Graham's number), there is no solution that does not require calculating the number. There certainly is a relation between the digit sum of a number and that of its double, but to compute it precisely you need to know which digits "overflow" into the adjoining position, and for that you generally need to know the positions of those digits, which in the end amounts to knowing the number in its full numerical form.

So the only trick I could think of was to reduce the number of multiplications by using powers of two to perform them, but that still results in the number being fully calculated. I haven't solved this one yet, though, as I'm still trying to come up with something better, but it probably won't happen.

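The "powers of two to perform the multiplications" idea is exponentiation by squaring, which needs only O(log exp) multiplications instead of exp - 1. A sketch in Java (big-integer libraries typically do something similar internally in `pow`, so this is illustrative only):

```java
import java.math.BigInteger;

public class PowBySquaring {
    // Exponentiation by squaring: walk the exponent's bits, squaring
    // the base each step and multiplying it in when the bit is set.
    static BigInteger pow(BigInteger base, int exp) {
        BigInteger result = BigInteger.ONE;
        while (exp > 0) {
            if ((exp & 1) == 1) result = result.multiply(base); // odd bit set
            base = base.multiply(base); // square for the next bit
            exp >>= 1;
        }
        return result;
    }

    public static void main(String[] args) {
        // Still produces the full number; only the multiplication count shrinks.
        System.out.println(pow(BigInteger.valueOf(2), 20)); // prints 1048576
    }
}
```

As the post notes, this speeds up the computation but does not avoid it: the final product is still the full 300+ digit number.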