New strategy for avoiding OutOfMemoryError?

 
Mike Fourier
Greenhorn
Posts: 25
I've got a program that reads in 400k database rows and, by the end, has inserted 400k new rows into a second table. For each row it reads, it does all sorts of lookups and calculations, and inserts one row into the second table.

Previously, the only logging in the method was "method begins" and "method ends". Somewhere between those two, I was getting an OutOfMemoryError (OOME).

I added a local counter variable named 'records' and, inside the "while (rs.next())" processing loop, a progress-logging check.
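The posted snippet itself didn't survive, so here is a minimal sketch of what that loop and check plausibly looked like; the SQL, table and column names, and the System.out-style logging are my assumptions, only the 'records' counter and the "while (rs.next())" loop come from the description above.

import java.math.BigDecimal;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class MigrationSketch {

    // Reads every row from a source table, does the per-row work, inserts one
    // row into a second table, and logs progress every 10,000 records.
    static void migrate(Connection conn) throws SQLException {
        Statement select = conn.createStatement();
        ResultSet rs = select.executeQuery("SELECT id, amount FROM source_table");           // hypothetical query
        PreparedStatement insert =
                conn.prepareStatement("INSERT INTO target_table (id, total) VALUES (?, ?)"); // hypothetical insert
        int records = 0;
        while (rs.next()) {
            // ... all sorts of lookups/calculations on the current row ...
            BigDecimal total = rs.getBigDecimal("amount");
            insert.setLong(1, rs.getLong("id"));
            insert.setBigDecimal(2, total);
            insert.executeUpdate();
            records++;
            if (records % 10000 == 0) {                        // the modulus later switched to 1000
                System.out.println("processed " + records + " records");
            }
        }
        insert.close();
        rs.close();
        select.close();
    }
}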


After about 2 minutes, somewhere between records 0 and 10,000, I got the OOME. So I thought I'd increase the granularity of the logging and switched the mod to "% 1000". After waiting about 4 minutes, it was on to record 50,000... and no OOMEs.

Perhaps I didn't wait long enough. Perhaps if I had waited longer, the OOME that was *going* to happen would have happened.

But nevertheless, in three trials of switching the code back and forth between 1k and 10k, it *always* threw an OOME fairly quickly with 10k, and never threw one at 1k (within the limits of my patience during the test).

So that's a bit strange, right?
 
Rancher
Posts: 4804
No, it's not strange; it's expected. You've run out of memory.

Have you raised the JVM's maximum heap size? It starts out fairly small.

Raise it to something like half the memory in your computer.
If your computer doesn't have enough, use a smarter algorithm that doesn't load everything into memory at once.
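For reference, a brief sketch of both suggestions; the flag value, class name, and query are illustrative only, not anything from the poster's setup:

// Raise the maximum heap when launching, e.g.:
//   java -Xmx1024m com.example.RowMigration      (class name is hypothetical)

// Or keep the working set small: ask the driver to fetch the result set in
// small batches instead of buffering all 400k rows up front. Standard JDBC
// exposes this as a hint on the statement (driver support varies):
Statement select = conn.createStatement();
select.setFetchSize(100);                                         // fetch-size hint
ResultSet rs = select.executeQuery("SELECT * FROM source_table"); // hypothetical query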
 
Mike Fourier
Greenhorn
Posts: 25
I'm going to have to take more care in how I post questions, I guess.


With the logging check at "% 10000":
The program threw an OOME before 'records' ever reached 10k (10 thousand records).

With the logging check at "% 1000":
The program ran to more than 50 thousand records without any OOME.
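(Neither snippet survived in the post; going by the description, the only line that differed would have been something like:)

// variant that hit the OOME before 'records' reached 10,000:
if (records % 10000 == 0) { System.out.println("processed " + records + " records"); }

// variant that ran past 50,000 records without trouble:
if (records % 1000 == 0) { System.out.println("processed " + records + " records"); }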

Note, the only thing that changed was how often I logged. This made no sense to me, so I switched back and forth between 10,000 and 1,000 three times. It was consistent: the application did not make it to 10k records when logging every 10k records, but made it past 50k records when logging every 1k.

So I ask again: what might explain that? Or did I not wait long enough? Was it just a fluke?

Look, I realize I'm going to have to figure out what *exactly* is going on myself. But I wanted to know if anyone has seen something this "strange" before, or has tips on where to start looking.
[ May 27, 2008: Message edited by: Mike Fourier ]
 
author
Posts: 14112
The only explanation that comes to my mind is that there are actually two or more threads involved: one creating objects on the heap, and one making them eligible for garbage collection. More frequent I/O in the former thread might then give the latter thread more time to free up memory. Or something like that.
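To make that concrete, here is a minimal and entirely speculative sketch of that kind of two-thread arrangement (nothing in the posts confirms the real program looks like this): a producer filling an unbounded queue faster than a consumer drains it grows the heap, and extra I/O that slows the producer gives the consumer time to catch up.

import java.util.concurrent.LinkedBlockingQueue;

public class ProducerConsumerSketch {
    public static void main(String[] args) throws InterruptedException {
        // Unbounded queue: if the producer outpaces the consumer, heap use keeps growing.
        final LinkedBlockingQueue<int[]> queue = new LinkedBlockingQueue<int[]>();

        Thread producer = new Thread(new Runnable() {
            public void run() {
                for (int i = 1; i <= 400000; i++) {
                    queue.add(new int[1024]);                    // "creating objects on the heap"
                    if (i % 1000 == 0) {
                        System.out.println("produced " + i);     // the extra I/O that slows this thread down
                    }
                }
            }
        });

        Thread consumer = new Thread(new Runnable() {
            public void run() {
                try {
                    while (true) {
                        queue.take();     // dropping the reference: "making them eligible for garbage collection"
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();          // exit when interrupted
                }
            }
        });

        producer.start();
        consumer.start();
        producer.join();
        consumer.interrupt();
    }
}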

Can you show us more of the code?
 