
Too many open files problem

Hi All,

We are facing a problem on RHEL 5.2 with "Too many open files" in our application logs:
Exception at accept java.net.SocketException: Too many open files

This is what we usually do to fix it on RHEL 4u7:
1. Edit /etc/security/limits.conf and add the lines:
* soft nofile 1024
* hard nofile 65536

2. Edit /etc/pam.d/login, adding the line:
session required /lib/security/pam_limits.so

3. The system-wide file descriptor limit is set in /proc/sys/fs/file-max. The following command will increase that limit to 65536:
echo 65536 > /proc/sys/fs/file-max
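As a cross-check on the steps above, the JDK-specific com.sun.management API (available on Sun/Oracle and OpenJDK JVMs on Unix; the hypothetical FdCount class name is mine) can report the descriptor limit that the running JVM actually sees, which may differ from what the shell reports:

```java
import java.lang.management.ManagementFactory;
import com.sun.management.UnixOperatingSystemMXBean;

public class FdCount {
    // Returns the per-process file descriptor limit visible to this JVM.
    public static long maxFds() {
        UnixOperatingSystemMXBean os = (UnixOperatingSystemMXBean)
                ManagementFactory.getOperatingSystemMXBean();
        return os.getMaxFileDescriptorCount();
    }

    public static void main(String[] args) {
        System.out.println("max FDs visible to this JVM: " + maxFds());
    }
}
```

If this prints 1024 despite the limits.conf changes, the new limit never reached the process that launched the JVM.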

These steps usually clear the error on RHEL 4u7, but not on RHEL 5.2: even after doing all of the above we are still facing the "Too many open files" issue.
Please help!


Thanks,
Ricky

Can you maybe not use so many file descriptors? Are you closing all your sockets, streams, files, etc., as soon as possible after you no longer need them?
Ernest Friedman-Hill wrote: Can you maybe not use so many file descriptors? Are you closing all your sockets, streams, files, etc., as soon as possible after you no longer need them?



Yes, I do.

Now I have a temporary fix: whenever I start the server, the startup script runs ulimit -n 65536.

I used the program below to check the open-file limit, and saw that although ulimit -aH shows 65536, the process could open only about 1020 files before hitting the error. Once I ran ulimit -n 65536 first, the same program opened 65535 files without a problem. I don't know why this weird behaviour happens, but for now I have added the ulimit call to my startup script.
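A probe like the one described - open streams on one file until the OS refuses, and count how many succeeded - can be sketched as follows (the FdLimitProbe class name and cap parameter are my own, not the poster's original code):

```java
import java.io.FileInputStream;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class FdLimitProbe {
    // Opens streams on the same file until the OS refuses (or cap is hit),
    // then closes them all and reports how many were opened.
    public static int probe(String path, int cap) {
        List<FileInputStream> streams = new ArrayList<FileInputStream>();
        try {
            while (streams.size() < cap) {
                streams.add(new FileInputStream(path));
            }
        } catch (IOException e) {
            // "Too many open files" lands here once the per-process limit is hit
        } finally {
            for (FileInputStream in : streams) {
                try { in.close(); } catch (IOException ignored) { }
            }
        }
        return streams.size();
    }

    public static void main(String[] args) throws IOException {
        int opened = probe(args.length > 0 ? args[0] : "/etc/hosts", 100000);
        System.out.println("Opened " + opened + " descriptors before failing");
    }
}
```

Running this with and without ulimit -n 65536 in the launching shell should reproduce the 1020-vs-65535 difference the poster saw (the soft limit of 1024, minus stdin/stdout/stderr and the JVM's own descriptors).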




Thanks,
-Ricky
I tried to answer a couple of days back and the server sent it to Coventry. Let me try again.

My ex-boss and I had been persuaded that Java's automatic garbage collection meant you didn't need to explicitly close files. After doing a thorough post-mortem on a frequent blow-up, we were horrified to learn otherwise. I had to go back in and explicitly close things - and add "finally" clauses so that open files wouldn't leak when exceptions were thrown.
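The close-in-finally pattern described here can be sketched like this (SafeRead is a hypothetical helper; on Java 7+ a try-with-resources block achieves the same guarantee more concisely):

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class SafeRead {
    // Reads the first line of a file, guaranteeing the descriptor is
    // released even if readLine() throws.
    public static String firstLine(String path) throws IOException {
        BufferedReader in = new BufferedReader(new FileReader(path));
        try {
            return in.readLine();
        } finally {
            in.close();  // runs whether readLine() succeeded or threw
        }
    }
}
```

Without the finally clause, an exception between open and close leaks one descriptor per failure, which is exactly how long-running servers creep up to the "Too many open files" limit.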

There's some confusion about what to close and when, however, since in Java you can build one file construct on top of another - especially with streams. In the case of a JDBC Statement, the Statement's close() will cascade to all of its ResultSets, so I normally only close a ResultSet when I need to free up resources while still keeping the Statement. In the case of things like a PrintWriter built on a FileOutputStream, I never really worked out the rules, other than that it's always a good idea to explicitly flush.
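On the PrintWriter-over-FileOutputStream question: in the standard library, closing the outermost wrapper flushes it and then closes the stream underneath it, so one close() on the wrapper covers both layers. A small sketch (the WrappedClose class is hypothetical):

```java
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.PrintWriter;

public class WrappedClose {
    // Closing the outermost PrintWriter flushes its buffer and closes the
    // FileOutputStream underneath - one close() releases the descriptor.
    public static void write(String path, String text) throws IOException {
        PrintWriter out = new PrintWriter(new FileOutputStream(path));
        try {
            out.println(text);
        } finally {
            out.close();
        }
    }
}
```

The explicit flush advice still matters when you keep the stream open between writes, since PrintWriter buffers and swallows IOExceptions (check them via checkError()).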

Keeping lots of files/network connections open is expensive. It's not something you should do frivolously, and if you intend to do so for performance reasons, you should be very careful that you're not causing more problems than you solve. There are reasons to do it - like when you're a database. But it's not something to do without careful planning.

