Nitin Dubey

Ranch Hand
since Oct 30, 2000

Recent posts by Nitin Dubey

If you are not updating the state of the Singleton object, then there shouldn't be a problem with multiple instances across different nodes, assuming that all the nodes have identical Singleton objects in all cases. However, if the state of the Singleton object changes, there are multiple ways to solve it. Each one has its own pros and cons, and it will depend on what you have and how you can use it. Some might be overkill, some may be really helpful.

-- If you have money, opt for solutions like Terracotta.

-- Use a distributed cache solution like memcached, or build your own. The singleton objects are kept outside the JVM in another JVM/process and used from there. This solution can boost the performance of the application, since a lot of data can be cached outside the JVM.

-- Serialize the state in a database and fetch it from the DB every time you refer to the object. Preferred if you have little time, control over the application, and lightweight objects.

-- If you have messaging middleware, every JVM may keep its own copy; whenever anything changes, publish the new state/delta over a topic and update the other singleton objects. Publishing the message will be in the same transaction, but consumption is out of your control. If you are okay with this delay, then this approach is good.
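The shared-store options above (distributed cache / database) can be sketched roughly as follows. This is a minimal illustration, not from the original post: a `ConcurrentHashMap` stands in for the external store; in a real cluster every node would read and write the same memcached/database instance instead.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal sketch of the "keep singleton state in a shared store" approach.
// The ConcurrentHashMap is a stand-in for an external cache or database;
// the point is that every read and write goes through the store, so all
// nodes observe the same state.
public class SharedStateSingleton {
    private static final SharedStateSingleton INSTANCE = new SharedStateSingleton();

    // Stand-in for memcached / a database table keyed by name.
    private final Map<String, String> store = new ConcurrentHashMap<String, String>();

    private SharedStateSingleton() {}

    public static SharedStateSingleton getInstance() { return INSTANCE; }

    // Every read fetches from the store rather than a cached field.
    public String get(String key) { return store.get(key); }

    public void put(String key, String value) { store.put(key, value); }

    public static void main(String[] args) {
        SharedStateSingleton s = SharedStateSingleton.getInstance();
        s.put("featureFlag", "on");
        System.out.println(s.get("featureFlag")); // prints "on"
    }
}
```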
6 years ago
-- Yes, catching RuntimeException is a better idea; thanks for pointing it out, Jeff.
-- I can catch RuntimeException and throw a subtype of RuntimeException (application-specific) that records whether it was logged or not. Hence, the multiple-logging issue should be resolved.
-- I would print the objects in question when the exception occurs. Hence, the exception log will contain the entire serialized object in XML form, so that one can rebuild the object if required. In all cases one needs to reproduce the problem first, and for that one needs data. At times it becomes difficult to debug the problem if you are not sure about the data that was either passed by the user or was being processed at that very moment.
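The log-once wrapper in the second point could look something like this. The class and method names here are illustrative, not from the original post:

```java
// Sketch of an application-specific unchecked exception that remembers
// whether it has already been logged, so outer layers do not log the
// same failure twice.
public class AppRuntimeException extends RuntimeException {
    private volatile boolean logged;

    public AppRuntimeException(String message, Throwable cause) {
        super(message, cause);
    }

    // Returns true the first time only, so callers can log exactly once.
    public boolean markLogged() {
        if (logged) return false;
        logged = true;
        return true;
    }

    // Example handler: wrap any RuntimeException, log once, rethrow.
    static void handle(RuntimeException e) {
        AppRuntimeException wrapped = (e instanceof AppRuntimeException)
                ? (AppRuntimeException) e
                : new AppRuntimeException("processing failed", e);
        if (wrapped.markLogged()) {
            System.err.println("logged once: " + wrapped.getMessage());
        }
        throw wrapped;
    }
}
```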

Does it sound good enough?
6 years ago
I have been thinking about capturing the runtime data that leads to exceptions. Exceptions like divide-by-zero, array-index-out-of-bounds, and null-pointer exceptions occur in practical scenarios, and one has to go back to the log files, find the data, and then start working on it.

However, I feel one can write code in the following manner and identify the actual problem with the data, which will help in identifying the root cause easily. Though catching Exception just for logging is reported as an anti-pattern, I am of the opinion that this will eventually help in root-cause identification.
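A minimal sketch of this pattern (names are illustrative; the XML serialization mentioned earlier is replaced by a plain `toString` dump for brevity): catch the runtime exception, log the exact data being processed, then rethrow.

```java
import java.util.Arrays;

// Sketch: when a runtime exception occurs, record the input data alongside
// it so the failure can be reproduced later, then rethrow unchanged.
public class DataCapturingExample {
    static int divide(int[] values, int divisor) {
        try {
            return values[0] / divisor;
        } catch (RuntimeException e) { // ArithmeticException, AIOOBE, NPE, ...
            // Capture the exact data being processed at that moment.
            System.err.println("Failed on input " + Arrays.toString(values)
                    + " with divisor " + divisor + ": " + e);
            throw e;
        }
    }

    public static void main(String[] args) {
        System.out.println(divide(new int[] {10}, 2)); // prints 5
    }
}
```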



6 years ago
I have around 900,000 (9 lakh) files in a folder that we need to process. When the count reaches around 9 lakh, the listFiles() method in the File class throws an OutOfMemoryError. Is there any better mechanism to handle this?

I can obviously apply a filter to reduce the list to some extent, but that condition also has a possibility of blowing up some day. Is there any class that iterates over a huge collection while avoiding OutOfMemoryError?
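On Java 7 and later, `java.nio.file.Files.newDirectoryStream` iterates a directory lazily instead of materializing the whole listing in memory the way `File.listFiles()` does, so very large directories can be processed one entry at a time. A sketch (the counting is just a placeholder for per-file processing):

```java
import java.io.IOException;
import java.nio.file.DirectoryStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Lazily walk a huge directory without holding the full listing in memory.
public class HugeDirectoryWalk {
    public static long countFiles(Path dir) throws IOException {
        long count = 0;
        try (DirectoryStream<Path> stream = Files.newDirectoryStream(dir)) {
            for (Path entry : stream) {
                count++; // process one entry at a time here
            }
        }
        return count;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(countFiles(Paths.get(".")));
    }
}
```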

Cheers.

- Nitin
9 years ago
Probably it is for a good reason. Had such a tool existed, it might have resulted in some bad code.

The problem was identified and fixed. We got a patch from the vendor.
9 years ago
Hello,

I have a Java application that keeps running for months at a stretch. Recent updates to the application with new third-party APIs have caused it to run out of memory after 10-15 days of processing. It is obvious that there are memory leaks. I ran PMD and then profiled the application, and found that most of the memory leaks come from third-party libraries.

To solve the problem I can ask the vendors for fixes, but that takes a long time.

Is there any method in the Java world to keep such things under control? I can think of something like transaction savepoints, where we take a snapshot of the JVM memory and then roll back to the same state after, say, 10 days. This cannot be done through the Java API, since it has no direct control over memory. It is also evident that many people run into this kind of problem; how do they overcome it?

Your suggestions are welcome.

Cheers,


Nitin
9 years ago
Hello,

I am running Apache 2.2.8 with Tomcat 6.0.14.

Is there any method to compress responses sent through the AJP connector? The default implementation of the AJP connector does not seem to support compression as of now. I also need to ensure that all responses are compressed, but NOT those below a particular response size (say 200 KB).

I have gone through the net and came up with the following options:

1. Compress responses sent from the AJP connector. (I couldn't find anything; the documentation for the AJP connector does not mention compression anywhere.)
2. Compress responses with Apache 2.2.8 modules (e.g. mod_deflate).
3. Apply servlet filters in the web application. (Can we identify whether a response is to be compressed or not based on its size?)
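As a point of comparison for options 2 and 3: Tomcat's plain HTTP connector (though not the AJP connector) does expose compression attributes with a minimum-size threshold, while traffic that must stay on AJP is usually compressed in Apache itself via mod_deflate. A hypothetical `server.xml` fragment, assuming Tomcat 6 attribute names:

```xml
<!-- Illustrative snippet: Tomcat's HTTP/1.1 connector supports gzip
     compression with a size threshold (here 200 KB = 204800 bytes).
     The AJP connector has no equivalent attributes. -->
<Connector port="8080" protocol="HTTP/1.1"
           compression="on"
           compressionMinSize="204800"
           compressableMimeType="text/html,text/xml,text/plain"/>
```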


Cheers,

Nitin
9 years ago
I do not have a specific reason, and I do not doubt Java's capability. I just thought there might be a better way of processing bigger files (e.g., a better choice of classes).
10 years ago
Hi,

What is the best way (with respect to performance) to read a 500 MB ASCII file in JSE 1.5?

I am aware of the basic Java API but haven't worked much on IO.
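For line-oriented ASCII data, a buffered line-by-line read keeps memory usage flat regardless of file size, since only one line is held at a time. A sketch that works on JSE 1.5 (hence the explicit `finally` instead of try-with-resources; the line counting is a placeholder for real processing):

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

// Read a large ASCII file one line at a time with a 64 KB buffer.
public class BigFileReader {
    public static long countLines(String path) throws IOException {
        long lines = 0;
        BufferedReader reader = new BufferedReader(new FileReader(path), 1 << 16);
        try {
            while (reader.readLine() != null) {
                lines++; // process each line here
            }
        } finally {
            reader.close(); // JSE 1.5 has no try-with-resources
        }
        return lines;
    }
}
```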
10 years ago
Thanks, Ulf and Greg. It really helped. I believed I only had to change "DES" to "DESede", but unfortunately that did not work for me. Following are some doubts:

#1. What is the difference between the following two pieces of code for generating keys?
This one worked for me:


and (the one that did not work)



Exception:
Exception in thread "main" java.security.spec.InvalidKeySpecException: Inappropriate key specification
at com.sun.crypto.provider.DESedeKeyFactory.engineGenerateSecret(DashoA12275)
at javax.crypto.SecretKeyFactory.generateSecret(DashoA12275)
at com.sungard.crypto.tripledes.example3.TestCipherTripleDES.main(TestCipherTripleDES.java:39)

Do I have to get some implementation of this algorithm, or is the specified key wrong?
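For reference, the `InvalidKeySpecException` above is what the DESede factory throws when handed the wrong key spec (e.g. an 8-byte `DESKeySpec`); `DESedeKeySpec` requires 24 bytes of key material. A sketch of the working combination, with all-zero demo key material as an assumption:

```java
import javax.crypto.SecretKey;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.DESedeKeySpec;

// Build a Triple DES (DESede) key from 24 bytes of key material using the
// matching DESedeKeySpec, rather than the single-DES DESKeySpec.
public class TripleDesKey {
    public static SecretKey buildKey(byte[] keyMaterial24Bytes) throws Exception {
        DESedeKeySpec spec = new DESedeKeySpec(keyMaterial24Bytes);
        return SecretKeyFactory.getInstance("DESede").generateSecret(spec);
    }

    public static void main(String[] args) throws Exception {
        byte[] material = new byte[24]; // demo key material only, all zeros
        System.out.println(buildKey(material).getAlgorithm()); // prints DESede
    }
}
```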

#2. I have to use an initialization vector with Triple DES. The vector will be a String like "111222333AAADDFF" (exact length). I modified some code accordingly, but it is throwing an exception related to byte length:


Exception:
Exception in thread "main" java.security.InvalidAlgorithmParameterException: Wrong IV length: must be 8 bytes long
at com.sun.crypto.provider.SunJCE_h.a(DashoA12275)
at com.sun.crypto.provider.DESedeCipher.engineInit(DashoA12275)
at javax.crypto.Cipher.a(DashoA12275)
at javax.crypto.Cipher.a(DashoA12275)
at javax.crypto.Cipher.init(DashoA12275)
at javax.crypto.Cipher.init(DashoA12275)
at com.sungard.crypto.tripledes.example2.SimpleTripleDESVariant.main(SimpleTripleDESVariant.java:32)

It looks obvious that the length of the init vector is more than 8 bytes. I just want to be sure whether my understanding is correct.
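That understanding matches the error: DESede is a 64-bit block cipher, so the IV must be exactly 8 bytes, and a 16-character string like "111222333AAADDFF" is 16 bytes. Either the IV string has to be 8 characters, or the 16 hex characters have to be decoded into 8 raw bytes first. A sketch with an 8-byte IV and all-zero demo key material (both assumptions):

```java
import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.DESedeKeySpec;
import javax.crypto.spec.IvParameterSpec;

// Initialize a DESede/CBC cipher with an exactly-8-byte IV.
public class TripleDesIv {
    public static Cipher initCipher(byte[] key24, byte[] iv8) throws Exception {
        SecretKey key = SecretKeyFactory.getInstance("DESede")
                .generateSecret(new DESedeKeySpec(key24));
        Cipher cipher = Cipher.getInstance("DESede/CBC/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new IvParameterSpec(iv8));
        return cipher;
    }

    public static void main(String[] args) throws Exception {
        byte[] iv = "12345678".getBytes("US-ASCII"); // exactly 8 bytes
        System.out.println(initCipher(new byte[24], iv).getBlockSize()); // prints 8
    }
}
```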
10 years ago
I have googled for this information but could not find a specific example using 3 different keys and an initialization vector. Can anybody point me to a good Java example?
10 years ago
Thanks. That means I can use immutable HashSet objects in a multi-threaded environment without any risk.

Well, I now understand; it is very logical. The exception thrown in such cases is ConcurrentModificationException, which itself means it is thrown only when there is a modification.
Hi,

Following is the code I have written. Are there any chances of a ConcurrentModificationException arising if multiple threads call the isExcluded method? There is no synchronization in the code below; once this data is loaded, it will not be changed.

- Is the object locked while reading as well?
- I could also pack these things into a ConcurrentHashMap; what would work best?
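A hypothetical reconstruction of the pattern described above (class and field names are my own, not from the original code): the set is populated once at startup and never modified afterwards. Safe publication through a final field lets many threads call `isExcluded()` concurrently without locking, and wrapping the set as unmodifiable makes the read-only intent explicit.

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

// Read-only exclusion list: built once, then shared across threads.
public class ExclusionList {
    private final Set<String> excluded;

    public ExclusionList(Set<String> entries) {
        // Defensive copy, then freeze; no writer exists after construction.
        this.excluded = Collections.unmodifiableSet(new HashSet<String>(entries));
    }

    public boolean isExcluded(String name) {
        return excluded.contains(name); // pure read, no locking needed
    }
}
```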



I am trying to find out the best way to write this kind of code.

Any help is appreciated.
Thanks for the help, Joe. I will be using this in a web application. I am planning to put this in the startup and serialize the SecretKey so that it can be retrieved again. This is because in our application the users will have the URLs all the time, even if the server is down.

I hope this will work.
10 years ago