Mano Krrish

Greenhorn
since Sep 10, 2010

Recent posts by Mano Krrish

Hi All,

I am eagerly looking for a solution to this problem.

I am creating an HttpURLConnection to a URL and reading a feed from it after getting a success response code. The feed I am using can drop when there is some issue. Is there any way to identify whether the connection is still active, using the same connection object I created? I tried getting the response code of the connection, but it always returns 200 OK, even after the stream/feed has gone off. Could anyone help me out with this?

I have given the snippet below; please let me know if I need to furnish any more details.




Even the readLine() method is still able to return a line after the connection has gone off, since it reads from a buffer, so I am not able to detect the break and reconnect.
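Since the original snippet is not preserved in this archive, here is a minimal illustrative sketch of the kind of read loop being described; the URL and the retry delay are placeholders. Typically a dead HTTP stream shows up as readLine() returning null (clean end of stream) or as an IOException on the next read, not as a changed getResponseCode(), so the reconnect decision is usually driven by those two conditions.

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class FeedReader {
    public static void main(String[] args) throws Exception {
        while (true) {                       // reconnect loop
            HttpURLConnection conn =
                    (HttpURLConnection) new URL("http://example.com/feed").openConnection();
            if (conn.getResponseCode() != HttpURLConnection.HTTP_OK) {
                Thread.sleep(5000);          // back off before retrying
                continue;
            }
            try (BufferedReader in =
                         new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
                String line;
                // readLine() == null means the stream ended (the feed "went off");
                // an abrupt break usually surfaces as an IOException instead.
                while ((line = in.readLine()) != null) {
                    System.out.println(line);
                }
            } catch (IOException broken) {
                System.err.println("Stream dropped, will reconnect: " + broken.getMessage());
            } finally {
                conn.disconnect();
            }
            Thread.sleep(5000);              // wait before reconnecting
        }
    }
}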

Thanks in advance!

Regards
Manoj
12 years ago
All I need right now is the reader of that input stream, but at runtime I am getting this exception just before the getInputStream() call.

Please help me.....!!!
13 years ago
Thanks for the reply Rob,

Yep, the definition of the 422 error response is rather odd, and I couldn't figure it out. Could you please confirm whether the code I am using is correct?
13 years ago
Hi all,

I have a JSON feed which I am trying to fetch using URLConnection. I first requested the feed with authentication parameters (username & password) and captured the cookies from that call, then requested the feed data from the server using those cookies, but I received a 422 HTTP error response. Could anyone please help with this, and correct me if I am wrong in any other aspect of how I am using the connection?

Thanks in advance!!

I have given below the error console and the code snippet I am using for this:



java.io.IOException: Server returned HTTP response code: 422 for URL:

    at sun.net.www.protocol.http.HttpURLConnection.getInputStream(Unknown Source)
    at sun.net.www.protocol.https.HttpsURLConnectionImpl.getInputStream(Unknown Source)
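The posted snippet isn't preserved above, so as a point of comparison here is a minimal sketch of the cookie-passing flow described in the question; the login URL, feed URL, and form parameter names are placeholders, not the actual service. A 422 response generally means the server understood the request but rejected its content, so the exact body and headers the API expects are worth double-checking.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.List;

public class CookieFeedClient {
    public static void main(String[] args) throws Exception {
        // 1. Authenticate and capture the Set-Cookie headers (URL and parameters are placeholders).
        HttpURLConnection login =
                (HttpURLConnection) new URL("https://example.com/login").openConnection();
        login.setRequestMethod("POST");
        login.setDoOutput(true);
        byte[] body = "username=user&password=secret".getBytes(StandardCharsets.UTF_8);
        try (OutputStream out = login.getOutputStream()) {
            out.write(body);
        }
        List<String> cookies = login.getHeaderFields().get("Set-Cookie");

        // 2. Request the JSON feed, sending the captured cookies back.
        HttpURLConnection feed =
                (HttpURLConnection) new URL("https://example.com/feed.json").openConnection();
        feed.setRequestProperty("Accept", "application/json");
        if (cookies != null) {
            for (String cookie : cookies) {
                // Only the name=value part before the first ';' should be echoed back.
                feed.addRequestProperty("Cookie", cookie.split(";", 2)[0]);
            }
        }

        System.out.println("Response code: " + feed.getResponseCode());
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(feed.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}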

13 years ago
Oh... thanks, Steve. I really made a silly mistake: I was not catching a NullPointerException, which led to all of this. Thanks for making me aware of it.
Hi All,
I am using newFixedThreadPool() from the Executors class. When I fix the pool size to 2, the last two threads do not complete their work; they just stay hung. I suspect my usage of Executors. Could anyone please verify it, and also tell me whether I am using the shutdown() method properly?
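The snippet referred to here isn't preserved in this archive; for comparison, a minimal sketch of the usual newFixedThreadPool() pattern, with hypothetical throwaway tasks: submit everything, then call shutdown() and awaitTermination() so the main thread waits for the queued tasks instead of cutting them off.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class FixedPoolDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(2);

        // Submit all tasks first; the pool runs at most two at a time.
        for (int i = 0; i < 10; i++) {
            final int id = i;
            pool.submit(() ->
                    System.out.println("Task " + id + " on " + Thread.currentThread().getName()));
        }

        // shutdown() stops accepting new tasks but lets queued ones finish.
        pool.shutdown();

        // Wait for everything already submitted to complete.
        if (!pool.awaitTermination(1, TimeUnit.MINUTES)) {
            pool.shutdownNow(); // give up and interrupt stragglers
        }
    }
}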


Thanks Martin,

I used this, and my problem is resolved. Now the whole process runs at almost the same speed right up to the end.

Thanks once again for the help......
Yes, I think this can resolve the problem. Would this queue implementation work better if it is used from the start of the process, or only from the point where the trailing threads still have a lot of work to do? Which would be more efficient? Can you also please explain the concept, usage, and limitations of producer-consumer with threads?
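For reference, a minimal producer-consumer sketch using a BlockingQueue (the names and the poison-pill convention here are illustrative, not from this thread): the producer puts work items into a bounded queue and the consumer takes them as it becomes free, which keeps every worker busy however unevenly the work is distributed.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ProducerConsumerDemo {
    public static void main(String[] args) {
        // Bounded queue: producers block when it is full, consumers block when it is empty.
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(10);

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 20; i++) {
                    queue.put("url-" + i);          // blocks if the queue is full
                }
                queue.put("STOP");                  // poison pill to end the consumer
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                String item;
                while (!(item = queue.take()).equals("STOP")) { // blocks if the queue is empty
                    System.out.println("processing " + item);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
    }
}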

Thanks
Thanks for the response Henry...

Every page download takes approximately 4 seconds, from the first download right through to the end of the process, and every site takes about the same time. What happens is that towards the end of the process only, say, 3 threads are still running, so only 3 pages are downloaded every 4 seconds. At the start of the process, when approximately 50 threads are running, obviously many more pages get downloaded in the same interval.
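(Working through those numbers as a rough estimate: at about 4 seconds per page, 50 concurrent threads give roughly 50 / 4 = 12.5 page downloads per second, while 3 remaining threads give only about 0.75 per second, which is why the tail of the run ends up dominating the total time.)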
Hi

I am currently running a Java application that starts 50 threads at a time. Each thread performs numerous page downloads from the web, so at the start (when all 50 threads are running) the process is very fast. When nearly 40 threads have finished their work and only the remaining threads are still running, it takes a long time, which delays the finish of the whole process. I think I am wasting resources towards the end of the process. Can anyone please suggest an idea to overcome this?


Thanks in advance.....
I have now tried profiling my application. Profiling is very new to me; I used HPROF for it, and I think it can be used for optimization far more than I have understood so far. So I am posting the heap dump from the profiled run here. Could anyone help with this, and also suggest where optimization is required?



A few important stack traces are:

Thanks for your response, Wouter Oet.
Anyway, I am already using multi-threading in my application, though I am not sure about its efficiency. Here I am posting a sample of the threading approach I am using. Could you or anyone else give me some guidance?



Creating the connection in between... and then...



"Outlet_ID" is the unique id for each website
startUrl" is the home page of the website



And then, the removal of a completed thread and the starting of the next one is done as shown below...




What I am doing in the above code is creating a thread object for every website and queuing them all in a linked list, then starting 15 threads from there. If any of the threads finishes its work, I remove that thread from the queue and start the next one at the head of the queue.
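Since the original snippets aren't preserved in this archive, here is a minimal sketch of an equivalent setup using a fixed-size pool, with a hypothetical CrawlerTask standing in for the per-website thread class: a pool of 15 does the queuing and the "start the next one when a slot frees up" bookkeeping automatically, so the manual linked-list management can go away.

import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class CrawlerLauncher {

    // Hypothetical task: one crawl job per website.
    static class CrawlerTask implements Runnable {
        private final String outletId;
        private final String startUrl;

        CrawlerTask(String outletId, String startUrl) {
            this.outletId = outletId;
            this.startUrl = startUrl;
        }

        @Override
        public void run() {
            // Crawl pages starting from startUrl...
            System.out.println("Crawling " + outletId + " from " + startUrl);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        List<String[]> sites = Arrays.asList(
                new String[] {"outlet-1", "http://example.com"},
                new String[] {"outlet-2", "http://example.org"});

        // The pool keeps at most 15 crawls running; the rest wait in its internal queue,
        // so no manual removal of finished threads is needed.
        ExecutorService pool = Executors.newFixedThreadPool(15);
        for (String[] site : sites) {
            pool.submit(new CrawlerTask(site[0], site[1]));
        }

        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.HOURS);
    }
}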

Please let me know whether any change to this threading would make the process more efficient.
Hi, please help me.

This is a problem I have been facing for a long time. I am using a crawler program, a standalone Java program, to crawl the web. Currently I am crawling nearly 40 websites with it. It works fine until it has crawled around 25 websites; after that it gradually gets slower and finally takes a very long time to complete. I also checked the JVM memory, and nearly 70 to 90 percent of it is always free.

Because of this problem, I can't even increase the number of websites to crawl.

Can anyone please suggest a solution for this?

Thanks in advance........