Igor Santos

Greenhorn
since Apr 02, 2012

Recent posts by Igor Santos

If that were the problem, then I'd have a hard time executing the jar manually via the command line as well, which I don't; running it that way works fine. I think it's something related to the path where the jars are placed, but I still can't make it work.
12 years ago
Here's the situation: I have 10 projects that need 2 libraries to run: Jsoup (an HTML parser API) and Utils (a framework developed by me).
I want to update that Utils project from time to time, and not have to recompile all 10 projects every time that happens.
So I started using Ant, and after a great amount of time figured this much out:



Problem is... When I try to run it, I get the following error:



RoboBean is a class inside the referenced package, and when I open the jar, I can see that it's in its right place.
I tried exporting the runnable jar with Eclipse, specifying "Copy required libraries into a sub-folder next to the generated JAR",
though that would build 10 folders for 10 projects, which is not what I'm aiming for here...

After doing that, I compared both MANIFEST.MF files... To my surprise, here's what differs:


This is the one generated by Eclipse


And this is the one generated by my build.xml


I already searched all over JavaRanch and StackOverflow and found nothing that could explain / clarify anything...
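In case it helps to compare them programmatically, here's a small sketch that dumps the Main-Class and Class-Path entries of both jars (the paths are just placeholders for wherever the Eclipse export and the Ant build end up):

import java.io.IOException;
import java.util.jar.JarFile;
import java.util.jar.Manifest;

public class ManifestCheck {
    public static void main(String[] args) throws IOException {
        // Placeholder paths: point these at the Eclipse-exported jar
        // and at the Ant-built jar to compare their manifests.
        printManifest("eclipse-export/MyProject.jar");
        printManifest("ant-build/MyProject.jar");
    }

    static void printManifest(String jarPath) throws IOException {
        try (JarFile jar = new JarFile(jarPath)) {
            Manifest mf = jar.getManifest();
            System.out.println(jarPath);
            System.out.println("  Main-Class: " + mf.getMainAttributes().getValue("Main-Class"));
            System.out.println("  Class-Path: " + mf.getMainAttributes().getValue("Class-Path"));
        }
    }
}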
12 years ago
I'm currently using the JSoup API to manage a website's connection. Although I log in to the site correctly and am redirected to a success page, I can't keep the session going in any way... I think the root of my problem is SSL. I managed to get past this using THIS piece of code:

As I'm not posting anything, and this is urgent, I figured I'd use that until I have a more robust solution. And now to my main problem... After I connect with JSoup and get redirected to the success page, I can't in ANY WAY keep the session up. As soon as I try to get data from another page on the same website, I get an HTTP 403. Here's the piece of code I'm using:

So now I humbly ask of you, Java elders, how may I redirect myself to another page using the same session?
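From what I've gathered so far, the flow should look something like this sketch (the URLs and form field names are made up): grab the cookies off the login response and send them back on every later request. I just haven't been able to confirm it on my site:

import java.io.IOException;
import java.util.Map;

import org.jsoup.Connection;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;

public class SessionSketch {
    public static void main(String[] args) throws IOException {
        // Placeholder URL and form fields; the real names come from the login form.
        Connection.Response login = Jsoup.connect("https://example.com/login")
                .data("username", "igor")
                .data("password", "secret")
                .method(Connection.Method.POST)
                .execute();

        // The session lives in the cookies returned by the login response...
        Map<String, String> cookies = login.cookies();

        // ...so they have to be sent back on every following request.
        Document page = Jsoup.connect("https://example.com/members/data")
                .cookies(cookies)
                .get();

        System.out.println(page.title());
    }
}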
12 years ago
Sorry I took this long to answer. I was testing it with as many results as possible. It worked just fine, tyvm! I had never used a BufferedReader with bytes before, and that's probably the reason I didn't find a solution online by myself.

TYVM indeed, sir
12 years ago
So here I am trying to read some XML from a WSDL web service. I read it with a buffer, store it in a String and then save it to a .xml file. Problem is... so far, with 200 or so blocks of content, it works. But this client is supposed to save data from a hundred thousand blocks of content. What I've read so far that has been useful, but hasn't solved the problem, is this: Stack Overflow.

My piece of code is really bad for now. So... if you guys can point me in the right direction, I'll find a way.
(Here's the code)



I know the problem is storing all the XML data in a String. And I tried to read and write at the same time, but I get some crashes...
As stated above, I just need to be pointed toward something helpful.
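The direction I imagine I need is to write while I read, so the whole response never sits in a String; something like this sketch (the URL and output file are placeholders), which is basically the read-and-write-at-the-same-time idea I couldn't get stable:

import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Paths;

public class StreamToFile {
    public static void main(String[] args) throws IOException {
        // Placeholder endpoint for the real web service.
        URL service = new URL("http://example.com/service?wsdl");

        try (BufferedReader in = new BufferedReader(
                     new InputStreamReader(service.openStream(), StandardCharsets.UTF_8));
             BufferedWriter out = Files.newBufferedWriter(
                     Paths.get("response.xml"), StandardCharsets.UTF_8)) {

            // Copy in small chunks so the whole response never sits in memory at once.
            char[] buffer = new char[8192];
            int read;
            while ((read = in.read(buffer)) != -1) {
                out.write(buffer, 0, read);
            }
        }
    }
}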
12 years ago
It works! I just used an extra set inside that iteration and it worked. Thank you so very much, Jasper!
It may not be the best solution, but until I make it faster, I'll stick with it. Here's the piece of code working:
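The real method is messier, but the shape of the fix is roughly this, with made-up names (fetchLinks stands in for the JSoup call that extracts the hrefs):

import java.util.HashSet;
import java.util.Set;

public class CrawlSketch {
    // The "extra set" idea: never add to the set being iterated;
    // collect the newly found links into a second set instead.
    static Set<String> crawl(Set<String> currentLevel, int depth) {
        if (depth == 0) {
            return currentLevel;
        }
        Set<String> nextLevel = new HashSet<>();
        for (String url : currentLevel) {
            nextLevel.addAll(fetchLinks(url)); // additions go to the extra set
        }
        Set<String> all = new HashSet<>(currentLevel);
        all.addAll(crawl(nextLevel, depth - 1));
        return all;
    }

    // Stand-in for the JSoup code that pulls the links from a page.
    static Set<String> fetchLinks(String url) {
        return new HashSet<>();
    }
}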
12 years ago
So... what now? D:
Any piece of advice helps.

I've heard that iterator.remove() is a good trick, though I'm using a foreach. I'd really, really appreciate any help now.
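For what it's worth, this is what I understand the iterator.remove() trick to look like (made-up data), though I'm not sure it applies to what my crawler is doing:

import java.util.HashSet;
import java.util.Iterator;
import java.util.Set;

public class IteratorRemoveSketch {
    public static void main(String[] args) {
        // Made-up data; the point is removing through the Iterator instead of
        // through the set, which is what blows up inside a foreach.
        Set<String> urls = new HashSet<>();
        urls.add("http://example.com/a");
        urls.add("http://example.com/b.pdf");

        Iterator<String> it = urls.iterator();
        while (it.hasNext()) {
            String url = it.next();
            if (url.endsWith(".pdf")) {
                it.remove(); // safe: the iterator stays consistent
            }
        }
        System.out.println(urls);
    }
}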
12 years ago
Well... I'm using this to catch news from BIG news websites, which will ALWAYS point me at other huge websites, ranging from e-commerce sites to other news websites to forums. So basically it would go on forever, while my focus is just getting the news. As soon as this starts working, I'll put in a test which certifies that the URLs caught have some trace of the original query. (Like... with www.uol.com.br as a starting point, "uol" MUST appear somewhere in the URLs that come from it.)
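The test I have in mind is something as simple as this sketch (the seed is hard-coded as "uol" just to illustrate):

import java.net.URI;
import java.net.URISyntaxException;

public class SameSiteCheck {
    // Only keep URLs whose host still mentions the starting site.
    static boolean belongsToSeed(String url) throws URISyntaxException {
        String host = new URI(url).getHost();
        return host != null && host.contains("uol");
    }

    public static void main(String[] args) throws URISyntaxException {
        System.out.println(belongsToSeed("http://noticias.uol.com.br/politica")); // true
        System.out.println(belongsToSeed("http://www.some-shop.com/item"));       // false
    }
}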
12 years ago
Well... As you've probably gathered from the method's name, it's a webcrawler. I'm basically getting all the links from a website, then going through each of the URLs I caught and getting more URLs, until the iteration gets to 3.

I didn't post that part of the code because it's really not relevant to the error... Oh, and tyvm for the tip on initializing the set; I didn't know that.

If you want me to post the full method, here it is:

12 years ago
Well... As I've looked around, I've seen a lot of people with the same problem. Their problem is that they're iterating over and changing the <? extends Collection> in the same loop. By using iterator.remove(), it's all good. I've seen some people talking about using Foo too (Ranch Topic), and yet it has not solved my problem... The other way that seemed good was to use ConcurrentSkipListSet, or some other concurrent collection, which I could not get to work either... And here it is:



I'm getting this exception in the recursive invocation of this method. Trying to get it to work with a different approach, I've used the following piece of code:



And yet, I STILL get the same error, on the same lines. I truly don't know where I've gone wrong. Can you guys shine a light? :S
12 years ago