
backing up 0.5 GB over the internet

 
Stephen Huey
Ranch Hand
Posts: 618
We back up our web app's database and compress it all into a file that's about half a gig in size. We want to beam that file down to our office every night, and that Linux box is set up as an FTP server, so I figure we could just go out there and get it over FTP. The production machines supposedly have 3MB up and 3MB down, and our office here can supposedly get up to 1MB downstream. It seems like under ideal circumstances, it might only take about 2-3 hours to FTP it down to here every night. Does this sound about right? Is there a faster way?

I know how to use Apache's HTTPClient, and my understanding is that HTTP is faster than FTP, but wouldn't that Linux machine have to be set up as a webserver?

I suppose there's also the possibility of writing some simple client/server programs in Java just to transfer the bytes across, but would that be faster than FTP? And wouldn't we have to open up ports that are probably closed down on the router/firewall guarding the production machines?

Thanks for any suggestions...
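As a back-of-envelope sanity check on that 2-3 hour figure (assuming the "1MB" downstream really means 1 megabit per second, which is the usual ISP convention):

```java
public class TransferEstimate {

    // Rough transfer-time estimate for the nightly backup pull.
    // Assumes link speed is given in megabits/second (common ISP usage).
    static double hoursToTransfer(double gigabytes, double megabitsPerSecond) {
        double bits = gigabytes * 1024 * 1024 * 1024 * 8;     // file size in bits
        double seconds = bits / (megabitsPerSecond * 1_000_000);
        return seconds / 3600;
    }

    public static void main(String[] args) {
        // 0.5 GB at 1 Mbit/s works out to roughly 1.2 hours in ideal
        // conditions, so 2-3 hours is plausible once protocol overhead
        // and real-world throughput are factored in.
        System.out.printf("%.1f hours%n", hoursToTransfer(0.5, 1.0));
    }
}
```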
 
Joe Ess
Bartender
Posts: 9429
Originally posted by Stephen Huey:
HTTP is faster than FTP,

Overall, while FTP may be very efficient for large file transfers, it is not the best protocol for the transfer of short, simple files.


but wouldn't that Linux machine have to be set up as a webserver?

Correct.
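If you did set the box up as a webserver, the pull itself is straightforward from the JDK alone, no Apache HTTPClient needed. A minimal sketch; the URL and filenames here are made up:

```java
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class HttpBackupPull {

    // Pulls the file at url down to destPath; returns bytes transferred.
    public static long download(URL url, String destPath) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        try (InputStream in = conn.getInputStream();
             OutputStream out = new FileOutputStream(destPath)) {
            return copy(in, out);
        } finally {
            conn.disconnect();
        }
    }

    // Copies everything from in to out; returns the byte count.
    static long copy(InputStream in, OutputStream out) throws IOException {
        byte[] buf = new byte[64 * 1024];
        long total = 0;
        for (int n; (n = in.read(buf)) != -1; ) {
            out.write(buf, 0, n);
            total += n;
        }
        return total;
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical URL -- assumes the production box is serving the
        // nightly dump over HTTP, which means running a webserver on it.
        download(new URL("http://production.example.com/backups/db-backup.tar.gz"),
                 "db-backup.tar.gz");
    }
}
```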


I suppose there's also the possibility of writing some simple client/server programs in Java just to transfer the bytes across, but would that be faster than FTP

Don't reinvent the wheel. Use a standard protocol, be it FTP or HTTP. That makes your code smaller, less error-prone, and easier to maintain. Since FTP is already optimized to handle large files, any speed gain from your proprietary method would be negligible, and really unnecessary since there's nobody waiting around for the data to arrive.
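Since the Linux box is already an FTP server, you don't even need to write a protocol: the JDK's built-in `ftp://` URL handler can stream the file down. A minimal sketch; the host, credentials, and path are all placeholders (in practice you might also just script a command-line FTP client):

```java
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.URL;

public class FtpBackupPull {

    // Builds an ftp:// URL for the JDK's built-in FTP protocol handler.
    // User, password, host, and path here are all hypothetical.
    static URL backupUrl(String user, String pass, String host, String path)
            throws IOException {
        return new URL("ftp://" + user + ":" + pass + "@" + host + "/" + path);
    }

    public static void main(String[] args) throws Exception {
        URL url = backupUrl("backupuser", "secret",
                            "production.example.com", "backups/db-backup.tar.gz");
        try (InputStream in = url.openStream();
             OutputStream out = new FileOutputStream("db-backup.tar.gz")) {
            byte[] buf = new byte[64 * 1024];
            for (int n; (n = in.read(buf)) != -1; ) {
                out.write(buf, 0, n);
            }
        }
    }
}
```

Embedding the password in the URL is fine for a sketch but worth moving to a config file or a dedicated FTP library for real use.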


and also, wouldn't we have to open up ports that are probably closed down on the router/firewall guarding the production machines?

Correct. And introducing security holes to write around established standards is a Bad Idea.
 
It is sorta covered in the JavaRanch Style Guide.