Data corruption while using I/O streams and sockets in Java

 
Greenhorn
Posts: 2
The application I am developing is a typical client-server app using Java sockets over TCP/IP. I use a BufferedInputStream and a byte array of size 1024, and I transfer multiple files one after another. Information about each file is sent as a header whose fields are separated by delimiters; the receiver parses this header to extract the file name and the file size. The header is built by a fixed convention and prepended to the byte stream before each file is sent.
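For concreteness, here is a minimal sketch of the sending side as described. The "name:size\n" header layout, the ':' and '\n' delimiters, and the class and method names are illustrative assumptions, not the actual code:

    import java.io.*;

    class FileSender {
        // Sends one file preceded by a "name:size\n" header (delimiters assumed).
        static void sendFile(OutputStream out, File file) throws IOException {
            String header = file.getName() + ":" + file.length() + "\n";
            out.write(header.getBytes("US-ASCII"));
            try (BufferedInputStream in =
                     new BufferedInputStream(new FileInputStream(file))) {
                byte[] buf = new byte[1024];
                int n;
                while ((n = in.read(buf)) != -1) {
                    out.write(buf, 0, n); // write only the n bytes actually read
                }
            }
            out.flush(); // push any buffered bytes before the next file's header
        }
    }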
When I run the application on a single machine, the received file sizes are fairly consistent with what was sent, but running the client and server on different machines results in data corruption, with a significant increase in the number of bytes received. I cannot understand why this happens when the application is the same and presumably uses the same TCP/IP stack whichever machine it runs on. Can somebody suggest a solution?
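One common cause of exactly these symptoms is assuming that each read() fills the whole 1024-byte buffer, or writing buf.length bytes instead of the count that read() actually returned. On the loopback interface the data tends to arrive in the same chunks it was written in, which masks the bug; across a real network TCP re-segments the stream and the chunk boundaries shift. Below is a minimal receive-side sketch, assuming the same hypothetical "name:size\n" header as above, that reads the header one byte at a time (so no file bytes are consumed by accident) and then copies exactly size bytes:

    import java.io.*;

    class FileReceiver {
        // Reads one "name:size\n" header, then copies exactly `size` bytes.
        // Wrap the socket's InputStream in a BufferedInputStream once, outside
        // this method, and reuse it for every file: the wrapper may buffer
        // bytes that belong to the next header.
        static void receiveFile(BufferedInputStream in) throws IOException {
            // Read the header byte by byte up to the '\n' delimiter.
            StringBuilder header = new StringBuilder();
            int b;
            while ((b = in.read()) != -1 && b != '\n') {
                header.append((char) b);
            }
            String[] parts = header.toString().split(":");
            String fileName = parts[0];
            long size = Long.parseLong(parts[1]);
            try (OutputStream out =
                     new BufferedOutputStream(new FileOutputStream(fileName))) {
                byte[] buf = new byte[1024];
                long remaining = size;
                while (remaining > 0) {
                    // read() may return fewer bytes than requested; loop until
                    // exactly `size` bytes have been copied.
                    int n = in.read(buf, 0, (int) Math.min(buf.length, remaining));
                    if (n == -1) {
                        throw new EOFException("stream ended before " + size + " bytes");
                    }
                    out.write(buf, 0, n);
                    remaining -= n;
                }
            }
        }
    }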