Data corruption while using I/O streams and sockets in Java

Greenhorn
Posts: 2
The application I am developing is a typical client-server app using Java sockets over TCP/IP. I am using a BufferedInputStream and a byte array of size 1024, and I will be transferring multiple files one after the other. Information about each file is sent as a header split by delimiters; this info is parsed to get the file name and the file size. The header is built according to my own convention and written to the byte stream before I send the file itself.
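Roughly, the sending side does something like this (a simplified sketch; the actual delimiter and header layout are my own convention, so the "|" separator and trailing newline here are just placeholders):

public class FileSender {
    // Sketch: write a "filename|size\n" header, then the raw file bytes.
    static void sendFile(java.net.Socket socket, java.io.File file) throws java.io.IOException {
        java.io.OutputStream out = new java.io.BufferedOutputStream(socket.getOutputStream());
        String header = file.getName() + "|" + file.length() + "\n";
        out.write(header.getBytes(java.nio.charset.StandardCharsets.UTF_8));
        try (java.io.InputStream in =
                 new java.io.BufferedInputStream(new java.io.FileInputStream(file))) {
            byte[] buf = new byte[1024];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);   // write only the bytes actually read this iteration
            }
        }
        out.flush();                    // push everything onto the wire before the next file
    }
}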
When I run the application on the same machine it gives fairly consistent results for the file size, but running the client and server on different machines results in data corruption, with a significant increase in the number of bytes received. I cannot understand why this happens when the application is the same and presumably uses the same TCP/IP stack whether it runs on one machine or two. Can somebody give me a solution?
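For reference, this is the kind of receiving logic I am aiming for, assuming a header of the form filename|size terminated by a newline; the names and delimiter here are illustrative rather than my exact code. The receiver reads the header byte by byte, then copies exactly size bytes, honouring the count returned by each read() call, and reuses the same stream for every file on the connection:

public class FileReceiver {
    // Sketch: read the header up to '\n', then copy exactly `size` bytes and no more.
    // The same InputStream should be reused for every file sent over the connection.
    static void receiveFile(java.io.InputStream in) throws java.io.IOException {
        StringBuilder header = new StringBuilder();
        int c;
        while ((c = in.read()) != -1 && c != '\n') {
            header.append((char) c);
        }
        String[] parts = header.toString().split("\\|");
        String name = parts[0];
        long size = Long.parseLong(parts[1].trim());

        try (java.io.OutputStream out =
                 new java.io.BufferedOutputStream(new java.io.FileOutputStream(name))) {
            byte[] buf = new byte[1024];
            long remaining = size;
            while (remaining > 0) {
                int n = in.read(buf, 0, (int) Math.min(buf.length, remaining));
                if (n == -1) throw new java.io.EOFException("stream ended early");
                out.write(buf, 0, n);   // honour the actual count returned by read()
                remaining -= n;
            }
        }
    }
}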
 