Originally posted by Liang Anmian:
My file stores BOTH text and binary data. I wonder if the Huffman Compression will work well in this scenario.
Probably not - Huffman coding on its own rarely gets you more than 30%-ish on text, and even less on binary data. You're probably better off with an LZ* family algorithm; they still have only a limited memory footprint (I suspect ~20K-ish) and achieve much better compression. Follow them with an entropy coder such as Huffman for an even better compression ratio; this is how most compressed archive formats (including zip/gzip's DEFLATE) work. Avoid LZW if you're worried about patent issues.
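For what it's worth, here's a rough J2SE sketch of that LZ-plus-Huffman pipeline using java.util.zip.Deflater/Inflater (DEFLATE is essentially LZ77 followed by Huffman coding). The class and method names are just mine for illustration, and of course this only works where java.util.zip is available - which is the whole question with J2ME:

```java
import java.io.ByteArrayOutputStream;
import java.util.Arrays;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class DeflateDemo {
    // DEFLATE = LZ77 sliding-window matching followed by Huffman coding,
    // i.e. exactly the "LZ* then entropy coder" combination described above.
    static byte[] compress(byte[] input) {
        Deflater deflater = new Deflater(Deflater.BEST_COMPRESSION);
        deflater.setInput(input);
        deflater.finish();
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        byte[] buf = new byte[1024];
        while (!deflater.finished()) {
            int n = deflater.deflate(buf);
            bos.write(buf, 0, n);
        }
        deflater.end();
        return bos.toByteArray();
    }

    static byte[] decompress(byte[] input) throws DataFormatException {
        Inflater inflater = new Inflater();
        inflater.setInput(input);
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        byte[] buf = new byte[1024];
        while (!inflater.finished()) {
            int n = inflater.inflate(buf);
            bos.write(buf, 0, n);
        }
        inflater.end();
        return bos.toByteArray();
    }

    public static void main(String[] args) throws Exception {
        // Repetitive mixed text compresses very well under LZ77 matching.
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 50; i++) {
            sb.append("the quick brown fox jumps over the lazy dog. ");
        }
        byte[] data = sb.toString().getBytes("UTF-8");
        byte[] packed = compress(data);
        System.out.println("original=" + data.length + " compressed=" + packed.length);
        assert packed.length < data.length;
        assert Arrays.equals(decompress(packed), data);
    }
}
```

On data that mixes text and binary this will generally beat a bare Huffman pass, because the LZ stage removes the repetition before the entropy coder ever sees the stream.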
This Java LZSS implementation looks usable and unencumbered by patent or licensing restrictions, but be warned that I have no experience with it whatsoever.
This is assuming that J2ME doesn't have built-in gzip/zip/jar support like J2SE has (don't ask me, I'm a big-iron J2EE guy). If it does, it's simple: use that. A gzip stream would serve you nicely.
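In case it helps, the gzip stream route on J2SE looks like this - just a round-trip through GZIPOutputStream/GZIPInputStream (the helper names here are my own, purely for illustration):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.Arrays;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class GzipStreamDemo {
    // Wrap any OutputStream in GZIPOutputStream and write as normal;
    // closing the stream flushes the gzip trailer.
    static byte[] gzip(byte[] data) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        GZIPOutputStream gz = new GZIPOutputStream(bos);
        gz.write(data);
        gz.close();
        return bos.toByteArray();
    }

    // Reading back is the mirror image: wrap an InputStream and read.
    static byte[] gunzip(byte[] data) throws IOException {
        GZIPInputStream gz = new GZIPInputStream(new ByteArrayInputStream(data));
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        byte[] buf = new byte[4096];
        int n;
        while ((n = gz.read(buf)) != -1) {
            bos.write(buf, 0, n);
        }
        gz.close();
        return bos.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        // Mixed text plus a few raw bytes, repeated - gzip handles both fine.
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 40; i++) {
            sb.append("Mixed text and \u0001\u0002\u0003 binary payload. ");
        }
        byte[] original = sb.toString().getBytes("UTF-8");
        byte[] zipped = gzip(original);
        System.out.println("before=" + original.length + " after=" + zipped.length);
        assert Arrays.equals(gunzip(zipped), original);
    }
}
```

Since gzip is DEFLATE underneath, you get the LZ-plus-Huffman combination for free, with no third-party code to audit for licensing issues.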
Hope this helps,
- Peter
[ August 21, 2004: Message edited by: Peter den Haan ]