HFS - Compression Filter - has anyone else tried this?

 
Ranch Hand
Posts: 200
Pages 690-694 provide code to implement a compression filter. I have tried to get this working, to no avail. I downloaded some code off the 'net and found some fundamental differences.

1. The other code uses a pair of objects to provide the stream object,
specifically:
ByteArrayOutputStream
GZIPOutputStream

I also noticed some other websites use the same style: ByteArrayOutputStream plus GZIPOutputStream.

Is this how compression or data-mangling filters "should" work?

2. "Finishing" the stream produces empty html/body tags.
The "other" code finishes the GZIP stream, then obtains the bytes from the ByteArrayOutputStream and writes them to the HttpServletResponse object.

3. "write" methods on the stream object.
The sample in the book only implements one. "should we" or "do we have to" implement all the write methods
public void write(int b) throws IOException
public void write(byte b[]) throws IOException
public void write(byte b[], int off, int len) throws IOException

(From an encapsulation and completeness point of view, I will implement all the write methods.)
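Concretely, implementing all three might look something like the sketch below, which delegates each write to a wrapped GZIPOutputStream. (The class name GzipResponseStream and the finish() method are my own, not the book's; in a real filter this would extend ServletOutputStream rather than plain OutputStream.)

```java
import java.io.IOException;
import java.io.OutputStream;
import java.util.zip.GZIPOutputStream;

// Hypothetical stream wrapper for a compression filter, with all three
// write methods implemented. The two array overloads delegate in bulk,
// avoiding the per-byte overhead of funnelling everything through write(int).
public class GzipResponseStream extends OutputStream {
    private final GZIPOutputStream gzip;

    public GzipResponseStream(OutputStream target) throws IOException {
        this.gzip = new GZIPOutputStream(target);
    }

    @Override
    public void write(int b) throws IOException {
        gzip.write(b);
    }

    @Override
    public void write(byte[] b) throws IOException {
        gzip.write(b, 0, b.length);
    }

    @Override
    public void write(byte[] b, int off, int len) throws IOException {
        gzip.write(b, off, len);
    }

    // called by the filter once the servlet has finished producing content
    public void finish() throws IOException {
        gzip.finish();
    }
}
```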

Thoughts?


 
Ranch Hand
Posts: 121
Colin,

I'm having fun with this one at the moment - will let you know if I get it going.

Re the write methods - here's an extract from the API docs for write methods other than write(int):

The write method of OutputStream calls the write method of one argument on each of the bytes to be written out. Subclasses are encouraged to override this method and provide a more efficient implementation.



and for write(int) it says:

Subclasses of OutputStream must provide an implementation for this method.


so just overriding write(int) is fine.
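To see why that works, here is a minimal sketch (my own example, not from the book): only write(int) is overridden, and the inherited array overloads still funnel every byte through it.

```java
import java.io.IOException;
import java.io.OutputStream;

// Minimal subclass overriding only write(int), as the API requires.
// The inherited write(byte[]) and write(byte[], int, int) implementations
// both call this method once per byte.
public class CountingStream extends OutputStream {
    public int bytesWritten = 0;

    @Override
    public void write(int b) throws IOException {
        bytesWritten++; // every byte, however it was written, passes through here
    }
}
```

The array writes work correctly without any extra code; they are just slower than a specialized bulk implementation would be.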
 
Roger Yates
Ranch Hand
Posts: 121
In my attempts to get the compression filter working, I have managed to get permanently compressed output coming back, despite having set the content type to "gzip".
Any ideas why this might be?

Also, on HFS p693 the CompressionResponseWrapper class has the following in getOutputStream():
and similar code in getWriter().
I think these should read:
and a similar change in getWriter().
But even fixing this doesn't cure my scrambled output, which looks big enough to be the response page, compressed!
I'm using the following to set the content type, as per HFS:

and my DD reads:
Other than changing the package and the lines above, I think my code is identical to HFS.

Anyone have any ideas as to what's going on?
(To check the filter without compression, I tried setting the content type to text/html, writing a short string at the front, and passing the normal response to doFilter; this all worked fine.)
[ October 19, 2004: Message edited by: Roger Yates ]
 
Colin Fletcher
Ranch Hand
Posts: 200
I did manage to get this working and have uploaded the code to:

http://www3.telus.net/~cafletch/hfs-compression.zip

I also emailed Kathy this code so hopefully it can be used as part of the sample code for the book.
 
author
Posts: 199
Hello Colin,

I assure you the code works fine, although there might be some typos introduced when moving from the code files to the book files. If you would like my original code set, please send me an email message (b_basham@yahoo.com) and I will send you the complete code.

However, you raise some interesting questions, so allow me to talk about these:

Originally posted by Colin Fletcher:
1. The other code uses a pair of objects to provide the stream object,
specifically:
ByteArrayOutputStream
GZIPOutputStream

I also noticed some other websites use the same style: ByteArrayOutputStream plus GZIPOutputStream.

Is this how compression or data-mangling filters "should" work?

2. "Finishing" the stream produces empty html/body tags.
The "other" code finishes the GZIP stream, then obtains the bytes from the ByteArrayOutputStream and writes them to the HttpServletResponse object.



These two questions are related. What you have found in other people's compression filters is a common technique, which I call the "capture, then transform" technique. It uses a ByteArrayOutputStream (BAOS) to capture all of the content generated by the JSP or servlet. Then, when control returns to the filter, the filter transforms the complete content all at once (in this case, by using GZIP to compress it).

The HFS compression filter uses a different technique, which I call "transform and send". With this technique the content is transformed (here, compression is the transformation) as it is produced by the JSP or servlet. When control returns to the filter, all it needs to do is flush the GZIP buffer using the finish method.

There are pros and cons to each technique. "Capture, then transform" requires more runtime resources, as it must buffer all of the content in memory before the transformation occurs. "Transform and send" uses significantly fewer resources, because the transformation occurs as the content is produced and sent to the client.
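The two techniques can be contrasted in a few lines. This is a minimal sketch of my own (not the book's filter code), with a ByteArrayOutputStream standing in for the servlet response stream; both variants yield a stream that decompresses back to the original content.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPOutputStream;

public class CompressDemo {

    // "Capture, then transform": buffer everything the servlet writes,
    // then compress the complete content all at once.
    static byte[] captureThenTransform(byte[] content) throws IOException {
        ByteArrayOutputStream captured = new ByteArrayOutputStream();
        captured.write(content);                 // the servlet writes into the buffer
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        GZIPOutputStream gzip = new GZIPOutputStream(out);
        gzip.write(captured.toByteArray());      // transform the whole buffer at once
        gzip.finish();
        return out.toByteArray();
    }

    // "Transform and send": compress incrementally as content is produced;
    // nothing but the deflater's internal buffer is held in memory.
    static byte[] transformAndSend(byte[] content) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream(); // stands in for the response
        GZIPOutputStream gzip = new GZIPOutputStream(out);
        for (byte b : content) {
            gzip.write(b);                       // each write is compressed as it arrives
        }
        gzip.finish();                           // flush the GZIP trailer when the servlet returns
        return out.toByteArray();
    }
}
```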

3. "write" methods on the stream object.
The sample in the book only implements one. "Should we", or "do we have to", implement all the write methods?
public void write(int b) throws IOException
public void write(byte b[]) throws IOException
public void write(byte b[], int off, int len) throws IOException



Yes, you could implement all of these methods, but you don't have to, so why do more work than is necessary? The answer to this rhetorical question is that in some cases you can gain a performance optimization by specializing each method. That was not important for the code example in HFS.

HTH,
Bryan
 
Colin Fletcher
Ranch Hand
Posts: 200
Bryan, Thank you for the reply. I am interested in seeing the "transform and send" implementation of the compression filter. Your detail on the other points I raised has helped my understanding, thank you.
 
author
Posts: 21
PS -- I have implemented an open-source compression filter that you might be interested in. It uses the "transform and send" approach, as you call it, and so is significantly faster than the naive approach. It buffers up to 1024 bytes of the response (the default setting) and will send the response uncompressed if it never exceeds those 1024 bytes, which also increases efficiency. It supports more compression algorithms, statistics, and so on. Have a look!

PJL Compressing Filter
http://sourceforge.net/projects/pjl-comp-filter/
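The threshold idea can be sketched in a few lines. This is my own illustration of the concept, not the PJL filter's actual internals: buffer up to a threshold; if the response never exceeds it, send the bytes through uncompressed, otherwise switch over to GZIP and replay the buffer.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.util.zip.GZIPOutputStream;

// Hypothetical sketch: small responses (which gzip may not shrink, and which
// cost CPU to compress) pass through untouched; larger ones get compressed.
public class ThresholdGzipStream extends OutputStream {
    private final OutputStream target;
    private final int threshold;
    private ByteArrayOutputStream buffer = new ByteArrayOutputStream();
    private GZIPOutputStream gzip;   // non-null once the threshold is crossed

    public ThresholdGzipStream(OutputStream target, int threshold) {
        this.target = target;
        this.threshold = threshold;
    }

    @Override
    public void write(int b) throws IOException {
        if (gzip == null && buffer.size() >= threshold) {
            gzip = new GZIPOutputStream(target); // response too big: switch to GZIP
            buffer.writeTo(gzip);                // replay everything buffered so far
            buffer = null;
        }
        if (gzip != null) {
            gzip.write(b);
        } else {
            buffer.write(b);                     // still under the threshold: keep buffering
        }
    }

    @Override
    public void close() throws IOException {
        if (gzip != null) {
            gzip.finish();                       // flush compressed data and the GZIP trailer
        } else {
            buffer.writeTo(target);              // small response: send it uncompressed
        }
        target.close();
    }
}
```

A real filter would also need to decide the Content-Encoding header before committing the response, which is why the decision has to be made no later than the first byte past the threshold.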
 