Netuality

Taming the big, bad, nasty websites

Archive for the ‘Sun’ tag

HTTP compression filter on servlets: good idea, wrong layer

3 comments

The Servlet 2.3 specification introduced the notion of servlet filters: powerful tools, but unfortunately used in quite unimaginative ways. Take for instance this ONJava article (“Two Servlet Filters Every Web Application Should Have”), written by Jayson Falkner*, one of the coauthors of Servlets and JavaServer Pages: the J2EE Web Tier (a well-known servlets and JSP book from O’Reilly). The article has loads of trackbacks and became so popular that the filters were eventually republished on JavaPerformanceTuning, along with an (otherwise very sensible and pragmatic) interview of the author. However, there is a more efficient way of performing these tasks, as indiscriminate page compression and simple time-based caching do not necessarily belong in the servlet container**. As one of the comments (on ONJava) put it: ‘good idea, wrong layer!’. Let’s see why…

There is a simple way to compress pages from any kind of site (be it Java, PHP, or Ruby on Rails) natively, in the Apache web server. The trick consists in chaining two Apache modules: mod_proxy and mod_gzip. Via mod_proxy, you configure a certain path on one of your virtual hosts to proxy all requests to the servlet container; then you can selectively compress the pages using mod_gzip.

Suppose the two modules are compiled and loaded in the configuration, and your servlet application is located at http://local_address:8080/b2b. You want to make it visible at http://external_address/b2b. To activate the proxy, add the following two lines:

ProxyPass /b2b/ http://local_address:8080/b2b/
ProxyPassReverse /b2b/ http://local_address:8080/b2b/

You can add as many directives as you like, proxying all the servlets of the server (for instance, one of the configurations I’ve looked at has a special servlet for dynamic image generation and one for dynamic PDF document generation; their output will not be compressed, but they all had to be proxied). Time-based caching is also possible with mod_proxy, but that subject deserves a little article of its own. For the moment, we’ll stick to simple transparent proxying and compression.
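
For instance, proxying three such servlets just means stacking the directive pairs (the /images/ and /pdf/ paths below are made up for illustration):

ProxyPass /b2b/ http://local_address:8080/b2b/
ProxyPassReverse /b2b/ http://local_address:8080/b2b/
ProxyPass /images/ http://local_address:8080/images/
ProxyPassReverse /images/ http://local_address:8080/images/
ProxyPass /pdf/ http://local_address:8080/pdf/
ProxyPassReverse /pdf/ http://local_address:8080/pdf/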

Congratulations: just restart Apache and you have a running proxy. mod_gzip is a little trickier. I’ve slightly adapted the configuration from the article Getting mod_gzip to compress Zope pages proxied by Apache (I haven’t been able to find anything better concerning integration with Java servlet containers), and here’s the result:

#module settings
mod_gzip_on Yes
mod_gzip_can_negotiate Yes
mod_gzip_send_vary Yes
mod_gzip_dechunk Yes
mod_gzip_add_header_count Yes
mod_gzip_minimum_file_size 512
mod_gzip_maximum_file_size	5000000
mod_gzip_maximum_inmem_size	100000
mod_gzip_temp_dir /tmp
mod_gzip_keep_workfiles No
mod_gzip_update_static No
mod_gzip_static_suffix .gz
#includes
mod_gzip_item_include mime ^text/*$
mod_gzip_item_include mime httpd/unix-directory
mod_gzip_item_include handler proxy-server
mod_gzip_item_include handler cgi-script
#excludes
mod_gzip_item_exclude reqheader  "User-agent: Mozilla/4.0[678]"
mod_gzip_item_exclude mime ^image/*$
#log settings
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\" mod_gzip: %{mod_gzip_result}n In:%{mod_gzip_input_size}n Out:%{mod_gzip_output_size}n:%{mod_gzip_compression_ratio}npct." mod_gzip_info
CustomLog /var/log/apache/mod_gzip.log mod_gzip_info

A short explanation. The module is activated and allowed to negotiate (i.e. to see whether a static or cached file was already compressed, and reuse it). The Vary header is needed for client-side caches to work correctly; dechunking eliminates the ‘Transfer-Encoding: chunked’ HTTP header and joins the page into one big packet before compressing. The header length is counted for traffic-measuring purposes (we’ll see the ‘right’ figures in the log). The minimum size of a file to be compressed is 512 bytes; setting a maximum is also a good idea, because a) compressing a huge file will bog down your server and b) the limit guards against infinite loops. The maximum file size to compress in memory is 100KB in my setting, but you should tune this value for optimum performance. The temporary directory is /tmp, and work files should be kept only if you need to debug mod_gzip. Which you don’t.

We’ll include among the files to be gzipped everything of text type and directory listings, and … the magic line is the one specifying that everything coming from the proxy server is eligible for compression: this is what ensures your generated pages get compressed. And while you’re at it, why not add the CGI scripts…

The includes specified here are quite generous, so let’s now filter some of it out: we’ll exclude all the images, because they SHOULD already be compressed and optimized for the web. And last but not least, we’ll define the format of the log line and the location of the compression log; it will let us verify that the filter is actually running and compute how much bandwidth we have saved.

A compelling reason to use mod_gzip is its maturity. Albeit complex, this Apache module is stable and relatively bug-free, which can hardly be said about the various compression filters found on the web. The original code from the O’Reilly article behaved incorrectly under certain circumstances (it was later corrected on the book’s site; I’ve tested the fixed code and it works fine). I also had some issues with Amy Roh’s filter (from Sun). Amy’s compression filter can be found in a lot of places on the web (JavaWorld, Sun), but unfortunately it does not set the correct ‘Content-Length’ header, which confuses httpunit; that in turn turned my web test suite ‘100% red’ as soon as the compression filter was on. Argh.

For the final word, let’s compare the performance of the two solutions (servlet filter against mod_proxy+mod_gzip). I used a single machine to install both Apache and the servlet container (Jetty), plus Amy Roh’s compression filter. A mildly complex navigation scenario was recorded in TestMaker (a cool free testing tool written in Java), then played 100 times. The results are expressed in TPS (transactions per second): the bigger, the better. The following median values were obtained: 3.10 TPS with a direct connection to the servlet container, 2.64 TPS via the compression filter, and 2.81 TPS via Apache mod_proxy+mod_gzip. That is roughly a 6% advantage for the Apache solution over the filter. Of course the figure is highly dependent on my test setup, the specific webapp and a lot of other parameters; however, I am confident that Apache comes out ahead in any configuration. You also have to consider that using a proxy brings some nice bonuses. For instance, Apache HTTPS virtual sites may encrypt your content in a transparent manner. Apache has very good and fast logging, so you may as well completely disable HTTP request logging in your servlet container; moreover, the Apache log format is understood by a myriad of traffic-analyzer tools. Load balancing is possible using mod_proxy and another remarkably useful Apache module, mod_rewrite. And since Apache runs in a completely separate process, you might expect slightly better scalability on multi-processor boxes.
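
As a teaser, here is a minimal load-balancing sketch using mod_rewrite’s ‘rnd’ map type (the backend host names and the map file location are hypothetical):

RewriteEngine On
RewriteMap backends rnd:/etc/apache/backends.map
RewriteRule ^/b2b/(.*)$ http://${backends:b2b}/b2b/$1 [P,L]

where /etc/apache/backends.map contains a line such as ‘b2b server_one:8080|server_two:8080’; each request is then proxied to a randomly chosen backend.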

Nota bene: in all the articles I’ve read on the subject of compression, there is this strange statement that compression cannot be detected client-side. Of course you can do it… Suppose you use Firefox (which you should, if you’re serious about web browsing!) with the Web Developer plugin (which you should, if you’re serious about web development!). The plugin lets you “View Response Headers” (in the “Information” menu): the presence or absence of Content-Encoding: gzip is what you’re looking for. Voila! Just for kicks, look at the response headers of a few well-known sites and prepare to be surprised (try Microsoft, for instance, or Slashdot for some funny random quotes).
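
If you’d rather check programmatically, a few lines of plain java.net do the same job. A minimal sketch (the URL is the placeholder address from above):

import java.net.HttpURLConnection;
import java.net.URL;

public class GzipCheck {
    public static void main(String[] args) throws Exception {
        URL url = new URL(args.length > 0 ? args[0] : "http://external_address/b2b/");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        // Advertise gzip support, exactly as a browser would
        conn.setRequestProperty("Accept-Encoding", "gzip");
        // A compressing server answers with 'Content-Encoding: gzip'
        String encoding = conn.getContentEncoding();
        System.out.println("HTTP " + conn.getResponseCode()
                + ", Content-Encoding: " + (encoding == null ? "(none)" : encoding));
        conn.disconnect();
    }
}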

* Jayson Falkner has also authored this article (“Another Java Servlet Filter Most Web Applications Should Have”), which explains how to control the client-side cache via HTTP response headers. While the example is very simple, one can easily extend it to do more complex things, such as caching according to rules (for instance, caching dynamically generated documents or images according to the context). This _is_ a pragmatic example of a servlet filter.
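
As an illustration of the idea (my own minimal sketch, not Jayson’s code; the one-hour lifetime is an arbitrary choice), a Servlet 2.3 filter that stamps a client-side cache policy on everything it wraps:

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

public class CacheHeaderFilter implements Filter {
    // Arbitrary lifetime; a real filter would read this from an init parameter
    private static final long MAX_AGE_SECONDS = 3600;

    public void init(FilterConfig config) throws ServletException {
    }

    public void doFilter(ServletRequest request, ServletResponse response,
                         FilterChain chain) throws IOException, ServletException {
        HttpServletResponse httpResponse = (HttpServletResponse) response;
        // Tell the browser (and intermediate caches) to keep the page for an hour
        httpResponse.setHeader("Cache-Control", "max-age=" + MAX_AGE_SECONDS);
        httpResponse.setDateHeader("Expires",
                System.currentTimeMillis() + MAX_AGE_SECONDS * 1000L);
        chain.doFilter(request, response);
    }

    public void destroy() {
    }
}

Declare it in web.xml with a filter-mapping for the URL patterns you want cached; the rule-based variants mentioned above are only an if/else away.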

** Unless of course – as one of the commenters explains here – you have some specific constraints against being able to use Apache, such as : embedded environment, forced to use another web server than Apache (alternative solutions might exist for those servers but I am not aware of them), mod_gzip unavailable on the target platform, etc.

Written by Adrian

February 2nd, 2005 at 8:28 am

Posted in Tools


MVC and Front Controller frameworks in PHP – more considerations

leave a comment

Having recently stumbled upon this thread on the Sitepoint community forums, I found a certain Mr. Selkirk advocating page controllers instead of a front controller, meaning that the state-machine logic is distributed across each page/state. I have a pragmatic problem with this approach: on a large site (hundreds of pages), it implies modifying every page whenever a new generic transition appears.

On the same thread, there's a sensible approach coming from an Interakt guy whom I also happen to know personally [hi, Costin]. He describes PHP website design using MVC (from a controller point of view) as having three steps:

  • Design your site with a fancy IDE which will generate a lot of XML gunk
  • Let the framework compile the XML into good old PHP code, perfectly human-readable and all
  • Enjoy! Speed and structure.

Unfortunately, his solution is not exactly open-source, nor free, and I'll gladly spend my 500 maccaronis on a shiny new flat screen instead. Besides, it looks like my PHP episode is coming to an end (I see some serious consulting on Java apps on the horizon). Anyway, my piece of advice to Costin (as a non-customer) is: “don't do any serialization, keep the code clean, as the bottleneck usually comes from the database – and the world will be a better place to live”.

On a lighter note, there is John telling us cool stuff about PHP:


Does this reloading of all data on every HTTP request mean that PHP is not scalable? Fortunately, as I've said before in other posts, that's a superficial view of things. The reloading of data is precisely the reason why PHP is scalable. The way PHP works encourages you to code using state-less designs. State information is not stored in the PHP virtual machine, but in separate mechanisms, a session variable handler or a database.
This makes PHP very suitable for highly scalable shared-nothing architectures. Each webserver in a farm can be made independent of each other, communicating with a central data and session store, typically a relational database. So you can have 10 or 1000 identically configured PHP web servers, and it will scale linearly, the only bottleneck being the database or other shared resources, not PHP.

Whew! If only vendors 'knew' that by removing state information from their appservers, those would instantly become very suitable for highly scalable shared-nothing architectures. Someone should tell this to IBM, BEA and Sun. And maybe to Microsoft. Oh, if only things were that simple!

PS For those wondering about my sudden passion for PHP, there is an older entry on my weblog explaining the whos and the whats.

Written by Adrian

October 29th, 2004 at 8:44 am

Posted in Tools


JUnit: it's not [only] about the API

leave a comment

Being extremely busy lately, I arrive a bit late at the JUnit destruction feast. While it is probably true that some guys with a certain gift for writing blog articles could “come up with something far more useful in a couple of days”, I think the discussion is missing an important point: there's a whole ecosystem living around JUnit. We have Ant integration, a choice of code-coverage tools (both commercial and open-source), plugins for the mainstream IDEs, and a number of more-or-less useful extensions. We have extensive documentation and a plethora of examples to feed the small fishes. Throwing JUnit down the drain means throwing all of these down the drain too. Or, at least: writing your own Ant integration, adapting a code-coverage tool, rewriting the IDE integration, the documentation and the examples; that is not going to be done in “a couple of days”.

Another JUnit advantage is that this little, simplistic API is ubiquitous. I mean, every developer has heard about it and knows how to use it (unless, of course, he or she has been living under a rock for the last few years). And I don't mean every Java developer, but just about every developer of a language under the xUnit umbrella. Meaning: all the programming languages (unless you consider “languages” such as Whitespace, Brainfuck and INTERCAL).

Beck and Gamma have not only written some “crappy” classes and put the few “laughable” chunks of code on SourceForge; they also did it first. Now, there is some well-founded criticism about the lack of evolution in JUnit, but one thing is undeniable: it really did fill a niche, back then in 2000. The code may not be beautiful (which looks bad coming from XPers), but it serves its purpose: to provide a simple framework for unit testing.
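
To see just how thin that framework is, here is the whole ceremony for a test (a minimal JUnit 3.x-style sketch; the class name and the strings under test are made up):

import junit.framework.TestCase;

public class ReverseTest extends TestCase {
    // JUnit 3.x discovers test methods by reflection, via the testXXX naming convention
    public void testReverse() {
        assertEquals("tinuj", new StringBuffer("junit").reverse().toString());
    }

    public void testReverseTwiceIsIdentity() {
        String original = "ecosystem";
        String twice = new StringBuffer(
                new StringBuffer(original).reverse().toString()).reverse().toString();
        assertEquals(original, twice);
    }
}

Extend TestCase, write a method starting with 'test', assert something: that's the entire API most people ever need, which goes a long way towards explaining the mindshare.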

Competition is the key here, and smart newcomers on this “market” are good news for us programmers. But it's going to take some time and a lot of work to build a similar ecosystem and a similar mindshare, and to usurp JUnit's kingdom. That would of course be more interesting to watch than denying four years of JUnit influence in a few well-rounded but futile phrases.

Written by Adrian

July 14th, 2004 at 9:55 am

Posted in AndEverythingElse


Comparing FOP and JasperReports

one comment

Anybody looking for an OSS reporting solution in Java usually has to choose between Apache FOP and JasperReports*. While they have somewhat different feature sets and address distinct reporting scenarios, the two APIs boil down to the same basic thing: generate a report from an XML file (or stream/string/whatever). FOP has the clear advantage of standardization (it is based on XSL Formatting Objects), while Jasper plays more in the pragmatic field of obtaining those 80% of results with a minimum of effort, and uses a proprietary XML format.

But FOP is not a standalone reporting solution: it's just a way of transforming XSL-FO files into a report. In order to fill the report with the necessary data, the obvious choice is a templating engine such as Jakarta Velocity. Thus FOP report creation is a two-step operation (see the sketch after this list):

  • create the XML report via Velocity
  • feed the XML stream to FOP
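
Glued together, the two steps fit in a screenful of code. A minimal sketch from memory, against the old FOP 0.20.x Driver API; the template name, context data and output file are made up:

import java.io.BufferedOutputStream;
import java.io.FileOutputStream;
import java.io.OutputStream;
import java.io.StringReader;
import java.io.StringWriter;
import org.apache.fop.apps.Driver;
import org.apache.velocity.Template;
import org.apache.velocity.VelocityContext;
import org.apache.velocity.app.VelocityEngine;
import org.xml.sax.InputSource;

public class VelocityFopReport {
    public static void main(String[] args) throws Exception {
        // Step 1: create the XSL-FO report via Velocity
        VelocityEngine velocity = new VelocityEngine();
        velocity.init();
        Template template = velocity.getTemplate("report.fo.vm");
        VelocityContext context = new VelocityContext();
        context.put("title", "Monthly report"); // business data goes here
        StringWriter fo = new StringWriter();
        template.merge(context, fo);

        // Step 2: feed the XSL-FO stream to FOP and render a PDF
        OutputStream out = new BufferedOutputStream(new FileOutputStream("report.pdf"));
        try {
            Driver driver = new Driver(new InputSource(new StringReader(fo.toString())), out);
            driver.setRenderer(Driver.RENDER_PDF);
            driver.run();
        } finally {
            out.close();
        }
    }
}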

Jasper alleviates this problem by including its own binding engine; the only restriction is that the input data must respect some constraints (such as wrapping your 'rows' in a JRDataSource).
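
For comparison, the equivalent Jasper side looks roughly like this (a sketch using class names from the current net.sf.jasperreports packages, so adjust for older releases; the report file, field names and data are made up):

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import net.sf.jasperreports.engine.JasperExportManager;
import net.sf.jasperreports.engine.JasperFillManager;
import net.sf.jasperreports.engine.JasperPrint;
import net.sf.jasperreports.engine.data.JRMapCollectionDataSource;

public class JasperReportDemo {
    public static void main(String[] args) throws Exception {
        // Each 'row' is a map whose keys match the field names declared in the report
        List rows = new ArrayList();
        Map row = new HashMap();
        row.put("product", "Widget");
        row.put("quantity", new Integer(3));
        rows.add(row);

        // Fill the precompiled report, then export it to PDF
        Map parameters = new HashMap();
        parameters.put("ReportTitle", "Monthly report");
        JasperPrint print = JasperFillManager.fillReport(
                "report.jasper", parameters, new JRMapCollectionDataSource(rows));
        JasperExportManager.exportReportToPdfFile(print, "report.pdf");
    }
}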

Both Jasper and FOP allow the inclusion of graphics files; the usual formats (GIF, JPEG) are supported, but FOP has the nice bonus of rendering SVG inside reports. Unfortunately, this comes at the price of using the Batik SVG Toolkit, which is a bulky (close to 2MB) and rather slow API. While producing your dynamic charts as XML files (Velocity again) is a seducing idea, the abysmal performance of SVG rendering will make you give up in no time. Unfortunately, I speak from experience.

At first sight, FOP has many more output format options than JasperReports. Of course there's PDF and direct printing via AWT, but also PostScript, PCL and MIF, as well as SVG. These choices are quite intriguing, since PostScript and PCL are printing formats (easily obtained by redirecting the specific printer queue into a file), MIF is a rather obscure Adobe format (for FrameMaker), and SVG… well, an SVG report is too darn slow to be usable (yes, I was foolish enough to try this, too). Jasper again makes the pragmatic choice by offering really useful output formats such as HTML, CSV and XLS (never underestimate the power of Excel), and of course direct printing via AWT and PDF.

While FOP's latest version (0.20.5) was released almost a year ago (summer 2003), JasperReports is bubbling with activity: Teodor releases a minor version every month or two (the latest being 0.5.3, on 18.05.2004).

I've decided to use as a 'lab rat' one of the apps developed during my 'startup days': the client GUI is written in Swing and features a few mildly complex reports generated using Velocity+FOP. FOP version is 0.20.4 (the current version back in Q1-2003, when we had to quit dreaming about the 'next financing round' and development halted) but as I already told you FOP has evolved little since then. Though, it's perfectly reasonable to use this implementation as a witness for comparison with Jasper (on the opposite, Jasper has evolved a great deal since Q1-2003).

Back then, the report development cycle was quite primitive. The XSL-FO templates were written by hand in a text editor, and the application code was run (via a JUnit test case, with some necessary configuration and business-data mocking) in order to generate a PDF report. In case of errors, we got feedback by examining the error traces; visual feedback came from the PDF output. While simple to perform, this cycle became extremely tiresome after a while, as there was an important overhead: start a new JVM, initialize FOP, fire up Acrobat Reader (plus, we were using some crappy, even by the standards of 2003, 1GHz machines with 256/512MB RAM). A WYSIWYG editor would have been nice, so one of my coworkers did some research, and the only solution he found was XMLSpy (StyleVision was not available back then), but at 800USD/seat this was 'a bit' pricey** for us (only the Enterprise flavor covers FO WYSIWYG editing!?). Another interesting idea was to use one of the RTF-to-FO conversion tools, such as Jfor, XMLMind or rtf2fo (of these products, only Jfor is free, but it is feature-poor). What stopped us was that the generated FO was overly complex: we needed comprehensible, cut_the_crap files, because we were going to integrate them into Velocity templates. And when you have tens of tags, blocks inside blocks, and not the slightest idea which one is a row, which one is a column and which one is a transparent dumbass artefact, integrating even simple VTL loops is a gruesome trial-and-error task. And you'd have to redo it each time you change something in the report: yikes! Conclusion: the report development cycle was primitive for FOP, and there was no way we could change it.

Things are quite different for JasperReports: there are a lot of report designers available, and some of them are free. The complete list is on the JasperReports site, but I'd like to mention at least three of them:

  • iReport is a Swing editor, very interesting because it covers not only the basic Jasper functionality but also supplementary features such as barcode support (which is admittedly as easy as embedding a barcode font in Jasper with two lines of XML, but much easier to do via a mouse click). iReport is free, which is excellent, but it is a standalone app without IDE integration and, like any complex Swing app, it is quite slow and a memory hog.
  • if you are a developer using Eclipse, you'll appreciate two graphical editors based on Eclipse GEF, available as Eclipse plugins: JasperAssistant and SunshineReports. Neither of them is free and, at least on paper, the functionality seems identical, but SunshineReports only offers the older 1.1 version for download, which is free but does NOT work with recent builds of Eclipse 3. How the heck am I supposed to test it? JasperAssistant, on the contrary, has a much more relaxed attitude, allowing the download of a free trial of the latest version of the product. Maybe too relaxed, though, because even if (theoretically) limited in the number of usages, you can use the trial as much as you want to***. But if you are serious about doing Jasper in Eclipse, you should probably buy JasperAssistant, available for a rather decent 59USD price tag. I am currently using it and it's a good tool.

So much for the tools; let's get the job done. The bad part: if you're experienced with FO templates, don't expect to be immediately proficient with Jasper, even with a GUI editor. The structure of an FO document has strong analogies with HTML: you have tables, rows, cells, stuff like that, inside special constructs called blocks. It's relatively easy to use a language such as VTL to create nested tables, alternating colors and other data-layout tricks. You can even render tree-organized data via a recursive VTL macro, and everything is smooth and easy to understand. Jasper is completely different, and at first sight you'll be shocked by its apparent lack of functionality: only rectangles, lines, ellipses, images, boilerplate text and fields (variable text). Each of these elements has an extensive set of properties governing when the element should be displayed, its stretch type, the expression providing its value, and so on. Basically, you have to write Java code instead of Velocity macros and call this code from the corresponding properties of the various report elements. If it feels a little awkward at the beginning, after a while it comes quite naturally. As for nesting and other advanced layouts, there is the powerful concept of 'subreports'. And yes, I've managed to render a tree using a recursive subreport, but given the poor performance, the final choice was to flatten the data into a vector and feed it into a simple Jasper report. So pay attention to the depth of your 'subreporting'.

Once the reports were completely migrated, I benchmarked a simple one (without SVG, charts, barcodes or other 'exotic' features). The test machine is a Toshiba Satellite laptop, 2.4GHz P4 with 512MB. In the FOP case, the compiled Velocity template and the FOP Driver are cached between successive runs. In the Jasper case, the report is precompiled and loaded only on the first run, then refilled with new data before each generation. The lazy loading and caching of the reporting engines is the cause of the important time difference between the generation of the first report and the subsequent ones. Delta memory is measured after garbage collection. The values presented are medians over 10 runs of the 'benchmark report'.

                 First run    Subsequent runs    Delta memory
Velocity + FOP   10365 ms     381 ms             850 KB
Jasper Reports   1322 ms      82 ms              1012 KB

While I am totally pro-Jasper after this short experiment, it is important to note that commercial, well-maintained FO rendering engines such as RenderX XEP claim improved performance over FOP. Depending on your requirements, environment and legacy reporting apps, an FO-based solution might still be the better choice, especially when report generation happens only on the server side.

Of course, the usual disclaimer applies: this benchmark is valid only for my specific report in my specific application, so YMMV.

* While I am aware that other OSS solutions exist for Java, I consider these two the 'mainstream' ones.

** Did I mention that we were a startup with financing problems?

*** No, I'm not going to explain here how it can be done.

Written by Adrian

May 25th, 2004 at 8:57 am

Posted in Tools


I'm amazed: I own a PC!

leave a comment

Two interesting predictions:

“McNealy also predicted that five years from now, most people will be amazed they ever owned a PC.”
June 5, 1998

“This agreement will be of significant benefit to both Sun and Microsoft customers. It will stimulate new products, delivering great new choices for customers who want to combine server products from multiple vendors and achieve seamless computing in a heterogeneous computing environment.”
Scott McNealy, President and CEO, Sun Microsystems, April 2, 2004

Written by Adrian

April 3rd, 2004 at 4:32 pm

Posted in AndEverythingElse
