Sunday, July 19, 2009

GWT vs. RichFaces a different perspective

Posted by Kris Gemborys

This article compares GWT with RichFaces and attempts to provide guidance on which tool is a better fit for your needs. Both tools offer a similar set of features and have specific strengths and weaknesses. However, GWT provides a new approach to implementing the UI which allows for complete separation of the presentation implementation from the services implementation. This separation has significant implications for scalability, which are often overlooked when simply comparing UI tooling features.

GWT (Google Web Toolkit) – is a toolkit which abstracts the JavaScript language behind a sophisticated set of libraries exposed to developers as Java APIs. These APIs allow you to write rich browser-based UI applications in Java using the best OO design and implementation practices. The huge bonus is cross-browser compatibility (I found it works most of the time). The libraries have sophisticated support for client state management, which significantly reduces the complexities associated with writing highly scalable distributed applications. GWT heavily leverages AJAX: all communication with the server is implemented using asynchronous callbacks. GWT is an open source product developed and supported by Google. Google heavily leverages GWT for implementing its SaaS and cloud computing strategies; all major Google products such as GMail, the Wave client, Google Docs, and many others use it. GWT offers seamless integration with Google App Engine, which is Google’s platform for cloud computing. The main competitors in the same market space are Adobe Flex and Microsoft Silverlight.

RichFaces – is one of the best JSF libraries offering AJAX support (http://www.jsfmatrix.net/). RichFaces leverages the SUN JSF RI for core JSF functionality and the Ajax4jsf toolkit for AJAX wiring. The main RichFaces competitors are .NET and ICEfaces. As opposed to GWT, RichFaces and .NET support traditional two-tier web application development. Two-tier web development follows the MVC model where the View resides in the browser while the Model and Controller reside on the server. RichFaces is an open source product developed and supported by Exadel, currently owned by Red Hat.


RichFaces strengths:

- simple steps to upgrade existing JSF applications. The upgrade requires copying three RichFaces jars and adding a couple of entries to the web.xml descriptor file

- backwards compatibility with JSF legacy pages. The old JSF code will work just fine with RichFaces (much like old C code works with C++). This offers an iterative and low-cost option for adding AJAX support to existing traditional JSF applications. Developers can gradually retrofit the sluggish or flickering JSF pages which will benefit most from introducing AJAX. For example, it is very easy to add AJAX listeners and event handlers to eliminate aggravating full-page refreshing.


Sample illustrating how to add AJAX support to existing controls:

<%-- traditional JSF control --%>
<h:selectOneMenu styleClass="combobox" title="Select Payment Method" onchange="setFocusEle(document.forms[0], this);doOnPageRefSubmit(this);" valueChangeListener="#{PaymentBean.onChange}" immediate="true" id="PaymentMethod" value="#{PaymentBean.paymentMethod}">
<f:selectItems value="#{PaymentBean.paymentmethodList}"/>
</h:selectOneMenu>

<%-- AJAX enabled JSF control --%>
<h:selectOneMenu styleClass="combobox" title="Select Payment Method" id="PaymentMethod" value="#{PaymentBean.paymentMethod}">
<f:selectItems value="#{PaymentBean.paymentmethodList}"/>
<a4j:support event="onchange" actionListener="#{PaymentBean.onChange}" ajaxSingle="true" immediate="true" oncomplete="setVisibilityField(data);controlVisibility(document.forms[0], this);">
<f:attribute name="attrName" value="PaymentMethod" />
</a4j:support>
</h:selectOneMenu>

public void changeVisibility(ActionEvent event) {
    HtmlAjaxSupport ajaxSupport = (HtmlAjaxSupport) event.getSource();
    ....
    String data = computeChanges();
    ajaxSupport.setData(data);
}
- replacing legacy menu and popup implementations is also very easy with RichFaces components; all you need is to start using the RichFaces tag libraries



RichFaces weakness:

- Traditional development methodologies built as extensions to the HTML markup language, such as ASP, JSP, .NET, and JSF, are showing their age. These approaches are more suitable for form-based request/response applications than for truly rich desktop-style UI implementations requiring complex UI navigation. The fundamental drawback of the JSF/JSP framework is the need to implement server-side components for processing UI data. Whether you use session-scoped JSF beans or some other approach, you will face a scalability issue and have to address failover requirements. Storing state on the server works fine for small clusters, but it does not scale well; there is no easy way to replicate session data across large clusters to transparently support failover. Websites that scale to millions of users will probably never use RichFaces due to these fundamental limitations. If you think that changing the default option to store JSF state on the client instead of the server and using exclusively request-scoped JSF beans will save you, think again. Even a fairly simple desktop-style application with JSF state saving set to client requires transferring state data exceeding 300 KB with each submission. While AJAX may come to the rescue somewhat, this is hardly an optimal way to implement a rich UI.


GWT strengths

- GWT and Flex do not need to store any data on the server to provide a rich user experience, including complex navigation. The Model, View, and Controller (Google calls it the Presentation pattern rather than the traditional MVC pattern) all run in the browser. The UI implementation is completely separated from the server implementation; a server developer does not need to deal with UI logic at all. The communication is completely stateless. The performance and scalability characteristics of the client and server implementations are fully separated. The UI developer can concentrate on UI performance, i.e. stylesheet optimization, widget lazy loading, and UI model logic. A server developer needs to take care of just the business logic, interfaces with other systems, and persistence. The service developer does not need to know anything about the presentation implementation; from his/her perspective, the UI can be implemented in anything as long as the incoming and outgoing data complies with the message specifications, whether it is JSON, plain old XML, or something else.
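
To make the stateless, callback-based interaction concrete, here is a minimal GWT-RPC sketch of the pattern described above. The QuoteService interface, its method, and the widgets are hypothetical illustrations, not part of GWT itself or of this article:

import com.google.gwt.core.client.EntryPoint;
import com.google.gwt.core.client.GWT;
import com.google.gwt.user.client.Window;
import com.google.gwt.user.client.rpc.AsyncCallback;
import com.google.gwt.user.client.rpc.RemoteService;
import com.google.gwt.user.client.rpc.RemoteServiceRelativePath;
import com.google.gwt.user.client.ui.Label;
import com.google.gwt.user.client.ui.RootPanel;

// Synchronous service interface implemented by the server side.
@RemoteServiceRelativePath("quote")
interface QuoteService extends RemoteService {
    String getQuote(String symbol);
}

// Asynchronous counterpart used by the browser-side code.
interface QuoteServiceAsync {
    void getQuote(String symbol, AsyncCallback<String> callback);
}

public class QuoteEntryPoint implements EntryPoint {
    public void onModuleLoad() {
        final Label label = new Label("Loading...");
        RootPanel.get().add(label);

        QuoteServiceAsync service = (QuoteServiceAsync) GWT.create(QuoteService.class);
        service.getQuote("GOOG", new AsyncCallback<String>() {
            public void onSuccess(String result) {
                // The view is updated entirely in the browser; no session
                // state is kept on the server between calls.
                label.setText(result);
            }
            public void onFailure(Throwable caught) {
                Window.alert("Call failed: " + caught.getMessage());
            }
        });
    }
}

The server side only has to implement QuoteService and return the data; it knows nothing about how the result is rendered.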


GWT weakness

- There is only one currently available WYSIWYG editor for GWT, offered by Instantiations. The toolkit is reasonably priced, but I had problems using it with complex pages. While many may disagree, the need for a WYSIWYG editor is not as obvious in the case of GWT. After struggling a bit initially, I found it fairly easy to work with GWT panels directly to implement the desired layout, using hosted mode to instantly test my changes. The only time-consuming part of the UI design and development was fixing cross-browser layout problems, which required recompiling the code. In hosted mode, GWT works with the bytecode and uses an IE-compatible browser container to render the content. In order to test the code on another browser, a developer has to compile the entire application, which is a very time-consuming process.


Conclusion:

Both RichFaces and GWT address similar tooling needs for writing rich UI applications. However, GWT is on the way in while JSF AJAX is on the way out. In my mind, comparing RichFaces to GWT is like comparing C++ to Java. RichFaces is the natural evolution from the markup language, JSP, and JSF. RichFaces offers AJAX support and a really complete set of controls and APIs. Additionally, it can be easily integrated with proven enterprise frameworks such as Spring or, more recently, Seam. On the other hand, GWT is the new kid on the block, somewhat similar to what Java was in its early days. When Java came around it lacked a lot of features, but it quickly grew way beyond a simple applet development toolkit. Back then, I remember everyone claiming Java was not the right choice for server development because it does not compile code natively and so is not as fast as C++. JSF with AJAX support is a good choice to give a facelift to existing legacy applications and capitalize on abundant fully-featured tooling, but long-term GWT is probably a better option. RichFaces does not resolve the fundamental problems related to scalability and presentation/server logic separation, while GWT does.


Saturday, July 18, 2009

Communicating using HTTPS/SSL in Java

Posted by Kris Gemborys

Writing Java client SSL code to communicate using HTTPS can be a frustrating experience.
You will need to complete the following two tasks to have your SSL handshake working and HTTPS communication going:
1) Create and configure keystores which are repositories storing X509 certificates
2) Write client java code
Depending on project objectives, you will also need to consider the following SSL configuration options:
1) Self-signed certificates - used in development environment and sometimes Intranets
2) Two-way SSL authentication - typically used to facilitate secured SOA communication over WAN
3) One-way SSL authentication - most common way to submit sensitive data to eCommerce websites

This article discusses various steps and challenges encountered when configuring SSL with IBM JDK 1.4.x, IBM JDK 1.6.x, and SUN JDK 1.6.x.



We should start by determining what type of SSL communication we will be handling. In the development environment, we typically deal with self-signed X509 certificates. The SSL/HTTPS server configuration steps and tools obviously depend on the server's vendor. For example, when configuring servers which use SUN's JDK we should use the keytool located in SUN's JRE bin folder, and when configuring WebSphere or IBM HTTP Server, we should use IBM's ikeyman.bat. When working with certificate repositories (keystores), we need to be aware of the different provider formats such as CMS, JKS, and PKCS12.
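
For illustration, here is a minimal Java sketch of opening a keystore; the file name and password are placeholders and exception handling is omitted (IBM's CMS format additionally requires the IBM-specific provider to be enabled):

import java.io.FileInputStream;
import java.security.KeyStore;

// "JKS" is the default SUN format; use "PKCS12" for PKCS#12 files.
KeyStore ks = KeyStore.getInstance("JKS");
FileInputStream in = new FileInputStream("client-keystore.jks");
try {
    ks.load(in, "changeit".toCharArray());
} finally {
    in.close();
}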




The most important difference between self-signed certificates and production certificates obtained from Certificate Authorities (CAs), such as Verisign, is that self-signed certificates are not trusted. Every X509 certificate contains references to other certificates (the certificate chain). This certificate chain leads to a root CA certificate and proves that the certificate installed on a server and used for SSL communication has been legitimately obtained and that the owner of this certificate is an entity registered with the CA. Obtaining certificates from CAs costs money and, obviously, in the development environment we do not care whether a certificate is legitimate or not; we just want to make sure that our SSL code works. The one problem with self-signed certificates is that when Java code or a browser communicates with a server which uses a self-signed certificate, the code returns certificate errors. The Java SSL handshake throws exceptions complaining that the server certificate cannot be trusted. One of the required steps to fix this issue is to force the Java code to trust this self-signed certificate.



Configuring server with HTTPS/SSL

The steps to configure the server's keystore depend on the vendor, so you will need to consult the appropriate documentation. In the case of IBM HTTP Server, you will use ikeyman to create a new keystore. When creating an IBM HTTP Server keystore, you will need to use the CMS provider and stash the password (do not forget to select the stash password checkbox). To make things even more difficult, IBM requires use of the CMS provider for both IBM HTTP 6 and IBM HTTP 7, but IBM HTTP 6 does not come with this provider enabled in the default configuration; you will have to go to the Java security configuration folder to enable the CMS provider. Fortunately, IBM HTTP 7 does not have this issue. Once you have your keystore created or updated, you will need to generate a new self-signed certificate. I suggest you select the maximum number of days, which is either 999 or 9999 depending on your keystore tooling. The last step is to export the public portion of the certificate. The export function works the same way for self-signed certificates as for CA certificates: it extracts the public key. You will need this public key to configure your client.




Configuring trusted keystore on the client

When dealing with more recent JDK implementations, you need to be aware of two types of certificate repositories: regular keystores and trusted keystores. The trusted keystore is the source of all the grief when trying to configure straight HTTPS using self-signed certificates. By default, a self-signed certificate is not trusted because a client cannot validate the server's certificate chain when establishing the SSL connection. You need to use keytool or ikeyman to import the public portion of the server's self-signed certificate into the client's default trusted keystore. Internet mailing lists are full of postings reporting various certificate chain errors (and believe me, you will get different errors depending on which JSSE APIs you are using). SUN's and IBM's default trusted keystores already have all the necessary entries for the root CAs, so when communicating with a site that uses a certificate obtained from a CA you do not need to modify the default trusted keystore. SUN's trusted keystore is located in <JAVA_HOME>/lib/security/jssecacerts. You should not modify this file; rather, you should make a copy and use the properties settings shown in the sample Java code to override the location of your trusted keystore file. If you are not using self-signed certificates you do not need to import anything into the trusted keystore.



Two-way SSL authentication

If a client uses JDK 1.6, you have to configure both the keystore and the trusted keystore, whether you plan to store any X509 certificates in the keystore or not. While the client's trusted keystore needs to have all the certificates required to validate the server's certificate chain, the keystore itself can be empty. You will need certificates in this client keystore only if you plan to use two-way SSL authentication. The process for obtaining client certificates for two-way authentication is similar to the process for obtaining server certificates. In the development environment, you can use keytool to generate a self-signed certificate and then export the public key. You guessed it right: this public key needs to be imported into the server's trusted keystore or you will be getting some nasty errors in the server logs.



The sample client Java code will initialize SSL communication and verify that everything works.



This code configures the keystore locations:




System.setProperty("javax.net.ssl.keyStore", properties.get("keystoreClient").toString());
System.setProperty("javax.net.ssl.keyStoreType", keyStoreType);
System.setProperty("javax.net.ssl.keyStorePassword", keyStorePassword);
if (trustedKeyStoreFlag) {
    System.setProperty("javax.net.ssl.trustStore", properties.get("keystoreClientTrusted").toString());
    System.setProperty("javax.net.ssl.trustStorePassword", trustedKeyStorePassword);
    System.setProperty("javax.net.ssl.trustStoreType", keyStoreType);
}







This portion of the code initializes the key and trust managers, which retrieve certificates from the keystores:






KeyManagerFactory kmf = KeyManagerFactory.getInstance(providerName);
KeyStore ks = null;
kmf.init(ks, null);

/* Initialize the trust manager */
TrustManagerFactory tmf = null;
if (trustedKeyStoreFlag) {
    tmf = TrustManagerFactory.getInstance(providerName);
    tmf.init(ks);
}





This portion of the code initializes the SSLContext with the SSL or TLS protocol. You should be using TLS whenever available.




SSLContext sslContext = SSLContext.getInstance(sslContextName);
sslContext.init(kmf.getKeyManagers(), tmf != null ? tmf.getTrustManagers() : null, null);



This function will just send an HTTPS request and process an HTTPS response:




initialized = testHandshake(testURL);
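
The testHandshake() implementation itself is not shown in this post; below is a minimal sketch of what such a method might look like, assuming sslContext is the SSLContext initialized above and kept as a field of the same class. The URL handling and status check are illustrative assumptions:

import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;
import javax.net.ssl.HttpsURLConnection;

private boolean testHandshake(String testURL) {
    try {
        URL url = new URL(testURL);
        HttpsURLConnection conn = (HttpsURLConnection) url.openConnection();
        // Use the SSLContext configured above instead of the JVM defaults.
        conn.setSSLSocketFactory(sslContext.getSocketFactory());
        conn.setRequestMethod("GET");
        int status = conn.getResponseCode(); // this forces the SSL handshake
        conn.disconnect();
        return status == HttpURLConnection.HTTP_OK;
    } catch (IOException e) {
        e.printStackTrace();
        return false;
    }
}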






Java SSL Example InitSSL.java

SUN Java Code to update keystore with missing X509 cert InstallCert.java







Setting Apache Load Balancer with a cluster of Tomcat servers

Posted by Kris Gemborys


If you want to configure an Apache HTTP server as a load balancer that forwards requests to a cluster of Tomcat servers and ensure that sticky sessions actually stick, you are not alone. If you have spent hours reading various articles with incomplete and outdated information and your cluster of servers behaves not exactly the way you want it to, do not despair; just keep reading.


The one important lesson I have learned is that if you have successfully configured your first Tomcat cluster, configuring and testing the next one will take you only minutes. Setting up load balancing with sticky sessions is actually trivial if you: a) pay attention to the slashes when entering the Proxy Balancer settings, and b) understand that the Apache HTTP server follows a specific naming convention for the cookies used to route HTTP requests to back-end servers.


Prerequisites:


1) Fedora Linux - Tested with Fedora 8 - Command to check the Linux version: cat /proc/version


2) Apache HTTP Server - Tested with Apache 2.2.6 bundled with Fedora 8 distribution - Command to check Apache version: httpd -V


3) Tomcat server - Tested with Apache Tomcat 6.0.16 - You can download the latest version of Tomcat from: http://tomcat.apache.org/


Word of Caution:


Before making any significant changes to configuration files you need to remember to make backups!


Cluster Topology


The simple cluster used for configuring the load balancer consists of a single Apache HTTP server forwarding traffic to two Tomcat servers. All three components were deployed to a single physical server running Linux. The Apache HTTP server is configured to listen on ports 80 and 443, while the Tomcat servers listen on ports 14180 and 15180.


Figure 1. Simplified Topology Diagram



Figure 2. Apache Load Balance Manager Configuration




Configuring the Apache Load-Balancer


You start configuring load balancing by modifying the Apache HTTP server configuration file, httpd.conf.


The default location of this file is /etc/httpd/conf/httpd.conf


First check if the following modules are enabled:


LoadModule rewrite_module modules/mod_rewrite.so
LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_balancer_module modules/mod_proxy_balancer.so
LoadModule proxy_http_module modules/mod_proxy_http.so
LoadModule proxy_connect_module modules/mod_proxy_connect.so
LoadModule cache_module modules/mod_cache.so


Modules mod_cache and mod_rewrite are optional.


Next you need to enable access to the load balancer manager by adding the following entries:


<Location /balancer-manager>
SetHandler balancer-manager
Order Deny,Allow
Deny from all
Allow from 192.168.1.30
</Location>


The purpose of the Deny and Allow entries is to limit access to specific resources; you need to customize these entries according to your needs.


In order to enable load balancing functionality, you will need to configure the proxy module.


The configuration entries below allow access only from your internal non-routable IP addresses.


<IfModule mod_proxy.c>

<Proxy *>
Order deny,allow
Deny from all
Allow from 192.168.1
</Proxy>

ProxyRequests Off

ProxyPass /balancer-manager !
ProxyPass /server-status !
ProxyPass /server-info !

</IfModule>


If you want to enable access for all users, you will need to replace the above <Proxy> settings with:


<Proxy *>
Order deny,allow
Allow from all
</Proxy>


The ProxyPass entries ending with ! specify exclusions, i.e. the URLs excluded from being forwarded to the back-end servers.


Now it is time to configure our cluster:


NameVirtualHost *:80
<VirtualHost *:80>


ServerName www.example.com
ServerAlias example.com
DocumentRoot "/var/www/html"
ProxyRequests Off


<Proxy *>
Order deny,allow
Deny from all
Allow from 192.168.1
</Proxy>


ProxyPass / balancer://maxcluster/ stickysession=JSESSIONID|jsessionid nofailover=On
ProxyPassReverse / http://127.0.0.1:14180/
ProxyPassReverse / http://127.0.0.1:15180/
<Proxy balancer://maxcluster>
BalancerMember http://127.0.0.1:14180 route=node01
BalancerMember http://127.0.0.1:15180 route=node02
ProxySet lbmethod=byrequests
</Proxy>


</VirtualHost>


The above VirtualHost configuration entry is for the domain www.example.com and load balances requests between two Tomcat servers deployed on the same physical server. The Tomcat server identified as node01 listens on port 14180 for incoming requests (instead of the default 8080), while the second Tomcat server, identified as node02, listens on port 15180. If the Tomcat servers are running on different physical servers and using the default ports, the balancer configuration entries should look as below:


ProxyPassReverse / http://192.168.1.50:8080/
ProxyPassReverse / http://192.168.1.51:8080/
<Proxy balancer://maxcluster>
BalancerMember http://192.168.1.50:8080 route=node01
BalancerMember http://192.168.1.51:8080 route=node02
ProxySet lbmethod=byrequests
</Proxy>


When entering the Proxy balancer entries you need to pay extra attention to these slashes!


As you can guess, the Apache server uses the route entries to route HTTP requests to the appropriate back-end servers. The Apache server expects the route entry id "node01" to be part of the JSESSIONID cookie value. In other words, the naming convention for the Tomcat cookie is <SessionID>.<route>. An example of a Tomcat JSESSIONID session cookie that follows this convention is: 92AF8327CCC933A562FD6B7EFE8DF9C1.node01.


At this point we are done with the Apache balancer configuration, but you probably still wonder how to add routing identifiers to your Tomcat session cookie identifiers.


Configuring the Cluster of Tomcat Servers


Now it is about time to start configuring your Apache Tomcat servers so that the JSESSIONID cookie includes the route.


First we should install two Tomcat servers in separate directories (or on different servers). If the Tomcat servers are running on the same physical server, you need to update the listener ports in the server.xml configuration files to avoid any port conflicts. You will need to examine server.xml for any port entries, i.e.


<Connector port="14180" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="14143" />


In order to make things easier you may consider updating all default port numbers with a new series of ports, i.e. use the 14000 series instead of 8000. Again, this is only required if you are running multiple Tomcat servers on the same physical server. You can use the netstat -an command to verify that there are no port conflicts.


Once you have dealt with port entries, you need to take care of routing:


In the first Tomcat server.xml configuration file search for the Engine entry and modify this entry as follows:


<Engine name="Clustered" defaultHost="localhost" jvmRoute="node01">


Next follow the above steps for the second server


<Engine name="Clustered" defaultHost="localhost" jvmRoute="node02">


Testing the Configuration


First you need to restart all servers; one of the ways to accomplish this is to execute the following commands:


service httpd restart


catalina.sh stop


catalina.sh start


You will also need to disable the default SELinux setting which prevents Apache Server from forwarding HTTP requests:


setsebool -P httpd_can_network_connect=1


You can test your cluster using the Tomcat session example:


http://www.example.com/examples/servlets/servlet/SessionExample


Before running the example you should start monitoring your back-end server access logs, i.e.:


tail -f catalina.out


Now open a new browser and enter the SessionExample URL, then keep pressing the "Submit Query" button. The Session ID should remain the same and the Last Accessed date should keep refreshing. You should also see only one log being updated with new messages. Next, close the browser and reopen it. Repeat the above steps and you should see the UI updated with a new session id, and the logs of the corresponding server reporting new entries as you keep pressing the "Submit Query" button. You should also notice that the Session ID contains the routing information.


Well, configuring an Apache HTTP server as a load balancer was not that hard after all, or was it?


References


http://www.markround.com/archives/33-Apache-mod_proxy-balancing-with-PHP-sticky-sessions.html


http://wiki.jboss.org/wiki/UsingMod_proxyWithJBoss


http://httpd.apache.org/docs/2.2/mod/mod_proxy.html


Load balancing configuration for back-end Apache servers:


http://www.howtoforge.com/load_balancing_apache_mod_proxy_balancer


Tomcat Apache Engine Reference:


http://tomcat.apache.org/tomcat-5.5-doc/config/engine.html

Configuring Eclipse SDK 3.4 to run Google GWT sample applications

Posted by Kris Gemborys


As I spend more time developing web applications using Google GWT, I wonder why someone would want to use any other tools for developing interactive rich internet applications. For someone like me, who knows Java and UI development principles, using GWT is a natural fit. The Google online documentation is very helpful, but it is missing a few details on how to set up the GWT example projects (i.e. Showcase) in the Eclipse environment. If you are interested in configuring and running the GWT examples from your Eclipse environment, this article is for you.




1) Download GWT SDK from here:




2) Download Eclipse 3.4 Ganymede from here:



After downloading the Eclipse SDK, unzip the content to the root folder (i.e. c:\ on Windows); you can then start Eclipse by executing c:\eclipse\eclipse.exe



3) Install GWT for Eclipse; Google provides instructions here:




4) Install GWT Eclipse plugin


4.1) Start Eclipse

4.2) Navigate to Help>Software Update and select the Available Software tab.

4.3) Add the Google site using the "Add Site" button






4.4) In the Location field enter the URL retrieved from the Google web site containing the plugin installation instructions above (i.e. http://dl.google.com/eclipse/plugin/3.4 for Eclipse SDK 3.4)















4.5) Accept the Google licensing agreement and follow the plugin installer prompts.

4.6) Once the plugin is downloaded and installed, restart Eclipse

4.7) Test your GWT and App Engine installation using the Google instructions




Once you have GWT and App Engine Eclipse plugins working, you can proceed to installing and running GWT examples directly from your Eclipse SDK




5) Unzip the GWT SDK downloaded in step 1)

6) Navigate to the GWT SDK samples folder -> gwt-windows-1.6.4\samples



Installing Showcase sample as Eclipse project


6.1) Using Eclipse, create a new Google Web Application Project and call it Showcase. In the package field you must enter com.google.gwt.sample.showcase





6.2) Once the Eclipse project is generated, use Windows Explorer to navigate to the <GWT SDK>\gwt-windows-1.6.4\samples folder and copy/paste the Showcase folder over the previously created Eclipse project's folder, allowing Windows Explorer to overwrite all previously generated Showcase project files.

6.3) Go back to Eclipse and refresh the Showcase project.

6.4) The Showcase project should compile without errors, and you should be able to run the project in the hosted environment the same way as any other GWT project.

7) Follow the same steps to install and run other GWT SDK samples:

- DynaTable

- Hello

- Mail




If you are interested in running the latest GWT 2.0 and showcase sample, you should read the Google instructions here:



In order to connect to the Google GWT repository you will need to install an SVN client. I am using the Subclipse plugin, which you can download from here:




Follow the same steps as for the GWT plugin installation.

After pressing the "Add Site" button, you will need to update the location field with http://subclipse.tigris.org/update_1.6.x



Once the Subversion Eclipse plugin is installed, you should use the SVN perspective to connect to the GWT 2.0 project and download the latest software from here:








Good luck!

WebSphere Performance Tune Up Tips

Posted by Kris Gemborys

WebSphere Configuration Tips
  • Turn on the Performance Monitoring Infrastructure (PMI) and use the standard settings; they have very small overhead and can be used even in production.
  • Use the Performance Monitor Viewer to analyze the behavior of various pools:

§ DataSource pools

§ Resource Adapter pools

§ Thread pools

§ EJB pools

  • Use the Performance Monitor Viewer to analyze:

§ Servlet response time

§ EJB response time

  • Make sure that “Connection Wait Time” is zero. If “Connection Wait Time” is greater than zero, it means that the size of the pool is too small for the given load.
  • If the size of the pool is too large, fetching a connection can have a measurable impact on performance even when the system is not under stress.
  • The minimum size of the heap should be sufficiently large to handle a small to medium amount of load without the need for additional memory allocation.
  • Setting the maximum heap size beyond 1 GB on a 32-bit operating system will not have a significant impact.
  • Do not increase the stack size (-Xss) beyond 1 MB on 32-bit OSes.
  • Enable verbose GC and use the IBM Support Assistant Workbench to analyze the GC logs. Verbose GC is possibly the best tool for analyzing memory leaks while load testing J2EE applications and in production.
  • Upgrade to the latest JVM Fix Pack; applying WebSphere Fix Packs does not apply JVM Fix Packs. In most circumstances the JVM Fix Pack needs to be downloaded and applied separately.
  • Use the GC report created with the verbose GC option to adjust the heap minimum and maximum sizes as well as the heap expansion ratio (-Xms, -Xmx, -Xminf) until you reach an acceptable balance between the number of allocation failures and the amount of pause time during each garbage collection. A large heap size increases the heap mark and sweep time, which is a show stopper for all other threads (IBM JVM specific).
  • Use the verbose GC report to identify the minimum heap size needed to avoid cyclical heap contractions and expansions.
  • Avoid using System.gc(); GC is a "stop the world" event, meaning that all threads of execution will be suspended except for the GC threads themselves. If you must call GC, do it during a non-critical or idle phase (this is specific to the IBM JVM implementation; the SUN JVM handles mark and sweep differently).

Java and J2EE Performance Related Programming Tips
  • Minimize back-end calls by caching frequently requested data

  • Cache frequently used short-lived objects to avoid repeatedly recreating the same objects over and over and thereby invoking the GC. If you are concerned about memory allocation, consider using a WeakHashMap
  • Avoid finalizers

  • Use primitive variable types instead of Java object types whenever possible. For example, use int instead of Integer. Using the Java object types still makes more sense for data objects because it lets you avoid headaches with supporting NULLs.

  • Call EJBs by reference instead of by value whenever possible
  • Use EJB Local Interfaces (remote, out-of-process calls will kill your performance since they are 1000 times slower than in-process calls; this is commonly referred to as the round-tripping problem)
  • If you use ThreadLocal to store user context data, make sure that you not only remove the data explicitly when you are done but also set all referenced objects to null. If you just remove an instance without resetting all referenced objects to null, you will end up with memory leaks and performance issues.
  • Keep stateful data to an absolute minimum

  • Do not use Stateful Session Beans if you plan to support large clusters

  • Use JSF managed beans with request scope rather than session scope (the same principle as stateless vs. stateful beans)

  • When manipulating strings, use StringBuffer or, if available, StringBuilder

  • Avoid excessive writing to Java Standard Output or Standard Error (in one case removing excessive logging in production improved the application performance by 40% without additional code changes)

  • Avoid allocating objects within loops, which keeps the object alive on the Java heap longer than necessary.
  • If you need to cache reference data (read only), consider using a HashMap, LinkedList, or ArrayList
  • If you need to cache data which needs to be updated, consider using ConcurrentHashMap or similar concurrent collections
  • Do not use Hashtables
  • Limit the use of synchronization and, if you do use synchronization, try to use instance synchronization rather than static synchronization. Consider the following simple example of loading read-only reference data (a minimal reconstruction of the badFoo()/goodFoo() example appears after this list). The badFoo() version has many problems and in certain scenarios may even hang, while goodFoo() is thread-safe even though it does not use synchronization at all. The trick is in using local variables to do all the processing and assigning the final result as an atomic operation. The only small drawback of goodFoo() is that the first couple of concurrent users will be executing the read and process operations in parallel. This can be resolved by preloading the data when the application starts (i.e. in a ContextListener) rather than using lazy loading. There are still some other issues with goodFoo() related to exposing a static collection to external manipulation; this can be addressed by returning only cloned elements.
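
Since the original badFoo()/goodFoo() listing is not included in this post, here is a minimal reconstruction of the pattern described above; the class name and the loadFromDatabase() helper are hypothetical:

import java.util.HashMap;
import java.util.Map;

public class ReferenceDataCache {

    private static Map cache; // lazily loaded, read-only reference data

    // Problematic version: static synchronization serializes every caller, and
    // if loadFromDatabase() blocks while the lock is held, all callers wait.
    public static synchronized Map badFoo() {
        if (cache == null) {
            cache = new HashMap();
            cache.putAll(loadFromDatabase());
        }
        return cache;
    }

    // Version described in the text: all processing is done on a local variable
    // and the finished map is published with a single atomic reference assignment,
    // so no synchronization is used. The worst case is that the first few
    // concurrent callers each load the data in parallel.
    public static Map goodFoo() {
        Map result = cache;
        if (result == null) {
            Map local = new HashMap();
            local.putAll(loadFromDatabase());
            cache = local; // atomic assignment of the fully built map
            result = local;
        }
        return result;
    }

    // Placeholder for the real back-end read.
    private static Map loadFromDatabase() {
        return new HashMap();
    }
}

As noted above, the remaining issues (parallel initial loads and exposing the static collection) can be addressed by preloading the data at application startup and by returning only cloned elements.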


References:


https://developer-content.emc.com/developer/downloads/FAQ_Websphere_Performance.pdf


http://www-01.ibm.com/support/docview.wss?rs=180&uid=swg21114927


http://www-01.ibm.com/software/support/isa/


http://www.ibm.com/developerworks/library/i-gctroub/


http://www.ibm.com/developerworks/eserver/library/es-JavaVirtualMachinePerformance.html




http://publib.boulder.ibm.com/infocenter/javasdk/v5r0/index.jsp?topic=/com.ibm.java.doc.diagnostics.50/diag/appendixes/defaults.html


http://publib.boulder.ibm.com/infocenter/wasinfo/v6r0/index.jsp?topic=/com.ibm.websphere.express.doc/info/exp/ae/tprf_tunejvm.html

