Sunday, 26 January 2014

CSS Score

We all know that when many conflicting CSS properties can be applied to one web element, the specification says that the more specific properties win. However, "specific" is an abstract word. Hence, it is better to know about the CSS score, or how the browser chooses which properties to override.

Browsers categorize CSS selectors into 4 categories, with specificity from high to low:

1/ Style attribute: <li style="color:white">

2/ ID: #some_id{ color: red;}

3/ Class, pseudo class, attribute: .some_class {color:green;}

4/ Elements: li {color:black;}

From the W3C recommendation, the result of this calculation takes the form of four comma-separated values, a,b,c,d, where the values in column "a" are the most important and those in column "d" are least important. A selector's specificity is calculated as follows:

  • To calculate a, count 1 if the declaration is from a style attribute rather than a rule with a selector (an inline style), 0 otherwise.
  • To calculate b, count the number of ID attributes in the selector.
  • To calculate c, count the number of other attributes and pseudo-classes in the selector.
  • To calculate d, count the number of element names and pseudo-elements in the selector.
Here is one example using this rule:

body#home div#warning p.message --> 0, 2, 1, 3

Please notice the comma ',' in the CSS score; it is there to remind us that the scores b, c, d can each be equal to or bigger than 10. Still, the comparison is done from left to right.
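To make the scoring concrete, here is a Python sketch of a simplified specificity calculator (the parsing is intentionally naive and does not cover the full CSS selector grammar):

```python
import re

def specificity(selector: str, inline: bool = False) -> tuple:
    """Compute a simplified (a, b, c, d) specificity score.

    a: 1 for an inline style attribute, 0 otherwise
    b: number of IDs; c: classes; d: element names
    """
    ids = len(re.findall(r"#[\w-]+", selector))
    classes = len(re.findall(r"\.[\w-]+", selector))
    # Element names: tokens that start a simple selector (not preceded by # or .)
    elements = len(re.findall(r"(?:^|\s)([a-zA-Z][\w-]*)", selector))
    return (1 if inline else 0, ids, classes, elements)

# Python tuples compare left to right, exactly like CSS scores:
print(specificity("body#home div#warning p.message"))  # (0, 2, 1, 3)
# One ID still beats eleven classes, because comparison is left to right:
print(specificity("#some_id") > specificity(".a .b .c .d .e .f .g .h .i .j .k"))  # True
```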

Saturday, 25 January 2014

All about transactions

As you can find many articles and tutorials on the web explaining transactions, I will try to write this article in FAQ style. I hope you can find something interesting after reading it.

A. What is a transaction?

A transaction in a Java context normally means a database transaction. It comprises one or more interactions with a database, which should be treated as one unit of work. This unit of work can be committed or rolled back.

B. Why did people invent transactions?

There are two concerns that cannot be solved without transactions:

1/ First, if you have some logical unit of work that involves interactions with multiple databases, you will want it to succeed or fail as a whole (think of transferring money between 2 bank accounts stored in 2 different databases).

2/ Secondly, there are many users accessing the database concurrently, and your work requires a series of interactions with the database. You do not want people to see the intermediate results of your work. For example, transferring money from one account to another in the same DB still requires 2 interactions with the DB, and you must not let users see the DB state after only the first interaction (where you have deducted the money from one account but not credited it to the other, or the opposite).
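To make the unit-of-work idea concrete, here is a Python sketch using the standard sqlite3 module (the account table and balances are made up); the transfer either commits as a whole or rolls back as a whole:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO account VALUES (?, ?)", [(1, 100), (2, 0)])
conn.commit()

def transfer(conn, src, dst, amount):
    """Move money between accounts as one unit of work."""
    try:
        conn.execute("UPDATE account SET balance = balance - ? WHERE id = ?",
                     (amount, src))
        # Simulate a failure between the two interactions:
        if amount > 100:
            raise RuntimeError("insufficient funds")
        conn.execute("UPDATE account SET balance = balance + ? WHERE id = ?",
                     (amount, dst))
        conn.commit()
    except Exception:
        conn.rollback()  # no intermediate state survives
        raise

transfer(conn, 1, 2, 40)       # succeeds: balances become 60 / 40
try:
    transfer(conn, 1, 2, 999)  # fails mid-way: both updates are undone
except RuntimeError:
    pass
print(list(conn.execute("SELECT id, balance FROM account ORDER BY id")))
```

The failed transfer leaves no trace: the first UPDATE is rolled back together with the error, so no one ever observes the half-done state.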

C. Can transactions fully solve the concerns above?

Up to a certain extent.

For the first concern, transactions fit well as long as you do not write anywhere except the database. Sometimes, you do not have that luxury. For example, writing to the file system, uploading content or interacting with a legacy system are some of the things that have no rollback feature.

For the second concern, you cannot fully isolate your database interactions in a concurrent environment. Unless you lock other DB connections out of reading the affected records or tables, there will be some exposure to a certain degree. The more isolated your transaction is, the more performance your system will sacrifice. Because of that, in real situations, developers rarely make transactions fully isolated. Depending on your business requirements, you will choose one of the transaction isolation levels below (from Productive JavaEE 5 by Adam Bien):

READ UNCOMMITTED
Already written, but still uncommitted, data is visible to other transactions. If any of the changes are rolled back, another transaction might have retrieved an invalid row. This setting can even be problematic for read-only applications such as reporting. Reports created with READ_UNCOMMITTED can be partially based on stale or even non-existent data.

READ COMMITTED
Only committed data is visible to other transactions. The isolation is higher, because uncommitted data is hidden until the commit, and only then visible to others. You can rely on the consistency of the read data; however, the data remains consistent only for the length of the read operation. When you re-execute the same query in the currently active transaction, you might see different results.

REPEATABLE READS
On this level, a query with the same parameters ensures the same results. Even if other transactions commit changes that affect some rows in the result set, the result set will not change. Although delete and update operations are not visible in the result set, new rows can still be added by another transaction and appear in the result (so-called phantom reads).

SERIALIZABLE
This is the highest possible level. The name is derived from serial access: no interference between transactions is allowed. Re-running a query always returns the same result, regardless of whether the data was changed, added or deleted. This level is very consistent and very costly: read and write locks are needed to ensure it. The price for the consistency is high as well: the lowest scalability (because of the lack of concurrency) and increased chances of deadlocks.
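SQLite does not expose all four ANSI isolation levels, but a small Python sqlite3 sketch can at least demonstrate the phenomenon that READ COMMITTED permits, the non-repeatable read (the file layout and table are made up):

```python
import os
import sqlite3
import tempfile

# Two connections to one database file stand in for two concurrent users.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
reader = sqlite3.connect(path)
writer = sqlite3.connect(path)

writer.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, balance INTEGER)")
writer.execute("INSERT INTO account VALUES (1, 100)")
writer.commit()

# The reader runs the same query twice; in between, the writer commits a change.
first = reader.execute("SELECT balance FROM account WHERE id = 1").fetchall()[0][0]
writer.execute("UPDATE account SET balance = 50 WHERE id = 1")
writer.commit()
second = reader.execute("SELECT balance FROM account WHERE id = 1").fetchall()[0][0]

print(first, second)  # 100 50: the same query gave two different answers
```

Only committed data was read both times, yet the two reads disagree; a REPEATABLE READS or SERIALIZABLE level would have to prevent this.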

D. How do you deal with the limits of transactions?

To make it short: to deal with the limits of transactions, your system needs to know how to recover when errors do happen. There are two categories of errors that you may need to tackle.

1/ Non-database-related errors. These happen when you write something somewhere in the middle of a transaction and the transaction rolls back due to some DB error. Fortunately, it is quite safe to assume that the transaction manager will throw some kind of exception when the rollback happens. Unfortunately, as your system is not the database, it does not know how to undo non-database changes when a transaction rolls back. You will need to take responsibility for identifying the exception type and attempting to recover system integrity where possible.

2/ The second issue arises from transaction isolation. The data you want to work with may not be the latest, or may no longer be available, at the time the transaction commits. As the issue is well known, the solutions for it are also well known. They are divided into two strategies:

Pessimistic locking: as the name suggests, this strategy assumes that interference between transactions will happen, and that the better way to fix the issue is to never let it happen. The idea is to avoid sharing resources, whether by row locks, table locks or synchronized access in application code. This approach surely works, at a great performance sacrifice.

Optimistic locking: as the performance sacrifice of resource locking is high, the preferred approach is optimistic locking. It is based on the assumption that we have a reliable way to detect any data change in the middle of the transaction. If that happens, a transaction exception is thrown, but this unfortunate case should not happen often. To track changes, people often include a version column in the table and increase the version any time a modification happens.
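A minimal optimistic-locking sketch in Python with sqlite3, assuming a made-up item table with a version column:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE item (id INTEGER PRIMARY KEY, name TEXT, version INTEGER)")
conn.execute("INSERT INTO item VALUES (1, 'old name', 0)")
conn.commit()

class StaleDataError(Exception):
    pass

def update_name(conn, item_id, new_name, expected_version):
    """Update only if no one else changed the row since we read it."""
    cur = conn.execute(
        "UPDATE item SET name = ?, version = version + 1 "
        "WHERE id = ? AND version = ?",
        (new_name, item_id, expected_version),
    )
    if cur.rowcount == 0:
        conn.rollback()
        raise StaleDataError("row was modified by another transaction")
    conn.commit()

# Two users read the row at version 0; only the first update wins.
update_name(conn, 1, "first writer", 0)
try:
    update_name(conn, 1, "second writer", 0)
except StaleDataError:
    print("second writer must re-read and retry")
```

The WHERE clause on the version column is the whole trick: a concurrent modification bumps the version, the stale UPDATE matches zero rows, and the caller is told to retry instead of silently overwriting.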

Friday, 17 January 2014

Maximum concurrent connections to the same domain for browsers

Are you surprised when I tell you that there is a limit on how many parallel connections a browser can make to the same domain?

Maximum concurrent connections to the same domain

Don't be too surprised if you have never heard about it, as I have seen many web developers miss this crucial point. If you want a quick figure, this table is from the book PROFESSIONAL Website Performance: OPTIMIZING THE FRONT END AND THE BACK END by Peter Smith:


The impact of this limit 

How does this limit affect your web page? The answer is: a lot. Unless you let the user load a static page without any images, CSS or JavaScript at all, all these resources need to queue and compete for the available connections in order to be downloaded. If you take into account that some resources depend on others being loaded first, it is easy to realize that this limit can greatly affect page load time.

Let's analyse further how a browser loads a webpage. To illustrate, I used Chrome v34 to load one article of my blog (10 ideas to improve Eclipse IDE usability). I prefer Chrome over Firebug because its Developer Tools has the best visualization of page loading. Here is how it looks:



I already cropped the loading view, but you should still see a lot of requests being made. Don't be scared by the complex picture; I just want to emphasize that even a simple webpage needs many HTTP requests to load. In this case, I counted 52 requests, including CSS, images, JavaScript, AJAX and HTML.

If you focus on the right side of the picture, you can notice that Chrome does a decent job of highlighting different kinds of resources in different colours, and also manages to capture the timeline of the requests.

Let's see what Chrome tells us about this webpage. In the first step, Chrome loads the main page and spends a very short time parsing it. After reading the main page, Chrome sends a total of 8 parallel requests almost at the same time to load images, CSS and JavaScript. From this, we know that Chrome v34 can send up to 8 concurrent requests to a domain. Still, 8 requests are not enough to load the webpage, and you can see more requests being sent as connections become available.

If you still want to dig further, you can see that two JavaScript files and one AJAX call (the 3 requests at the bottom) are only sent after one of the scripts has loaded. This can be explained by the execution of that script triggering some more requests. To simplify the situation, I created this simple flowchart:


I tried my best to follow Chrome's colour convention (green for CSS, purple for images and light blue for AJAX and HTML). Here is the loading agenda:

  • Load landing page html
  • Load resources for landing pages
  • Execute javascript, trigger 2 API calls to load comments and followers.
  • Each comment and follower loaded will trigger avatar loading.
  • ...
So, at minimum you have 4 phases of loading the webpage, and each phase depends on the result of the earlier phase. However, due to the limit of 8 maximum parallel requests, one phase can be split into 2 or more smaller phases as some requests wait for an available connection. Imagine what would happen if this webpage were loaded with IE6 (2 parallel connections, or a minimum of 26 rounds of loading for 52 requests)!
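Phase splitting aside, a quick back-of-the-envelope calculation shows the best-case number of loading rounds (assuming, unrealistically, that every request takes the same time and has no dependencies):

```python
import math

def min_rounds(num_requests: int, max_connections: int) -> int:
    """Lower bound on sequential 'rounds' needed to fetch all resources."""
    return math.ceil(num_requests / max_connections)

print(min_rounds(52, 8))  # Chrome v34: 7 rounds at best
print(min_rounds(52, 2))  # IE6: 26 rounds at best
```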


Why do browsers have this limit?

You may ask: if this limit can have such a great impact on performance, why don't browsers give us a higher limit so that users can enjoy a better browsing experience? However, most of the well-known browsers choose not to grant your wish, so that servers will not be overloaded by a small number of browsers and end up classifying users as DDOS attackers.

In the past, the common limit was only 2 connections. This may have been sufficient in the early days of the web, when most content was delivered in a single page load. However, it soon became a bottleneck as CSS and JavaScript grew popular. Because of this, you can notice the trend of increasing this limit in modern browsers. Some browsers even allow you to modify this value (Opera), but it is better not to set it too high unless you want to load test the server.

How to handle this limit?

This limit will not cause slowness in your website if you manage your resources well and do not hit it. When your page is first loaded, there is an initial request that contains the HTML content. When the browser processes the HTML content, it spawns more requests to load resources like CSS, images and JS. It also executes JavaScript and sends Ajax requests to the server as you instruct it to.

Fortunately, static resources can be cached and only need to be downloaded the first time. If they cause slowness, it happens only on the first page load and is still tolerable. It is not rare for the user to see the page frame loaded first and some pictures slowly appearing later. If you feel that your resources are too fragmented and consume too many requests, there are some tools available that combine and minify them so that the browser can load them in far fewer requests (UglifyJS, Rhino, YUI Compressor, ...).

Lack of control over Ajax requests causes more severe problems. I would like to share some examples of poor design that cause slowness in page loading.

1. Loading page content with many Ajax requests

This approach is quite popular because it lets the user feel the progress of page loading and enjoy some important parts of the content while waiting for the rest to be loaded. There is nothing wrong with this, but things get worse when you need more requests to load content than the browser can supply you with. Let's say you create 12 Ajax requests but your browser limit is 6; in the best-case scenario, you still need to load the resources in two batches. It is still not too bad if these 12 requests are not nested or consecutively executed; then the browser can make use of all available connections to serve the pending requests. A worse situation happens when one request is initiated in another request's callback (nested Ajax requests). If this happens, your webpage is slowed down by your design rather than by the browser limit.

A few years ago, I took over a project that was haunted by performance issues. There were many factors causing the slowness, but one concern was too many Ajax requests. I opened the browser in debug mode and found more than 6 requests being sent to servers to load different parts of the page. Moreover, it was getting worse as the project was delivered by teams from different continents and time zones. Features were developed in parallel, and the developer working on a feature would conveniently add a server endpoint and an Ajax request to get the work done. Worried that the situation was going out of control, we decided to shift the direction of development. The original design was like this:



For most of the Ajax requests, the response returns a JSON model of the data. Then, the Knockout framework does the binding of HTML controls to the models. We did not face the nested-request issue here, but the loading time could not be faster because of the browser limit, and many HTTP threads were consumed to serve a single page load. One more problem was the lack of caching: the page contents are pretty static, with minimal customization in some parts of the webpages.

After consideration, we decided to cut down the number of requests by generating the page contents in one request. However, if you do not do it properly, it may turn out like this:



This is even worse than the original design. It is more or less equal to having a limit of 1 connection to the server, with all the requests handled one by one.

The proper way to achieve similar performance is to use async programming:



Each promise can be executed in a separate thread (not an HTTP thread), and the response is returned when all the promises are completed. We also applied caching to all of the services to ensure they return quickly. With the new design, the page response is faster, and server capacity improved as well.

2. Fail to manage the request queue

When you make an Ajax request in JavaScript and the browser does not have any available connection to serve it, it temporarily puts the request into a request queue. Disaster happens when developers fail to manage this queue properly. It often happens with rich client applications. A rich client application functions more like an application than a web page: clicking a button should not trigger loading a new web address; instead, the page content is updated with the results of Ajax requests. The common mistake is to let new requests be created before the existing requests in the queue have been cleaned up.

I have worked on a web application that makes more than 10 Ajax requests when the user changes the value of a first-level combo box. Imagine what happens if the user changes the value of the combo box 10 times consecutively without any break in between: 100 Ajax requests go into the request queue, and the page seems to hang for a few minutes. This is an intermittent issue, because it only happens if the user manages to create Ajax requests faster than the browser can handle them.

The solution is simple; you have two options here. The first option: forget about the rich client approach and refresh the whole page to load new contents. To persist the value of the combo box, store it as a hash attached to the current URL address. In this case, the browser will clear the queue on navigation. The second option is even simpler: block the user from changing the combo box until the queue is cleared. To avoid a bad experience, you can show a loading bar while the combo box is disabled.
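The second option can be modelled in a few lines; this Python toy class (the names are made up, and a real implementation would live in JavaScript) simply refuses new work until the queue drains:

```python
class RequestGuard:
    """Toy model of option two: ignore new user input until the Ajax queue drains."""

    def __init__(self):
        self.in_flight = 0  # requests queued or running in the browser

    def try_submit(self, num_requests: int) -> bool:
        # Refuse (and show a loading bar in a real UI) while requests are pending.
        if self.in_flight > 0:
            return False
        self.in_flight = num_requests
        return True

    def on_response(self):
        self.in_flight -= 1

guard = RequestGuard()
print(guard.try_submit(10))  # True: first combo-box change accepted
print(guard.try_submit(10))  # False: user clicked again too early, blocked
for _ in range(10):
    guard.on_response()
print(guard.try_submit(10))  # True: queue drained, next change accepted
```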

3. Nesting of Ajax requests

I have never seen a business requirement for nested Ajax requests. Most of the time I have seen nested requests, it was a design mistake. For example, suppose you are a lazy developer and you need to load flags for every country in the world, sorted by continent. Disaster happens when you decide to write code this way:
  • Load the continent list
  • For each continent, loading countries
Assume the world has 5 continents; then you spawn 1 + 5 = 6 requests. This is unnecessary, as you can return a complex data structure that contains all of this information. Making requests is expensive, and making nested requests is very expensive; using the Facade pattern to get what you want in a single call is the way to go.
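The difference can be modelled with a toy Python sketch (the data and "endpoints" are made up); counting the calls shows why the facade wins:

```python
# Toy model comparing nested calls with a facade endpoint.
WORLD = {
    "Europe": ["France", "Germany"],
    "Asia": ["Japan", "Vietnam"],
    "Africa": ["Egypt"],
    "America": ["Brazil"],
    "Oceania": ["Australia"],
}

calls = {"count": 0}  # stands in for the number of HTTP round trips

def get_continents():
    calls["count"] += 1
    return list(WORLD)

def get_countries(continent):
    calls["count"] += 1
    return WORLD[continent]

def get_world_facade():
    calls["count"] += 1
    return WORLD  # one complex structure, one round trip

# Nested style: 1 + 5 = 6 round trips
calls["count"] = 0
nested = {c: get_countries(c) for c in get_continents()}
print("nested:", calls["count"])   # nested: 6

# Facade style: 1 round trip for the same data
calls["count"] = 0
facade = get_world_facade()
print("facade:", calls["count"])   # facade: 1
```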

Maven Explanation - part 1


Comparing Maven and Ant, I have observed that it is much easier to find developers who feel comfortable using Ant than Maven. Time cannot be the excuse, as Maven has been around, and widely adopted, for some time. The root cause, in my opinion, is the difference in design and goals between Maven and Ant.

Background

Ant was born as a Java build tool. It is simpler and easier to use than GNU Make, the build tool for C and C++. Compared to a Makefile, an Ant file is very readable because it is written in XML format.

Fundamentally, both Ant and Make are imperative build tools: you need to specify clearly in the build descriptor file what needs to be done and how to do it. Most of the time, there is a need to execute more than one step to compile or package your project. This ends up making the build descriptor file long and tedious to write.

Fortunately, except for the very first project, no one tries to write a build descriptor file from scratch. Rather, developers copy/paste and modify a template file to fit new projects.

Slowly, Ant got more popular and became the de facto build tool for Java projects. However, Ant still had some problems. Developers often complained that the Ant build descriptor file is too verbose and lacks a formal structure. Because of this, Maven was introduced as a replacement for Ant. To most developers, the most obvious benefit of using Maven is the ability to avoid bundling application libraries into the project, but Maven actually offers much more than that.

To simplify and standardize things, Maven introduced a standard lifecycle with pre-defined steps for a project build. However, to shorten the descriptor file, Maven hides most of the steps and only exposes a build descriptor file that is almost empty. For this reason, not all Maven users have a clear understanding of Maven's build steps.

Maven build lifecycles

To understand Maven, we must first understand build lifecycles. Maven groups the pre-defined steps of a build into 3 lifecycles: 'clean', 'default' and 'site'. The rationale behind this is that some steps are interrelated and are normally executed together.

Maven uses the term 'phase' to describe a build step. The order of phase execution within one lifecycle is fixed.

As the lifecycle names suggest, the Maven lifecycles tackle 3 major concerns of developers: cleaning the build, building, and creating documentation.

Here is the list of phases for each lifecycle:


The number of phases in one lifecycle varies from as low as 3 to more than 20. However, developers only need to remember a few important phases rather than memorizing all of them.

In a standard Maven command, the developer needs to specify at least one lifecycle to execute. To instruct Maven to execute a lifecycle, the developer chooses one phase as the target in the mvn command. Because Maven executes phases in order, any phase that appears before the chosen phase in the selected lifecycle is automatically executed as well. Obviously, any phase that appears after the chosen phase in the lifecycle is not executed. For example, when you type:

mvn clean install

what Maven actually needs to execute is:

  • pre-clean, clean
  • validate, initialize, generate-sources, process-sources, generate-resources, process-resources, compile, process-classes, generate-test-sources, process-test-sources, generate-test-resources, process-test-resources, test-compile, process-test-classes, test, prepare-package, package, pre-integration-test, integration-test, post-integration-test, verify, install
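This "everything up to the target phase" rule can be modelled in a few lines; here is a Python sketch whose phase list follows the Maven lifecycle reference:

```python
DEFAULT_LIFECYCLE = [
    "validate", "initialize", "generate-sources", "process-sources",
    "generate-resources", "process-resources", "compile", "process-classes",
    "generate-test-sources", "process-test-sources", "generate-test-resources",
    "process-test-resources", "test-compile", "process-test-classes", "test",
    "prepare-package", "package", "pre-integration-test", "integration-test",
    "post-integration-test", "verify", "install", "deploy",
]

def phases_to_run(lifecycle, target_phase):
    """Maven executes every phase up to and including the chosen target."""
    return lifecycle[: lifecycle.index(target_phase) + 1]

print(phases_to_run(DEFAULT_LIFECYCLE, "compile"))   # first 7 phases
print(len(phases_to_run(DEFAULT_LIFECYCLE, "install")))  # 22
```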


Maven plugins

Plugin and execution

The names of Maven phases may be misleading for beginners. To be precise, a phase defines when to do something, not the actual task to be done. For developers who have prior experience with Ant, a phase in Maven is not equal to a target in Ant. The closest concept to an Ant target is a plugin execution.

A Maven project is built with the help of Maven plugins. A plugin declaration in the Maven build descriptor file combines a plugin configuration and, usually, at least one execution.

A plugin execution is identified by an id, a goal and a phase.
  • The id identifies the execution when you register multiple executions of the same plugin in your build descriptor file.
  • The phase tells Maven at which step it should execute the plugin.
  • The goal tells Maven which goal of the plugin should be executed. One plugin can support multiple goals for different purposes.
  • The execution can have an optional configuration. When the plugin configuration and the execution configuration contain duplicated information, the execution configuration has higher priority. This feature can be helpful sometimes (for example, using a single plugin to start multiple servers).
There are two possible ways to trigger a plugin execution. The more popular way is to define the plugin execution with an id and a phase. In this case, if the user triggers a lifecycle up to the selected phase, the plugin execution is triggered as well. The other way is to trigger the plugin execution manually by specifying the plugin's goal in the Maven command. In this case, the developer bypasses lifecycles and phases and executes one single plugin.

It is perfectly possible to migrate an Ant project to Maven, as Maven provides a plugin (maven-antrun-plugin) that can execute Ant tasks.

Default Binding

The tricky part of Maven is that some plugins are automatically included in the Maven lifecycle without needing to be declared. To make things more complicated, these plugin executions are included dynamically, depending on the project packaging.

Below is the list of all the default executions, as provided by the Maven website:

Clean Lifecycle Bindings

clean --> clean:clean

Default Lifecycle Bindings - Packaging ejb / ejb3 / jar / par / rar / war

process-resources --> resources:resources
compile --> compiler:compile
process-test-resources --> resources:testResources
test-compile --> compiler:testCompile
test --> surefire:test
package --> ejb:ejb or ejb3:ejb3 or jar:jar or par:par or rar:rar or war:war
install --> install:install
deploy --> deploy:deploy

Default Lifecycle Bindings - Packaging ear

generate-resources --> ear:generate-application-xml
process-resources --> resources:resources
package --> ear:ear
install --> install:install
deploy --> deploy:deploy

Default Lifecycle Bindings - Packaging maven-plugin

generate-resources --> plugin:descriptor
process-resources --> resources:resources
compile --> compiler:compile
process-test-resources --> resources:testResources
test-compile --> compiler:testCompile
test --> surefire:test
package --> jar:jar and plugin:addPluginArtifactMetadata
install --> install:install
deploy --> deploy:deploy

Default Lifecycle Bindings - Packaging pom

package --> site:attach-descriptor
install --> install:install
deploy --> deploy:deploy

Site Lifecycle Bindings

site --> site:site
site-deploy --> site:deploy

Now, let's revisit the example above. When you type:

mvn clean install

it is possible to achieve the same result by manually triggering the plugins:

mvn clean:clean
mvn resources:resources
mvn compiler:compile
mvn resources:testResources
mvn compiler:testCompile
mvn surefire:test
mvn jar:jar
mvn install:install

Maven calls this default binding. The configuration for default binding is stored in components.xml (Maven 2.x) or default-bindings.xml (Maven 3.x). The default binding is buried inside the maven-core jar file, and you will not be able to change it without re-packaging Maven.

Build Configuration

A plugin can retrieve information from the build configuration, the plugin configuration and the plugin execution configuration. To understand more, you can open any pom file in an editor that supports the effective POM view. Here is a simplified version of the build configuration generated by the effective POM view:


  <build>
    <sourceDirectory>maven_webapp\src\main\java</sourceDirectory>
    <scriptSourceDirectory>maven_webapp\src\main\scripts</scriptSourceDirectory>
    <testSourceDirectory>maven_webapp\src\test\java</testSourceDirectory>
    <outputDirectory>maven_webapp\target\classes</outputDirectory>
    <testOutputDirectory>maven_webapp\target\test-classes</testOutputDirectory>
    <resources>
      <resource>
        <directory>maven_webapp\src\main\resources</directory>
      </resource>
    </resources>
    <testResources>
      <testResource>
        <directory>maven_webapp\src\test\resources</directory>
      </testResource>
    </testResources>
    <directory>maven_webapp\target</directory>
    <finalName>maven_webapp</finalName>
    ...
</build>

It is because of this build configuration that you were told to put your source code inside src/main/java, your resources inside src/main/resources and your web app inside src/main/webapp. If you want to override a Maven default setting, simply provide a different value in the build descriptor file (for example, when building a project with both Ant and Maven). However, this should only be used as a last resort, because overriding Maven's default behaviour can cause confusion for anyone maintaining your project.
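For example, a hypothetical pom.xml override of the default source folder might look like this (the src/java folder name is made up for illustration):

```xml
<build>
  <!-- Override Maven's default src/main/java; use sparingly -->
  <sourceDirectory>${basedir}/src/java</sourceDirectory>
</build>
```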

Plugin & Execution Configuration
From the explanation above, we know that it is possible to choose a source folder other than the default src/main/java by overriding the build properties.
There is another way to achieve this: manually configuring plugin behaviour. Here is one example provided by Maven:
      <plugin>
        <artifactId>maven-compiler-plugin</artifactId>
        <version>2.0.2</version>
        <configuration>
          <includes>
            <include>**/core/**</include>
          </includes>
        </configuration>
      </plugin>
Looking at the configuration above, it is even possible to control which source files of a project get compiled. As usual, if you want to override a plugin's default configuration, declare the plugin in pom.xml.
Sometimes, you even need to add more executions or modify an existing execution. In this case, providing an execution configuration is the only choice.
In the example below, we want to add one more execution for integration tests, which is supposed to run only in the integration-test phase. Per our requirements, the configurations of the two executions must be different, so that they mutually exclude test cases. Still, they can share the common configuration that allows us to enable/disable both executions.
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-surefire-plugin</artifactId>
             <version>2.12</version>
            <configuration>
                 <skipTests>${skip-all-tests}</skipTests>
             </configuration>
             <executions>
                 <execution>
                    <id>default-test</id>
                    <phase>test</phase>
                    <goals>
                        <goal>test</goal>
                    </goals>
                    <configuration>
                        <skip>${skip-unit-tests}</skip>
                        <groups>unit</groups>                        
                        <excludedGroups>integration</excludedGroups>
                    </configuration>
                </execution>
                <execution>
                    <id>integration-tests</id>
                    <phase>integration-test</phase>
                    <goals>
                        <goal>test</goal>
                    </goals>
                    <configuration>
                        <skip>${skip-integration-tests}</skip>
                        <groups>integration</groups>
                        <excludedGroups>unit</excludedGroups>
                    </configuration>
                </execution>
            </executions>
        </plugin>
Please notice that the id of the first execution is default-test. This is because Maven defines a default execution id for each default-binding plugin execution.
The name of this implicit id is always default-<goalName>.

Dynamic and Static Languages

Recently, I had a small debate with one of my friends when he kept trying to use Ctrl + click to open the method declaration of a JavaScript method (in Eclipse). I told him that it is impossible for the IDE to reliably find the declaration of an arbitrary object in a dynamic language. He insisted that he had tried it before and it worked.

I told him that the declaration of the object he found probably happened to be in the same js file, which is why the Eclipse IDE managed to scan and find the method declaration.

To back my argument, I searched for "static vs dynamic type language" and managed to find a good article that clearly explains strong, weak, dynamic and static typing:

http://coding.smashingmagazine.com/2013/04/18/introduction-to-programming-type-systems/

If you read the article carefully, you can see what I mean. Let's say you have two javascript files:

script1.js
function foo(param) {
    alert(param);
}

script2.js
foo("hello world");

Can you expect the IDE to find the declaration of function foo for you? Practically yes, but it is definitely guesswork. It is quite simple to see that we will never know whether the browser will load both script1.js and script2.js. It is perfectly possible for the browser to load another file that has another declaration of the identical function name, and you get another behaviour when you call this function.

If you have used jQuery before, you should recognize this behaviour. jQuery registers itself as $. If you create your own $ function, you may accidentally override jQuery's methods, and all usages of jQuery will throw errors at runtime.
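The same hazard can be reproduced in any dynamic language; here is a Python sketch (names are made up) where a later definition silently rebinds an earlier one, just as a second script can clobber jQuery's $:

```python
# In a dynamic language, the binding of a name is decided at runtime,
# so whichever definition loads last wins.

def foo(param):
    return "original: " + param

def load_second_script():
    # Stands in for another "file" loaded later that reuses the same name.
    global foo
    def foo(param):
        return "override: " + param

result_before = foo("hello")
load_second_script()
result_after = foo("hello")
print(result_before)  # original: hello
print(result_after)   # override: hello
```

No tool can know statically which `foo` a call site will see, which is exactly why Ctrl + click is guesswork here.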

On another occasion, I used RadRails to debug a legacy Rails application. Fortunately, finding references worked quite well because the IDE automatically scans all the .rb files in the workspace. However, it only worked up to the point where an external library was linked by a relative path like this:


require "#{File.dirname(__FILE__)}/../foo_class"

Apparently, the IDE does not know what exists in the parent folder of the current application, and it does not try to guess by linking to other projects. This is where Java shines. If you develop a Java application nowadays, it is likely that you will use Maven to control the scope of dependencies, or some framework that supports dependency management (like the Play framework). Whichever way you do it, you will not start a project without knowing what will be available in your virtual machine. You know all of the classes that can be accessed at compile time. With that knowledge, the IDE becomes powerful, especially for refactoring. Throughout my years of experience, I have preferred dynamic languages for writing simple applications with no legacy integration and minimal maintenance. When a project grows bigger and bigger and changes happen often, a static language becomes more necessary.
One of my friends decided to convert his jQuery-based application to GWT just for the sake of quick refactoring. Imagine having more than ten thousand lines of code, developed by more than 10 developers, with change requests coming in every day. Finding which behaviour was bound to which event and changing it finally became so tedious that he moved the whole code base to Java and used GWT to translate it to JavaScript. It turned out to be a good choice: maintenance time dropped dramatically, and there was no noticeable performance change.