Compressing JavaScript in a Maven Web Application

At MobilVox, we use Maven to manage many of our Java projects. Maven is a project management tool that makes it easy to keep the libraries used in our applications in sync. It also provides a plugin API that can be used to add features and tasks to project builds.

Open Source Software Initiative

In 2007 MobilVox launched its Open Source Software Initiative. The initiative aims to give something back to the open source community so that everyone can benefit. Through this initiative we have built a plugin, the maven-js-plugin, that compresses JavaScript in web applications, project sites, and standalone JavaScript libraries.

Compressing JavaScript

For the remainder of this post we are going to look at how to use the maven-js-plugin in a Maven web application. We will only cover how to include the plugin and run it via the pom.xml file. For all other uses, configuration options, and goals contained in the plugin, please see the maven-js-plugin site. This post also assumes you are familiar with Maven and its project setup.

Including the plugin

To make the plugin available to your web application, add it to the project's pom.xml file. In the example below, the plugin has been added to the build element of the test-webapp application.

<build>
  <finalName>test-webapp</finalName>
  <plugins>
    <plugin>
      <groupId>com.mobilvox.ossi.mojo</groupId>
      <artifactId>maven-js-plugin</artifactId>
      <version>1.3.1</version>
    </plugin>
  </plugins>
</build>

The MobilVox OSSI release repository is synced with the Maven central repository, so if you are connected to the Internet, the plugin will be downloaded and included the next time you build your project after adding the code above.

Adding the Compression Goal

Now that the plugin can be found, the correct goal needs to be added to the plugin element so that compression will run. In this instance we want to run the compress goal, which compresses all of the JavaScript files in the war file that is built and placed in the project output directory. We also tie the goal to the package phase, because that is the phase in which the war is built and placed in the output directory. In the snippet below we have added the executions element to the plugin. Once this is in place, the JavaScript in the web application will be compressed every time it is built.

<build>
  <finalName>test-webapp</finalName>
  <plugins>
    <plugin>
      <groupId>com.mobilvox.ossi.mojo</groupId>
      <artifactId>maven-js-plugin</artifactId>
      <version>1.3.1</version>
      <executions>
        <execution>
          <phase>package</phase>
          <goals>
            <goal>compress</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>

Configuration

The maven-js-plugin has many configuration options, and we will explore a couple of them next. The first is the ability to either merge the war files or produce one war with compressed JavaScript and one without. This is controlled via a mergeWarFiles element added to the configuration element of the plugin. For this example we will choose false and keep two war files. Since we are creating two separate files, it is useful to have a way to tell which file is which. This is done via a classifier element that we will also add to the configuration; the classifier is appended to the war file that contains the compressed JavaScript. The configuration is shown in the example below.

<build>
  <finalName>test-webapp</finalName>
  <plugins>
    <plugin>
      <groupId>com.mobilvox.ossi.mojo</groupId>
      <artifactId>maven-js-plugin</artifactId>
      <version>1.3.1</version>
      <configuration>
        <mergeWarFiles>false</mergeWarFiles>
        <classifier>compressed</classifier>
      </configuration>
      <executions>
        <execution>
          <phase>package</phase>
          <goals>
            <goal>compress</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>

Skipping Compression

There may be instances where it is necessary to skip compression of the web application. Instead of commenting out the plugin in the pom.xml, you can add the following property to the command line when launching the build:

-Djs.compress.skip=true
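
For example, a full invocation for this project might look like the following (assuming the standard package lifecycle; the clean goal is optional):

mvn clean package -Djs.compress.skip=true

The war will still be packaged as usual; only the JavaScript compression step is skipped.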

Conclusion

We have discussed one way to use the maven-js-plugin to compress JavaScript in a web application built with Maven. There are many other options and ways to use the plugin. For more on the available options, please see the maven-js-plugin site.


Using Selenium to test web applications

Simulating users for web application testing has always been a difficult task. Everyone who has done any kind of web application development has had the misfortune of not finding a bug until demoing something to a client. Early tools like Microsoft’s Web Application Stress Tool were cumbersome and not particularly easy to use. Selenium, on the other hand, tries to make recording and playing back test scripts as easy as possible. Selenium runs in Internet Explorer, Mozilla, and Safari on the Mac, and there is also an iPhone version. Selenium has three main components:

  1. Selenium Core: The JavaScript-based test execution framework where the tests actually run.

  2. Selenium IDE: A Firefox add-on that greatly aids in the creation of test scripts. It allows you to record, edit, and debug tests directly from the add-on.

  3. Selenium Remote Control: A two-part system: a server that automatically launches and kills browsers and acts as an HTTP proxy for their web requests, plus client libraries for your favorite programming language.

The Selenium IDE is the quickest and easiest way to develop test scripts. The Firefox add-on is simple to use: you can record scripts or edit them by hand. With autocomplete support and the ability to move commands around quickly, Selenium IDE is an ideal environment for creating Selenium tests no matter what style of tests you prefer. Once installed, the IDE can be accessed from the Tools menu in Firefox. You will be presented with the window shown below:

Selenium IDE

Once you click the record button, all of your actions on the currently displayed site will be recorded. When you have finished recording your script you can play it back in place for simple testing purposes. The IDE isn’t as full featured as most IDEs, but it does support simple breakpoints and debugging, which can come in very handy while you are creating test cases. You can also export the script in a number of languages, including Java, C#, Perl, PHP, Python, and Ruby, or save the test cases for later use and editing. Once the scripts are exported as Java you can load them into an IDE such as Eclipse and use them to build a comprehensive test suite. As long as you have a Selenium server up and running, you can run the scripts with relative ease. Here is a quick example of what the Java code might look like for a simple Selenium test.

// Requires the Selenium RC Java client (com.thoughtworks.selenium.Selenium and DefaultSelenium)
int port = 4444;
String selURL = "selenium.mobilvox.com";
String testURL = "http://sv101.mobilvox.com/iris/";
String browser = "*firefox C:\\Program Files\\Mozilla Firefox\\firefox.exe";
Selenium selenium = new DefaultSelenium(selURL, port, browser, testURL);
selenium.start();
selenium.deleteCookie("searchTerm", "/iris");
selenium.open("/iris/iris/jsp/search.jsp");
selenium.waitForPageToLoad("5000");
selenium.type("simpleSearchBox", "testTerm");
selenium.click("//img[@title='Click here to search']");
String searchResultText = selenium.getHtmlSource();
// ... code here to test the result
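
In practice, the exported snippet usually ends up inside a test class. Below is a minimal sketch of how the same steps might be wrapped in a JUnit 4 test; the class name, the assertion, and the plain "*firefox" launcher string are illustrative assumptions rather than part of the exported script.

import static org.junit.Assert.assertTrue;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;

import com.thoughtworks.selenium.DefaultSelenium;
import com.thoughtworks.selenium.Selenium;

public class SearchPageTest {

    private Selenium selenium;

    @Before
    public void setUp() {
        // Hypothetical server host and application URL; point these at your own setup.
        selenium = new DefaultSelenium("selenium.mobilvox.com", 4444,
                "*firefox", "http://sv101.mobilvox.com/iris/");
        selenium.start();
    }

    @Test
    public void simpleSearchReturnsResults() {
        selenium.open("/iris/iris/jsp/search.jsp");
        selenium.waitForPageToLoad("5000");
        selenium.type("simpleSearchBox", "testTerm");
        selenium.click("//img[@title='Click here to search']");
        selenium.waitForPageToLoad("5000");
        // The expected text is only an example assertion.
        assertTrue(selenium.getHtmlSource().contains("testTerm"));
    }

    @After
    public void tearDown() {
        selenium.stop();
    }
}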

As you can see, creating, editing, and running tests with Selenium is very simple. It gives you a comprehensive testing capability for all of your web application needs.


Using Document Categorization to Aid Information Retrieval

One of the main products at MobilVox is the IRIS Suite of search tools. The main goal of both Desktop IRIS and Network IRIS is to aid the user in finding the documents they need as quickly as possible. In order to aid the retrieval process, business intelligence services have been and are currently being implemented in the suite. These services include:

  • Document summarization
  • Document categorization
  • Document tagging
  • Many other document related tasks

The rest of this post covers, at a high level, one of the document categorization approaches we use in the tools.

Tree Based Categorization

The IRIS Suite search tools provide a tree view of documents found from a search. There are two types of trees available:

  • A file system based tree that mimics the current directory structure
  • A categorization based tree

The main purpose of the categorization based tree is to provide an almost ontological view of the search results.

Knowledge Engineering

Tree based categorization starts by using a knowledge engineered rule set based on file type. The main purpose of the rule set is to provide a starting point for the categorization algorithm. In the image below, you can see the Office 2003 documents placed in the tree under:

Office 
    MS Office 
        1997-2003 
            Word
Knowledge Engineered Category Tree

What this does is give an initial baseline for categorizing documents, providing some logical order to the results for a user. Since this amounts to little more than categorizing by document type, the algorithm implemented at MobilVox also examines the most frequently occurring words in the document and uses the WordNet library to examine the parent/child relationships among those words. Each word and relationship is given a relevancy score, and if this score is above a pre-determined threshold the category is added to the initial baseline.
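
As a rough illustration of that scoring step, here is a minimal sketch in Java. The post does not spell out the actual scoring formula, so relative word frequency stands in for the relevancy score, the threshold value is arbitrary, and WordNetService is a hypothetical facade over a WordNet library rather than a real API.

import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class CategoryScorer {

    /** Hypothetical facade over a WordNet library; not a real API. */
    public interface WordNetService {
        /** Returns parent (hypernym) terms for a word. */
        List<String> getHypernyms(String word);
    }

    // Assumed cut-off; the post only says the threshold is pre-determined.
    private static final double THRESHOLD = 0.01;

    private final WordNetService wordNet;

    public CategoryScorer(WordNetService wordNet) {
        this.wordNet = wordNet;
    }

    /**
     * Scores each frequently occurring word by its share of the document and,
     * for words that clear the threshold, adds the word and its WordNet
     * parents as candidate categories.
     */
    public Map<String, Double> candidateCategories(Map<String, Integer> wordFrequencies,
                                                   int totalWords) {
        Map<String, Double> candidates = new LinkedHashMap<String, Double>();
        for (Map.Entry<String, Integer> entry : wordFrequencies.entrySet()) {
            double score = entry.getValue() / (double) totalWords;
            if (score >= THRESHOLD) {
                candidates.put(entry.getKey(), score);
                // Parent/child relationships widen the candidate set.
                for (String parent : wordNet.getHypernyms(entry.getKey())) {
                    if (!candidates.containsKey(parent)) {
                        candidates.put(parent, score);
                    }
                }
            }
        }
        return candidates;
    }
}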

Improving the Algorithm

The initial categories generated are useful but leave much room for improvement. One improvement we made was to include user created categories, in the form of tags. Network IRIS provides a system where users can add tags to better describe documents. These tags are factored into searches, displayed in search results, and can be voted upon. Given that these tags further describe the documents, they can be factored into the categorization algorithm. In the image below the documents have been tagged and the category tree has been adjusted based on the tags.

Adjusting Categorization With Tags

As you can see, this approach enables categories to be added that would in all likelihood not be found using WordNet or knowledge engineering. In the example above it would be highly unlikely that the category altemus would appear in the initial baseline, but since the tagged documents relate to the internship of the employee with the last name Altemus, the category is highly relevant.

Further Improvements

In the implementation of the described algorithm, the user generated categories would eventually replace the initial file type categories entirely. However, removing them completely does not seem like the best option, since they can still be useful. A better approach that we have implemented is to improve the knowledge engineering rule sets by parsing known data out of documents. A good example is a collection of Java source files and/or API documentation: it is highly likely that they all contain a package declaration. When generating the knowledge engineered categories, parsing out the package name and adding it to the categories could prove quite useful to a user searching Java based APIs.
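
As a small illustration of that idea, the following sketch pulls the package declaration out of a Java source file's text using the standard regex API; the class name and the way the result would feed into the category tree are assumptions, not part of the IRIS implementation.

import java.util.regex.Matcher;
import java.util.regex.Pattern;

/** Sketch: pull the package declaration out of a Java source file's text. */
public final class PackageCategoryExtractor {

    private static final Pattern PACKAGE_DECLARATION =
            Pattern.compile("^\\s*package\\s+([\\w.]+)\\s*;", Pattern.MULTILINE);

    /** Returns the declared package name, or null if the source has none. */
    public static String extractPackage(String javaSource) {
        Matcher matcher = PACKAGE_DECLARATION.matcher(javaSource);
        return matcher.find() ? matcher.group(1) : null;
    }
}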

Conclusion

We have discussed, at a high level, a potential implementation of document categorization that can be used to aid the information retrieval process from a search tool. For more information, or to see the full implementation of the algorithm, please visit http://www.irissearch.net/.


The Trend Tool

Introduction

Meaningful use of a search engine typically involves two things:

1. A specific need for information (e.g., “How do I bake a pound cake?”, or “What is the GDP of Bulgaria?”).

2. A diverse data store of information (e.g., the internet).

If the internet were a book, finding specific data would be nearly impossible. If such a book had an index, you could look up “Bulgaria” and expect to see “GDP” among the entries. But as queries become more exact, a normal index eventually becomes useless. (For instance, “names and addresses of interior decorators in Bulgaria” is not a likely index entry.) This highlights one distinction between a traditional index and a search engine: a search engine like Google actually performs better as you give it more specific input. The key, of course, is that the search query be appropriate and fit the informational need, which presupposes a degree of mastery of the subject matter.

Now, given a sufficiently good website devoted to interior decoration, a search engine may suddenly become less useful in the context of that subject. The entry point of the internet is not always Google, and when it isn’t, our use of the internet typically involves these two things:

1. A specific need for information.

2. A specific data store of information.

Examples would include:

Specific need          Specific data store
Recent news            cnn.com
Stock prices           bloomberg.com
Sports scores          espn.com
Product information    amazon.com

And so on. When not using a search engine, people tend to associate information needs with particular information providers. Companies such as Yahoo attempted to become single sources for multiple types of content (“portals”), which allowed the following relationship:

1. A variety of specific needs for information.

2. A single data store of information.

This model has its problems, but the convenience and business attractiveness of it led to a degree of success.

Trend Tool

This brings me to the trend tool project, which shifts the focus away from the well-established models above. The tool is intended for users whose information needs are less clearly defined. Away from the computer, this happens often enough: the human mind does not actually have a question-and-answer relationship with the world, and so we use any number of research methods in real life. One might linearly consume a book; non-linearly consume reports, memoranda, interviews, events; experiment methodically; converse; brainstorm; and so on.

Whatever the case, the resulting relationship is as follows.

1. A general or category-based need for information.

2. A specific data store of information.

The tool was designed with such scenarios in mind, but also others in which unstructured data (typically text or html) can be aggregated and/or processed into a coherent dataset. Thus, a large set of forms or reports can be assembled by date and category; websites can be crawled at regular intervals over time, etc. Whatever the case, the resulting coupling occurs:

1. A general or category-based need for information.

2. A general or category-based data store.

For example, a company like Microsoft might want to track the volume and type of piracy that is occurring with its products, without presupposing what results it expects to find. And so (keeping this example as simple as possible) we can take hourly snapshots of the following URL:

http://thepiratebay.org/search/microsoft/

The trend tool processes the data, ordering it according to given parameters, and then displays the results.

The above example lists trend lines for data returned when searching for particular terms within the dataset. In this case, the tool is attempting to find which queries of the dataset result in an upward trend, with an “ideal” line drawn alongside the actual query results. A product’s popularity as a target for piracy would presumably be reflected in such graphs.

We are, in a sense, “searching backwards”, as the tool is presenting the user with queries rather than the opposite. In actual practice, the tool is intended to generate a very large (or even exhaustive) set of queries, and so allow the user to investigate the composition of the data extensively and quickly. A larger set of queries follows at this link.

Once presented with a set of results, the user can review and study them in more detail as needed, by submitting the queries back to the tool. In this case, we see the prevalence of the term “office” in the Microsoft search results:

Notice that we have replaced the trend line with the actual results. Each dot represents an hourly snapshot, and can itself be searched further. The x coordinate represents time, and the y coordinate represents the propensity of the given term within each snapshot.
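
A minimal sketch of how such points might be computed is shown below. It simply treats "propensity" as the term's share of all words in a snapshot; that definition, along with the class and method names, is an assumption on my part rather than a description of the tool's actual weighting.

import java.util.Date;
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/**
 * Sketch: turn hourly snapshots (raw text keyed by capture time) into the
 * (time, propensity) points plotted above.
 */
public final class TermTrend {

    public static Map<Date, Double> propensityByHour(Map<Date, String> snapshots, String term) {
        Pattern word = Pattern.compile("\\w+");
        String target = term.toLowerCase();
        Map<Date, Double> points = new LinkedHashMap<Date, Double>();
        for (Map.Entry<Date, String> snapshot : snapshots.entrySet()) {
            int total = 0;
            int hits = 0;
            Matcher m = word.matcher(snapshot.getValue().toLowerCase());
            while (m.find()) {
                total++;
                if (m.group().equals(target)) {
                    hits++;
                }
            }
            // Share of the snapshot's words that are the target term.
            points.put(snapshot.getKey(), total == 0 ? 0.0 : hits / (double) total);
        }
        return points;
    }
}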

In the most general sense, the tool replaces a search engine’s Input and Output relationship:

(Single Query) -> (Set of Data)

with

(Set of Data) + (Patterns within Data) -> (Multiple queries)

The difficulty, of course, is determining which patterns are meaningful, and then implementing new extensions for the tool that successfully detect those patterns. Ordered roughly by difficulty, the initial goal is for the tool to accept any data and automatically find the following patterns within it (a rough sketch of detecting the second pattern follows the list):

1. Trends in the data that are appearing or disappearing.

2. Data that suddenly spikes, or shows other irregular patterns.

3. Data that occurs regularly or sporadically.

4. Data that correlates, or correlates inversely, with other data.

5. Cause-effect relationships within data.
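
Here is the promised rough sketch of the second pattern: flagging snapshots whose values sit far from the series mean. The z-score cut-off of 3 is an arbitrary assumption, since the post does not describe the tool's actual detection logic.

import java.util.ArrayList;
import java.util.List;

/** Sketch of a simple "sudden spike" detector over a numeric series. */
public final class SpikeDetector {

    public static List<Integer> spikeIndices(double[] series) {
        List<Integer> spikes = new ArrayList<Integer>();
        if (series == null || series.length == 0) {
            return spikes;
        }

        double mean = 0.0;
        for (double v : series) {
            mean += v;
        }
        mean /= series.length;

        double variance = 0.0;
        for (double v : series) {
            variance += (v - mean) * (v - mean);
        }
        double stdDev = Math.sqrt(variance / series.length);

        // Flag points more than three standard deviations from the mean.
        for (int i = 0; i < series.length; i++) {
            if (stdDev > 0 && Math.abs(series[i] - mean) / stdDev > 3.0) {
                spikes.add(i);
            }
        }
        return spikes;
    }
}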


Keeping Java Swing Applications Organized with the Swing Application Framework (JSR 296)

When it comes to writing Java Swing applications, nothing beats the flexibility of writing them by hand and from scratch. However, in most applications, the code can become impossible to manage as it grows.

This is where an application framework comes into play. For large applications, many people will point you towards the Eclipse Rich Client Platform (RCP) or the Netbeans Platform. These two are, without a doubt, the most powerful tools around in terms of application frameworks.

For small to medium sized applications, I have found that the Swing Application Framework (aka JSR 296 or SAF) is exactly what I needed. Application lifecycle, resource management and action management are just a few of the things that the SAF provides.

Application Lifecycle

The first benefit of using the SAF is the management of the application lifecycle. Since the SAF requires you to extend the Application class (or the SingleFrameApplication class), the lifecycle is driven by the following methods; some you call yourself and others the framework calls automatically and you may override:

  1. launch – you must call this method at startup
  2. initialize – the application will automatically call this method; you may override it
  3. startup – the application will automatically call this method; you may override it
  4. ready – the application will automatically call this method; you may override it
  5. exit – you must call this method at exit
  6. shutdown – the application will automatically call this method; you may override it

Using and overriding these methods makes it extremely easy to handle the lifecycle of your application.
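
To make the flow concrete, here is a minimal sketch built on SingleFrameApplication. It assumes the Swing Application Framework jar is on the classpath; the class name and the label are only examples, not part of any particular application.

import javax.swing.JLabel;

import org.jdesktop.application.Application;
import org.jdesktop.application.SingleFrameApplication;

/** Minimal lifecycle sketch built on SingleFrameApplication. */
public class YourApplication extends SingleFrameApplication {

    @Override
    protected void startup() {
        // Called automatically after initialize(); build and show the UI here.
        show(new JLabel("Hello from the Swing Application Framework"));
    }

    @Override
    protected void shutdown() {
        // Called as part of exit(); release resources, save state, etc.
        super.shutdown();
    }

    public static void main(String[] args) {
        // launch() kicks off the lifecycle: initialize -> startup -> ready.
        Application.launch(YourApplication.class, args);
    }
}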

Resource Management

Another benefit of using the SAF is resource management. SAF supports several types of resource management, but in this article I’ll be talking about automatic resource injection, which I find to be most useful for these small to medium sized applications.

Using automatic resource injection is easy. Place a ‘resources’ directory in the same package as your main class. In the ‘resources’ directory, create a properties file named ‘YourMainClass.properties’. For this example, we are going to place all of our application’s resources in this single properties file. For larger applications, SAF supports a properties file for each class, but we don’t need to go that far for smaller applications.

The first lines of the properties file specify some general application settings, such as its name and version. Below is a sample:

# Application settings for Your Application
Application.title = Your Application
Application.vendorId = MobilVox, Inc
Application.id = YA
Application.lookAndFeel = system
Application.version = 1.0
Application.homepage = http://www.mobilvox.com

As you can see, we can set our application’s title, version, vendor and a few others at this point. These declarations will come in handy in the next section.

The next thing we want to customize is our application’s main frame. This is also easily done with the properties file. So, under the ‘Application’ settings, we can place:

# Frame settings
mainFrame.title = ${Application.title} ${Application.version}
mainFrame.iconImage = icon.png

We can use the application title and version declarations to construct the title of our frame (or any other component in our application) so that it is always up to date. Also, we can set the icon of the frame here. Icons and images are all placed in the resources directory.

Now that we have the application’s main resources set up, we can talk about component resources (such as JLabel text, icons, buttons, etc). Setting these components up for automatic injection is also very easy. First, in your Swing code, set up your components as follows:

// Set up the status label
JLabel lblStatus = new JLabel();
lblStatus.setName("lblStatus");

When setting up any JComponent for automatic resource injection, two lines are required. You need to instantiate your component, and the second line, as seen above, sets the component’s name. The name can be anything, but I like to match my components’ member names to keep them organized; the name is how SAF looks up the component in the properties file. Simply add your component to your application and SAF will inject its resources from the properties file, as follows:

# The status label properties
lblStatus.text = Text here
lblStatus.visible = false
lblStatus.foreground = 255, 255, 255

SAF resource management can handle any property for a given JComponent. It can handle boolean values (‘lblStatus.visible’), RGB colors (‘lblStatus.foreground’), String text and anything else that is needed. This is a great way to keep your resources organized and easy to maintain. It also keeps your client code very clean and easy to read.

(Note: You can read more about automatic and manual resource management here.)

The Action Manager

Actions are also managed by the Swing Application Framework. Just like other resources, an Action’s properties are also set in the properties file. The following examples assume that the Action’s methods are located in the main class.

In your main class, set up your action’s methods and mark them with the ‘@Action’ annotation. This will tell SAF to look in the properties file for corresponding properties and will map the action to the ActionMap. Below is an example of an action method with its corresponding properties (note that the method name is equal to the name in the properties file):

/** The show help action method. */
@Action
public void showHelp()
{
   // Action code here. No need to use ActionListeners.
}

# The showHelp action
showHelp.Action.icon = help.png
showHelp.Action.text = Help

In most applications, actions need to be called from several places, which could cause some duplicated code. SAF lets us map these actions (like we did above) and simply refer to them by pulling them from the ActionMap. Below is an example of adding one of our actions to a tool button:

    // Get the actions from the action map
    ActionMap actionMap = application.getContext().getActionMap();
    Action helpAction = actionMap.get("showHelp");
 
    // Set up the button with the action and name it so it gets the right properties
    JButton btnHelp = new JButton(helpAction);
    btnHelp.setName("btnHelp");
 
    // Add the button to the tool bar (toolBar is the application's JToolBar)
    toolBar.add(btnHelp);
That’s it! You now have actions mapped to the ActionMap that can be accessed anywhere in your application (you just need to make sure you pass around an instance of the Application from SAF).

Summary

The Swing Application Framework gives us some much needed organization in Swing apps. Resource management/injection, the action map, and the application lifecycle are examples of this. For more information on setting up an application using the SAF, check out the resources below.

Resources


Debugging JavaScript using Visual Web Developer

I’ve been putting a lot of work lately into MobilVox’s enterprise search application “Network IRIS”, which usually means lots and lots of JavaScript programming. Anyone reading this who has done their fair share of JavaScript programming knows two things.

  1. JavaScript does not behave the same across different browsers or even browser versions.
  2. Debugging JavaScript is notoriously difficult and headache inducing.

There are some tools out there that can help. If you need to debug in Firefox there is the Venkman Script Debugger. It has its quirks but manages relatively well when it comes to debugging JavaScript in Firefox. When it comes to Internet Explorer, however, the waters get murkier. There is an old tool, the Microsoft Script Debugger, that most people have had the misfortune of dealing with at some point in their web development career. Unfortunately, Microsoft hasn’t seen fit to update this tool since 2005, which makes using it feel like talking on one of those giant old cell phones Zack Morris used in Saved by the Bell to drive Mr. Belding crazy.

Finally there is a solution that doesn’t involve using a ridiculously outdated piece of software. Microsoft has graciously released a series of “Express” editions of some of its development tools. If you do a lot of web development I strongly suggest you download the Express edition of Visual Web Developer. You might not use it for actual development, but when it comes to debugging JavaScript in IE it’s invaluable. What follows is an explanation of how to hook Visual Web Developer up to your existing web application and start debugging your way to error-free code.

  1. Enable debugging in IE: go to Tools > Internet Options > Advanced, and make sure “Disable Script Debugging (Internet Explorer)” is unchecked and “Display a notification about every script error” is checked.
  2. Download and install Visual Web Developer.
  3. Once installed, use the following dialog to create an empty web application (this will just be a placeholder).

    New Website Creation Dialog

  4. Once the new project is up and running you’ll want to make sure you have Internet Explorer set up as the default browser (you can switch later if you’d like). Do this by right-clicking on the project name in the Solution Explorer tab. There should be a “Browse With…” option; select it. You’ll be presented with the following dialog, where you can choose which browser you want as your default.

    Browse With Dialog

  5. Now you are ready to start debugging: click the “Start Debugging” button. You should be prompted to enable debugging for this website. Visual Web Developer will then launch Internet Explorer in debugging mode. Since your website is empty you won’t see much of anything, but that doesn’t really matter; it’s not this website you are trying to debug. Simply navigate to the page you wish to debug and you should be ready to go.
  6. Any and all script errors will cause Internet Explorer to hand control over to Visual Web Developer in debug mode. I’m assuming that if you are reading this you are semi-familiar with debuggers, so I’m not going to go into them here, but suffice it to say, it works very much the same as any other standard debugger from this point forward. Here is a screenshot of setting a breakpoint in some JavaScript code.

    Setting a Breakpoint

That’s it, folks: you are now debugging JavaScript with ease. A few final things to keep in mind:

  • Visual Web Developer is only free for 30 days unless you register with a Windows Live account, so what are you waiting for? Get one!
  • Keep your code clean, preferably with one function call per line; this way the debugger won’t have as much trouble knowing which functions to step into and out of.
  • Learn the shortcuts; they will save you tons of time.
  • You can type any expression in the Watch window, and when its value changes it will turn red.
  • Set breakpoints if you want to open the debugger when there are no errors; this can be very handy for analyzing code, especially code that you didn’t write yourself.

Managing Java Archives

Most of the software development at MobilVox is done in Java, which means that we work with a lot of Java archives (jar files). Managing these jars can take a fair amount of time, so we wanted a tool to automate the process as much as possible. To solve this problem, we wrote JarMan, an executable jar that lets you open a jar file, view its contents, and find errors in your jar configuration. You can run JarMan on your desktop (Java 6.0 required); it is a very small application (50 KB), and the only required file is jarman.jar. Here is a screenshot:

Main Screen of JarMan

Below are the major features:

  • Open a jar file and view a list of its contents
  • If the opened jar file has a manifest with a Class-Path entry, any files in the referenced jars are also included in the list (a rough sketch of reading the Class-Path entry with the standard java.util.jar API follows this list)
  • The list of files can be filtered with an arbitrary string; the list will be restricted to only show files that match on filename, directory or jar file name
  • The list of files can also be filtered to show all files, only files that match another file on name, or only files that match on name and content (same CRC value)
  • You can right-click a file in the list to view its contents
  • The Jars page lists the opened jar file and any jar files in its manifest’s Class-Path, and highlights any referenced jar files that were not found; the number of files in and the size of each jar is also listed
  • The Manifest page lists all entries in the opened jar file’s manifest (if it has one)
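
Here is the rough sketch mentioned above: listing a jar's entries plus the entries of every jar on its manifest Class-Path, using only the standard java.util.jar API. This is an illustration of the idea, not JarMan's actual code, and it uses Java 7 try-with-resources for brevity even though JarMan itself only requires Java 6.

import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Enumeration;
import java.util.List;
import java.util.jar.Attributes;
import java.util.jar.JarEntry;
import java.util.jar.JarFile;
import java.util.jar.Manifest;

public final class ClassPathLister {

    /** Lists a jar's entries plus the entries of every jar on its manifest Class-Path. */
    public static List<String> listAllEntries(File jar) throws IOException {
        List<String> entries = new ArrayList<String>();
        try (JarFile jarFile = new JarFile(jar)) {
            addEntries(jarFile, jar.getName(), entries);
            Manifest manifest = jarFile.getManifest();
            if (manifest == null) {
                return entries;
            }
            String classPath = manifest.getMainAttributes().getValue(Attributes.Name.CLASS_PATH);
            if (classPath == null) {
                return entries;
            }
            // Class-Path values are space-separated paths relative to the opened jar.
            for (String relative : classPath.trim().split("\\s+")) {
                File referenced = new File(jar.getParentFile(), relative);
                if (!referenced.isFile()) {
                    continue; // a missing referenced jar is the kind of error JarMan highlights
                }
                try (JarFile refJar = new JarFile(referenced)) {
                    addEntries(refJar, referenced.getName(), entries);
                }
            }
        }
        return entries;
    }

    private static void addEntries(JarFile jarFile, String jarName, List<String> entries) {
        for (Enumeration<JarEntry> e = jarFile.entries(); e.hasMoreElements(); ) {
            entries.add(jarName + "!" + e.nextElement().getName());
        }
    }
}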

JarMan is free for any use. If you have any ideas for new features, please let us know!


Using Google’s Page Speed Website Optimization Tool

Google has recently released a Firefox add-on called Page Speed. It works in conjunction with the must-have web development add-on Firebug (which probably deserves an entry all to itself, if not more than one). For now I’m going to assume you are at least passingly familiar with Firebug. Page Speed’s goal is to help web developers analyze their existing websites and improve their performance.

Once the add-on is installed it’s very simple to use. Just expand the Firebug console from inside Firefox and click over to the Page Speed tab; you should see something similar to the image below:

Page Speed Console

Now it’s simple: just navigate to the site you wish to analyze and click the Analyze Performance button. Once the process runs you will be presented with a new console that looks something like this:

Page Speed Optimization Suggestions

What you see listed here are Page Speed’s recommendations for improving the performance of your site. You can expand each top-level rule to see a more detailed explanation of each individual suggestion. These recommendations are based on a set of website best practices; if you’re interested you can read all about them at http://code.google.com/speed/page-speed/docs/rules_intro.html.

Here’s how to interpret the scores:

  • High priority (red warning icon): These suggestions represent the largest potential performance wins for relatively little development effort. Address these items first.
  • Medium priority (yellow triangle icon): These suggestions may represent smaller wins or much more work to implement. Address these items next.
  • Working fine or low priority (green check icon): If suggestions are displayed, as indicated with a + sign, they probably represent minor wins. Only be concerned with these items after you’ve handled the higher-priority ones.
  • Informational (info icon): Either these items don’t apply to this page or there was a problem running the test.

Obviously some of the suggestions are relatively simple to implement while others involve more labor-intensive work. I would suggest going after the simple changes first. Some suggestions, such as enabling proxy caching, which involves setting a Cache-Control: public header for static resources such as images, involve more work than others.
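
For a Java web application, one way to act on that particular suggestion is a servlet filter that adds the header to static resources. The sketch below is only an illustration under assumptions of mine (the class name, the URL mapping you would give it in web.xml, and the max-age value), not something Page Speed itself provides.

import java.io.IOException;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

/**
 * Adds a Cache-Control: public header to responses. Map it to image, CSS,
 * and JavaScript paths in web.xml so browsers and proxies can cache them.
 */
public class StaticCacheFilter implements Filter {

    @Override
    public void init(FilterConfig filterConfig) {
    }

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        if (response instanceof HttpServletResponse) {
            // max-age of one day is just an example value.
            ((HttpServletResponse) response)
                    .setHeader("Cache-Control", "public, max-age=86400");
        }
        chain.doFilter(request, response);
    }

    @Override
    public void destroy() {
    }
}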

Page Speed is one of many tools that can be used to improve your overall website performance. A word of caution, though: it can sometimes send you down the rabbit hole, chasing small changes that have a very minor impact on performance. Remember that most of these items are just suggestions and helpful hints on things you can do if you’re looking to increase performance. I would suggest that anyone doing web development get familiar with the ins and outs of this tool; it’s not only going to make your pages load faster, it’s going to make you a better developer by ingraining some simple website development best practices.
