22.11.12

HTTP DELETE failing with 411

In my configuration I'm using the fabulous node-based http-proxy to do HTTP and WebSocket multiplexing from a wildcard domain (*.saw.sonyx.net pointing at a public-IP host) into the VMs hosting application instances (nginx + passenger).
Recently, to my terror, I realized that my application fails to delete resources. Persistence is provided to SAW by RESTful Backbone models and collections, which in turn talk to the server via jQuery.
A quick debugging session with the developer console showed that my app is getting an HTTP 411 status in response to the DELETE request:


To my surprise, the same call issued directly to the web server succeeded without any problem. Apparently the issue had to be connected to the proxy itself.
HTTP status 411 simply says that the Content-Length specifier is missing from the request headers, which in fact is the case in the example above. It is not clear to me why a DELETE request should have a payload, but that's another story. The general advice for this issue is to either:
  • add Content-Length to the header, which in the general case is a perfect idea, but a quick jQuery hacking experiment demonstrated that the browser (Chrome in my case) refuses to set Content-Length. argh.
  • use the nginx chunking module, which streams chunked data between the browser and nginx (nginx doesn't support chunked requests itself). This didn't work for me, as there is very little that can be "streamed" in a DELETE request.
The third alternative is to add the missing header in the http-proxy itself. Thanks to the expertise of my colleague Daniele, we managed to pinpoint the function that serves incoming requests and hack it accordingly. At the end of the day it actually works :)
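Something along these lines should do the trick; this is a minimal sketch assuming the current event-based http-proxy API (the upstream target is hypothetical), not the exact internal function we patched:

var http = require('http');
var httpProxy = require('http-proxy');

var proxy = httpProxy.createProxyServer({});

// Chrome refuses to set Content-Length on an empty DELETE,
// so declare the (empty) body length on the browser's behalf
// before the request is forwarded upstream.
proxy.on('proxyReq', function (proxyReq, req, res, options) {
  if (req.method === 'DELETE' && !req.headers['content-length']) {
    proxyReq.setHeader('Content-Length', '0');
  }
});

http.createServer(function (req, res) {
  proxy.web(req, res, { target: 'http://localhost:8080' });
}).listen(80);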

19.11.12

Chrome autoSave is dead, long live tincr!

The Chrome autosave extension was a huge boost to the quality of web-application development. It allowed editing code in the Chrome developer tools and then, with the help of a small node.js application, syncing changes back to the source files.

Tincr takes it to the next level by providing synchronisation in the opposite direction as well. It is capable of reloading the assets you have touched without refreshing the complete browser session!

Due to my two-server configuration, in which assets (JavaScript & CSS) are loaded from the node-static server and only RoR-generated JSON comes from Ruby, tincr required a non-trivial configuration separating assets from generated content.

For testing purposes I have made a pet project that loads assets from localhost:4444.

jQuery and the HTTP Accept request header

As SAW turns into a single-page application, the user authentication process required an adaptation to XHR requests. The default mode of operation for devise, whenever the user is not authenticated, is to use an HTTP 302 status to forward the web browser to the authentication page. For a RESTful client application this is not the desired behavior, as it confuses Backbone about where the particular resource can be found. The proper behavior is to return an HTTP 401 status, so that it can be caught and the client can decide how to proceed.

Trying to implement this, I found that my back-end (devise & RoR) supports multiple content types per request and is most happy to deliver HTML (over JSON). To my surprise, specifying JSON as the desired content type in jQuery still resulted in getting a 302. Careful investigation revealed that no matter what the desired content type is, jQuery always attaches */* to the list of accepted types (through the allTypes variable). I can imagine that */* should serve as a fall-back for the situation when the desired content type is not available, but in that case */* should have a lower weight (q parameter). Specifying them on a single header line makes the server treat the desired content type and the fall-back as equals.
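As an aside, since jQuery 1.5 the Accept header can also be overridden per request through the headers option, without touching jQuery internals. A minimal sketch (the URL is hypothetical):

// Advertise only JSON to the server for this particular request;
// the explicit header replaces jQuery's default Accept line
// (the one that carries the */* fall-back).
jQuery.ajax({
  url: '/issues/42',
  type: 'GET',
  dataType: 'json',
  headers: { 'Accept': 'application/json' }
});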

In my case, removing "allTypes" globally fixed the problem. I'm now getting a 401, which can be caught globally to save the requested URL in localStorage and perform the right redirection:


jQuery(function() {
  // Global handler: any ajax call that fails with 401 lands here.
  jQuery("body").ajaxError(function(event, request, settings) {
    if (request.status === 401) {
      // Remember where the user was, so we can come back after sign-in.
      localStorage.setItem('SAWurl', window.location.href);
      alert("You are not logged in!");
      window.location.href = "/users/sign_in";
    }
  });
});


I would be eager to hear your opinion on whether this is a bug or a feature of jQuery.

GIT submodules overkill

Git has this fabulous feature of (sub-)modularization of repositories. Over the course of two-plus years of development, SAW grew considerably in terms of sub-modules. Sub-modularization is beneficial mainly because it makes it easy to pull updates without touching the source code of the system under implementation.

The practice of managing sub-modules that I found particularly successful is to always fork the official GitHub repositories and refer to the forks as sub-modules.

This technique allows a clear separation from the official repository and, when necessary, customization.

The gotcha of sub-modularization is that libraries often come in a half-baked condition that requires some degree of building (with make, grunt, or ant). For my purposes I invested in one central "deploy" bash script that:

  • performs a pull from the central repository,
  • performs submodule initialization and updating,
  • builds libraries (in particular jquery and jquery-ui),
  • compiles JavaScript assets (with jammit),
  • instructs the RoR application server (passenger) to reload the application sources.
The full source of the script can be found here. The screenshot was taken in the fabulous OSX git (and not only git) GUI named SourceTree.
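For illustration, here is the same sequence condensed into a node script; the real thing is the bash script linked above, and the build commands and paths below are assumptions:

// Sketch of the deploy steps; build commands and paths are assumptions.
var execSync = require('child_process').execSync;

[
  'git pull origin master',                  // pull from the central repository
  'git submodule update --init --recursive', // initialize and update submodules
  'make -C vendor/jquery',                   // build libraries (jquery, jquery-ui)
  'jammit',                                  // compile JavaScript assets
  'touch tmp/restart.txt'                    // ask passenger to reload the sources
].forEach(function (cmd) {
  console.log('> ' + cmd);
  execSync(cmd, { stdio: 'inherit' });
});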


6.11.12

Ruby on Rails - hack of the day

I'm developing a fairly massive JavaScript application interacting with a Ruby on Rails (RoR) back-end. As the number of my JavaScript files grew into the hundreds, I noticed that serving them to the browser (after a full-page refresh) using the single-threaded RoR server takes forever.
There is an easy way to work around this issue. One can configure the RoR development environment (config/environments/development.rb) to use an asset server (config.action_controller.asset_host) and point it at an instance of some efficient static HTTP server such as node-static, configured on another port (3030 or so). An example of the configuration can be found in the RoR production environment setup (config/environments/production.rb).
In my case the environment configuration line looks like:
  config.action_controller.asset_host = "http://localhost:3030"
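
The other half of the setup is the static server itself. A minimal node-static instance on port 3030, serving the Rails public/ directory (the path is an assumption), can look like:

// Serve static assets out of the Rails public/ directory on port 3030.
var static = require('node-static');
var http = require('http');

var fileServer = new static.Server('./public');

http.createServer(function (request, response) {
  request.addListener('end', function () {
    fileServer.serve(request, response);
  }).resume();
}).listen(3030);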

30.10.12

Cache is your enemy

The browser cache is a phenomenal feature, reducing HTTP-related traffic by 2 or 3 orders of magnitude, but during development it can cause serious trouble. I mentioned some caching problems already in my previous post. This time I spent most of the day hacking on some nasty HTTP-handshake problems just to figure out that Chrome was not properly refreshing my jQuery source. I arrived at a point where V8 had two different versions of the same library in memory (sic!). I have no idea how this is possible.
Apparently the so-called "incognito" mode doesn't really solve the problem. The solution that worked for me was to disable the cache in the Chrome developer tools.


Now that this works, I can proceed with doing something more useful.

26.10.12

REST chattiness

Over the last few days I have given a try to a very fine-grained implementation of graph-data access over REST. As a result, my client (backbone+marionette) -> server (nginx+passenger+RoR) communication turned out to be rather chatty, which is a potential performance threat. Here's what my Chrome reported about it:


I think that handling over 110 requests in about 3 seconds sounds pretty good, especially taking into account that rendering happens right after fetching the list of issues related to the project, about 500ms after the start.


The tested set-up is as follows:
  • Server side: Ubuntu 11.04 VM running inside VirtualBox
    • 4 cores of a Core 2 (6 MB of cache)
    • 1 GB of RAM
    • 8 nginx worker processes
    • 100 Mbit wired Ethernet
  • Proxy: CentOS VM running on some infrastructure
    • node.js-based http-proxy forwarding HTTP and WebSockets
    • single core, 1 GB of RAM, 100 or 1000 Mbit Ethernet
  • Client side: OSX with Google Chrome (24.0, Canary)
    • 8 cores of an i7
    • 8 GB of RAM (a ton of applications running)
    • 100 Mbit wired Ethernet
I don't know about you, but this sounds fairly good to me.