The browser cache is a phenomenal feature, reducing HTTP traffic by two or three orders of magnitude, but during development it can cause serious trouble. I mentioned some caching problems already in my previous post. This time I spent most of the day hacking on some nasty HTTP-handshake problems, only to figure out that Chrome was not properly refreshing my jQuery source. I arrived at the point where V8 had two different versions of the same library in memory (sic!). I have no idea how this is possible.
Apparently the so-called "incognito" mode doesn't really solve the problem. The solution that worked for me was to disable the cache in the Chrome developer tools.
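Another common workaround is cache busting: serve the script URL with a content-derived query parameter, so any change to the file produces a new URL and the browser cannot reuse a stale cached copy. A minimal sketch in Ruby (the helper name and paths are illustrative, not SAW's actual code):

```ruby
require "digest"

# Hypothetical cache-busting helper: append a short content hash to the
# asset path, so the URL changes whenever the file's contents change.
def busted_url(path, contents)
  "#{path}?v=#{Digest::MD5.hexdigest(contents)[0, 8]}"
end

busted_url("/js/jquery.js", "console.log('v1');")
```

Rails' asset pipeline does essentially this with fingerprinted filenames, but in development a query parameter is often enough to defeat a stubborn cache.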
Now that this works I can proceed with doing something more useful.
30.10.12
26.10.12
REST chattiness
Over the last few days I have tried a very fine-grained implementation of graph-data access over REST. As a result, my client (Backbone + Marionette) -> server (nginx + Passenger + RoR) communication turned out to be rather chatty, which is a potential performance threat. Here's what my Chrome reported about it:
I think that handling over 110 requests in about 3 seconds sounds pretty good, especially taking into account that rendering happens after fetching the list of issues related to the project, about 500 ms after the start.
The tested set-up is as follows:
- Server side: Ubuntu 11.04 VM running inside VirtualBox
  - 4 cores of a Core 2 (6 MB of cache)
  - 1 GB of RAM
  - 8 nginx worker processes
  - 100 Mbit wired Ethernet
- Proxy: CentOS VM running on some infrastructure
  - node.js-based http-proxy forwarding HTTP and WebSockets
  - single core, 1 GB of RAM, 100 or 1000 Mbit Ethernet
- Client side: OS X with Google Chrome (24.0, Canary)
  - 8 cores of an i7
  - 8 GB of RAM (a ton of applications running)
  - 100 Mbit wired Ethernet
I don't know about you, but this sounds fairly good to me.
19.10.12
RESTful communication for web applications
In my previous post I rambled about business logic in web applications. This topic surfaced when, during my implementation, I came across a problem with delivering aggregated data structures to the client side of SAW.
The implementation of Backbone.Model encourages CRUD interaction with a RESTful server-side interface and doesn't put constraints on the nature of the JSON data delivered. In particular, it is happy to receive nested JSON trees. In my case, the representation of a model was enriched with two lists of the elements referenced by and referring to the element in question, roughly as presented in the example below:
{
  "name": "Exercise 3",
  "id": "4faa69ef924ff86933000001",
  "type": "Project",
  "related_from": [
    {
      "_id": "4daff753798e1c6dec000027",
      "_type": "Taggable",
      "created_at": "2011-04-21T09:22:27+00:00",
      "name": "local bank logging information",
      "type": "Issue",
      "updated_at": "2012-05-10T14:35:40+00:00"
    },
    {
      "_id": "4fb0c54b924ff84e1a000002",
      "_type": "Taggable",
      "created_at": "2012-05-14T08:41:47+00:00",
      "name": "Banking Interface Adapter location",
      "type": "Issue",
      "updated_at": "2012-05-14T08:41:48+00:00"
    }
  ],
  "related_to": []
}
What seemed like a good idea that could save 2-3 HTTP calls ended up in trouble when synchronizing the Backbone.Model back after applying some changes to it. In fact, the server side could have filtered related_to and related_from out of the PUT request parameters, but a substantial amount of data unrelated to the update would still travel back and forth.
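The filtering idea can be sketched in plain Ruby (a stand-in for what would live in a Rails controller; the key names come from the JSON example above): strip the aggregated, read-only lists before applying a PUT payload to the record.

```ruby
# Read-only aggregations that must never be written back via PUT.
READ_ONLY_KEYS = %w[related_to related_from].freeze

# Return only the genuinely updatable attributes from a PUT payload.
def update_params(params)
  params.reject { |key, _| READ_ONLY_KEYS.include?(key) }
end

update_params(
  "name"         => "Exercise 3",
  "related_to"   => [],
  "related_from" => [{ "_id" => "4daff753798e1c6dec000027" }]
)
# => { "name" => "Exercise 3" }
```

This keeps the server safe, but as noted above it does nothing about the wasted bytes on the wire.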
Another option would be to interpret PUT/POST of these lists as valid operations on the related item lists. But this implies implementing quite some model-related logic on the server side. I decided against this and added simple, generic routes serving the aforementioned lists in separate requests:
(Ruby on Rails 3.1 routes.rb)
[...]
get 'r/:id/related_to' => 'tag#dotag'
get 'r/:id/related_from' => 'tag#untag'
get 'r/:id/related_to/:type' => 'tag#dotag'
get 'r/:id/related_from/:type' => 'tag#untag'
get 'r/:id/:attribute' => 'r#attribute'
put 'r/:id/:attribute' => 'r#setAttribute'
get 'r/:item_id/dotag' => 'tag#dotag'
get 'r/:item_id/untag' => 'tag#untag'
resources :r
[...]
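On the client, the separate requests then hit URLs shaped by the routes above. A small illustrative helper (hypothetical, not part of SAW; only the path layout is taken from routes.rb) shows the URLs a Backbone collection would fetch:

```ruby
# Build the path for a related-items request, optionally narrowed by type.
# Mirrors the 'r/:id/related_to(/:type)' routes declared in routes.rb.
def related_path(id, direction, type = nil)
  path = "/r/#{id}/#{direction}"
  type ? "#{path}/#{type}" : path
end

related_path("4faa69ef924ff86933000001", "related_to")
# => "/r/4faa69ef924ff86933000001/related_to"
related_path("4faa69ef924ff86933000001", "related_from", "Issue")
# => "/r/4faa69ef924ff86933000001/related_from/Issue"
```

Each list is now its own cacheable, read-only GET, while the resource itself stays a small read-write document.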
In general, it looks to me like RESTful data access promotes small, atomic data structures with an explicit division between read-write resources and read-only aggregations. I would be eager to hear your opinion about it.
Business logic in web-oriented applications
Software Architecture Warehouse started as a web-based client-server application with the major part implemented on the server side. The Ruby-on-Rails-based back end hosted persistence, business logic, and view generation. The client (web browser) was merely responsible for rendering server-side generated views. Over time it became clear that this set-up is not capable of fulfilling the requirements of highly interactive, collaborative usage.
Today, the weight of SAW has shifted dramatically towards the client side. Thanks to the application of frameworks such as Backbone or Marionette, user-interface rendering has moved completely to the client side. One of the hesitations I had recently was where to position the application business logic.
In fact I find it useful to speak of two kinds of business logic:
- presentation/view-oriented logic - can and should be implemented on the client side, because this way it offers very good responsiveness (low latency) and thus a good user experience
- data/process-oriented logic - should remain on the server side, because of data-access security and consistency concerns.
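The split can be illustrated with a tiny sketch (hypothetical names, not SAW's code): even if the client validates a field instantly for responsiveness, the server re-checks the same rule before persisting, because client-side checks cannot be trusted for consistency.

```ruby
# Server-side guard: re-enforce the "name must not be blank" rule that the
# client may already check in the view layer for low-latency feedback.
def apply_update(record, attrs)
  raise ArgumentError, "name must not be blank" if attrs.key?("name") && attrs["name"].to_s.strip.empty?
  record.merge(attrs)
end

apply_update({ "name" => "Exercise 3" }, { "name" => "Exercise 4" })
# => { "name" => "Exercise 4" }
```

The view logic duplicates the check for user experience; the server owns it for correctness.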
12.10.12
Web application development culture
I started developing SAW as a web application based on the backbone.js framework. It appeared to be minimal and small enough to be bulletproof. It wasn't. During 1.5 years of development it improved dramatically, and it grew interesting extensions such as marionette.js or geppetto. I topped it up with backbone.subroute magic, so it appeared to be good to go.
It isn't. I ended up relying solely on my own forks of the aforementioned projects. I understand that GitHub is an inherently social development place, but being required to continuously hack the libraries on which my project is based is really frustrating.
Just to name a few:
- The Marionette module implementation uses a very awkward initialization order - it starts with the child modules and finishes with the parent module. Correct me if I'm wrong, but this is both counter-intuitive and useless. Reference is here.
- The Subroute implementation fires a routing event every time a sub-router is instantiated. No idea what for. In my application I have one sub-router per module - this caused every URL to be routed n times. grrrr...
I would be very eager to hear your comments about my fixes/frustrations with Backbone/Marionette/Github coding.