The pros and cons of developing a complete Javascript UI (supportbee.com)
58 points by prateekdayal on Aug 10, 2011 | 47 comments


Every day, I have to work with two applications developed this way, with the "Javascript UI" pattern. It's the most frustrating period of my day, bar none.

When you build one of these things, you spend most of your dev time running it locally, and as a result it's snappy. You can do things like loading a blank page for your task list, then flowing in tasks via Ajax and it will render pretty much instantly. It's so fast that you'll even consider loading those tasks one at a time.

For the rest of us, using an application like YouTrack or Flow (getflow.com) is sheer torture. View an item, hit the back button, wait 10 seconds for the list page to come back up, reload its content, and rebuild the page before your eyes before you can even start trying to scroll back to where you were.

Compare that to the old user experience where that list page would be a chunk of HTML generated at the server in 50ms and handed to you across the wire. Your browser would cache it, so the back button operation I describe above wouldn't even involve a trip to the server most of the time. Your browser would remember its previous scroll position. It was fast and it worked. It never occurred to any of us that people would decide to throw it out and replace it with something worse.

So if you're considering building one of these things, step one needs to be spinning up an EC2 box on another continent to act as your "dev" server. Latency is suddenly your number one priority in life, with your app being secondary. Please don't complain about this fact, since it's something you chose for yourself.

Thanks.


I think you would never feel this with Pivotal Tracker, Gmail or other well-designed apps. It's easy to cache rendered HTML elements in JS and show them when the back button is pressed, then update them in the background (in fact, this is what we do).
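
A rough sketch of that caching idea (the element id and helper names here are invented, not our actual code):

    // Cache the rendered list before navigating away, restore it instantly
    // when the user comes back, then refresh the data in the background.
    var cachedListHtml = null;

    function openTask(id) {
      cachedListHtml = document.getElementById('task-list').innerHTML;
      // ... render the task detail view ...
    }

    function onBack() {
      var list = document.getElementById('task-list');
      if (cachedListHtml !== null) {
        list.innerHTML = cachedListHtml; // instant, no server round trip
      }
      // then re-fetch the list via AJAX and quietly swap in fresh markup
    }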

If the initial page load is a problem, you can use a CDN to distribute the JavaScript files. I am not sure about the scroll problem though. Thanks for pointing it out.


Gmail seems to fix the "back" issue by going out of their way to break the "open in new window" experience. Ctrl+clicking messages to open them in new tabs takes up so much of your life waiting for them to load that you quickly become trained not to try doing that.

The thing is, the more pieces of the browser you try to replace with javascript, the more opportunities you have to get them subtly wrong. Just watch the discussion here whenever somebody submits an article using that wordpress plugin that tries to replicate the iPad's native scrolling inside the iPad's own browser.

It's just barely different, but that's why it's so annoying. If it were completely different, it actually wouldn't be a problem at all. But it's not. It just feels like it's not working right.


Gmail's terrible performance / unresponsiveness is the bane of my working life at the moment. I'd be a lot happier if I could open real HTML pages for emails in new tabs.


Gmail does have a basic HTML view[1].

[1] https://mail.google.com/mail/?ui=html&zy=h


Totally agree!

Ever tried the new Twitter? You now have to wait 12 seconds to see the Tweets.

Why not load the page in one second without Javascript, and use Javascript only for realtime updates? Because it's cool to use Javascript and make your page slow as hell?

What's wrong with page-reload apps that are lightning fast?

Maybe the hip Javascript developers don't realize a Javascript-driven page needs at least 3 requests (page + javascript + content) instead of 1 (page)?


There are ways around that.

One of the easy ways is to simply in-line your JSON model when you generate the initial page.

So essentially your webpage contains the HTML structure you need, followed by something like

    <script>
        Myapp.start({ tasks: [ { id: 1, title: 'do something' } ] });
    </script>
This way you bootstrap your application up front and avoid a large AJAX call right off the bat.
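
On the server side this is just string interpolation when the page is generated. A minimal sketch in plain Node (everything here except Myapp.start is invented):

    // Inline the model as JSON while rendering the initial page.
    var http = require('http');

    http.createServer(function (req, res) {
      var tasks = [{ id: 1, title: 'do something' }]; // normally from the DB
      res.writeHead(200, { 'Content-Type': 'text/html' });
      res.end(
        '<html><body><div id="tasks"></div>' +
        '<script src="/app.js"></script>' +
        // Real code must escape "</script>" sequences inside the JSON.
        '<script>Myapp.start({ tasks: ' + JSON.stringify(tasks) + ' });</script>' +
        '</body></html>'
      );
    }).listen(3000);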


In fact, Backbone's documentation for fetch (the method for populating collections via an AJAX call) specifically recommends this pattern: http://documentcloud.github.com/backbone/#Collection-fetch
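
In code, that pattern is just a couple of lines (TaskCollection and bootstrappedTasks are assumed names):

    // Seed the collection from the inlined data instead of calling fetch().
    var tasks = new TaskCollection();   // a Backbone.Collection subclass
    tasks.reset(bootstrappedTasks);     // no AJAX request on page load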


I did almost exactly that on my last project. Works a treat.

With HTML5 features (e.g. local storage) you can just cache data client-side and sync in the background.

That is, build a disconnected client-server application.
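
A minimal sketch of that pattern (renderTasks and fetchTasks are invented helpers):

    // Render from the local cache immediately, then sync in the background.
    var cached = localStorage.getItem('tasks');
    if (cached) {
      renderTasks(JSON.parse(cached));  // instant, possibly stale
    }
    fetchTasks(function (tasks) {       // AJAX call in the background
      localStorage.setItem('tasks', JSON.stringify(tasks));
      renderTasks(tasks);               // quietly replace with live data
    });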


Some potential mitigating factors, if you break it up as follows:

* a tiny anchor HTML page with key data info

* a general-purpose .js file (or a small number of them)

* template HTML pages

* content (e.g. - JSON)

The tiny "anchor" page should be small (latency still hurts, of course);

The JS should be cached locally once the app has been used for the day;

The HTML template pages should also cache;

The content must be loaded, suffering latency -- however, if the content is large, your app can display something while the content is being generated (perhaps by a big, slow, legacy data source?).

Another consideration: if all content is delivered via a REST API as JSON or (ugh) XML, there is a better chance it will be properly encoded for display. Hopefully, I can save a nice JavaScript fragment in a submission to said site, and the AJAX page-construction code will simply add it to the resulting page to be displayed, rather than run. Perhaps I should have my testing fixtures fill the test database with XSS samples while I am at it, such as the newer entries at http://ha.ckers.org/xss.html (or a more recent list), and then display that data on a few pages of manual testing.
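
The difference in the page-construction code is roughly this (the element id is invented; older IE wants innerText instead of textContent):

    // A hostile "comment" as it might arrive from the API:
    var comment = { body: '<img src=x onerror=alert(1)>' };
    var el = document.getElementById('comment');

    // Dangerous: parsed as HTML, so the onerror handler fires.
    // el.innerHTML = comment.body;

    // Safer: inserted as text, the markup is displayed rather than executed.
    el.textContent = comment.body;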


> step one needs to be spinning up an EC2 box on another continent to act as your "dev" server

To simulate low bandwidth and high latency, I'd recommend instead using a throttling proxy such as Charles Proxy on your own machine.


And it's even worse when you throw mobile into the mix.


The pros in this post do not seem terribly credible.

First, are there companies other than Twitter that have made an API strategy work? In other words, are there lots of examples of companies for whom their open API has been critical to adoption?

Second, it seems like requiring developers to build an API means they now have two problems: designing a UI and designing an API. In any case, are there modern MVC web frameworks that don't make providing an API as easy as creating XML and JSON views alongside the HTML ones?

Third, has any website released their UI code as open source to help people code against their API? Is this a sensible way for people to learn how the API works?

Overall, I think there is something compelling about a single page, complete Javascript UI, but parts of this feel a bit like back-solving a justification. Are the better arguments all really UI-related and not backend-related?


I'd argue Facebook has been pretty successful with their API. It certainly sped up adoption thanks to integrated apps like Farmville.

It seems to me that an API strategy can be very effective in making something already popular really take off. No one is going to build cool apps on top of your API unless you have users to actually leverage them. I'm always skeptical of models that depend completely on an API play.


Facebook is a great example of "APIs are important", but the Facebook API for apps is completely different from anything they'd use for their own pages. Especially back in the old days, when apps lived in a proxied sandbox and all of the params were passed into them, this code could not have been useful to the regular Facebook-only experience.

So, I don't think Facebook really counts as an example that supports the author's argument.


Completely agree. Facebook was already worth $750 million and had millions of users by the time they released their API. However, the API does seem (with 20/20 hindsight) like it was a brilliant move in terms of growth, engagement, and eventually monetization.


Like many others, having dealt with both 1) building one of these single-page frameworks on the client side, and 2) the prospect of dealing with two applications, I've always wondered: "is there a better way?"

Now, if you go with a service-oriented approach, as Twitter and Facebook have, you can build an API your JS client can talk to. One issue with this is that developing these kinds of services assumes a sort of stability, which can't be taken for granted in the earlier stages of application development.

Of course, there are frameworks that help speed this sort of thing up, but they make it difficult to leverage libraries outside the framework, as they're intended for single-page applications. And you're still stuck writing a client for each of your target platforms (JS/Static/Mobile Web/Mobile API).

When I first encountered Mustache, the dream of having one view rule them all tantalized me. So I started playing around with it, and have been working on an experiment I call "Marionette". It's basically the antithesis of something like SproutCore, in that the client is fairly dumb and more or less draws the same thing as the server using an exposed API, with any widgets added on top in the form of your regular MooQuery libraries. I have a lot of work to do on the execution (history navigation is broken, for starters), but I think the overall premise has merit.

I'd wanted to polish it up a little before showing it off, but this article has inspired me to get it out the door. "Release early...", and all that.

https://github.com/bigbento/marionette

https://github.com/bigbento/marionette-demo


I've been working on an ExtJS app for the past few years. My experience is slightly different: I've found development actually sped up. The ExtJS components are pretty mature, which is probably part of the reason. Benefits so far are the ones listed in the article, plus a structurally lower cost of supporting different browsers (even IE6) and a much easier time validating the security architecture, because of the reduced surface between client and server. Oh, and our users are very enthusiastic about the improved usability of our apps. They keep asking when we'll move our older apps over to the new architecture.


I think the development speed goes up once you have all the basic infrastructure in place (in our case that was setting up Jasmine and other testing frameworks, plus code to create different kinds of listings, etc).


ExtJS has one major disadvantage: the documentation is itself built in ExtJS, so it is not indexed by Google and is difficult to search.


Aren't you worried about speed? When something like ExtJS has to do all that DOM manipulation, things slow down, right? Is that not a problem?


In IE, yes.

I work with an exceptionally complex ExtJS application (I don't develop it, I'm just a user). In IE you can click a button, make a cup of tea, and come back to see it finish working. In Chrome and Firefox, there's nothing wrong with the performance of the client.


My last project was a very complex ExtJS single-page app, being used on low-spec PCs running IE7. It's still pretty snappy.


Pushing JS and HTML to the limit always feels like the biggest con of building javascript UI apps. The farther you move away from browser defaults, the hairier things seem to get.

No doubt we are moving to a more dynamic client side, but I haven't seen the way forward yet.


Do you feel that way with modern browsers too? We have not been paying much attention to IE compatibility, but things have been working well in Chrome/FF/Safari for the most part. Also, frameworks like Backbone help you write code that's easier to debug and more or less runs out of the box on most modern browsers.


Things are getting much better all the time, but simple things like progressively enhancing select boxes just feel way harder than they should be. Select boxes are tricky in any browser, by the way, because they are natively rendered, and rendered very differently across browsers.

Backbone, and others, are awesome but still don't seem like the whole solution.

Even then, if the site you are building gets a little more popular, you will have to support more platforms, like older versions of IE, and then we're back at the start.


The problems we've had with older browser versions (especially IE) are mainly with CSS, not JS or HTML. Rendering on the server would have caused the same issues.


I think in recent versions this is true (the problems being mainly CSS). 4-5 years ago I remember doing JS DOM manipulation where at one point IE took 100x longer than Firefox to do the same thing. It was so bad that FF was instant while IE was unusable.


Use a component-driven framework that abstracts away the DOM. You'll realize that it's the DOM that makes things hairy, because it's too low-level.


Do you have an example?


The article does not mention the main problem with this approach: you're breaking the hypertext model of the web. All of a sudden your pages don't have URLs, you can't bookmark them, the back and forward buttons in your browser don't work, the page source does not reflect your document structure, etc. There are some cases where none of this matters, but in many, many cases I want my webpage to be just that: a webpage.


Not really. Frameworks like the mentioned Backbone let you maintain links to all important functionality: http://documentcloud.github.com/backbone/#Router


You probably have to worry about hypertext on content websites (which search engines index), but for rich web apps a pure JS UI seems to be a good option. Gmail's hypertext is totally messy.

As for URLs and bookmarking, the HTML5 history API can help (see GitHub's source navigation).


You can make back and forward work by changing the URL hash, and if you poll the hash and adjust the content accordingly, you can make unique URLs work as well.

I think most javascript frameworks make this pretty easy; I am most familiar with Backbone's Router (recently renamed from Controller).

See: http://documentcloud.github.com/backbone/#Router

In the future, the HTML5 history API should make this even simpler. People definitely do screw this up, though. I don't think the recent Gawker/Gizmodo redesign did this initially, and it seemed like that was one of the reasons people really hated it.
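
For what it's worth, the Router version is only a few lines (the routes here are invented):

    // Hash-based routing with Backbone's Router.
    var AppRouter = Backbone.Router.extend({
      routes: {
        '':          'index', // http://example.com/#
        'tasks/:id': 'show'   // http://example.com/#tasks/7
      },
      index: function ()   { /* render the list view */ },
      show:  function (id) { /* render the task with this id */ }
    });

    new AppRouter();
    Backbone.history.start(); // starts listening for hash changes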


I would argue that the hypertext idea isn't relevant for an app (sorry, Tim Berners-Lee). At least not as relevant as for, say, a blog. Maybe we should talk about the hyperapp model.


Another con: unless you want to build three applications, going the JS-only route also means that your site is now only accessible to people with JS enabled.

We saw how well such sites are received back when HN was dominated by blogs ranting about the new Gawker design.

So to fix this, you'd have to do three applications:

1) the API that you use in the JS GUI

2) The JS GUI

3) The dynamically generated HTML for JS-less browsers

Or you change the JS GUI so it requests page fragments and works with history.pushState (PJAX), allowing you to skip the API again; though, of course, then you don't have an API.

I reasoned about this last April: http://pilif.github.com/2011/04/ajax-architecture-frameworks...
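
The fragment approach needs surprisingly little client code. A hand-rolled sketch (jQuery assumed; the ?fragment=1 convention for requesting a bare HTML fragment is invented):

    // Fetch a page fragment, swap it in, and keep a real, bookmarkable URL.
    $('a.pjax').click(function (e) {
      e.preventDefault();
      var url = this.href;
      $('#content').load(url + '?fragment=1'); // server returns a fragment
      history.pushState({}, '', url);
    });

    // Back/forward: reload the fragment for whatever URL was restored.
    window.onpopstate = function () {
      $('#content').load(location.href + '?fragment=1');
    };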


This is one of the reasons javascript should be used on both the server and the client.

The server would offload as many procedures to the client as possible (that is, none if JS is disabled, and only some if you don't want to expose your guts).

The server's HTML renderers would be the exact same code as the client's, and the API would be a simple thin wrapper around your "protected" procedures.

You could even support ECMAScript 5 features on IE6 by executing the JS on the server.
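
A sketch of how a renderer can be shared (Mustache is assumed to be loaded in both environments; to_html is the mustache.js API of the day):

    // renderers.js -- one file, require()d by the Node server and included
    // with a plain <script> tag in the browser.
    (function (exports) {
      exports.taskItem = function (task) {
        return Mustache.to_html('<li>{{title}}</li>', task);
      };
    })(typeof exports !== 'undefined' ? exports : (this.renderers = {}));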


Valid point. I should have noted in the article that in our case we assume our clients have javascript enabled. As a startup, you have to be willing to let go of some customers to provide a compelling experience to others.


I totally agree with you, even as a not-so-startup any more (we've been around for 11 years now). It really depends on the type of your customers: if you are selling a specific app to them, you might more easily get away with JS-only than if you are producing a page for general consumption.

But of course, if you go the JS-only route (which I did for my tempalias.com fun project last year), then you also have to keep in mind that you will be duplicating model code and dealing with two separate ORM layers:

One on the server for the API itself, and one on the client used by the GUI controllers. Backbone can help there, but if you want to, say, do some validation logic on the client without the round trip, then you begin duplicating logic on both server and client.
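
With Backbone, the client-side half of such a rule looks like this (the 140-character rule is invented; the server's API code has to enforce the same constraint again):

    // Client-side validation; set()/save() call validate automatically,
    // and a truthy return value marks the model invalid.
    var Task = Backbone.Model.extend({
      validate: function (attrs) {
        if (!attrs.title || attrs.title.length > 140) {
          return 'title must be 1-140 characters';
        }
      }
    });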

So however you put it, going JS-only is probably more work in the end, but it's also less work than going the traditional way and then bolting on a fully-fledged API, because the lack of dogfooding will cause you trouble once you get real users for it.

Overall it's a really difficult decision to make, and it's also quite final, as changing direction later, again, means a lot of work.


Personally I think it depends on the scale. As soon as you have actions that can't easily be implemented in server-side code, or a large divergence in possible client behaviour, you can say "you need javascript to view this" (pretty much the point at which it can be called an application).

I find the 'javascript to enhance' guideline helpful too. Then sites are more likely to be functional (if not pretty) in older browsers. A good example is using javascript to make menus slide, but CSS to make them appear in the first place.
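
e.g. something like this layered on top of a pure-CSS menu (jQuery assumed; the selectors and class name are invented):

    // Enhancement only: without JS the submenus still open via a CSS :hover
    // rule; with JS we disable that fallback and animate the slide instead.
    $('#menu').addClass('js');
    $('#menu > li').hover(
      function () { $(this).children('ul').slideDown(150); },
      function () { $(this).children('ul').slideUp(150); }
    );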


I would argue that there are only two important questions when considering which of the two approaches to take: what are you building, and who are you building it for? Everything else falls behind. Points like "but then I will have to build two applications instead of one"... come on, it's just a matter of perspective. It doesn't really matter if you call it two applications or two modules or two battery staples, as long as it does what you imagined in the best possible way for the user.

If you are building an interactive web application that should be akin to a classic desktop application, you should really build a separate UI, just as if you were building a desktop application. If you are using a framework such as GWT, then it doesn't even have to "look" like two different applications from the developer's point of view. Building a detached UI doesn't imply creating a nice public API at the same time.

On the other hand, if you are trying to create a content-based site logically designed as linked pages of content, there really is no good reason to break the classic web page layout and reinvent the wheel.

The problem, of course, is what to do when your site is a hybrid of the two. Well, maybe a hybrid approach should apply there too?

Most of the problems mentioned with the newer approach, the JS UI, are technical problems that must be solved (by us), not conceptual problems with the approach itself.


Building a JS front end only makes sense to me if you're leveraging a complete UI kit on the client side to ease handling of commonly encountered UI problems: sortable, filterable lists; autocomplete; tokenized inputs; etc.

The hassle of building a custom UI and event strategy without a pre-packaged UI kit seems like a recipe for headaches, but that may be a function of my own limitations in Javascript. I've tried it in Backbone and in SproutCore 2 beta 2. Both are very promising, but I found myself dealing with so many common UI use cases that I really wished for plug-and-play UI kit modules.

Also, the online community around JS frameworks is so young that it's hard to find other people's solutions. For instance, I wanted to hook into Devise to authenticate users and store the current_user on the client side. I ended up making my own solution, with no guidance from Google. As a self-taught developer working alone on a few projects, I always feel better when I can at least find a random gist of another solution, if for no other reason than to see whether I've missed a feature of the framework. Given how incredibly useful these JS frameworks are for imposing some structure and convention, I expect the online guidance to improve drastically over the coming year.


The main con in the article is that you are basically creating two applications at the same time (a web API and a JS client).

But this is also a pro at the same time. That way, you are forced to layer your application appropriately; it is much easier to test and faster to develop (browser F5 instead of a redeploy); and, as a bonus, you get a nice REST API that you can expose to your clients or third-party developers.


From the headline, and seeing that you are a customer support company, I was expecting to read about how the different approaches can benefit users. Boy, was I wrong.


Sorry to disappoint you. Unfortunately, Hacker News eats up the subdomain. Seeing devblog.supportbee.com would probably have set the expectation that this is a dev post.

We do believe JS UIs, if done right, can be very usable. I think that's why Gmail is so popular right now. I would love to discuss this more with you and publish a post later.


This extension[1] helps by showing the entire subdomain of the link.

[1] https://chrome.google.com/webstore/detail/amenlkcfjlmchdpogj...


A big pro that wasn't mentioned in the article is that a javascript UI allows you to write the API in almost any language you desire. Writing your dynamic pages in C++ or Java can be very time-consuming, whereas writing it all in Ruby or PHP may require you to outsource resource-heavy parts of your web application.



