
Wednesday, July 25, 2012

Business Logic and Interactive Web Applications

By the mid-2000s, the structure of an MVC web-app had gelled: business logic belonged in the model (which was usually divided into a service layer and persistent objects), a thin controller would invoke model methods and select data to be shown to the user, and a view held markup and the minimal amount of code needed to generate repeating or optional content. There might be some client-side validation written in JavaScript, but it was confined to verifying that fields were filled in with something that looked like the right value. All “real” validation took place on the server, to ensure that only “good” data got persisted.

Then along came client-side JavaScript frameworks like jQuery. Not only could you make AJAX calls, but you could easily access field values and modify the rendered markup. A few lines of JavaScript, and you have an intelligent form: if option “foo” is selected, hide fields argle and bargle, and make fields biff and bizzle read-only.
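As a sketch of what those “few lines” look like (the field names come from the example above, and the jQuery wiring is shown only in a comment), the decision logic can be kept in a plain function:

```javascript
// Pure rule: given the selected option, decide the state of each field.
// Field names (argle, bargle, biff, bizzle) are the example fields above.
function fieldStateFor(option) {
  if (option === "foo") {
    return { hidden: ["argle", "bargle"], readOnly: ["biff", "bizzle"] };
  }
  return { hidden: [], readOnly: [] };
}

// Wiring it up with jQuery (browser only):
// $("#option").on("change", function () {
//   var state = fieldStateFor($(this).val());
//   $("input").show().prop("readonly", false);
//   state.hidden.forEach(function (n) { $("#" + n).hide(); });
//   state.readOnly.forEach(function (n) { $("#" + n).prop("readonly", true); });
// });
```

Keeping the rule separate from the DOM manipulation at least makes it easy to see — and test — the logic that will inevitably be duplicated on the server.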

The problem with intelligent front-ends is that they almost always duplicate the logic found in the back end. Which means that, inevitably, the two will get out of sync and the users will get upset: bargle was cleared but they expected it to hold the concatenated values of biff and bizzle.

There's no good solution to this problem, although it's been faced over and over. The standard solution with a “thick client” application was layered MVC: each component had its own model, which would advertise its changes to the rest of the app via events. These events would be picked up by an application-level controller, which would initiate changes in an application-level model, which would in turn send out events that could be processed by the components. If you were fastidious, you could completely separate the business logic of both models from the GUI code that rendered those models.
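The event plumbing at the heart of that layered approach is small; a minimal sketch (names are mine, not from any particular framework) looks like this:

```javascript
// Minimal sketch of a layered-MVC model that advertises its changes
// via events, as described above.
function Model() {
  this.fields = {};
  this.listeners = [];
}

Model.prototype.set = function (name, value) {
  this.fields[name] = value;
  // notify every registered listener of the change
  this.listeners.forEach(function (fn) { fn(name, value); });
};

Model.prototype.onChange = function (fn) {
  this.listeners.push(fn);
};

// A component-level model forwards its changes to an application-level
// controller, which updates the application-level model; that model's
// own events are then picked up by the components that render from it.
```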

I don't think that approach would work with a web-app. The main reason is that the front-end and back-end code are maintained separately, using different languages. There's simply no way to look one place and see all the logic that applies to bizzle.

Another problem is validation. The layered approach assumes that each component sends data that's already been validated; there's no need for re-validation at the lower levels. That may be acceptable for internal applications, but certainly not for something that's Internet-facing.

One alternative is that every operation returns the annotated state of the application model: every field, its value, and a status code — which might be as simple as used/not-used. The front-end code can walk that list and determine how to change the rendered view. But this means contacting the server after every field change; again, maybe not a problem on an internal network, but not something for the Internet.
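The client-side half of that alternative is a simple walk over the annotated field list; a hedged sketch (the field/value/status shape is my reading of the idea, not a real protocol):

```javascript
// Given the server's annotated application state -- every field, its
// value, and a status code -- compute what the view should show.
// Here the status code is the simple used/not-used case from above.
function applyAnnotatedState(fields) {
  return fields.map(function (f) {
    return {
      name: f.name,
      visible: f.status !== "not-used",   // hide fields the model isn't using
      value: f.value
    };
  });
}
```

The front-end code would then walk this result and show, hide, or update each rendered field — which is exactly why it requires a server round-trip after every change.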

Another alternative is to write all your code in one language and translate for the front end. I think the popularity of GWT says enough about this approach.

I don't have an answer, but I'm seeing enough twisted code that I think it's an important topic to think about.

Monday, November 16, 2009

Building a Product List Service: HTML Templating

Along with the product list service itself, I built a demo page. The first iteration of this page was very simple: a table to display list entries, a visible form to add new entries, and some invisible forms for generating update/delete requests. Even with this primitive page, however, I ran into the problem of how to turn the JSON data from the service into HTML.

My first approach was “the simplest thing that could possibly work”: I wrote a JavaScript function that built up an HTML string from the JSON, and then inserted this string into a DIV using innerHTML. It worked, and had the benefit that the various event handlers were defined in the same file — if I changed a function's name or interface, I didn't have to update multiple files. But embedding markup in a script is ugly and hard to maintain; just keeping quotes matched takes a lot of work.
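A sketch of that first approach (the entry shape is hypothetical; real code would also escape the values before embedding them in markup):

```javascript
// "Simplest thing that could possibly work": concatenate an HTML string
// from the JSON entries, then assign it to a DIV via innerHTML.
function renderTable(entries) {
  var html = "<table>";
  entries.forEach(function (e) {
    html += "<tr><td>" + e.name + "</td><td>" + e.quantity + "</td></tr>";
  });
  return html + "</table>";
}

// In the browser:
// document.getElementById("list").innerHTML = renderTable(jsonFromService);
```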

My second approach was to develop a utility library containing functions that build markup based on model objects. This approach was clearly influenced by Swing; in fact, the object passed to my buildTable() function looked a lot like Swing's TableModel. Doing this got the markup out of my main script, and gave me reusable components, which I liked. However, the code defining the model object alone was larger than my original concatenation function.
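A sketch of the second approach — the buildTable() function and the TableModel-like interface are reconstructions of the idea, not the original code:

```javascript
// Swing-influenced: a model object answers questions about its data,
// and a reusable builder turns any such model into table markup.
function buildTable(model) {
  var html = "<table><tr>";
  for (var c = 0; c < model.getColumnCount(); c++) {
    html += "<th>" + model.getColumnName(c) + "</th>";
  }
  html += "</tr>";
  for (var r = 0; r < model.getRowCount(); r++) {
    html += "<tr>";
    for (var col = 0; col < model.getColumnCount(); col++) {
      html += "<td>" + model.getValueAt(r, col) + "</td>";
    }
    html += "</tr>";
  }
  return html + "</table>";
}
```

The builder is reusable, but — as noted above — even a minimal model object takes more code than the concatenation function it replaced.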

If my second approach resembled Swing, my first approach resembled early servlets. Keeping with the Java analogies, what I was really looking for was an approach that resembled JSP: all markup within one file, making references to data provided elsewhere.

With some pointers from friends who are adept JavaScript programmers, I started looking at different templating solutions. John Resig's micro-templates even looked like JSP directives (which meant, unfortunately, that they couldn't be used within a JSP-generated page). I tried out a few of the existing solutions, then decided to write my own — it was only a dozen lines of code, using regular expressions.
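A regex-based templater really can fit in a dozen lines; this one is written in that spirit (it is not my original code, and it uses `{{name}}` delimiters precisely to avoid the JSP-directive clash mentioned above):

```javascript
// Tiny regex templater: replace each {{name}} placeholder with the
// corresponding property of the data object; missing keys become "".
function render(template, data) {
  return template.replace(/\{\{(\w+)\}\}/g, function (match, key) {
    return data[key] != null ? data[key] : "";
  });
}

// render("<td>{{name}}</td>", { name: "socks" }) yields "<td>socks</td>"
```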

But while the template approach did the job, and provided a fairly clean break between markup and scripts, I was still uncomfortable, in part because there was now too much of a break between the markup and the scripts that interacted with it: a script in one file would blindly access markup from another file. My JavaScript friends would say that this break is a Good Thing, but I think that it ignores the fact that the markup and scripts are tightly coupled by nature — and in fact takes us back to a 1960s view of programs manipulating data. But that's a topic for a future post.

Monday, October 12, 2009

Building a Wishlist Service: HTML Forms

Back to the wishlist service, and it's time to look at the client side. In particular, the mechanism that the client uses to submit requests. XML on the browser is, quite simply, a pain in the neck. While E4X is supposed to be a standard, support for it is limited. Microsoft, as usual, provides its own alternative. Since XML is a text format, you could always construct strings yourself, but there are enough quirks that this often results in unparseable XML.

Against the pain of XML, we have HTML forms. They've been around forever, work the same way in all browsers, and don't require JavaScript. They're not, however, very “Web 2.0 friendly”: when you submit a form, it reloads the entire page. Filling this gap, the popular JavaScript libraries provide methods to serialize form contents and turn them into an AJAX request. As long as you're sending simple data (i.e., no file uploads), these libraries get the job done.
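What those libraries do under the hood is roughly this (a sketch of form serialization over name/value pairs, not jQuery's actual implementation):

```javascript
// Serialize name/value pairs into application/x-www-form-urlencoded
// form, the same wire format a normal form submission produces.
function serialize(fields) {
  return fields.map(function (f) {
    return encodeURIComponent(f.name) + "=" + encodeURIComponent(f.value);
  }).join("&");
}

// With jQuery, the equivalent one-liner plus AJAX submission:
// $.post("/wishlist", $("#addForm").serialize(), onSuccess);
```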

To simplify form creation, I created some JSP tags. This provides several benefits, not least of which is that I can specify required parameters such as the user ID and wishlist name. I also get to use introspection and intelligent enums to build the form: you specify an “Operation” value in the JSP, and the tag implementation can figure out whether it needs a GET or a POST, what parameters need to go on the URL, and what fields need to go in the body.

One of the more interesting “learning experiences” was the difference between GET and POST forms. With the former, the browser will throw away any query string provided in the form's action attribute, and build a new string from the form's fields. With the latter, the query string is passed untouched. In my initial implementation I punted, and simply emitted everything as input: the server didn't care, because getParameter() doesn't differentiate between URL and body. This offended my sense of aesthetics, however, so I refactored the code into a class that would manage both the action URL and a set of body fields. Doing so had the side benefit that I could write out-of-container unit tests for all of the form generation code.
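The refactored class was Java, but the rule it encodes is easy to sketch; here it is in JavaScript for consistency with the other examples (the function and parameter names are mine):

```javascript
// One object decides where each parameter goes, based on the method.
// GET: the browser discards the action's query string and rebuilds it
//      from the fields, so URL parameters must become form fields too.
// POST: the action's query string is passed untouched, so URL
//      parameters can stay on the action attribute.
function formParts(method, action, urlParams, bodyParams) {
  if (method === "GET") {
    return { action: action, fields: urlParams.concat(bodyParams) };
  }
  var qs = urlParams.map(function (p) {
    return encodeURIComponent(p.name) + "=" + encodeURIComponent(p.value);
  }).join("&");
  return { action: qs ? action + "?" + qs : action, fields: bodyParams };
}
```

Because this is a pure function of its inputs, it can be unit-tested without a servlet container — the same side benefit noted above.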

The other problem came from separating the form's markup from the JavaScript that makes it work. This is current “best practice,” and I understand the rationale behind it, but it makes me uncomfortable. In fact, it makes me think that we're throwing away lessons learned about packaging over the last 30 years. But that's another post.