Is Separating HTML and JavaScript Harmful?

I’ve spent the past several years religiously honoring the principle of separation of concerns. Separating concerns produces code that’s easier to read, maintain, and test. Right? Well, as I’ve found, virtually every “best practice” has exceptions. We have to consider the context and the implications of following any principle. And for HTML and JavaScript, separating concerns doesn’t always play out so well.

You see, HTML and JavaScript are joined at the hip. Changes to one directly impact the other. The problem is, there’s no explicit interface between these two technologies. This means when you change a line of HTML, you need to read every line of JavaScript to assure you haven’t just produced a bug (or lean heavily on your automated UI tests, if you have any).

The Unobtrusive JavaScript Movement is Dead

Let’s rewind the clock nearly a decade to explain why many have come full circle on this issue. In 2006, jQuery was on its way to becoming the de facto standard for creating interactive web apps. About that same time, the concept of unobtrusive JavaScript was picking up steam.1 I was a big proponent of the technique. It was exciting to finally have HTML that was pure HTML. Through the power of jQuery selectors and event handlers residing in separate JS files, HTML was no longer “tainted” with JavaScript. And this perfect separation of concerns seemed wonderful…at first. But then reality set in.

Wait, when I want to change a piece of HTML, how do I know I’m not breaking the app? Due to the complete separation, every change to a piece of HTML required reviewing every single jQuery selector to assure I wasn’t breaking behavior. The flip side was true as well. Every time I changed the JavaScript, I had to carefully review the HTML to assure I hadn’t regressed the display. And since jQuery simply returns an empty array when a selector doesn’t match anything, the application would often quietly fail when selectors didn’t match. Yes, one can create a fully comprehensive suite of automated UI tests via Selenium et al., but that’s far from free and incurs a big maintenance cost due to the brittle nature of automated UI tests.

The unobtrusive JavaScript movement treated separation of concerns as a global best practice without considering whether it truly made sense in this instance. The fact is HTML and JavaScript are NOT separate concerns. They are fundamentally intertwined. JavaScript without some corresponding UI isn’t very useful. And HTML without corresponding JavaScript isn’t interactive. The two must move in lockstep to work at all, and there’s no strongly typed interface to assure that the two sides remain compatible. Separating HTML and JavaScript isn’t a separation of concerns – it’s a separation of technologies.

Still not sold? See if React’s Pete Hunt can sway you.

The Solution? How About Components?

Since there’s no strongly typed interface between HTML and JavaScript, maybe we should rethink the value of all these MV* frameworks. I’ve worked in Knockout/Durandal and Angular and found the common frustration is typos. When I’m in the HTML, I get no IntelliSense support, since the markup has no knowledge of the corresponding data structure. This also makes refactoring more time-consuming, since typos in the markup lead to cryptic run-time errors.

Contrast this experience with React. React eschews the common MV* style and instead focuses on a small, simple problem: components. Each component is autonomous and can be composed of other components. The beauty of React is that your markup is conveyed in JavaScript. Whoa there, put down your torches. Re-read the paragraphs above if you don’t see the obvious win here. Once you convey your markup and behavior in the same file, you get three big wins:

  1. Rich IntelliSense support – Since React uses JavaScript to create markup, you get IntelliSense support as you reference variables.
  2. Precise error messages – Errors in React typically point you to a specific line in the JavaScript. Make a typo in your markup in Angular or Knockout and the errors are often ambiguous, with no specific line number in the markup where the error occurred.
  3. Leverage the full power of JavaScript – Composing your markup within JavaScript means you can enjoy all the power of JavaScript when working with your markup, instead of the small proprietary subset offered within the HTML of frameworks like Angular and Knockout.
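
To make this concrete, here’s a minimal component in the React.createClass style of the era (a sketch; the component and its behavior are illustrative, not from a real app):

    var LikeButton = React.createClass({
      getInitialState: function () {
        return { liked: false };
      },
      handleClick: function () {
        this.setState({ liked: !this.state.liked });
      },
      render: function () {
        // The markup and the behavior it depends on live in the same file,
        // so a typo in either is caught immediately by your tooling.
        return (
          <button onClick={this.handleClick}>
            {this.state.liked ? 'Unlike' : 'Like'}
          </button>
        );
      }
    });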

So I’m a React Fanboy?

No, I really enjoy KO and Angular as well for different reasons. I enjoy the two-way binding offered in Angular and KO and find the extra plumbing in React/Flux tedious for all but the most complex apps. I could go on about other things I prefer in KO/Angular over React/Flux, but that’s another post. The point of this post is I now see the merits of composing behavior and markup using the same powerful technology: JavaScript. This idea of unifying tightly related concerns is gaining traction, as my friend Nik Molnar outlines in his recent post Something Razor Should Learn from JSX.

If two things go together logically, then it sure would be nice if they go together physically as well.
-Nik Molnar

The great news is everyone in the JS space seems to be quickly learning from each other and borrowing ideas. Thanks to React, I’ve come full circle on the unobtrusive JavaScript movement, and I hope the idea catches on elsewhere. Whether you’re using React, Angular Directives, Knockout Components, Ember Components, or native Web Components, self-contained components look like the future. And for self-contained components, I’m sold on composing markup and JavaScript in the same file.2


Footnotes

1. I’m referring solely to the separation of HTML & JS. The Wikipedia article lumps progressive enhancement under the unobtrusive JS banner. Progressive enhancement is an orthogonal concern in my mind and certainly continues to be a useful pattern.

2. I’m not advocating placing all JavaScript inline in the HTML. I’m merely advocating composing markup and JavaScript together when creating self-contained components. For instance, there’s still value in placing libraries and other non-component JavaScript in separate files for caching and reuse purposes.

7 Ways to Handle Circular Dependencies in RequireJS

When working with RequireJS, you’re likely to run across two modules that need to reference each other. When you create a circular reference using the standard RequireJS define statement on both sides, the module that’s loaded last fails and is thus undefined. This is confusing since the syntax looks valid, but the declared dependency is ultimately undefined. Here’s a quick example of a circular dependency to clarify:

Imagine you have a “user” module that references a “cart” module. The cart module needs a reference to the user module to determine where the user lives in order to calculate taxes.

Cart.js might look something like this (a minimal sketch; the tax logic and property names are illustrative):
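
    define(['user'], function (user) {
      return {
        itemCount: 0,
        getTaxRate: function () {
          // Needs the user's location to calculate taxes.
          return user.state === 'KS' ? 0.065 : 0.05; // placeholder rates
        }
      };
    });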
A separate module called “user” needs a reference to “cart” in order to show a cart icon with the number of items next to it. User.js might look like this (again, a sketch):
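
    define(['cart'], function (cart) {
      return {
        state: 'KS',
        cartIconLabel: function () {
          // With a circular reference, 'cart' is undefined in whichever
          // module RequireJS loads last, so this call fails.
          return 'Cart (' + cart.itemCount + ')';
        }
      };
    });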

So in this example, the cart and user modules have circular references. Thankfully, there’s a variety of ways you can handle circular references when working with RequireJS.

1. Use the Inline Require Syntax

The simplest way to handle circular references is to update one of the two modules to use the inline require syntax (shown below) instead of the traditional dependency array approach.

So given the above example, you could update the cart.js module to use the inline require syntax like this:
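
    // cart.js, reworked to use the inline require syntax (continuing the sketch above)
    define(['require'], function (require) {
      return {
        itemCount: 0,
        getTaxRate: function () {
          // 'user' is resolved here, at call time, rather than declared in the
          // dependency array; it must already have been loaded elsewhere.
          var user = require('user');
          return user.state === 'KS' ? 0.065 : 0.05;
        }
      };
    });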

This approach is outlined in RequireJS’ documentation. The advantage is it’s simple and obvious. But that’s where the benefits end.

This approach hinders automated unit testing. You can’t test modules that use this pattern in isolation, because the inline require syntax assumes the referenced module has already been loaded by another module. If it hasn’t, RequireJS will throw an error. This leads to the other downside of this approach: it assumes modules are loaded in a certain order. The syntax is effectively saying “Before this module can be loaded, the module that I’m referencing inline must have already been loaded.” Of course, this dependency isn’t clear until you understand the implications of the syntax. Due to these downsides, this simple approach should be avoided.

2. Utilize the Publish-Subscribe Pattern

When two modules need to communicate and interact with one another, the publish-subscribe (aka pub/sub) pattern is a simple and popular approach. This pattern allows two modules to interact in a loosely coupled manner by publishing and subscribing to events. When an event occurs in module A, module B is notified and can respond or stay in sync as desired.
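
To make the idea concrete, here’s a tiny framework-agnostic sketch (the topic name and payload are made up):

    var pubsub = {
      topics: {},
      subscribe: function (topic, handler) {
        (this.topics[topic] = this.topics[topic] || []).push(handler);
      },
      publish: function (topic, data) {
        (this.topics[topic] || []).forEach(function (handler) { handler(data); });
      }
    };

    // user.js subscribes; cart.js publishes. Neither references the other.
    pubsub.subscribe('cart.itemAdded', function (data) {
      console.log('Update the cart icon: ' + data.itemCount + ' items');
    });
    pubsub.publish('cart.itemAdded', { itemCount: 3 });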

There are simple libraries for implementing the pub/sub pattern in various frameworks such as Knockout’s Postbox and Angular PubSub for AngularJS, or this tiny library for jQuery users.

The advantage of this approach is loose coupling. Two modules can interact without having any direct references to each other. This means you can test each module in isolation. However, this loose coupling comes with a cost: Changes in one module can ripple throughout the system in ways that are hard to predict. Since there’s no explicit connection between the two (or more) modules that are interacting with one another, it can be difficult to understand how and why the system responds when a given piece of data changes.

3. Pass Data into the Constructor

Often a circular reference is introduced so that some specific data in module A is available in module B. The easiest way to resolve this is approach #1 above, but as we already discussed, that approach should be avoided. A superior approach is to pass the data module B desires into module B’s constructor or initialization function. This scenario assumes that module A calls module B and that module B requires some data from module A. Module A simply calls an initialization function on module B to pass in the necessary data, as sketched below. This approach honors established object-oriented principles by assuring the constructor clearly conveys the parameters necessary for instantiating the module.
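
For example (a sketch; the init signature is illustrative), cart.js can drop its dependency on user entirely:

    // cart.js no longer requires 'user' at all
    define([], function () {
      var userState;
      return {
        init: function (options) {
          userState = options.state; // the caller hands in the data cart needs
        },
        getTaxRate: function () {
          return userState === 'KS' ? 0.065 : 0.05;
        }
      };
    });

    // In user.js (module A): cart.init({ state: user.state });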

4. Unify Closely Related Modules

Often a circular reference is simply a sign of a design problem. Before attempting the other approaches, consider whether the two interacting modules should be unified. The obvious advantage to this approach is it resolves a fundamental design flaw rather than masking the issue using the other approaches outlined in this post. Of course, you have to consider carefully whether unifying truly produces a superior design. Is it practical to interact with a single larger module? Does it still have a clear single responsibility? Does unifying the two modules impact the UI design? These are questions to consider before implementing this approach.

5. Pass a Function Reference into the Constructor

If module B simply needs to be able to call a method on module A, then module A can pass a reference to the desired function into the constructor of module B. This scenario assumes module A initializes module B, so the downside is that it couples the two modules: any module that initializes module B needs access to the data in module A. Thus, this approach is typically only practical when there’s a single caller to module B.
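
A sketch of the idea, continuing the example above (the names are illustrative):

    // cart.js accepts a function rather than requiring the user module
    define([], function () {
      var getUserState;
      return {
        init: function (stateGetter) {
          getUserState = stateGetter;
        },
        getTaxRate: function () {
          return getUserState() === 'KS' ? 0.065 : 0.05;
        }
      };
    });

    // In user.js: cart.init(function () { return user.state; });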

6. Move Logic / Data to One Side

In some cases the data or method that necessitates the circular reference can simply be moved to the other side. For example, consider whether the responsibilities in module A that depend upon logic in module B could simply be moved to module B. Or vice versa. Sometimes a circular reference simply stems from placing the logic in the wrong spot.

7. Create a Business Object / Service

Creating separate business objects or services is a useful way to eliminate circular dependencies as well. The two modules that rely upon the same methods can now both reference the business object instead. This eliminates the circular reference and creates a clean, simple business object that can typically be easily tested in isolation. A common scenario for this approach is when two viewmodels interact with one another. The common logic can be refactored to a business object so that they both rely on the business object instead.
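
For instance (a sketch; taxService is a made-up name), the shared logic can move into its own module:

    // taxService.js -- both viewmodels can require this with no circular reference
    define([], function () {
      return {
        getTaxRate: function (state) {
          return state === 'KS' ? 0.065 : 0.05; // placeholder rates
        }
      };
    });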

How do you handle circular dependencies? Chime in via the comments below.

Cache Busting via Gulp.js

Have you ever thought about how many HTTP requests your app is wasting? Many developers think the native caching mechanisms of browsers are sufficient. However, did you know every time a page is loaded, an HTTP request is still typically made for every single resource to confirm that the server doesn’t have a new version? The web server will return a 304 Not Modified if the resource hasn’t changed. You may not think this is a big deal, but every HTTP request has a cost, and they add up. As you can see, when you load a blog post here on my site, many files return a 304 if you’ve already visited:

A list of HTTP 304 Not Modified responses

Now you may think this doesn’t matter much today in the age of high bandwidth, but all these 304s aren’t costly because of their size. They’re costly because of latency. You see, over the years bandwidth has indeed skyrocketed, but way back in the days of modems we’d already nearly hit the theoretical limit on latency. As Paul Irish outlined in Delivering the Goods in 1000ms, latency is an issue for all connections:

From Paul Irish’s Delivering the Goods in 1000ms (https://www.youtube.com/watch?v=E5lZ12Z889k)

The pain of latency is really magnified with mobile networks:

Wireless latency, from https://www.youtube.com/watch?v=E5lZ12Z889k

Factor in the pain and complexity of TCP slow start, and the verdict is clear: Worrying about page size isn’t enough. For the ultimate performance, we must minimize the number of HTTP requests our apps make.

Declaring War on Latency

So, it’s settled. If you want to provide users the ultimate in performance, you need to eliminate as many HTTP requests as possible. Thus, you must consider setting far-future expires headers. This basically tells your visitors’ browsers “Never ask for this file again. Ever. I mean it.” The details on how to do this vary by web server (here’s how in IIS and Apache), but you basically set an expiration date for your assets in the distant future so the browser won’t check the server for those resources again. Of course, eventually you’re going to change the file, so how do you tell everyone’s browsers that you didn’t mean it? That’s where cache-busting techniques come into play. And while there are various ways to pull this off, I prefer using Gulp.

Gulp makes it easy to quickly create powerful build scripts that read much like English. Piping streams together feels really natural for anyone familiar with Linux. And processing files in memory not only gives Gulp a significant performance advantage over Grunt, it also means the configuration is lighter and much easier to read.

Both Gulp and Grunt offer a massive list of plugins, so in either case the hardest decision is deciding which mix of plugins best solves your problem. I’ve just created a cache-busting process in Gulp using two different methods. Let’s explore both approaches and consider their merits.

Option 1: Dynamically Inject Script Tag on the Server

This option utilizes the gulp-rev plugin. Gulp-rev versions your assets by appending a hash to filenames. So for example, if you run script.js through gulp-rev, it’ll spit out something like script-ddc06e08.js. Handy.

The big problem, of course, is how do you assure that any files which referenced script.js now reference script-ddc06e08.js instead? Well, handily, the gulp-rev plugin optionally generates a manifest.json file. This file contains JSON that maps the source filename to the new dynamically generated filename. So, using the above example, the manifest.json file would look like this:
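
    {
      "script.js": "script-ddc06e08.js"
    }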

You can probably guess why this is handy. Now all you need to do is read this file to get the new valid filename. Assuming you’re doing server-side rendering, you can simply use your server-side language to open this file, grab the value, and change the src attribute of the corresponding <script> tag accordingly. There are some other similar approaches provided in the docs, but they all operate with a similar philosophy of relying on the manifest.json mapping file. I wasn’t in love with this approach or any of the approaches they outlined, so I created my own approach below.

Option 2: Replace Script References via Regex

This approach is similar to the approach above, but the <script> src attribute is set immediately by Gulp instead. To pull this off, I dropped the gulp-rev plugin and simply created my own suffix using today’s date. I like the clarity of being able to see when the file was last built. But here’s the real win: instead of generating the <script> tag dynamically on the server like we did in option #1, we use gulp-replace to change the relevant src attribute immediately upon build.

Here’s an example gulpfile (a sketch; the line numbers referenced in the walkthrough below correspond to this listing):
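
    var gulp = require('gulp'),
        concat = require('gulp-concat'),
        replace = require('gulp-replace');

    gulp.task('js', function () {
      var scriptName = 'script-' + getDate() + '.js'; // e.g. script-1-24-2015.js
      gulp.src('scripts/*.js')
        .pipe(concat(scriptName))
        .pipe(gulp.dest(paths.build));

      return gulp.src('index.html', { base: './' }) // the file to update (index.html assumed)
        .pipe(replace(/(<script id="bundle" src=")[^"]*(")/, '$1' + scriptName + '$2'))
        .pipe(gulp.dest('./'));
    });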

Let’s dissect this file.

  1. Lines 1-5 pull in the necessary gulp plugins and define a task called js.
  2. Line 6 defines the new filename we want to use. The filename will end up being script-1-24-2015.js for example, assuming that’s the current date. I left the guts of the getDate() function out for brevity, but it simply returns a string in the above format.
  3. Line 7 specifies a glob that will retrieve all .js files in the scripts directory. This list of files is looped through on the following two lines.
  4. Line 8 concatenates all the files into a single file.
  5. Line 9 writes the concatenated file to the specified destination. Again, I left out the initialization of paths.build on this example for brevity.
  6. Line 11 defines a new src that needs our attention. This is the file that needs to reference the new dynamically named bundle we just generated. We specify the path to the filename, and also include a second parameter, base. This parameter is necessary so we can overwrite the file in the final step.
  7. Line 12 is where the magic happens. We use a regex to replace the current script tag’s src with a new value matching the filename created on line 6, so our file references the new filename. The regex finds the script tag by id; in this case, we’re looking for a script tag with an id of bundle.
  8. Line 13 writes the updated file to disk.

I prefer approach #2 for two reasons:

  1. It occurs on build instead of load. In my opinion, a build task shouldn’t add overhead to every request (such is the case with approach #1 above since the path is dynamically generated on each request).
  2. It’s more discoverable. All the code that is manipulating your application for your build resides within your Gulp file with option #2.

And now with this wired up, I can save my users from making HTTP requests just to see if resources have changed. Bandwidth and time are saved on every page load. When the time comes to update the file, I have a simple build task to generate a new filename and update the corresponding file(s) that reference it.

One final note: if you prefer the filename to change only when the file contents have changed, you can continue to use the gulp-rev plugin and simply read the hashed filename it assigns from the manifest file (since the hash only changes when the file contents change). In my use case, the large bundled file nearly always changes with each release, so I preferred the simplicity of a date-based filename. Another approach often considered is to append a cache-busting querystring to a static filename, but that may cause issues with some proxies, so I recommend truly changing the filename to bust cache as outlined above.

Have another approach to cache busting that you prefer? Please chime in via the comments.

6 Quick Tips for Presenting Code in Visual Studio

As a frequent conference presenter and attendee, I often see live code walk-throughs. There are a variety of tweaks you can make to optimize the Visual Studio experience for presenting code to others. It’s important to minimize visual distractions, size text properly, and practice your demo in a low resolution.

To that end, here are 6 quick tips for presenting in Visual Studio.

1. Run Full Screen

Hit Alt+Shift+Enter to toggle full screen mode. This hides the Windows taskbar, title bar, and many items in the Visual Studio UI such as Solution Explorer. This is especially helpful when presenting on low resolution projectors. As a side-note, this mode is also useful for daily use since it helps avoid distractions while coding by hiding all other OS elements.

2. Configure Full Screen Mode

Visual Studio’s full screen mode hides all explorer windows by default. You may decide to show certain items to assist with the demo. Simply re-enable the desired portions of the UI while in full screen mode, and Visual Studio will remember your full screen settings separately from then on. Handy. (Also, remember you can open files by hitting Ctrl+, and typing the name, so you likely don’t need to display the Solution Explorer.)

3. Enable Presenter Mode

Presenter Mode is a simple feature in Productivity Power Tools. Once you’ve installed this extension, all you have to do is hit Ctrl+Q to place the cursor in the quick launch input, then type the word present. Select Present On/Present Off to toggle. You’ll find font sizes throughout the UI are increased.

Enable Visual Studio Presentation Mode via Quick Launch

4. Resize Further as Needed

In larger rooms with smaller projection screens, it may still be difficult for people in the back to read code, even with presenter mode enabled. In this case you can increase the font size of individual files by holding Ctrl and scrolling or via the zoom dropdown in the bottom left-hand corner of Visual Studio. When using the above techniques, this is rarely necessary, which is good because this setting doesn’t “stick” between files. If you need a more global solution, you can increase DPI scaling in Windows, though you may notice sizing quirks in any apps that haven’t been updated to support DPI scaling.

5. ZoomIt

ZoomIt is a free tool that allows you to zoom in and annotate specific portions of the screen. Simply hit Ctrl+1 to zoom in on a specific portion of the screen. This can help draw attention to an area, and it’s also useful when a piece of UI outside of Visual Studio can’t be scaled by other means. I’ve even seen some presenters utilize ZoomIt throughout an entire presentation, though I find this a bit disorienting.

6. Use the Light Theme

The standard light theme of dark text on a white background is typically easiest to read on projectors. The lower contrast alternatives look great on a nice LCD, but often lack sufficient contrast for projector use. (Credit to @craigber)

Bonus tip

Consider creating multiple Visual Studio settings files and switching between them as desired.

Have other tips for presenting code? Please chime in below.

Two Quick TFS Performance Tips

Team Foundation Server (TFS) continues to improve, but one area where I’ve struggled recently is performance. I work in a very large codebase that bumps up against the 100,000-file limit with a single branch (yes, that’s a smell of bigger issues). Anyway, here are two quick TFS performance tips that may help you be more productive.

1) Create Separate Workspaces for Each Branch

Annoyed that the TFS Source Control explorer is slow? The culprit may be too many files in a single workspace. The solution is simple. Create a separate workspace for each branch you’re working with. Name each workspace descriptively so you can easily switch between them. Having multiple mapped branches in the same workspace will make TFS Source Control Explorer extremely slow if you have over 100,000 files in your workspace (we hit this cap after a single branch). You can manage and rename your workspaces in Visual Studio here:

TFS Workspaces

2) Consider not creating a feature branch at all

If you’re working by yourself on a feature, you can simply work off main and create a shelveset each day to save your work until you’re ready to commit to the main line. This avoids the time-consuming overhead of branching. Just think about the list of overhead you take on with a feature branch:

  • Create branch
  • Get the entire repo again
  • Burn a ton more space on your hard drive due to duplicated physical files
  • Keep the branch updated via merges from main
  • Merge your changes back to main later
  • Switch between your branch and main so you can fix bugs in the main line

That’s a lot of pain that may not be necessary. Assuming you’re working by yourself, the only potential downside to using shelvesets off of main that I see is this: Bug fixes in main may be tricky if your new feature changes impact code related to a reported bug in a previous version. That’s a minor downside that I find worth accepting for all the time it saves by avoiding the overhead listed above.

Shadow DOM vs iframes

I’m really excited about the new HTML5 Web Components Standard. The Shadow DOM is particularly interesting, as it finally gives us encapsulated markup and styling. This should radically decrease the complexity of our CSS and help us finally design and deliver reusable components that don’t conflict with one another. We now have the tools to design our own custom HTML elements that feel native and offer even greater power than today’s HTML elements.

However, at first I had to wonder: why do we need the Shadow DOM when we already have a way to encapsulate markup and styles using iframes? It’s a fair question. Today iframes are commonly used to assure separate scope and styling. Examples include Google’s embedded maps and YouTube videos.

However, iframes are designed to embed a full document within another HTML document. This means accessing values in a given DOM element inside an iframe from the parent document is a hassle by design. The DOM elements are in a completely separate context, so you need to traverse the iframe’s DOM to access the values you’re looking for.

Contrast this with HTML5 web components, which offer an elegant way to expose a clean API for accessing the values of custom elements. Well-written web components that utilize the Shadow DOM are as easy to access and manipulate as any native HTML elements. Try accessing the value of an input that exists in an iframe from the parent document. It is a painful, clunky, and brittle process in comparison.
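
For example (a sketch; the element IDs and the x-cart component are made up):

    // Reaching into a same-origin iframe from the parent document:
    var frame = document.getElementById('cart-frame');
    var count = frame.contentDocument.querySelector('#item-count').value;

    // Versus a web component that exposes its value as a simple property:
    var count2 = document.querySelector('x-cart').itemCount;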

Imagine creating a page using a set of 5 iframes that each contain one component. Each component would need a separate URL to host the iframe’s content. The resulting markup would be littered with iframe tags, yielding markup with low semantic meaning that is also clunky to read and manage. Oh, and you’d also have to deal with the pain of properly sizing each iframe.

In contrast, web components support declaring rich semantic tags for each component. These tags operate as first class citizens in HTML. This aids the reader (in other words, the maintenance developer).

So while both iframes and the shadow DOM provide encapsulation, only the shadow DOM was designed for use with web components, and thus avoids the excessive separation, setup overhead, and clunky markup that occurs with iframes.

Excited about web components? Chrome and Opera already offer full support, so get started!

Knockout Bindings are Evaluated Left to Right

I just resolved an odd behavior that tripped me up. Have you ever attached a click handler to a checkbox/radio with Knockout and wondered why the old value is received in the click handler? Well, here’s the explanation: in Knockout, bindings are evaluated left to right.

So if you put your bindings in this order:

<input type="checkbox" data-bind="checked: value, click: funcToCall">

Then funcToCall will see the updated state for the checkbox as expected. However, if you reverse the binding order to this:

<input type="checkbox" data-bind="click: funcToCall, checked: value">

Then the click handler will fire before the checked binding updates your observable, which means the checkbox’s new state won’t be reflected yet. So be sure to declare your bindings in a logical order!
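
For reference, a minimal backing viewmodel for the snippets above might look like this (a sketch; note the return true, which tells Knockout to allow the default click action so the checkbox still toggles):

    var vm = {
      value: ko.observable(false),
      funcToCall: function () {
        console.log('checked is now: ' + vm.value());
        return true; // allow the default click action so the checkbox actually toggles
      }
    };
    ko.applyBindings(vm);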