Tales from the Evil Empire

Bertrand Le Roy's blog


November 2009 - Posts

Orchard team looking for a new developer

My team is looking for a new full-time developer. The project is to build a completely new open-source CMS based on ASP.NET MVC 2. It’s a lot of fun :)


Enabling the ASP.NET Ajax script loader for your own scripts

In previous posts, I’ve shown different ways to build a client-side class browser, using the ASP.NET Ajax Library and jQuery.

In this post, I’ll focus on a few lines of code from the latest version of that sample. Those few lines of code enable my custom script to benefit from the script loader’s features such as lazy and parallel loading and dependency management.

An important feature of the script loader is the separation of the script meta-data from the script code itself. The meta-data can include the name of the script, its dependencies, instructions on how to figure out if it’s already loaded, its debug and release URL patterns and a declaration of the lazy components and plug-ins it introduces. The script loader can also handle composite scripts but I won’t cover that in this post.

  • The name of the script is what will be used to reference it from other scripts and to generate the URL from a pattern. It shouldn’t include the “.js” extension.
  • The dependencies and executionDependencies are each a simple array of script names that the script depends on. I’ll explain the difference in a minute.
  • The isLoaded criterion is a JavaScript expression that returns true if the script is already loaded. This is usually a test on an object that the script file defines. For example, the “templates” script uses !!(Sys.UI && Sys.UI.Templates). It can assume Sys exists because it’s defined by start.js.
  • Debug and release patterns are expressions that enable the framework to map script names into debug and release URLs. The ASP.NET Library uses “%/MicrosoftAjax{0}.debug.js” and “%/MicrosoftAjax{0}.js” where % gets replaced with the path where the script loader was downloaded from (this can be CDN or local) and {0} gets replaced with the script name. You can create your own pattern or just provide fixed URLs if you have only one script file. The debug pattern is optional.
  • Lazy components and plug-ins are helpers that the script loader can create for you and that enable developers to start using your components before they are even loaded and hide the script loading aspects as much as possible. For example, it’s that feature that enables you to write this while still having only start.js loaded:
    Sys.create.dataView("#myDataView", {
        data: myData
    });
    The code that needed to be written for this helper to be created was this:
    behaviors: ["Sys.UI.DataView"]
    This gets automatically transformed by the script loader into the Sys.create.dataView method, before it even tries to load the actual script that defines Sys.UI.DataView. And if you have jQuery loaded as well, you’ll also get a jQuery plug-in out of it for free, which means that you can do:
    $(".data").dataView({data: myData});
    and instantiate DataView controls over the results of a selector query in one operation. Groovy. Of course, you can do the same with the behaviors and controls that you write in your own scripts, with lots of options to customize names, add parameters, etc.
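As a sketch of that last point, here is what declaring one of your own behaviors might look like. The names here are hypothetical (they are mine, not from this post), and I’m assuming the Preview 6 defineScripts API:

```javascript
// Hypothetical example: declaring your own behavior so the loader can
// generate a Sys.create.* helper and a jQuery plug-in for it before
// the script itself is ever downloaded.
Sys.loader.defineScripts(null, [{
  name: "myWidgets",                          // hypothetical script name
  releaseUrl: "%/MyWidgets.js",
  executionDependencies: ["ComponentModel"],
  isLoaded: !!(window.My && My.HighlightBehavior),
  behaviors: ["My.HighlightBehavior"]         // generates lazy helpers
}]);
// After this runs, Sys.create.highlightBehavior(...) and, if jQuery is
// loaded, $(...).highlightBehavior(...) become available as lazy helpers.
```

The declaration is pure meta-data, so it can live in a tiny file (or inline) that loads long before the actual behavior’s code does.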

In the case of the class library code, here’s the meta-data declaration code that I had to write:

Sys.loader.defineScripts(null, [{
  name: "classBrowserTree",
  releaseUrl: "%/TreejQuery.js",
  executionDependencies: ["jQuery", "Templates", "ComponentModel"],
  isLoaded: !!(window.jQuery && window.jQuery.fn.classBrowserTreeView)
}]);

This declares that my script, named “classBrowserTree”, can be found at the URL TreejQuery.js relative to the base URL of the script loader; that it depends on jQuery, Templates and ComponentModel (which themselves have their own dependencies); and that the loader can determine whether it’s already loaded by checking for the existence of the classBrowserTreeView jQuery plug-in.

Now, loading that script and all its dependencies is as simple as writing this:


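The snippet itself didn’t survive the page extraction. Assuming the Preview 6 convention where defined scripts are exposed on Sys.scripts and loaded with Sys.require (an assumption on my part), it would look something like this:

```javascript
// Hedged reconstruction (the original snippet was lost): ask the loader
// for the script by name; it resolves, downloads and executes all of
// its dependencies for you.
Sys.require(Sys.scripts.classBrowserTree);
```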
Notice that as you’re typing this, you actually get full IntelliSense in Visual Studio on the name of the script, which is great for speed and to avoid typos.

One of the things that enable the script loader to do its job is a special way to write your script that makes it possible to separate the loading and parsing of the code from its execution.

Don’t freak out (yet) though as the code you have to introduce is very lightweight and it doesn’t prevent your script from being loaded without the script loader (with a plain old script tag for example).

Conversely, a script that doesn’t contain the special script loader code can still be handled by the script loader, but its dependencies must be declared using “dependencies” instead of “executionDependencies” in the meta-data declaration, which in effect disables the more advanced features of the script loader, such as parallel loading.
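For illustration, a declaration for such a legacy script might look like this. The script name and plug-in are hypothetical, and I’m again assuming the defineScripts API:

```javascript
// Hypothetical declaration for a script that does NOT contain the
// loader wrapper: "dependencies" forces the loader to fully download
// and execute jQuery before this script is even requested.
Sys.loader.defineScripts(null, [{
  name: "legacyWidget",                  // hypothetical name
  releaseUrl: "%/LegacyWidget.js",
  dependencies: ["jQuery"],              // instead of executionDependencies
  isLoaded: !!(window.jQuery && window.jQuery.fn.legacyWidget)
}]);
```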

The special code in question is the following:

(function() {
  function execute() {
    // Your code goes here.
  }
  if (window.Sys && Sys.loader) {
    Sys.loader.registerScript("classBrowserTree", null, execute);
  } else {
    execute();
  }
})();

What you see here is a (function() {…})(); wrapper, which is standard practice to isolate code from the global namespace (essentially, it creates a local scope); we also have an inner function in there that we arbitrarily named “execute” (the name doesn’t matter). This is where you’ll typically put your actual code. Then we have the bootstrapping code that looks for the script loader. If it isn’t found, the code in the execute function is run immediately. If it is found, the execute function is registered with the script loader but is not immediately executed.

This wrapper code is what enables the script loader to do its magic. Thanks to this little bit of code, it doesn’t care at all in what order the scripts are being loaded, because it can delay the time when any script actually gets executed until all its dependencies have been successfully loaded.

This also allows the script loader to load scripts in parallel, even if one depends on another. Normally a script loader would have to download a dependency first and wait for it to completely load before loading the next script. This ‘serializes’ the loading process and is bad for performance. Modern browsers can download 6 to 8 scripts simultaneously. Separating the loading of a script from its execution allows the loader to take advantage of that, and in most cases even complex dependency trees can be downloaded completely in parallel, meaning the total load time is not the sum of the individual download times, but roughly that of the longest one.
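The mechanism can be sketched in a few lines. This is not the actual loader’s implementation, just a toy model of the idea: each script registers an execute callback on arrival, and callbacks run only once their dependencies have executed, regardless of download order:

```javascript
// Toy model of download/execution separation (not the real loader).
const pendingScripts = {};   // name -> { deps, execute }
const executedScripts = {};  // name -> true once its code has run

function registerScript(name, deps, execute) {
  // Called by each script file as soon as it finishes downloading,
  // in whatever order the network delivers them.
  pendingScripts[name] = { deps: deps || [], execute };
  runReadyScripts();
}

function runReadyScripts() {
  // Keep sweeping until no more scripts become runnable.
  let progress = true;
  while (progress) {
    progress = false;
    for (const name of Object.keys(pendingScripts)) {
      const { deps, execute } = pendingScripts[name];
      if (deps.every(d => executedScripts[d])) {
        delete pendingScripts[name];
        execute();                 // run the script body now
        executedScripts[name] = true;
        progress = true;
      }
    }
  }
}

// Even if a dependent script arrives first, execution order is correct:
const order = [];
registerScript("templates", ["core"], () => order.push("templates"));
registerScript("core", null, () => order.push("core"));
console.log(order); // ["core", "templates"]
```

All script tags can therefore be injected at once and downloaded in parallel; the bookkeeping above sorts out execution order afterwards.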

Of course, in order to be wrappable, your code needs to be able to run in a non-global scope. This is good practice anyway and is relatively easy to achieve in most cases; if your code can’t, it may be a good idea to determine why and fix it.

And this is it. I hope I’ve convinced you that the script loader can help you improve the performance of your applications at a really low cost. It also provides a very worthy client-side equivalent of the server-side ScriptManager that ASP.NET has been providing to developers for years: switching the whole application between debug and release scripts is a breeze, which minimizes the risk of deploying debug scripts in production; dependencies are handled automatically; and script combination is handled too.

(many thanks to Dave Reed for helping with this post and of course for implementing an awesome script loader)

PDC week! Panel on OSS + ASP.NET

I’ll be at PDC tomorrow, Wednesday and Thursday. If you are attending and want to say hi, you’re most likely to find me in the Web Pavilion (in the big room, next to the Surface lounge).

Please also join us for a panel discussion on Wednesday about Open Source on ASP.NET. We’ll have the following people on the panel to answer your questions, and ask you a few too:

  • Scott Hanselman
  • Shaun Walker
  • Sara Ford
  • Glenn Block
  • Clemens Vasters
  • Myself

The discussion will be between 2PM and 3PM, in the Web Pavilion.

Metrics in software and physics

Every so often, somebody points out what a bad metric code coverage is. And of course, on its own, it doesn’t tell you much: after all, it’s a single number. How could it possibly reflect all the subtlety (or lack thereof) of your designs and of your testing artillery? Of course, among the various *DD approaches, some better than others enable you to know whether or not your code conforms to its requirements, but I thought I’d take a moment to reflect on the general idea of a software metric and how it relates to the mothers of all metrics: physical ones. Because, you know, I used to be a scientist. Proof: the lab coat in the picture.

The theory of measurement is at the center of all experimental physics. This comes from the realization that any observation of the natural world is ultimately indirect.

For example, when you look at a red ball, you don’t directly perceive it. Rather, photons hit it, some of them are absorbed by the surface of the ball (violet, blue, green and yellow ones, but not red ones) and some of them bounce back (the red ones, if you’ve been following). Those red photons that bounced back then hit your eyes, where a lens bends their paths so that all the photons that came from a specific point on the ball converge to roughly the same spot on your retina. Then, the photoreceptor cells on the retina transform the light signal into electric impulses in your optic nerve, which conveys all that information to your brain, and then, only then, the complex mechanisms of consciousness give you the wonderful illusion of seeing a red ball in front of your eyes.

The brain reconstructs a model of the universe, but what it really ever perceives is a pattern of electric impulses. Everything in between is a rather elaborate Rube Goldberg contraption that can be trusted most of the time but that is actually rather easy to fool. That it can be fooled at all is a simple consequence of the fact that what you observe is an indirect and partial measure of reality rather than reality itself.

When we measure anything in physics, we build our own devices that transform objective reality into perceivable quantities. For example, when physicists say they have “seen” a planet around a faraway sun, they don’t (always) mean that they put their eyes on the smaller end of a telescope and perceived the shape of that planet with their own eyes, like I saw the red ball of the previous paragraph. No, what they saw is a chart on a computer monitor showing the very small (1.5%) variation of the light coming from the star as the planet transits in front of it. All this really tells them is that something dark that covers about 1.5% of the area of the star passed in front of it. By repeating the observation, they can see that it happens every 3.5 days. That’s it. No image, just measures of the amount of light coming out of a powerful telescope aimed at a star, against time.

But just from that minimal data and our centuries-old knowledge of celestial mechanics, researchers were able to deduce that a planet 1.27 times the size of Jupiter, with 0.63 times its mass and a surface gravity about the same as Earth’s, was orbiting that star. That’s an impressively precise description of a big ball of gas that is 150 light-years away (that’s 1.4 million billion kilometers in case you’re wondering, or 880 thousand billion miles if you insist on using an archaic unit system).
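Out of curiosity, those quoted numbers can be sanity-checked with a bit of arithmetic. This is my own back-of-the-envelope calculation, assuming a roughly Sun-like star; none of these constants come from the post:

```javascript
// A transit that blocks 1.5% of the star's light implies
// (Rplanet / Rstar)^2 = 0.015.
const SUN_RADIUS_KM = 696000;      // assumed roughly Sun-like star
const JUPITER_RADIUS_KM = 69911;
const JUPITER_GRAVITY = 24.79;     // m/s^2 at the 1-bar level

const ratio = Math.sqrt(0.015);                              // ~0.122
const planetRadiusKm = ratio * SUN_RADIUS_KM;                // ~85,000 km
const radiusInJupiters = planetRadiusKm / JUPITER_RADIUS_KM; // ~1.22

// Surface gravity scales as mass / radius^2; with the quoted
// 0.63 Jupiter masses and 1.27 Jupiter radii:
const gravity = JUPITER_GRAVITY * 0.63 / (1.27 * 1.27);      // ~9.7 m/s^2

console.log(radiusInJupiters.toFixed(2), gravity.toFixed(1));
```

The radius comes out close to the quoted 1.27 Jupiter radii (the difference is the actual star not being exactly Sun-sized), and the gravity lands right around Earth’s 9.8 m/s², which is reassuring.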

The Rube Goldberg device that enables us to see that big ball of gas from so far away is a mix of optics, electronics and knowledge, the latter being the really awesome part. Science is awesome. The bottom line of all this is that although it seems less “direct” than seeing the red ball with our own eyes, it deserves just as much to be described as “seeing”. The only difference is that we’re seeing not so much with our eyes as with our brains. How awesome is that?

Where was I?

Yes, you might be wondering what this has to do with software. Well, all that long digression was to show how little data is needed to infer a lot about the object you’re observing. So, code coverage? Sure, it’s just a number, but combined with a few other numbers, it can help paint a reliable picture of software quality.

Another point I’d like to make is that a lot of resistance to software metrics comes from the illusion that we know a lot more about our own code than any tool can tell us. But as anyone who has ever tried to read code he wrote only five years ago knows, that is delusional. What you know about your code is a combination of what you remember and what you intended to write, neither of which reliably represents what your code is actually doing. Tools give us a much more objective picture. Sure, it’s a narrow projection of the code and it doesn’t capture its full reality, but that is exactly the point of a measure: to project a complex object along a scale of our choosing. What set of projections you choose to make is what determines their relevance.

The conclusion of all this is that we should assume that our code is an unknown object that needs to be measured, like that big ball of gas 150 light years away, if we want to get an objective idea of its quality without having our judgment clouded by our own assumptions.

And probably the best tool you can use to do exactly this by the way is NDepend by Patrick Smacchia.
