“Try to avoid foreach/for loops”–Over my Dead Body!

Before I get into this a little bit, know that my comments are a direct response to this post: http://www.codekicks.com/2011/01/try-to-avoid-foreachfor-loops.html. The reason I’m writing this post is that the author makes invalid comparisons across the C# looping mechanisms to QUICKLY come to the conclusion that do-while and while are faster than for/foreach. I’ve done my own tests, modeled after the author’s, with significantly different results. As a computer scientist, I want to actually analyze the data before coming to a conclusion about the runtime behavior.

 

What He Got Wrong

The first test he’s running is a foreach loop. This is all fine and good… it’s something we probably do on a daily basis.

 

[screenshot: the author’s foreach loop test]

What is the foreach loop actually doing? The compiler translates this into a local variable for the enumerator and a boolean. In IL, one of my runs produced a binary with the IEnumerator being named CS$5$0000. The compiler then translates the body and the behavior of the foreach into a bunch of labels in IL. First MoveNext is called, then the body of the loop (now a label) is jumped to and executed. Then MoveNext is called again and the loop starts over.
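To make that concrete, here’s a rough approximation of what a foreach over a List&lt;int&gt; is lowered to. This is a sketch, not actual compiler output; the real generated locals have names like CS$5$0000 and the whole thing becomes labels and branches in IL.

using System.Collections.Generic;

class ForeachLowering
{
    // roughly what "foreach (int n in numbers) sum += n;" becomes after lowering
    static long SumLowered(List<int> numbers)
    {
        long sum = 0;
        List<int>.Enumerator enumerator = numbers.GetEnumerator();
        try
        {
            // MoveNext runs first; the body (a label in IL) executes only if it returns true
            while (enumerator.MoveNext())
            {
                int n = enumerator.Current;
                sum += n;
            }
        }
        finally
        {
            enumerator.Dispose();
        }
        return sum;
    }
}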

The next thing the author moves to is the for loop. Here’s his code:

[screenshot: the author’s for loop test]

What this is actually doing is COMPLETELY different from the semantics of the foreach test. What he’s doing is just running a for loop over a separate integer. He’s not running through the iterator. This is the thing that invalidates his tests. He has WAY too many experimental variables, leading to invalid comparisons. The same fallacy occurs in his while and do-while tests:

[screenshots: the author’s while and do-while tests]

He’s not actually iterating through the list of numbers. This is a huge issue. You can’t possibly make valid comparisons when you completely change your test scheme. This is just bad test design.

 

The Right Idea

The idea of testing the different looping mechanisms isn’t a bad one. However, his test design doesn’t take into account a few things that are really significant to the measured times.

First of all, GC is a huge issue. He’s running all 4 tests in succession on the same thread. The GC in .NET, unlike the looping, can kick in whenever it damn well pleases to clean up garbage. When you’re dealing with 10 million items, there’s bound to be at least 1 GC run in there somewhere, which can significantly mess with the stopwatch times. The Stopwatch is a really dumb object… it just keeps track of CPU ticks. It has NO idea what the GC is up to.

A better design would be to simply run 1 trial at a time. That way, the order of the tests doesn’t influence the GC.
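As an illustration of what I mean (this is a sketch, not the original harness), you can at least force a full collection before the Stopwatch starts so that garbage from earlier work is less likely to be collected inside the measurement window:

using System;
using System.Diagnostics;

static class Benchmark
{
    // time one trial in isolation, with the heap stabilized beforehand
    public static TimeSpan TimeOne(Action trial)
    {
        GC.Collect();
        GC.WaitForPendingFinalizers();
        GC.Collect();

        var watch = Stopwatch.StartNew();
        trial();
        watch.Stop();
        return watch.Elapsed;
    }
}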

 

Wrong Data Structure

The author probably doesn’t know how lists are implemented (as I suspect most .NET devs don’t). If you’ve ever taken a class in C++ or C, you know that you have to manually resize arrays yourself if you want expandable array (“list”) behavior. The author chose to use a List to add 10 million items. The List, when constructed, doesn’t know how many items you’ll be adding to the data structure. It chooses an arbitrary size, allocates an array of that size, and expands if you add more items than the size it picked. But when it expands, it has to allocate a bigger array and do a complete copy of the existing items EVERY SINGLE TIME it runs out of capacity (see the internals of Add/Insert on List&lt;T&gt;). Since you already know the size of the data, just use an array to begin with!

[screenshot: the test data set up with an array instead of a List]
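To make the difference concrete, here’s a small illustrative sketch (not the original test code) of the three options:

using System.Collections.Generic;

class Allocation
{
    const int Count = 10000000;

    static void Demo()
    {
        // array: size known up front, one allocation, no copying ever
        int[] array = new int[Count];
        for (int i = 0; i < Count; i++) array[i] = i;

        // List with an explicit capacity: the backing array is allocated once
        var sized = new List<int>(Count);
        for (int i = 0; i < Count; i++) sized.Add(i);

        // default List: starts small, and every time capacity runs out it
        // allocates a bigger backing array and copies all existing items into it
        var unsized = new List<int>();
        for (int i = 0; i < Count; i++) unsized.Add(i);
    }
}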

Let’s Test This For Real

I’m done ranting about a poorly designed perf test. It’s time to run my own. The proof is in the pudding.

First I’m going to run identical tests in succession and let GC do whatever it wants. As you can see from my code, I’m using iterators EVERY TIME for consistency.

[screenshot: my test code, iterating with an enumerator in every loop construct]
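The key idea, sketched below (illustrative, not my exact test code), is that the while and do-while variants walk the same enumerator that foreach uses, rather than counting over a separate integer:

using System.Collections.Generic;

class EnumeratorLoops
{
    // while variant: same enumerator-based iteration that foreach performs
    static long SumWithWhile(List<int> numbers)
    {
        long sum = 0;
        using (IEnumerator<int> e = numbers.GetEnumerator())
        {
            while (e.MoveNext())
                sum += e.Current;
        }
        return sum;
    }

    // do-while variant: still walks the enumerator, just checks the condition at the end
    static long SumWithDoWhile(List<int> numbers)
    {
        long sum = 0;
        using (IEnumerator<int> e = numbers.GetEnumerator())
        {
            if (!e.MoveNext())
                return sum; // empty list: nothing to iterate

            do
            {
                sum += e.Current;
            } while (e.MoveNext());
        }
        return sum;
    }
}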

Surprising results: they’re all about the same.

[screenshot: timing results for the first run]

All except for that do-while… anyone wanna tell me why? My guess is that GC took over at some point inside the stopwatch execution. Let’s run it again.

[screenshot: timing results for the second run]

Hmmmmmmmmmm interesting! While and do-while are actually SIGNIFICANTLY SLOWER than for and foreach…. what an interesting turn of events. Let’s run them individually now.

 

[screenshots: timing results for each loop construct run individually]

All are around the same time, interestingly. Honestly, I think this is a testament to the great work the C# compiler team does. Here’s the deal: at the end of the day, the body of any looping construct is a label generated by the compiler and then JIT-compiled by the CLR. These constructs shouldn’t have completely different running times.

 

Conclusion

This post started off as a complete rant about an incorrect test and ended with a demonstration that there’s a lot more going on in test design than the original author realized. Let me just say that this post isn’t personal. I don’t even know the original author. I just happened on his post and about flew out of my chair. Just think about how this sounds: different looping constructs have different runtime behavior. That seems completely ridiculous to me. If it were true, the C# compiler writers would certainly have converted all loops to the one that performed better… it wouldn’t be hard to do at all. However, this isn’t the case, since all loops are turned into labels in IL.

So the moral of the story is that you can keep writing your for loops and foreach loops; there’s no performance gain either way.

 

UPDATE: The author changed the title from "Try to Avoid Foreach/For loops" to "An observation on .NET loops". I copied and pasted the original title for my blog post... so that's where the title comes from :)


ASP.NET JavaScript Routing for ASP.NET MVC–Constraints

If you haven’t had a look at my previous post about ASP.NET routing, go ahead and check it out before you read this post: http://weblogs.asp.net/zowens/archive/2010/12/20/asp-net-mvc-javascript-routing.aspx

And the code is here: https://github.com/zowens/ASP.NET-MVC-JavaScript-Routing

 

Anyways, this post is about routing constraints. A routing constraint is essentially a way for the routing engine to filter out route patterns based on the data in the URL. For example, if I have a route where all the parameters are required, I could use a constraint on the required parameters to say that each parameter is non-empty. Here’s what the constraint would look like:

[code screenshot: the constraint class]
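The screenshot isn’t reproduced here, but a constraint along those lines might look like this (a sketch, not necessarily the exact class from the post):

using System.Web;
using System.Web.Routing;

// sketch of a "not empty" routing constraint (the JsConstraint attribute shown later is omitted here)
public class NotEmpty : IRouteConstraint
{
    public bool Match(HttpContextBase httpContext, Route route, string parameterName,
                      RouteValueDictionary values, RouteDirection routeDirection)
    {
        object value;
        if (!values.TryGetValue(parameterName, out value) || value == null)
            return false;

        // match only when the parameter has a non-empty string representation
        return !string.IsNullOrEmpty(value.ToString());
    }
}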

Notice that this is a class that implements IRouteConstraint, an interface provided by System.Web.Routing. The Match method returns true if the value is a match (and can be further processed by the routing rules) or false if it does not match (in which case matching continues further along the route collection).

Because routing constraints are so essential to the route matching process, it was important that they be part of my JavaScript routing engine. But the problem is that we need to somehow represent the constraint in JavaScript. I made a design decision early on that you MUST put the constraint into JavaScript in order to match a route. I didn’t want to have server interaction for the URL generation, like I’ve seen in so many applications. While that approach is easy to implement, it causes maintenance issues in my opinion.

So the way constraints work in JavaScript is that the constraint is set on the route manager as an object type definition. When a route is created, a new instance of the constraint is created with the specific parameter. In its current form, the constraint function MUST return a function that takes the route data and returns true or false. You will see the NotEmpty constraint in a bit.

Another piece to the puzzle is that you can have the JavaScript exist as a string in your application that is pulled in when the routing JavaScript code is generated. There is a simple interface, IJavaScriptAddition, that I have added that will be used to output custom JavaScript.

Let’s put it all together. Here is the NotEmpty constraint.

[code screenshot: the NotEmpty constraint and its JavaScript definition]

There are a few things at work here. The constraint is called “notEmpty” in JavaScript. When you add the constraint to a parameter in your C# code, the route manager generator will look for the JsConstraint attribute to determine the constraint type name, falling back to the class name. For example, if I didn’t apply the “JsConstraint” attribute, the constraint would be called “NotEmpty”.

The JavaScript code essentially adds a function to the “constraintTypeDefs” object on the “notEmpty” property (this is how constraints are added to routes). The function returns another function that will be invoked with routing data.

Here’s how you would use the NotEmpty constraint in C# and it will work with the JavaScript routing generator.

[code screenshot: mapping a route with the NotEmpty constraint]
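For illustration, this is how a constraint is attached using the standard RouteCollection.MapRoute API; the post registers routes through the JavaScript router’s own route source, so the exact call in the screenshot may differ, and the route, controller and action names below are made up:

using System.Web.Mvc;
using System.Web.Routing;

public static class RouteConfig
{
    public static void RegisterRoutes(RouteCollection routes)
    {
        // hypothetical route; the id parameter must be non-empty for the route to match
        routes.MapRoute(
            "UserDetails",
            "users/{id}",
            new { controller = "Users", action = "Details" },
            new { id = new NotEmpty() });
    }
}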

The only catch to using route constraints currently is that the following is not supported:

[code screenshot: the unsupported constraint usage]

The constraint will work in C# but is not supported by my JavaScript routing engine. (I take pull requests so if you’d like this… go ahead and implement it).

 

I just wanted to take this post to explain a little bit about the background on constraints. I am looking at expanding the current functionality, but for now this is a good start.

Thanks for all the support with the JavaScript router. Keep the feedback coming!

ASP.NET MVC JavaScript Routing

Have you ever done this sort of thing in your ASP.NET MVC view?

[code screenshot: inline JavaScript in a view building a URL with the UrlHelper]

The weird thing about this isn’t the alert function, it’s the code block containing the Url formation using the ASP.NET MVC UrlHelper. The terrible thing about this experience is the obvious lack of IntelliSense and this ugly inline JavaScript code. Inline JavaScript isn’t portable to other pages beyond the current page of execution. It is generally considered bad practice to use inline JavaScript in your public-facing pages. How ludicrous would it be to copy and paste the entire jQuery code base into your pages…? Not something you’d ever consider doing.

The problem is that your URLs have to be generated by ASP.NET at runtime and really can’t be copied to your JavaScript code without some trickery.

How about this?

[code screenshot: JavaScript with a hard-coded URL]

Does the hard-coded URL bother you? It really bothers me. The typical solution to this whole routing-in-JavaScript issue is to just hard-code your URLs into your JavaScript files and call it done. But what if your URLs change? You now have to go and track down the places in JavaScript and manually replace them. What if you get the pattern wrong? Do you have tests around it? This isn’t something you should have to worry about.

 

The Solution To Our Problems

The solution is to port routing over to JavaScript. Does that sound daunting to you? It’s actually not very hard, but I decided to create my own generator that will do all the work for you.

What I have created is a very basic port of the route formation feature of ASP.NET routing. It will generate the formatted URLs based on your routing patterns. Here’s how you’d do this:

[code screenshot: building a URL in JavaScript with the route manager]

Does that feel familiar? It looks a lot like something you’d do inside of your ASP.NET MVC views… but this is inside of a JavaScript file… just a plain ol’ .js file.  Your first question might be why do you have to have that “.toUrl()” thing. The reason is that I wanted to make POST and GET requests dead simple. Here’s how you’d do a POST request (and the same would work with a GET request):

[code screenshot: making a POST request through the generated route]

The first parameter is extra data passed to the post request and the second parameter is a function that handles the success of the POST request. If you’re familiar with jQuery’s Ajax goodness, you’ll know how to use it. (if not, check out http://api.jquery.com/jQuery.Post/ and the parameters are essentially the same).

But we still haven’t gotten rid of the magic strings. We still have controller names and action names represented as strings. This is going to blow your mind…

[code screenshot: strongly-typed controller and action references in JavaScript]

If you’ve seen T4MVC, this will look familiar. We’re essentially doing the same sort of thing with my JavaScript router, but we’re porting the concept to JavaScript. The good news is that parameters to the controllers are directly reflected in the action function, just like T4MVC.

And the even better news… IntelliSense is easily transferred to the JavaScript version if you’re using Visual Studio as your JavaScript editor.

[screenshots: IntelliSense on the generated JavaScript routes]

The additional data parameter gives you the ability to pass extra routing data to the URL formatter.

 

About the Magic

You may be wondering how this all works.

It’s actually quite simple. I’ve built a simple jQuery plugin (called routeManager) that hangs off the main jQuery namespace and routes all the URLs. Every time your solution builds, a routing file will be generated with this plugin, all your route and controller definitions, and your documentation. Then, by the power of Visual Studio, you get some really slick IntelliSense that is hard to live without.

But there are a few steps you have to take before this whole thing is going to work.

First and foremost, you need a reference to the JsRouting.Core.dll to your projects containing controllers or routes.

Second, you have to specify your routes in a bit of a non-standard way. See, we can’t just pull routes out of Application_Start in your Global.asax. We force you to build a route source like this:

[code screenshot: a route source definition]

The way we determine the routes is by pulling in all RouteSources and generating routes based upon the mapped routes.

There are various reasons why we can’t use RouteCollection (different post for another day)… but in this case, you get the same route mapping experience. Converting the RouteSource to a RouteCollection is trivial (there’s an extension method for that).

The next thing you have to do is generate a documentation XML file. This is done by going to the project settings, going to the Build tab and checking the XML documentation file checkbox. (This isn’t required, but it’s nice to have.)

The final thing you need to do is hook up the generation mechanism. Pop open your project file and look for the AfterBuild step. Now change the build step task to look like this:

[screenshot: the modified AfterBuild task in the project file]

The “PathToOutputExe” is the path to the JsRouting.Output.exe file. This will change based on where you put the EXE. The “PathToOutputJs” is the path to the output JavaScript file. The “DirectoryOfAssemblies” is a path to the directory containing controller and routing DLLs. The JsRouting.Output.exe executable pulls in all these assemblies and scans them for controllers and route sources.

 

Now that wasn’t too bad, was it :)

 

The State of the Project

This is definitely not complete… I have a lot of plans for this little project of mine. For starters, I need to look at the generation mechanism. Either I will be creating a utility that will do the project file manipulation or I will go a different direction. I’d like some feedback on this if you feel partial either way.

Another thing I don’t support currently is areas. While this wouldn’t be too hard to support, I just don’t use areas and I wanted something up quickly (this is, after all, for a current project of mine). I’ll be adding support shortly.

There are a few things that I haven’t covered in this post that I will most certainly be covering in another post, such as routing constraints and how these will be translated to JavaScript.

I decided to open source this whole thing, since it’s a nice little utility I think others should really be using.

Currently we’re using ASP.NET MVC 2, but it should work with MVC 3 as well. I’ll upgrade it as soon as MVC 3 is released. Along those same lines, I’m investigating how this could be put on the NuGet feed.

Show me the Bits!

OK, OK! The code is posted on my GitHub account. Go nuts. Tell me what you think. Tell me what you want. Tell me that you hate it. All feedback is welcome!

https://github.com/zowens/ASP.NET-MVC-JavaScript-Routing



Multi-tenant ASP.NET MVC – Inversion of Control

Part I – Introduction

Part II – Foundation

Part III – Controllers

Part IV – Views

Source Code

One of the most important aspects of my Multi-tenant ASP.NET MVC implementation is Inversion of Control containers. These containers are essential to wiring up and finding the proper controllers. In my last iteration of code that followed the Views post, my setup for the container was a simple association of types added to the PluginGraph. I really do not want to have to do this manually. Therefore, I want to utilize StructureMap the best I can. This post will show you how I wire up controllers by convention rather than by configuration.

 

The Convention

The main convention I will be targeting is this: the controller that will be wired must have the same name as the root (or a unique name) and must inherit from a controller of the same name or “Controller”. For example, this first class would be in the “Host” project (for this example, the controller is “AccountController”). If a tenant project DID NOT override the AccountController, the host’s AccountController would be added to the container. If a tenant DID override AccountController, the AccountController in the tenant would be used as the AccountController. Here’s the example in code:

 

// AccountController in the Host project
namespace MyApp.HostWeb
{
    public class AccountController : Controller
    {
        ....
    }
}
// AccountController in the Tenant project
namespace MyApp.FirstTenant
{
    public class AccountController : MyApp.HostWeb.AccountController
    {
        ....
    }
}

The obvious downside to this implementation is that the fully qualified controller name must be used after the “:” to denote inheritance. You could implement this a bit differently, but I’ve chosen this way for simplicity.

 

Wiring Up Controllers

The question that arises is how to inject this convention into container configuration using StructureMap. The way I’ve chosen to go is by using assembly scanner conventions. An assembly scanner will blindly go through all the types in a specified assembly and run the types against a convention. There are some conventions already in place, but we can also write our own conventions.

In my implementation, the host and tenant project assemblies will both be scanned for controllers. Here’s how the assembly scanning works inside the configuration of a container:

 

var container = new Container();
container.Configure(config =>
{
    config.Scan(scanner =>
    {
        // add the conventions
        scanner.Convention<ControllerConvention>();
        // specify assemblies to scan (just examples here, simpler in practice)
        scanner.TheCallingAssembly();
        scanner.Assembly(typeof(Something).Assembly);
        scanner.Assembly("MyAssembly");
        scanner.AssemblyContainingType<Foo>();
    });
});

 

Writing the Convention

The next step is to write ControllerConvention so that the controllers will be added correctly to the container. I’m using a little black magic to intercept types and initialize them properly using an interceptor. However, this is frankly the only way I’ve found that works. But anyways, here’s the code.

public class ControllerConvention : IRegistrationConvention
{
    public void Process(Type type, Registry registry)
    {
        if (registry == null || !IsValidController(type))
            return;
        var baseClass = type.BaseType;
        if (!IsValidController(baseClass) || !baseClass.Name.Equals(type.Name))
            registry.AddType(typeof(IController), type);
        else
        {
            registry.AddType(typeof(IController), baseClass);
            registry.RegisterInterceptor(new TypeReplacementInterceptor(baseClass, type));
        }
    }
    private static bool IsValidController(Type type)
    {
        return type != null && !type.IsAbstract && typeof(IController).IsAssignableFrom(type) &&
               type.Name.EndsWith("Controller") && type.IsPublic;
    }
    private class TypeReplacementInterceptor : TypeInterceptor
    {
        private readonly Type typeToReplace;
        private readonly Type replacementType;
        public TypeReplacementInterceptor(Type typeToReplace, Type replacementType)
        {
            this.typeToReplace = typeToReplace;
            this.replacementType = replacementType;
        }
        public bool MatchesType(Type type)
        {
            return type != null && type.Equals(this.typeToReplace);
        }
        public object Process(object target, IContext context)
        {
            // Sanity check: If the context is null, we can't do anything about it!
            if (context == null)
                return target;
            return context.GetInstance(this.replacementType);
        }
    }
}

Going back to the AccountController example, suppose that the controller is overridden by the tenant. The way the convention will look at it is that the names match and the base type is a controller. Therefore, the BASE TYPE is added to the PluginGraph and will be INTERCEPTED once requested by the TypeInterceptor.

 

Another Way to Implement

Another way to implement this behavior is through a type scanner (ITypeScanner). The process is the same as the convention, but you are given a PluginGraph directly rather than using a registry. There is a subtle difference in the types, but the idea is the same. In fact, you can copy and paste the body of the “Process” code right into a type scanner and only have to change “registry.RegisterInterceptor(…)” to “graph.InterceptorLibrary.AddInterceptor(…);”.

 

Caveat

There’s a bit of wonkiness in the type interceptor. If you look at the “Process” method of the interceptor, you’ll notice you’re given a target. If you’re overriding a controller, the target will be the base controller. For example, going back to the AccountController example, target will be an instance of the AccountController from the Host. While this is not so problematic, there is still no need for this instance, since it will be intercepted and not used. This is the price you pay for having such a structure in StructureMap. If someone has a better way than I’ve described, I’m all ears :)

 

Up Next

I have 3 things on my list that I need to talk about. The first is routing with multi-tenancy, the second is content resolution and the third is wiring this all up in IIS7. This might be the order of the posts, or it might not be. We will see. But until next time, keep the questions and discussions going. Email me, @-reply or DM me on Twitter. (Source code link)

Multi-tenant ASP.NET MVC - Views

Part I – Introduction

Part II – Foundation

Part III – Controllers

 

So far we have covered the basic premise of tenants and how they will be delegated. Now comes a big issue with multi-tenancy: the views. In some applications, you will not have to override views for each tenant. However, one of my requirements is to add extra views (and controller actions) along with overriding views from the core structure. This presents a bit of a problem in locating views for each tenant request. I have chosen quite an opinionated approach for the present, but I will be coming back to the “views” issue in a later post.

What’s the deal?

The path I’ve chosen is to use precompiled Spark views. I really love Spark View Engine and was planning on using it in my project anyway. However, I ran across a really neat aspect of the source when I was having a look under the hood. There’s an easy way to hook in embedded views from your project. There are solutions that provide this, but they implement a special Virtual Path Provider. While I think this is a great solution, I would rather just have Spark take care of the view resolution. The magic actually happens during the compilation of the views into a bin-deployable DLL. After the views are compiled, they are simply pulled out of the views DLL. Each tenant has its own views DLL that just has “.Views” appended after the assembly name as a convention.

The list of reasons for this approach is quite long. The primary motivation is performance. I’ve had quite a few performance issues in the past and I would like to increase my application’s performance in any way that I can. My customized build of Spark removes insignificant whitespace from the HTML output, so I can save some bandwidth and load time without having to deal with whitespace removal at runtime.

 

How to setup Tenants for the Host

In the source, I’ve provided a single tenant as a sample (Sample1). This will serve as a template for subsequent tenants in your application. The first step is to add a “PostBuildStep” installer into the project. I’ve defined one in the source that will eventually change as we focus more on the construction of dependency containers. The next step is to tell the project to run the installer and copy the DLL output to a folder that the host will pick up as a tenant. Here’s the code that will achieve it (this belongs in the Post-build event command line field in the Build Events tab of the project settings):

%systemroot%\Microsoft.NET\Framework\v4.0.30319\installutil "$(TargetPath)"
copy /Y "$(TargetDir)$(TargetName)*.dll" "$(SolutionDir)Web\Tenants\"
copy /Y "$(TargetDir)$(TargetName)*.pdb" "$(SolutionDir)Web\Tenants\"

The DLLs with a name starting with the target assembly name will be copied to the “Tenants” folder in the web project. This means something like MultiTenancy.Tenants.Sample1.dll and MultiTenancy.Tenants.Sample1.Views.dll will both be copied along with the debug symbols. This is probably the simplest way to go about this, but it is a tad inflexible. For example, what if you have dependencies? The preferred method would probably be to use IL Merge to merge your dependencies with your target DLL. This would have to be added in the build events. Another way to achieve that would be to simply bypass Visual Studio events and use MSBuild.

 

I also got a question about how I was setting up the controller factory. Here are the basics of how I’m setting up tenants inside the host (Global.asax):

protected void Application_Start()
{
    RegisterRoutes(RouteTable.Routes);
    // create a container just to pull in tenants
    var topContainer = new Container();
    topContainer.Configure(config =>
    {
        config.Scan(scanner =>
        {
            scanner.AssembliesFromPath(Path.Combine(Server.MapPath("~/"), "Tenants"));
            scanner.AddAllTypesOf<IApplicationTenant>();
        });
    });
    // create selectors
    var tenantSelector = new DefaultTenantSelector(topContainer.GetAllInstances<IApplicationTenant>());
    var containerSelector = new TenantContainerResolver(tenantSelector);
    
    // clear view engines, we don't want anything other than spark
    ViewEngines.Engines.Clear();
    // set view engine
    ViewEngines.Engines.Add(new TenantViewEngine(tenantSelector));
    // set controller factory
    ControllerBuilder.Current.SetControllerFactory(new ContainerControllerFactory(containerSelector));
}

The code to set up the tenants isn’t actually that hard. I’m utilizing assembly scanners in StructureMap as a simple way to pull in DLLs that are not in the AppDomain. Remember that there is a dependency on the host in the tenants, and a tenant cannot simply be referenced by a host because of circular dependencies.

 

Tenant View Engine

TenantViewEngine is a simple delegator to the tenant’s specified view engine. You might have noticed that a tenant has to define a view engine.

public interface IApplicationTenant
{
    ....
    
    IViewEngine ViewEngine { get; }
}

The trick comes in specifying the view engine on the tenant side. Here’s some of the code that will pull views from the DLL.

protected virtual IViewEngine DetermineViewEngine()
{
    var factory = new SparkViewFactory();
    var file = GetType().Assembly.CodeBase.Without("file:///").Replace(".dll", ".Views.dll").Replace('/', '\\');
    var assembly = Assembly.LoadFile(file);
    factory.Engine.LoadBatchCompilation(assembly);
    return factory;
}

This code resides in an abstract Tenant where the fields are set up in the constructor. This method (inside the abstract class) will load the Views assembly and load the compilation into Spark’s “Descriptors” that will be used to determine views. There is some trickery in determining the file location… but it works just fine.

 

Up Next

There are just a few big things left, such as configuring controllers in StructureMap with a convention instead of specifying types directly during container construction, and content resolution. I will also try to find a way to use the Web Forms View Engine in the multi-tenant way we achieved with the Spark View Engine, without using a virtual path provider. I will probably not use the Web Forms View Engine personally, but I’m sure some people would prefer WebForms because of the maturity of the engine. As always, I love to take questions by email or on Twitter. Suggestions are always welcome as well! (Oh, and here’s another link to the source code.)


Multi-tenant ASP.NET MVC – Controllers

Part I – Introduction

Part II – Foundation

 

The time has come to talk about controllers in a multi-tenant ASP.NET MVC architecture. This is actually the most critical design decision you will make when dealing with multi-tenancy with MVC. In my design, I took into account the design goals I mentioned in the introduction about inversion of control and what a tenant is to my design. Be aware that this is only one way to achieve multi-tenant controllers.

 

The Premise

MvcEx (which is a sample written by Rob Ashton) utilizes dynamic controllers. Essentially a controller is “dynamic” in that multiple action results can be placed in different “controllers” with the same name. This approach is a bit too complicated for my design. I wanted to stick with plain old inheritance when dealing with controllers. The basic premise of my controller design is that my main host defines a set of universal controllers. It is the responsibility of the tenant to decide if the tenant would like to utilize these core controllers. This can be done either by straight usage of the controller or inheritance for extension of the functionality defined by the controller. The controller is resolved by a StructureMap container that is attached to the tenant, as discussed in Part II.

 

Controller Resolution

I have been thinking about two different ways to resolve controllers with StructureMap. One way is to use named instances. This is a really easy way to simply pull the controller right out of the container without a lot of fuss. I ultimately chose not to use this approach. The reason for this decision is to ensure that the controllers are named properly. If a controller has a named instance that differs from the controller type, then the resolution has a significant disconnect and there are no guarantees. The final approach, the one utilized by the sample, is to simply pull all controller types and correlate each type with a controller name. This has a bit of an application-start performance disadvantage, but it is significantly more maintainable. For example, if I wanted to go back and add a “ControllerName” attribute, I would just have to change the ControllerFactory to suit my needs.

 

The Code

The controller factory that I have built is actually pretty simple. That’s really all we need. The most significant method is the GetControllersFor method. This method uses the container’s Model to determine all the concrete types registered for IController.

The thing you might notice is that this doesn’t depend on tenants, but rather containers. You could easily use this controller factory for an application that doesn’t utilize multi-tenancy.

public class ContainerControllerFactory : IControllerFactory
{
    private readonly ThreadSafeDictionary<IContainer, IDictionary<string, Type>> typeCache;

    public ContainerControllerFactory(IContainerResolver resolver)
    {
        Ensure.Argument.NotNull(resolver, "resolver");
        this.ContainerResolver = resolver;
        this.typeCache = new ThreadSafeDictionary<IContainer, IDictionary<string, Type>>();
    }

    public IContainerResolver ContainerResolver { get; private set; }

    public virtual IController CreateController(RequestContext requestContext, string controllerName)
    {
        var controllerType = this.GetControllerType(requestContext, controllerName);

        if (controllerType == null)
            return null;

        var controller = this.ContainerResolver.Resolve(requestContext).GetInstance(controllerType) as IController;

        // ensure the action invoker is a ContainerControllerActionInvoker
        if (controller != null && controller is Controller && !((controller as Controller).ActionInvoker is ContainerControllerActionInvoker))
            (controller as Controller).ActionInvoker = new ContainerControllerActionInvoker(this.ContainerResolver);

        return controller;
    }

    public void ReleaseController(IController controller)
    {
        if (controller != null && controller is IDisposable)
            ((IDisposable)controller).Dispose();
    }

    internal static IEnumerable<Type> GetControllersFor(IContainer container)
    {
        Ensure.Argument.NotNull(container);
        return container.Model.InstancesOf<IController>().Select(x => x.ConcreteType).Distinct();
    }

    protected virtual Type GetControllerType(RequestContext requestContext, string controllerName)
    {
        Ensure.Argument.NotNull(requestContext, "requestContext");
        Ensure.Argument.NotNullOrEmpty(controllerName, "controllerName");

        var container = this.ContainerResolver.Resolve(requestContext);

        var typeDictionary = this.typeCache.GetOrAdd(container, () => GetControllersFor(container).ToDictionary(x => ControllerFriendlyName(x.Name)));

        Type found = null;
        if (typeDictionary.TryGetValue(ControllerFriendlyName(controllerName), out found))
            return found;
        return null;
    }

    private static string ControllerFriendlyName(string value)
    {
        return (value ?? string.Empty).ToLowerInvariant().Without("controller");
    }
}

One thing to note about my implementation is that we do not use namespaces that can be utilized in the default ASP.NET MVC controller factory. This is something that I don’t use and have no desire to implement and test. The reason I am not using namespaces in this situation is because each tenant has its own namespaces and the routing would not make sense in this case.

 

Because we are using IoC, dependencies are automatically injected into the constructor. For example, a tenant’s container could register its own IRepository implementation while a controller is defined in the “main” project. The IRepository from the tenant would be injected into the main project’s controller. This is quite a useful feature.
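A quick sketch of that idea; IRepository is just the illustrative dependency from the sentence above, and ProductController is a made-up controller in the main project:

using System.Web.Mvc;

// illustrative only: not types from the sample source
public interface IRepository
{
}

public class ProductController : Controller
{
    private readonly IRepository repository;

    // StructureMap resolves IRepository from the tenant's container, so each
    // tenant can supply its own implementation to this shared controller
    public ProductController(IRepository repository)
    {
        this.repository = repository;
    }
}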

 

Again, the source code is on GitHub here.

 

Up Next

Up next is the view resolution. This is a complicated issue, so be prepared. I hope that you have found this series useful. If you have any questions about my implementation so far, send me an email or DM me on Twitter. I have had a lot of great conversations about multi-tenancy so far and I greatly appreciate the feedback!



Multi-tenant ASP.NET – Foundation

Part I – Introduction

 

In my last post, I talked about some of the goals of multi-tenancy in general and some hints about my implementation. Now it is time to put a little meat on the bones. I’m ready to share my ideas about how to implement a multi-tenant system on ASP.NET MVC and IIS7. I’m excited about some of the improvements in both areas in recent years… without some of the advancements in those technologies, I’m sure my code would have been much more complicated. (Here is the link to the source code if you’d like to follow along :) )

 

Modeling the tenant

As I described in my first post on multi-tenancy, I’m working with a multi-tenancy architecture because of my job. At eagleenvision.net, we are contracted by clients to create websites, all with a similar structure. Because of the similarity in implementation, I felt that it was necessary to eliminate our issue of maintaining different codebases for the same application. This was problematic in that some enhancements were applied to some clients and not to others.

Going back to multi-tenancy, a “tenant” to me is simply a client. This client has a few different attributes that are reflected in the model. Here’s the interface I use to describe a tenant.

public interface IApplicationTenant
{
    string ApplicationName { get; }

    IFeatureRegistry EnabledFeatures { get; }

    IEnumerable<string> UrlPaths { get; }

    IContainer DependencyContainer { get; }

    IViewEngine ViewEngine { get; }
}

 

The first thing you might notice is the UrlPaths property. Url paths are root URL paths that are mapped to a tenant. One of my requirements is that multiple bindings can exist for the same tenant. In some applications, this might not be a requirement, in which case you might want to make this a single string.

The next thing you’ll notice is that a tenant has a dependency container. As I alluded to in my previous post, my implementation uses a 1:1 mapping of tenant to container. That is, every tenant has its own container, isolated from the other tenants. This is an important aspect of the tenant and it really sets my implementation apart from the others I have seen. IoC is my friend… I intend to use StructureMap to do a lot of my bidding. This allows for a lot of flexibility and isolation that a multi-tenancy application should have.

 

Features

As you can see from the IApplicationTenant interface, a tenant has a list of features that are enabled by the tenant, called a feature registry. The feature registry is responsible for describing which features are enabled and which features the tenant is allowed to utilize. Here’s some relevant code for features.

 

public interface IFeature
{
    string FeatureName { get; }
}

public interface IComplexFeature : IFeature
{
    IEnumerable<IFeature> SubFeatures { get; }
}

public interface IFeatureRegistry
{
    IEnumerable<IFeature> Features { get; }

    bool IsEnabled(IEnumerable<string> featurePath);
}

 

A feature is a simple string. A complex feature is a feature that has sub features. The feature registry is responsible for the features enabled by the application tenant. Feature path describes the path to the lowest enabled sub feature. For example, if there’s a feature path such as A – B – C, A and B are complex features where B is a sub feature of A and C is a sub feature of B. This enables a lot of power in describing features and sub features. In MvcEx, I believe the concept of Module is similar. The distinction here is that a controller doesn’t have to be a feature, but it can be a feature. Here’s an example of some of the uses of the “Feature” attribute that will check the tenant’s feature registry once a controller has been called for execution.

// explicit feature paths

public class AccountController : Controller
{
    [Feature("Account", "Login")]
    public ActionResult Login()
    {
        return View("Login");
    }

    [Feature("Account", "Logout")]
    public ActionResult Logout()
    {
        return View("Logout");
    }

    [Feature("Account", "Register")]
    public ActionResult Register()
    {
        return View("Register");
    }
}

// implicit feature path from controller name

[Feature]
public class AccountController : Controller
{
    // Account -> Login
    public ActionResult Login()
    {
        return View("Login");
    }

    // Account -> Logout
    public ActionResult Logout()
    {
        return View("Logout");
    }

    // Account -> Register
    public ActionResult Register()
    {
        return View("Register");
    }
}

 

The implementation of this attribute is a bit complicated. Just look at the source for a complete implementation.

I have also included a default implementation of IsEnabled in the Features class of the source code. It’s a pretty simple implementation that is hardcoded for IFeature and IComplexFeature. (This probably isn’t the best design for the API, but it suits my needs. Any suggestions on an API would be wonderful.)
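For a rough idea of what such an implementation can look like (this is a sketch, not the actual Features class from the source), IsEnabled can walk the feature path segment by segment, descending into sub features:

using System;
using System.Collections.Generic;
using System.Linq;

// hypothetical registry built against the IFeature/IComplexFeature/IFeatureRegistry interfaces above
public class SimpleFeatureRegistry : IFeatureRegistry
{
    public SimpleFeatureRegistry(IEnumerable<IFeature> features)
    {
        Features = features;
    }

    public IEnumerable<IFeature> Features { get; private set; }

    public bool IsEnabled(IEnumerable<string> featurePath)
    {
        var current = Features;
        IFeature match = null;

        foreach (var name in featurePath)
        {
            match = current.FirstOrDefault(
                f => string.Equals(f.FeatureName, name, StringComparison.OrdinalIgnoreCase));
            if (match == null)
                return false; // a segment of the path isn't registered for this tenant

            // descend into sub features when the segment is a complex feature
            var complex = match as IComplexFeature;
            current = complex != null ? complex.SubFeatures : Enumerable.Empty<IFeature>();
        }

        return match != null;
    }
}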

 

Tenant Selection

One of the concepts that is essential to the delegation of requests to the various tenants is tenant selection. As you can see from the code below, there’s an interface that handles tenant selection and querying of the tenants (ITenantSelector). The tenant selector is responsible for getting the tenant for the request or an exception can be thrown if a tenant cannot be found. The default tenant selector queries the tenants based upon the root URL (“BaseUrl” in my implementation) and selects the first tenant with that URL. Obviously you can implement your own tenant selector that works off something else, such as a route value, etc. However, this sort of selection process makes sense for my implementation.

 

public interface ITenantSelector
{
    IEnumerable<IApplicationTenant> Tenants { get; }

    IApplicationTenant Select(RequestContext context);
}

public class DefaultTenantSelector : ITenantSelector
{
    public DefaultTenantSelector(IEnumerable<IApplicationTenant> tenants)
    {
        Ensure.Argument.NotNull(tenants, "tenants");
        this.Tenants = tenants;
    }

    public IEnumerable<IApplicationTenant> Tenants { get; private set; }

    public IApplicationTenant Select(RequestContext context)
    {
        Ensure.Argument.NotNull(context, "context");

        string baseurl = context.HttpContext.BaseUrl().TrimEnd('/');

        var valid = from tenant in this.Tenants
                    from path in tenant.UrlPaths
                    where path.Trim().TrimEnd('/').Equals(baseurl, StringComparison.OrdinalIgnoreCase)
                    select tenant;

        if (!valid.Any())
            throw new TenantNotFoundException();
        return valid.First();
    }
}

 

A related concept is container resolution. Because I have a lot of different containers floating around and I don’t want certain things knowing about tenants, there is an abstraction over container resolution. The default resolver simply selects a tenant and hands back that tenant’s container. This code will be available with the source code.
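As a sketch of what that abstraction might look like (the interfaces in the source may be shaped a little differently), the resolver just delegates to the tenant selector and returns the selected tenant’s container:

using System.Web.Routing;
using StructureMap;

public interface IContainerResolver
{
    IContainer Resolve(RequestContext context);
}

public class TenantContainerResolver : IContainerResolver
{
    private readonly ITenantSelector tenantSelector;

    public TenantContainerResolver(ITenantSelector tenantSelector)
    {
        this.tenantSelector = tenantSelector;
    }

    public IContainer Resolve(RequestContext context)
    {
        // pick the tenant for this request and hand back its isolated container
        return tenantSelector.Select(context).DependencyContainer;
    }
}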

 

Where’s the code?!

I have the source code on my GitHub account. Feel free to fork it or use it however you like. Make sure you keep up with my blog series to get the inside scoop. This post goes along with the f8cf7e5408eff7b659f3 commit. Be aware that the code introduced in this post is subject to change with subsequent commits.

 

Up Next

Next up is controller resolution. You’ll see how controllers are resolved and how my implementation uses controllers. You will be surprised at how simple the implementation is. In contrast, MvcEx uses a lot of Reflection.Emit to achieve “Dynamic Controllers”. While this may work for some applications, I think it’s a bit overkill. You will see what I mean in my next post.

Until next time, leave questions in the comments, DM me on Twitter, or send me an email.



Multi-tenant ASP.NET MVC – Introduction

I’ve read a few different blogs that talk about multi-tenancy and how to resolve some of the issues surrounding multi-tenancy. What I’ve come to realize is that these implementations overcomplicate the issues and give only a muddy implementation! I’ve seen some really illogical code out there. I have recently been building a multi-tenancy framework for internal use at eagleenvision.net. Through this process, I’ve realized a few different techniques to make building multi-tenant applications actually quite easy. I will be posting a few different entries over the issue and my personal implementation. In this first post, I will discuss what multi-tenancy means and how my implementation will be structured.

 

So what’s the problem?

Here’s the deal. Multi-tenancy is basically a technique of code reuse for web application code. A multi-tenant application is an application that runs a single instance for multiple clients. Here, the “clients” are different URL bindings on IIS using ASP.NET MVC. The problem with different instances of the, essentially, same application is that you have to spin up different instances of ASP.NET. As the number of running instances of ASP.NET grows, so does the memory footprint of IIS. Stack Exchange shifted its architecture to multi-tenancy in March. As the blog post explains, multi-tenancy saves cost in terms of memory utilization and physical disk storage. If you use the same code base for many applications, multi-tenancy just makes sense. You’ll reduce the amount of work it takes to synchronize the site implementations and you’ll thank your lucky stars later for choosing to use one application for multiple sites. Multi-tenancy allows the freedom of extensibility while relying on some pre-built code.

 

You’d think this would be simple.

I have actually seen a real lack of reference material on the subject in terms of ASP.NET MVC. This is somewhat surprising given the number of users of ASP.NET MVC. However, I will certainly fill the void ;). Implementing a multi-tenant application takes a little thinking. It’s not straightforward because the possibilities of implementation are endless. I have yet to see a great implementation of a multi-tenant MVC application. The only one that comes close to what I have in mind is Rob Ashton’s implementation (all the entries are listed on this page). There’s some really nasty code in there… something I’d really like to avoid. He has also written a library (MvcEx) that attempts to aid multi-tenant development. This code is even worse, in my honest opinion. Once I start seeing Reflection.Emit, I have to assume the worst :)

In all seriousness, if his implementation makes sense to you, use it! It’s a fine implementation that should be given a look. At least look at the code.

I will reference MvcEx going forward as a comparison to my implementation. I will explain why my approach differs from MvcEx and how it is better or worse (hopefully better).

 

Core Goals of my Multi-Tenant Implementation

The first, and foremost, goal is to use Inversion of Control containers to my advantage. As you will see throughout this series, I pass around containers quite frequently and rely on their use heavily. I will be using StructureMap in my implementation. However, you could probably use your favorite IoC tool instead. <RANT> However, please don’t be stupid and abstract your IoC tool. Each IoC is powerful and by abstracting the capabilities, you’re doing yourself a real disservice. Who in the world swaps out IoC tools…? No one!</RANT> (It had to be said.) I will outline some of the goodness of StructureMap as we go along. This is really an invaluable tool in my tool belt and simple to use in my multi-tenant implementation.

The second core goal is to represent a tenant as easily as possible. Just as a dependency container will be a first-class citizen, so will a tenant. This allows us to easily extend and use tenants. This will also allow different ways of “plugging in” tenants into your application. In my implementation, there will be a single dependency container for a single tenant. This will enable isolation of the dependencies of the tenant.

The third goal is to use composition as a means to delegate “core” functions out to the tenant. More on this later.

 

Features

In MvcEx, “Modules” are a code element of the infrastructure. I have simplified this concept and have named it “Features”. A feature is a simple element of an application. Controllers can be specified to have a feature and actions can have “sub features”. Each tenant can select the features it needs, and the other features will be hidden from the tenant’s users. My implementation doesn’t require something to be a feature. A controller can be common to all tenants. For example, (as you will see) I have a “Content” controller that will return the CSS, JavaScript and images for a tenant. This is common logic for all tenants and shouldn’t be hidden or considered a “feature”; Content is a core component.

 

Up next

My next post will be all about the code. I will reveal some of the foundation of the way I do multi-tenancy. I will have posts dedicated to Foundation, Controllers, Views, Caching, Content and how to set up the tenants. Each post will be in-depth about the issues and implementation details, while adhering to my core goals outlined in this post. As always, comment with questions, DM me on Twitter, or send me an email.




Filter IQueryable by String for ASP.NET MVC

In ASP.NET web applications, mostly seen in MVC, it is really nice to have a standard way to filter a query based on a pre-defined set of combinators. It is often annoying to have to test for different Request parameters in a controller action for MVC or on a page for WebForms. In this post I will describe what I’m calling StringToIQueryable, an open source parser library I built in a few days that I’m using on a few projects. Basically you feed a string to the parser and it manipulates an IQueryable according to a set of pre-defined combinators. The syntax is URL-friendly… that was the goal. Hopefully I will show you in this post how useful this tool can be for both consumption and extension.

I figure I might as well tell you where the source is first. I decided to use GitHub for this project… so it is hosted on GitHub here. Download it and have at it!

 

MONADS MONADS MONADS!

If Steve Ballmer were a functional programmer, he’d FOR SURE be screaming MONADS MONADS MONADS! I’ve discussed monads in the past (with a little less understanding than I have now :) ). Basically a parser (like what I’ve implemented) is like the IEnumerable monad in C#. There is a whole list of combinators you can use to define exactly what a parser does to arrive at a result. Intermediate results are stored as pairs of the parsed value and the string left to parse. I have defined a few simple combinators (I will talk about Or, When, and OrWhen specifically) that are useful. I’ve also defined some of the standard LINQ query operators… so you can easily use a parser as a LINQ query. As Erik Meijer says, “everything is a query!”.

 

Parser Internals

The parser has really 2 important internals, a delegate and a storage container for the result and “rest” string. Here’s the code for each.

// generic delegate for getting result/rest
public delegate ParserResult<T> Parse<T>(string input);

// class for holding the result and rest
public sealed class ParserResult<T>
{
    public ParserResult(T parsed, string remaining)
    {
        Parsed = parsed;
        Remaining = remaining;
    }

    public T Parsed { get; private set; }
    public string Remaining { get; private set; }
}

 

You’ll notice I’m using a generic… so you can pretty much parse anything here. The Parse delegate takes an input and returns a parser result. The parser’s job is to make sure everything is parsed correctly. This is in the ParserLib project of my source code.

 

The Combinators

Combinators in functional programming are kind of an awesome thing. The main operators are bind and return. Bind is implemented with Select, SelectMany and Then. These are really the important operators, but Or, When, and OrWhen are what I find particularly useful when dealing with constraints and possibilities in my URL to IQueryable configuration.

Or is a combinator that takes 2 Parse delegates. If the first should return null (indication of failure) then the result of the Parse will be the result of the second delegate.

When is a unary parser (takes only 1 parser) and takes a string predicate. If the predicate is satisfied by the input, then the result will be delegated to the Parse instance. Otherwise the parser fails and null is returned. This is especially useful in my implementation of String to IQueryable because I associate a beginning keyword to a parser.

OrWhen is a hybrid of Or and When that I built to mainly chain my parsers together. OrWhen takes a string predicate (like When) and 2 parsers (like Or). If the result isn’t null for the first parser, then that is the result. Otherwise the string is tested against the predicate and fails if the predicate fails or returns the result of the parse if the string predicate passes.

If you download the code, you will see these defined in the ParserLib project. I’m not going to show you the implementation for the sake of brevity. But I would recommend having a look.
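Still, to give a flavor of them, here is a minimal sketch of how these three combinators could be written against the Parse&lt;T&gt; delegate and ParserResult&lt;T&gt; class shown above. This is not the ParserLib implementation, just an approximation of the described behavior.

using System;

// sketch only; the real combinators live in the ParserLib project
public static class Parser
{
    // When: run the parser only when the input satisfies the predicate; otherwise fail (null)
    public static Parse<T> When<T>(Predicate<string> predicate, Parse<T> parser)
    {
        return input => predicate(input) ? parser(input) : null;
    }

    // Or: try the first parser; if it fails (returns null), fall back to the second
    public static Parse<T> Or<T>(this Parse<T> first, Parse<T> second)
    {
        return input => first(input) ?? second(input);
    }

    // OrWhen: like Or, but the fallback parser is guarded by a predicate
    public static Parse<T> OrWhen<T>(this Parse<T> first, Predicate<string> predicate, Parse<T> second)
    {
        return first.Or(When(predicate, second));
    }
}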

 

The Query Parser

Ah, now to the good stuff. QueryParser is its own separate project in my solution. I have defined what I call “ParserExpressions”, which mostly conform to the LINQ standard query operators. The expressions are take, skip, where, sort, and page (which isn’t a standard query operator). I will discuss why some of the other operators were not implemented in a minute. But you do have the option of extending the parser by creating your own operators. Here is the interface that all the expressions use.

 

public interface IParserExpression<T>
{
    IQueryable<T> Map(IQueryable<T> queriable);
}

 

That’s it. Just one method (plus a ToString… more on that in a bit). The expression has to be able to transform an IQueryable using the Map method. How this method works is up to the expression. For example, the take expression transforms the IQueryable by calling the Take method and passing an integer (which is passed in the constructor) to generate a new IQueryable. You will see how this is useful in a minute.
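For example, a take expression might look roughly like this (a sketch; the actual class in the QueryParser project may be named and structured differently):

using System.Linq;

// hypothetical take expression built against the IParserExpression<T> interface above
public class TakeExpression<T> : IParserExpression<T>
{
    private readonly int count;

    public TakeExpression(int count)
    {
        this.count = count;
    }

    // transform the incoming query by appending a Take(count)
    public IQueryable<T> Map(IQueryable<T> queriable)
    {
        return queriable.Take(count);
    }

    public override string ToString()
    {
        return "/take-" + count + "/"; // round-trips back to the URL form
    }
}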

It is important to note that the ParserExpressions do not parse the string. They simply act as containers for state until the IQueryable is mapped. The Parse delegate, as I said before, is the logic behind the parser.

Let’s now have a look at the parse code.

public class StringQueryParser<T>
{
    private string expressions;
    public StringQueryParser(string exprs)
    {
        expressions = exprs.ToLower();
    }

    protected Predicate<string> StartsWith(string test)
    {
        return str => str.StartsWith(test);
    }

    protected virtual Parse<IParserExpression<T>> Parsers
    {
        get
        {
            return Parser.When(StartsWith(ParserConstants.ExpressionSeparator.ToString()), IgnoreSeparatorParser)
                     .OrWhen(StartsWith(ParserConstants.PageIndicator), PageParser)
                     .OrWhen(StartsWith(ParserConstants.SkipIndicator), SkipParser)
                     .OrWhen(StartsWith(ParserConstants.TakeIndicator), TakeParser)
                     .OrWhen(StartsWith(ParserConstants.SortIndicator), SortParser)
                     .Or(WhereParser); 
        }
    }

    public IEnumerable<IParserExpression<T>> Parse()
    {
        if (string.IsNullOrEmpty(expressions))
            return new List<IParserExpression<T>>();

        return Parsers.Repeat()
                      .Invoke(expressions)
                      .Parsed
                      .Where(x => x != null);
    }

    public IQueryable<T> Map(IQueryable<T> queriable)
    {
        foreach (var parseExpr in Parse())
            queriable = parseExpr.Map(queriable);

        return queriable;
    }
    
    
    // the parsers have been removed for brevity. once
    // again, download the code to have a look at the
    // entire parser code
    ......
}

 

As you’ll see I’ve removed the parsers. Just go download the code to look at the parser code. Those are generally unimportant for this post. I can go in-depth if there is a demand for it.

I’ve implemented a Parsers property that returns a composite parser that encapsulates the logic to parse the string. If you were to override this class, you’d want to override the Parsers property to append your own parsers or just completely start anew. Each indicator (page, take, sort and skip) will be covered in the next section, but basically we filter an expression down to its parts and create the expression from there. You’ll have a better idea of how to form these expressions by the end. Don’t worry :) So really there are no surprises here (other than Repeat… that’s in ParserLib. Kind of intuitive… you parse until the input is empty). When all is said and done, the IQueryable is mapped through all the expressions generated by the string. Here’s how to accomplish a parse in an ASP.NET MVC controller.

 

public class MyController : Controller
{
    ....
    // option #1, instantiate a StringQueryParser
    public ActionResult List(string query)
    {
        // replace with your data access
        IQueryable<User> usersQueryable = Session.Linq<User>();
        // do the parsing
        var parser = new StringQueryParser<User>(query);
        usersQueryable = parser.Map(usersQueryable);
        
        // manipulate and parse as needed
        return Json(usersQueryable.ToList());
    }

    // option #2, use the extension method
    public ActionResult List2(string query)
    {
        // replace with your data access
        var usersQueryable = Session.Linq<User>().Parse(query);        
        
        // manipulate and parse as needed
        return Json(usersQueryable.ToList());
    }
    ......
}

 

I personally prefer option 2; it’s a little neater, but both do the same thing. You will want to set up a “catch all” route in MVC that maps to this, as sketched below.
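
A catch-all route along these lines in Global.asax is what I mean (the route name and the "users" prefix are just placeholders for your app):

using System.Web.Mvc;
using System.Web.Routing;

public class MvcApplication : System.Web.HttpApplication
{
    public static void RegisterRoutes(RouteCollection routes)
    {
        // {*query} is a catch-all parameter: everything after users/ is handed
        // to the List action's query argument as one string
        routes.MapRoute(
            "UserQuery",                                // route name (placeholder)
            "users/{*query}",                           // URL pattern (placeholder prefix)
            new { controller = "My", action = "List" }  // maps to the List action above
        );
    }
}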

 

How to form expressions

At this point, I’ve been blabbering about the internals of the parser. Some of you probably don’t care… which is actually fine with me. The important part of this whole project is the real-world application. Basically, you will be passing a well-formed expression to your controllers, which will manipulate the IQueryable accordingly. Here’s the lowdown on how to form these expressions as strings.

Singular Expressions

/skip-5/

skips the first 5 elements

/take-4/

returns only 4 elements

/take-all/

returns all elements

/page-5-11/

returns a paged result on page 5 (1-based) with page size 11

/page-5/

returns a paged result on page 5, default page size is 10 in my library

/sort-name/

applies an ascending sort on the name property

/sort-desc-name/

applies a descending sort on the name property

/sort-name,age,birthmonth/

applies an ascending sort on the name property, then on the age property, then on the birthmonth property

/sort-desc-name,age,birthmonth/

applies a descending sort on the name property, then an ascending sort on the age property, then on the birthmonth property

/name-equals-jon/

applies a where constraint: name equals jon

/age-greaterthan-4/

applies a where constraint: age greater than 4

/name-not-null/

applies a name must not be null constraint
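
If it helps to think in LINQ terms, here’s roughly the kind of call each of those boils down to. This is just an illustrative sketch against a made-up User type, not the parser’s actual output:

using System.Linq;

// a made-up model type for illustration only
public class User
{
    public string Name { get; set; }
    public int Age { get; set; }
}

public static class ExpressionExamples
{
    public static void Illustrate(IQueryable<User> query)
    {
        var skipped  = query.Skip(5);                                // /skip-5/
        var taken    = query.Take(4);                                // /take-4/
        var paged    = query.Skip((5 - 1) * 11).Take(11);            // /page-5-11/ (1-based page index)
        var sorted   = query.OrderBy(u => u.Name);                   // /sort-name/
        var sortDesc = query.OrderByDescending(u => u.Name);         // /sort-desc-name/
        var filtered = query.Where(u => u.Name.ToLower() == "jon");  // /name-equals-jon/
        var older    = query.Where(u => u.Age > 4);                  // /age-greaterthan-4/
    }
}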

 

For the where expression, there are a lot of combinators we support (equals, like, not, greater than, less than, greater than or equal, and less than or equal). We also support null as a “keyword”, so don’t expect to match a property whose value is the literal string “null”… the parser will treat that token as the null keyword, not as the string.

 

Multiple Expressions

It is particularly useful to chain these expressions together. Here are some examples. Feel free to combine these however you’d like.

/page-5/name-equals-jim/age-lessthan-90/age-greaterthanequal-4/

/skip-4/take-3/name-equals-kim/haschainsaw-equals-true/

/gender-equals-female/page-5-10/

The order is typically of no importance, except when using skip and take together: in my experience it’s a good idea to put the skip before the take. You’ll notice in these examples (which are fictitious, by the way :)) that we support a lot of different types: strings, int, double, float, enum, and more. In the future I’ll add an interceptor so that you can parse wheres differently.

 

How properties work

When designing this solution, I had the idea that you might want to name a property differently or ignore a property altogether, so I added two attributes (ParserIgnore and ParserPropertyName) that the parser takes into account. If you pass the parser an ignored property, the constraint isn’t parsed. You can assign a parser property name to a property, and that name will be used to determine which property a constraint applies to. This also sidesteps property-name conflicts: if you have the same property name differing only in case, you have to resolve that yourself with the property name attribute to get the expected behavior.
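
Here’s roughly what that looks like on a model. This is a sketch, not code from the download, and it assumes ParserPropertyName takes the alias as a constructor argument:

// a sketch of a model using the two attributes (attribute usage assumed)
public class User
{
    // parsed as usual, e.g. /name-equals-jon/
    public string Name { get; set; }

    // never parsed... /passwordhash-equals-.../ is simply ignored
    [ParserIgnore]
    public string PasswordHash { get; set; }

    // parsed as /years-greaterthan-4/ instead of /age-greaterthan-4/
    [ParserPropertyName("years")]
    public int Age { get; set; }
}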

Also, it’s worth mentioning that everything is parsed in lower-case, so the case in your expressions is of no importance… it’ll just be changed to lower-case upon parsing.

 

Using the generator

Sometimes you don’t want to form these by hand… concatenation isn’t exactly convenient when you’re using paged data, for example. Because of this, I’ve built a generator. Basically, you pass a few IParserExpressions to the generator and it will generate the query string for you. The generator uses the ToString method on each IParserExpression, so it is important to note that you must override ToString for the generator to work on your custom expressions. You can take a look at the code yourself, but I thought I might show you an example of how to use this.

 

....
<h2>My MVC Page</h2>
....
<a href="<%= new StringQueryGenerator<User>(new IParserExpression<User>[] 
                                            { 
                                                 new WhereExpression<User>(u => u.Name, WhereCombinator.equals, "foo"),
                                                 new PageExpression<User>(4, 10)
                                            }).Generate() %>">
......

 

This will generate a string like /name-equals-foo/page-4/. Both where and sort have a constructor overload that takes a property expression, so you don’t have to use the property name as a string.

 

 

Conclusion

Wow, that was a long blog post :) I hope you’ve seen that this is useful for ASP.NET MVC in particular, and for any situation where a user can define their own query. There are many examples of this sort of thing on the web where people use this approach to filter results with a clean-looking query. I hope you check out the source code and let me know if you have any suggestions or comments on my implementation. And with that I say DEATH TO QUERY STRING FILTERING!



jQuery DataTables Plugin Meets C#

Over the weekend, I was doing some work on the internal CMS we use over at eagleenvision.net, and I wanted to scrap my custom table implementation for a table system that would use JSON to return data rather than have the data statically embedded in the page. Basically, I wanted the ability to refresh, etc. for editing purposes. My little table markup was too much code for something so simple.

I’m a HUGE fan of jQuery, especially of the recent changes in jQuery 1.4. We have already made a significant investment in jQuery code for our CMS, and I didn’t want to simply add another library or framework into the mix… by that I mean I didn’t want to throw in the Ext framework, which specializes in UI controls rather than general-purpose JavaScript.

I stumbled upon the jQuery DataTables plugin. It has a lot of great features… one of which is the ability to have server-side processing of your data. The examples on the site are written in PHP, as are the downloadable demos. I don’t use PHP; I use ASP.NET :). So I had to write my own library to process the incoming request. DataTables has a set of pre-defined request variables that are passed to the server for processing. A successful implementation takes all of these variables into account and sends the correct data back accordingly. (To see a complete list, check out the server-side usage page.)

 

Our friend, IQueryable

If you’re a C# developer, there’s no way you don’t already know about LINQ… (double negatives… oops… every C# developer knows about LINQ, or they’re not really current with technology… that’s better :) ). This includes VB people as well. IQueryable is the fundamental type of LINQ that drives the functionality of the various sequence operators. In my implementation of DataTables processing, I wanted to leverage LINQ so that you could throw this thing an IQueryable from Linq2Sql, LINQ to NHibernate, Entity Framework, LightSpeed, an in-memory list, or ANYTHING that exposes IQueryable and it would just work.
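
For example, even a plain in-memory list qualifies once you call AsQueryable on it (User here is the same model type I use in the examples below):

using System.Collections.Generic;
using System.Linq;

public static class InMemoryExample
{
    public static IQueryable<User> GetUsers()
    {
        // imagine this list came from anywhere... a cache, a test fixture, etc.
        var list = new List<User>();

        // AsQueryable wraps the list as an IQueryable<User>, which is all the
        // parser below needs (it runs via LINQ to Objects in this case)
        return list.AsQueryable();
    }
}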

 

How to output

The DataTables plugin accepts either JSON or XML… whatever jQuery will parse. My opinion: never use XML with JavaScript. It’s slower and there’s no point in using XML over JSON… especially in .NET, where there are built-in JSON serializers. Having said that, you could certainly use XML… I haven’t tested my code for it, but I think (in theory) it would work the same. Here’s the output type, which should be serialized into JSON or XML; I will cover the serialization in a minute.

public class FormatedList
{
    public FormatedList()
    {
    }
    public int sEcho { get; set; }
    public int iTotalRecords { get; set; }
    public int iTotalDisplayRecords { get; set; }
    public List<List<string>> aaData { get; set; }
    public string sColumns { get; set; }
    public void Import(string[] properties)
    {
        sColumns = string.Empty;
        for (int i = 0; i < properties.Length; i++)
        {
            sColumns += properties[i];
            if (i < properties.Length - 1)
                sColumns += ",";
        }
    }
}

Basically the only interesting thing here is the output of the columns. I made a custom Import method that just takes a list of properties and forms the column string that DataTables will parse. Other than that the code here is just basic property holding.

 

ASP.NET MVC

Readers of my blog and twitter will know I am also a HUGE fan of ASP.NET MVC. I don’t think I’ll ever return to ASP.NET WebForms. But who knows. Anyway, here’s how you would output the thing in MVC:

public ActionResult List()
{
    IQueryable<User> users = Session.Linq<User>();
    if (Request["sEcho"] != null)
    {
        var parser = new DataTableParser<User>(Request, users);
        return Json(parser.Parse());
    }
    return Json(users);
}

You’ll notice that I referenced DataTableParser, which I will get to in a minute. It takes an HttpRequestBase (or an HttpRequest) and an IQueryable of whatever type. Its Parse method outputs a new FormatedList, which you return via JSON (the Json method in MVC handles the serialization).

 

ASP.NET Webservices

While I don’t claim to be an expert at ASP.NET Webservices, this I can handle :). Here’s how you would do the same thing in ASP.NET Webservices.

using System.Linq;
using System.Web.Script.Serialization;
using System.Web.Services;
...
public class MyWebservice : System.Web.Services.WebService
{
    [WebMethod]
    public string MyMethod()
    {
        // change the following line per your data configuration
        IQueryable<User> users = Session.Linq<User>();
        
        // webservices don't expose Response/Request directly; go through Context
        Context.Response.ContentType = "application/json";
        
        if (Context.Request["sEcho"] != null)
        {
            var parser = new DataTableParser<User>(Context.Request, users);
            return new JavaScriptSerializer().Serialize(parser.Parse());
        }
        
        return new JavaScriptSerializer().Serialize(users); 
    }
}

 

This is the same code… it just uses webservices and the JavaScriptSerializer (which MVC uses under the covers) to serialize the FormatedList object.

 

Note that you should ALWAYS check whether the request is actually a DataTables request (which is what that sEcho business is all about).
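
If you find yourself repeating that check, a tiny helper along these lines keeps it in one place (just a convenience, not part of the library):

using System.Web;

public static class DataTablesRequests
{
    // sEcho is sent by DataTables on every server-side processing request,
    // so its presence is a cheap way to tell those requests apart
    public static bool IsDataTablesRequest(HttpRequestBase request)
    {
        return !string.IsNullOrEmpty(request["sEcho"]);
    }
}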

 

The Parser

Now is the time to show you my parser. I have taken out my code comments to keep this short on my blog… but you can download my code from here and the comments should explain what is going on.

 

public class DataTableParser<T>
{
    private const string INDIVIDUAL_SEARCH_KEY_PREFIX = "sSearch_";
    private const string INDIVIDUAL_SORT_KEY_PREFIX = "iSortCol_";
    private const string INDIVIDUAL_SORT_DIRECTION_KEY_PREFIX = "sSortDir_";
    private const string DISPLAY_START = "iDisplayStart";
    private const string DISPLAY_LENGTH = "iDisplayLength";
    private const string ECHO = "sEcho";
    private const string ASCENDING_SORT = "asc";
    private IQueryable<T> _queriable;
    private readonly HttpRequestBase _httpRequest;
    private readonly Type _type;
    private readonly PropertyInfo[] _properties;
    public DataTableParser(HttpRequestBase httpRequest, IQueryable<T> queriable)
    {
        _queriable = queriable;
        _httpRequest = httpRequest;
        _type = typeof(T);
        _properties = _type.GetProperties();
    }
    public DataTableParser(HttpRequest httpRequest, IQueryable<T> queriable)
        : this(new HttpRequestWrapper(httpRequest), queriable)
    { }
    
    public FormatedList Parse()
    {
        var list = new FormatedList();
        list.Import(_properties.Select(x => x.Name).ToArray());
        
        list.sEcho = int.Parse(_httpRequest[ECHO]);
        
        list.iTotalRecords = _queriable.Count();
        
        ApplySort();
        
        int skip = 0, take = 10;
        int.TryParse(_httpRequest[DISPLAY_START], out skip);
        if (!int.TryParse(_httpRequest[DISPLAY_LENGTH], out take))
            take = 10; // TryParse zeroes the out parameter on failure, so restore the default
        
        // iTotalDisplayRecords must be the count AFTER filtering but BEFORE paging,
        // otherwise DataTables cannot page correctly
        var filtered = _queriable.Where(ApplyGenericSearch)
                                 .Where(IndividualPropertySearch);
        
        list.iTotalDisplayRecords = filtered.Count();
        
        list.aaData = filtered.Skip(skip)
                              .Take(take)
                              .Select(SelectProperties)
                              .ToList();
        return list;
    }
    private void ApplySort()
    {
        foreach (string key in _httpRequest.Params.AllKeys.Where(x => x.StartsWith(INDIVIDUAL_SORT_KEY_PREFIX)))
        {
            int sortcolumn = int.Parse(_httpRequest[key]);
            if (sortcolumn < 0 || sortcolumn >= _properties.Length)
                break;
                
            string sortdir = _httpRequest[INDIVIDUAL_SORT_DIRECTION_KEY_PREFIX + key.Replace(INDIVIDUAL_SORT_KEY_PREFIX, string.Empty)];
            
            var paramExpr = Expression.Parameter(typeof(T), "val");
            var propertyExpr = Expression.Lambda<Func<T, object>>(Expression.Property(paramExpr, _properties[sortcolumn]), paramExpr);
            
            if (string.IsNullOrEmpty(sortdir) || sortdir.Equals(ASCENDING_SORT, StringComparison.OrdinalIgnoreCase))
                _queriable = _queriable.OrderBy(propertyExpr);
            else
                _queriable = _queriable.OrderByDescending(propertyExpr);
        }
    }
    
    private Expression<Func<T, List<string>>> SelectProperties
    {
        get
        {
            // project each item into a list of string values, one per property
            return value => _properties.Select
                                        (
                                            prop => (prop.GetValue(value, new object[0]) ?? string.Empty).ToString()
                                        )
                                       .ToList();
        }
    }
    
    private Expression<Func<T, bool>> IndividualPropertySearch
    {
        get
        {
            var paramExpr = Expression.Parameter(typeof(T), "val");
            Expression whereExpr = Expression.Constant(true); // default is val => True
            foreach (string key in _httpRequest.Params.AllKeys.Where(x => x.StartsWith(INDIVIDUAL_SEARCH_KEY_PREFIX)))
            {
                int property = -1;
                // the column index comes from the key (sSearch_N), not from the value,
                // which is the per-column search term itself
                if (!int.TryParse(key.Replace(INDIVIDUAL_SEARCH_KEY_PREFIX, string.Empty), out property) 
                    || property >= _properties.Length || string.IsNullOrEmpty(_httpRequest[key]))
                    continue; // ignore this option if it is invalid
                string query = _httpRequest[key].ToLower();
                
                var toStringCall = Expression.Call(
                                    Expression.Call(
                                        Expression.Property(paramExpr, _properties[property]), "ToString", new Type[0]),
                                    typeof(string).GetMethod("ToLower", new Type[0]));
                
                whereExpr = Expression.And(whereExpr, 
                                           Expression.Call(toStringCall, 
                                                           typeof(string).GetMethod("Contains"), 
                                                           Expression.Constant(query)));
                
            }
            return Expression.Lambda<Func<T, bool>>(whereExpr, paramExpr);
        }
    }
    
    private Expression<Func<T, bool>> ApplyGenericSearch
    {
        get
        {
            string search = _httpRequest["sSearch"];
            
            if (string.IsNullOrEmpty(search) || _properties.Length == 0)
                return x => true;
                
            var searchExpression = Expression.Constant(search.ToLower());
            var paramExpression = Expression.Parameter(typeof(T), "val");
            
            var propertyQuery = (from property in _properties
                                let tostringcall = Expression.Call(
                                                    Expression.Call(
                                                        Expression.Property(paramExpression, property), "ToString", new Type[0]),
                                                        typeof(string).GetMethod("ToLower", new Type[0]))
                                select Expression.Call(tostringcall, typeof(string).GetMethod("Contains"), searchExpression)).ToArray();
                                
            Expression compoundExpression = propertyQuery[0];
            
            for (int i = 1; i < propertyQuery.Length; i++)
                compoundExpression = Expression.Or(compoundExpression, propertyQuery[i]);
                
            return Expression.Lambda<Func<T, bool>>(compoundExpression, paramExpression);
        }
    }
}

 

Caveat

Currently there’s a bug here that I need to research :). If you have a boolean property and apply a sort on it, you’ll get an exception, because I am trying to use it as an object in the Expression.Lambda&lt;Func&lt;T, object&gt;&gt; call and value types need an explicit conversion. I’ll look into this and update this blog post accordingly. If you can provide any help, that would be great :)
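
My current hunch is that boxing value-typed properties with Expression.Convert before building the lambda would at least let the expression compile. Something like this inside ApplySort (untested, and a LINQ provider may still refuse to translate the Convert node):

var paramExpr = Expression.Parameter(typeof(T), "val");
Expression body = Expression.Property(paramExpr, _properties[sortcolumn]);
if (body.Type.IsValueType)
    body = Expression.Convert(body, typeof(object)); // box bool/int/etc. so Func<T, object> is valid
var propertyExpr = Expression.Lambda<Func<T, object>>(body, paramExpr);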

 

Conclusion

This is just a simple example of parsing a request and mutating an IQueryable accordingly. I hope this helps someone out there who would like to use C# with the DataTables plugin. Again, you can download my code from here with full comments.
