CacheAdapter 3.0 Released
Sunday, July 21, 2013 5:52 PM

I am happy to announce that CacheAdapter version 3.0 has been released. You can grab the nuget package from here, or you can download the source code from here.

For those not familiar with what the CacheAdapter is, you can read my past posts here, here, here, here and here, but basically you get a nice, consistent API around multiple caching engines. Currently the CacheAdapter supports in-memory cache, ASP.Net web cache, memcached, and Windows Azure AppFabric cache. You get to program against a clean, easy to use API and can choose your caching mechanism using simple configuration. Change from ASP.Net web cache to a distributed cache such as memcached or AppFabric with no code change, just some config.

Changes in Version 3.0

This latest version incorporates one new major feature, a much requested API addition and some changes to configuration.

Cache Dependencies

CacheAdapter now supports the concept of cache dependencies. This is currently rudimentary support for automatically invalidating other cache items when you invalidate an item they are linked to. That is, you can specify that one cache item is dependent on other cache items. When a cache item is invalidated, its dependencies are automatically invalidated for you. The diagram below illustrates this. In the scenario below, when ‘ParentCacheKey’ is invalidated/cleared, all its dependent items are also removed for you. If only ‘ChildItem1’ were invalidated, then only ‘SubChild1’ and ‘SubChild2’ would be invalidated for you.


This is supported across all cache mechanisms and will extend to any cache mechanisms subsequently added to the supported list of cache engines. Later in this blog post (see ‘Details of Cache Dependency features’ below), I will detail how to accomplish this using the CacheAdapter API.

Clearing the Cache

Many users have asked for a way to programmatically clear the cache. There is now a ‘ClearAll’ API method on the ICacheAdapter interface which does just that. Please note that Windows Azure AppFabric cache does not support clearing the cache. This may change in the future; the current implementation attempts to iterate over the regions and clear each region in the cache, but catches the exceptions that Windows Azure AppFabric caching throws during this phase. Local versions of AppFabric should work fine.
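As a rough sketch of its use (how you obtain the ICacheAdapter instance depends on your own wiring; the variable name below is purely illustrative):

```csharp
// 'cacheAdapter' is assumed to be your resolved ICacheAdapter instance.
// On Windows Azure AppFabric this attempts to clear each region and swallows
// the exceptions the service throws, so it is effectively a no-op there.
cacheAdapter.ClearAll();
```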

Cache Feature Detection - ICacheFeatureSupport

A new interface is available on the ICacheProvider called ICacheFeatureSupport which is accessed via the FeatureSupport property. This is fairly rudimentary for now but provides an indicator as to whether the cache engine supports clearing the cache. This means you can take alternative actions based on certain features.

As an example:

if (cacheProvider.FeatureSupport.SupportsClearingCacheContents())
{
    // Do something
}

Configuration Changes

I felt it was time to clean up the way the CacheAdapter looks for its configuration settings. I wasn’t entirely happy with the way it worked, so the CacheAdapter now supports looking for all its configuration values in the <appSettings> section of your configuration file. All the current configuration keys are still there, but if using the <appSettings> approach, you simply prefix the key names with “Cache.”. An example is probably best. Previously you used the configuration section to define configuration values as shown here:

      <setting name="CacheToUse" serializeAs="String"><value>memory</value></setting>
      <setting name="IsCacheEnabled" serializeAs="String"><value>true</value></setting>
      <setting name="IsCacheDependencyManagementEnabled" serializeAs="String"><value>true</value></setting>

Now you can achieve the same effect using the <appSettings> element as shown here:

    <add key="Cache.CacheToUse" value="memory"/>
    <add key="Cache.IsCacheEnabled" value="true"/>
    <add key="Cache.IsCacheDependencyManagementEnabled" value="true" />

This approach makes it easier to integrate into your current solutions and is a little bit cleaner (IMHO). If you specify configuration values with the same key in both the configuration section *and* the <appSettings> section, then the <appSettings> values take precedence and override the values in the configuration section. You no longer need the configuration section at all.

Support for new configuration values for Windows Azure Caching (& AppFabric)


The CacheAdapter now supports use of the ChannelOpenTimeout value in the CacheSpecificData section of config for Windows Azure AppFabric caching. This can be useful for debugging scenarios where you need a much longer connection window so you can see the actual error being returned from the host, as opposed to the client forcibly disconnecting before the error is returned. This value is set in seconds. For example, to set this value to 2 minutes, simply use:

<add key="Cache.CacheSpecificData" value="UseSsl=false;ChannelOpenTimeout=120;SecurityMode=Message;MessageSecurityAuthorizationInfo={your_security_token}"/>



The CacheAdapter now also supports use of the MaxConnectionsToServer value in the CacheSpecificData section of config for Windows Azure AppFabric caching. This allows fine tuning of performance via the number of concurrent connections to the cache service. Currently, the Azure client defaults to 1.
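For example, to allow up to five concurrent connections, add MaxConnectionsToServer to the same CacheSpecificData value (the other settings shown are just the illustrative values from the earlier example):

```xml
<add key="Cache.CacheSpecificData" value="UseSsl=false;MaxConnectionsToServer=5;SecurityMode=Message;MessageSecurityAuthorizationInfo={your_security_token}"/>
```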

Details and usage of Cache Dependency features

The major feature introduced in this release is cache dependency support. Not all cache implementations support this natively (memcached, for example), but generic support has been added across all current and future cache mechanisms. This is performed through the use of a GenericDependencyManager class. This class uses simplistic techniques to achieve dependency management. No cache specific features are employed, which is why it works across all implementations of ICache, the generic cache interface that abstracts each cache mechanism. Later releases will include specific classes that implement dependency management in ways specific to each cache engine that supports it. AppFabric, for example, has some dependency and notification support built into its API which can be used.

Enough chatter, let’s see how it’s used.

The following code retrieves some data from the cache using a key of “Bit1”. If it is not present in the cache, it is added. The interesting part is the very last argument, “MasterKey”. This argument specifies that the cache item with a key of “Bit1” is dependent on “MasterKey”.

var bitOfRelatedData1 = cacheProvider.Get<string>("Bit1", DateTime.Now.AddDays(1), () => "Some Bit Of Data1", "MasterKey");

When “MasterKey” is invalidated from the cache using the API call

InvalidateCacheItem(string key)

then the cache item with key “Bit1” will also be invalidated. That’s pretty much it.

It should be noted that the ‘ParentKey’ or ‘MasterKey’ specified as the parent dependency does not actually have to be an item in the cache. It can simply be any arbitrary name. Simply call InvalidateCacheItem for that parent key and all dependencies are also invalidated.
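To illustrate, here is a sketch that groups two cache items under an arbitrary parent name that is never itself stored in the cache (the keys and data below are purely illustrative):

```csharp
// "Customer-123" is never added to the cache; it is just an arbitrary grouping name.
var name = cacheProvider.Get<string>("Customer-123-Name", DateTime.Now.AddDays(1), () => "Jane", "Customer-123");
var address = cacheProvider.Get<string>("Customer-123-Address", DateTime.Now.AddDays(1), () => "1 Smith St", "Customer-123");

// Invalidating the arbitrary parent key clears both dependent items.
cacheProvider.InvalidateCacheItem("Customer-123");
```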

The CacheProvider exposes a new interface which is used to manage all cache dependencies. This interface is the ICacheDependencyManager and is exposed as the InnerDependencyManager property on the ICacheProvider instance.

This interface has the following methods which allow low level implementation of the CacheDependency definitions and perform actions against those dependencies.

public interface ICacheDependencyManager
{
    void RegisterParentDependencyDefinition(string parentKey, CacheDependencyAction actionToPerform = CacheDependencyAction.ClearDependentItems);
    void RemoveParentDependencyDefinition(string parentKey);
    void AssociateDependentKeysToParent(string parentKey, IEnumerable<string> dependentCacheKeys, CacheDependencyAction actionToPerform = CacheDependencyAction.ClearDependentItems);
    IEnumerable<DependencyItem> GetDependentCacheKeysForParent(string parentKey, bool includeParentNode = false);
    string Name { get; }
    void PerformActionForDependenciesAssociatedWithParent(string parentKey);
    void ForceActionForDependenciesAssociatedWithParent(string parentKey, CacheDependencyAction forcedAction);
    bool IsOkToActOnDependencyKeysForParent(string parentKey);
}


As mentioned previously, the GenericDependencyManager implementation that is packaged with this version of the CacheAdapter uses no specific cache features to manage dependencies and so works with all cache types. In the future, more specific implementations will be provided to utilise cache specific features and make dependency management more efficient.
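As a low-level sketch against the interface above (this assumes the InnerDependencyManager property described earlier; the keys, and the DependencyItem property name in the loop, are assumptions for illustration):

```csharp
var dependencyManager = cacheProvider.InnerDependencyManager;

// Explicitly associate two dependent keys with a parent key.
dependencyManager.AssociateDependentKeysToParent(
    "ParentCacheKey",
    new[] { "ChildItem1", "ChildItem2" });

// Inspect what is registered against the parent.
foreach (var item in dependencyManager.GetDependentCacheKeysForParent("ParentCacheKey"))
{
    Console.WriteLine(item.CacheKey); // 'CacheKey' property name is assumed
}

// Perform the registered action (by default, clearing) for all dependents.
dependencyManager.PerformActionForDependenciesAssociatedWithParent("ParentCacheKey");
```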

Finally, you will notice an action enumeration (CacheDependencyAction) that can be associated with dependencies when adding items to the cache, which suggests there is support for raising events or invoking callbacks when a cache item is invalidated. Currently there is no functionality to support this, but future revisions may include it, depending on the features of the cache used.

And that’s a wrap

I hope these features prove useful. I welcome all feedback, and I also try to get around to managing all pull requests (yes, I am pretty slow at this).


Owin, Katana and getting started
Friday, April 5, 2013 9:19 AM


This article describes an emerging open source specification, referred to as Owin – An Open Web Interface for .Net, what it is, and how this might be beneficial to the .Net technology stack. It will also provide a brief, concise look at how to get started with Owin and related technologies. In my initial investigations with Owin and how to play with it, I found lots of conflicting documentation and unclear ways of how to make it work and why. This article is an attempt to better articulate those steps so that you may save yourself lots of time and come to a clearer understanding in a much shorter time than I did.

Note: The code for this article can be downloaded from here.

First, What is it?

Just in case you are not aware, a community driven project referred to as Owin is really gaining traction. Owin is a simple specification that describes how components in an HTTP pipeline should communicate. The details of what is communicated between components are specific to each component, however there are some common elements. Owin itself is not a technology, just a specification.

I am going to gloss over many details here in order to remain concise, but at its very core, Owin describes one main component, which is the following interface:

Func<IDictionary<string, object>, Task>

This is a function that accepts a simple dictionary of objects, keyed by a string identifier, and returns a Task. The type of each object in the dictionary will vary depending on what its key refers to.

More often, you will see it referenced like this:

using AppFunc = Func<IDictionary<string, object>, Task>;

An actual implementation might look like this:

public Task Invoke(IDictionary<string, object> environment)
{
    var someObject = environment["some.key"] as SomeObject;
    // etc…
}

This is essentially how the “environment” or information about the HTTP context is passed around. Looking at the environment argument of this method, you could interrogate it as follows:

var httpRequestPath = environment["owin.RequestPath"] as string;
Console.WriteLine("Your Path is: [{0}]", httpRequestPath);

If an HTTP request was made to ‘http://localhost:8080/Content/Main.css’ then the output would be:

Your Path is: [/Content/Main.css]

In addition, while not part of the spec, the IAppBuilder interface is also core to the functioning of an Owin module (or ‘middleware’ in Owin speak):

public interface IAppBuilder
{
    IDictionary<string, object> Properties { get; }

    object Build(Type returnType);
    IAppBuilder New();
    IAppBuilder Use(object middleware, params object[] args);
}

The IAppBuilder interface acts as the ‘glue’ or host to bring any registered Owin compatible libraries/modules together.

So what? I can do that now without Owin.

Okay, so it may not look ground breaking, but this is the core of Owin and key to understanding what it offers. Now that we have a basic understanding of the how, I am going to jump to what it currently offers as a result of supporting this mechanism. With support for this simple mechanism, I can now write isolated components that deal with specific parts of functionality related to HTTP requests. I can then chain them together to build up the capabilities of my HTTP server. Internet Information Server does not need to get a look in. You can literally chain together Owin components to form a pipeline of only the features you want. In addition, the components I write do not have to reference any particular pipeline component, and yet can still take advantage of any Owin compatible component.

Basically, you can build or use a custom host, then insert whatever custom modules you want into the HTTP request processing pipeline. Owin provides the specification for writing those modules so that they are easy to insert into the chain.
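The chaining idea itself needs no Owin libraries at all; this self-contained sketch (all names illustrative) shows one component wrapping the next, which is really all an Owin pipeline is:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using AppFunc = System.Func<System.Collections.Generic.IDictionary<string, object>, System.Threading.Tasks.Task>;

class PipelineSketch
{
    static void Main()
    {
        // Terminal component: "handles" the request.
        AppFunc app = env =>
        {
            Console.WriteLine("Handled: " + env["owin.RequestPath"]);
            return Task.FromResult(0);
        };

        // A logging component simply wraps the next component in the chain.
        Func<AppFunc, AppFunc> loggingMiddleware = next => env =>
        {
            Console.WriteLine("Logging: " + env["owin.RequestPath"]);
            return next(env);
        };

        // Chain: logging -> app. Adding more components just wraps again.
        AppFunc pipeline = loggingMiddleware(app);

        var environment = new Dictionary<string, object> { { "owin.RequestPath", "/Content/Main.css" } };
        pipeline(environment).Wait();
        // Prints "Logging: /Content/Main.css" then "Handled: /Content/Main.css"
    }
}
```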

Internet Information Services comes with a plethora of pipeline and request handlers, management functionality, knobs, dials and features that make for a comprehensive web server. However, many people want a lean, cut down web server with only the features they need. Owin allows a much leaner pipeline that can provide great performance and, because it provides only the feature set you want, a smaller surface area, which can be beneficial from a complexity and sometimes a security perspective.

Quit yer jibba jabba fool, show me some code

Yes yes, I will, but first, a diagram illustrating the previous paragraph. As already mentioned, Owin is just a specification of what interface to expose and how to work with that interface. In order for modules written to the Owin spec to be functional, they must exist in a host. So at a conceptual level, Owin compared to Internet Information Server looks like this:


Since we have an empty Owin host, we can programmatically add Owin compatible modules based on only what our needs are as illustrated below:


The diagram above shows an Owin host with 3 modules registered for use: static files, authentication and SignalR.

(Note: SignalR is a popular real-time web messaging/connection library and has been written as an Owin compatible module)

Are we there yet?

Ok, so let’s finally write an Owin compatible module or library.

  1. Within Visual Studio, Create a new blank solution.
  2. Add a new class library project to it.
  3. Install the Owin nuget package using:
    Install-Package Owin
    (via the package manager console in Visual Studio or right click on the project in Visual Studio and select Manage Nuget Packages)
  4. Add a new class, let’s call it TestLogger, and paste in the following code:
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using AppFunc = System.Func<System.Collections.Generic.IDictionary<string, object>, System.Threading.Tasks.Task>;

public class TestLogger
{
    private readonly AppFunc _next;

    public TestLogger(AppFunc next)
    {
        if (next == null)
        {
            throw new ArgumentNullException("next");
        }
        _next = next;
    }

    public Task Invoke(IDictionary<string, object> environment)
    {
        System.Diagnostics.Trace.WriteLine(
            string.Format("Hitting TestLogger, path: {0}", environment["owin.RequestPath"]));
        return _next(environment);
    }
}

That is actually all you need to write an Owin compatible library. Strictly speaking, you don’t need the reference to the Owin nuget package, but we will make use of it in the next part. All this module does is write the request path to the trace output for each request that comes in. All we need to do is examine the environment dictionary that is provided to us through the environment parameter.

However, we now have a module but nothing to host it in, and no way to tell it to be included as part of a web server pipeline. We need an Owin capable host that loads our module and houses the running code, and we also need to programmatically configure that host to include our module, much like Internet Information Server acts as the host for most web apps in .Net.

So, we are going to add a web project to our solution which will act as the entry point, allowing us to configure a module for use in the web request pipeline.

  1. Within Visual Studio, add a new empty MVC project (I used MVC4) to your current solution.
  2. You need to reference at least the Owin assemblies. In addition, we are going to reference the Owin HttpListener written by Microsoft to act as the actual Http server in our Owin pipeline. To do this:
    Install the Owin nuget package and the Microsoft.Owin.HttpListener package using:
    Install-Package Owin
    Install-Package Microsoft.Owin.Host.HttpListener -Pre

    (via the package manager console in Visual Studio, or right click on the project in Visual Studio and select Manage Nuget Packages. Note that the Microsoft.Owin.Host.HttpListener package is pre-release, so we need to include the -Pre option.)
    After installing those packages, you should see the appropriate references in your project as shown in the screen grab below:
  3. Add a new class to this MVC project called ‘Startup’
  4. Paste in the following code:
public class Startup
{
    // Invoked once at startup to configure your application.
    public void Configuration(IAppBuilder builder)
    {
        builder.Use(typeof(TestLogger)); // tell the host to 'Use' our middleware
    }
}

Here you can see we are using an instance of IAppBuilder passed into our configuration method to tell the host to ‘Use’ our custom pipeline component, often referred to as ‘Middleware’ in Owin speak.

Owin hosts use conventions for easy configuration. When the Owin host starts up and loads our assemblies, it looks for a Startup class containing a ‘Configuration’ method which accepts an IAppBuilder instance, and calls that method, providing the instance.

The final piece of the puzzle

Ok, we have our custom pipeline component, our middleware, and we have an entry point where we can build or configure our pipeline. We need a host process to load these assemblies and invoke them with the requisite information.

This is where Project Katana comes in. Katana is a generic Owin host written by Microsoft. You can go grab the source of this project, compile and run it, but it is easier to just install it so you can call it from the command line. By far the easiest way to do this is to install a tool called ‘Chocolatey’, which is like a NuGet package installer but for Windows binaries.

(Side Note: Chocolatey is an awesome tool. If you get nothing else from this post, just use Chocolatey to install some software and keep it up to date. It is easy to use and has a heap of applications supported.)

To install Chocolatey, run the following from a command line (literally copy and paste into a command line/console window):
@powershell -NoProfile -ExecutionPolicy unrestricted -Command "iex ((new-object net.webclient).DownloadString(''))" && SET PATH=%PATH%;%systemdrive%\chocolatey\bin

Once it’s installed, from the command line, simply type:
cinst Katana -pre

And Katana will be magically installed. Now, opening a command line/console window and typing ‘Katana’ will load and run the Katana application, which is our Microsoft written Owin host. Nothing will get loaded because we haven’t specified anything, but that is what is next.

We have written our custom Owin component, we have created our web application entry point used to configure our Owin pipeline, and we have installed our host application, Katana, to glue it all together. So let’s see it in action.

First, we need to tell our web project NOT to automatically use the current page when starting up. Rather, we want it to start up the Katana host and tell Katana to use our assemblies. To do this:

  • Open up the properties of the web project in your solution and select the ‘Web’ section. You should see a screen similar to the one shown below:


For clarification, the things you need to enter are:

Ensure ‘Start External Program’ is selected and enter the full path to the Katana executable. I installed to the default location using Chocolatey, so mine is set to:


Set the command line arguments to:

-p8080 -v

This represents the port number of 8080 and enables verbose mode.

Enter the working directory where your web project is located. In my case it was:


  • Now hit F5 to run the app. You should see a console application launched which looks similar to the one below:


Now remember, the only component we have injected into the Owin pipeline is our own custom logging component. Since we have included the Microsoft.Owin.Host.HttpListener package in our project, that assembly also gets included in the output. When Katana executes, if no specific HTTP server is defined, it automatically looks for the Microsoft.Owin.Host.HttpListener assembly and uses that.

  • Again, we only have our logging component installed, so if we load a browser and navigate to http://localhost:8080, we won’t see a web page, but the console should show the logging output of our custom component as in the screen shot below:



All that work, for that?

Yes, but now we have everything in place to easily add other components to the pipeline. In addition, you should now have a reasonable understanding of the various pieces at play. Remember, we have created a custom web server that does only exactly what we want it to. So conceptually, what we have created looks roughly like the following:


Now let’s create another component, but I don’t want to create another project for it.

Add the following method block to your ‘Startup’ class in the web project in your solution:

// Invoked once per request.
// (Requires using System.IO and System.Globalization.)
public Task Invoke(IDictionary<string, object> environment)
{
    var responseBytes = System.Text.Encoding.UTF8.GetBytes(
        string.Format("Serviced request on {0} at {1}", DateTime.Now.ToLongDateString(), DateTime.Now.ToLongTimeString()));
    Stream responseStream = (Stream)environment["owin.ResponseBody"];
    IDictionary<string, string[]> responseHeaders =
        (IDictionary<string, string[]>)environment["owin.ResponseHeaders"];
    responseHeaders["Content-Length"] = new string[] { responseBytes.Length.ToString(CultureInfo.InvariantCulture) };
    responseHeaders["Content-Type"] = new string[] { "text/plain" };
    return responseStream.WriteAsync(responseBytes, 0, responseBytes.Length);
}

Then add this line of code as the last line in the ‘Configuration’ method within the ‘Startup’ class.

builder.Use(new Func<AppFunc, AppFunc>(ignoredNextApp => (AppFunc)Invoke));

Now, hit F5 to run your app, and browse to http://localhost:8080.

You should see a response in the browser similar to the screen grab shown below:


The code we added in the Invoke method did the following:

  • Created a simple text response as a byte array.
  • Grabbed the response body stream and the response headers object from the environment dictionary using standard Owin defined key names.
  • Set the response headers according to the response itself.
  • Wrote the bytes to the response stream.

In order to use the code in the Invoke method, we needed to tell our pipeline to use it. This was achieved in the Configuration method via the line:

builder.Use(new Func<AppFunc, AppFunc>(ignoredNextApp => (AppFunc)Invoke));

This simply injected the delegate for our Invoke method into the pipeline. So now our conceptual pipeline looks like this:


All without any real dependencies and in a relatively small amount of code.

And you made it

After all that, you can finally see how composable and modular the Owin concepts and architecture are. There are quite a few Owin compatible middleware modules out there, such as SignalR, FubuMVC and NancyFx, to name a few. These can be added to your pipeline and you then have the power that each brings at your disposal.

This is just the beginning. Imagine being able to easily build a custom web server with only the capabilities you need. An open, composable and modular world with no lock in. Sounds good to me. Hopefully the Owin concepts and goals are a little clearer now.

Remember, the code for this article can be downloaded from here.

Debugging Pain–End to End
Thursday, February 21, 2013 7:18 AM

We had an issue recently that caused us some time and quite a lot of head scratching. We had made some relatively minor changes to our product and performed a release into staging for testing. We released our main web application as well as our custom built support tool (also a web app).

After a little bit of testing from our QA team, a few bugs were uncovered. One where a response to a cancel action seemingly was not actioned, and an issue where a timeout occurred on a few requests. Nothing too huge and certainly seemed fixable.

Off to work

The timeouts “seemed” to be data specific and possibly caused by the 3rd party web service call being made. They appeared to occur only in our support tool; the main web app was not affected. Since the main web app is the priority, I looked at the “cancel” issue first. It seemed that the cancel request was being made to our server (via an ajax call) but never returned from said call. This looked very similar to our issue with the support tool.

A little further investigation showed that both the support tool and our main web app were issuing ajax requests to a few action methods (we use ASP.Net MVC 4.5) that never returned. Ever. I tried recycling the application pool in IIS. This worked for a brief period, then requests to a few particular action methods again stopped returning. Web pages from other action methods and even other ajax requests were working fine, so at least we knew what surface area we had to look at.

Looking at the requests via Chrome, we could see the request in a constant pending state, never satisfied. We began looking at the server. We instituted some page tracing, looked at event logs and also looked at the Internet Information Server logs. Nothing. Nada. We could see the successful requests come in, but these pending requests were not logged. Fiddler showed they were definitely outgoing, but the web server showed nothing.

Using the Internet Information Services management console, we looked at the current worker processes, which are available when clicking on the root node within the IIS Management console.



We could see our application pool, and right clicking on it allowed us to view current requests. And there they were, all waiting to be executed, and waiting… and waiting.

What’s the hold up?

So what was causing our requests to get backlogged? We tried going into the code, removing calls to external services and trying to isolate the problem areas. All of this was not reproducible locally, nor in any other environment. Eventually, trying to isolate the cause led us to removing everything from the controller actions apart from a simple Thread.Sleep. While this worked and the problem did not present, we were in no way closer, as the cause could still be in any number of code paths.

Take a dump

A colleague suggested using DebugDiag (a free diagnostic tool from Microsoft) to look at memory dumps of the process. So that is what we did.

Using DebugDiag, we extracted a memory dump of the process. DebugDiag has some really nice features, one of which is executing predefined scripts in an attempt to diagnose any issues and present a summary of what was found. It also has a nice wizard based set of steps to get you up and running quickly.

We chose to monitor for performance:


and also for HTTP Response time:


We then added the specific URL’s we were monitoring. We also chose what kind of dumps to take, in this case web application pools:


We decided on the time frequency (we chose every 10 seconds) and a maximum of 10 Full dumps:



After that, we set the dump path, named and activated the rule, and were good to go. With the requests already built up in the queue, and after issuing some more ‘pending’ requests, we could see some memory dumps being taken.

A cool feature of DebugDiag is the prebuilt scripts to analyze your memory dumps (available on the advanced tab):


We initially chose performance, but didn’t glean much information from that. We then chose the “Crash/Hang Analyzers”, which produced a great summary of all the threads in the pool. It was apparent that almost every thread was waiting on a .Net lock. We couldn’t get much more than that though.

WinDbg to the rescue

So I copied the memory dump locally and used WinDbg to examine the process. I loaded in the SOS extension so I could use the managed memory extensions.

(Side Note: I almost always have issues with the incorrect format (32 or 64 bit) and SOS versions when doing this, so it usually takes a bit of frigging around before I get the right combination.)

I looked at the threads via the SOS ‘!threads’ command:


Sure enough, there were heaps of them in there. Using the dump object command (!do) and trying to poke around didn’t reveal too much except a lot of thread locks, so I used the !syncblk command to look at blocked and locking threads:


Here we had some real indication of what was going on. You can see we have 285 locks via the Apache.NMS functions. Our application uses ActiveMQ (which is awesome, btw) for our message queuing, and we use the Apache.NMS library as the C# interface. In addition, the code paths being executed produce extensive diagnostic information, which is posted to the message queue for logging.

A quick test verified this. We commented out all calls to the queue within the queue manager interface (so it effectively did nothing), put this code on staging, and all was working without a hitch.

So we had our culprit, but not the root cause.

We used the admin tool for ActiveMQ to look at the queues themselves. There were some queued messages, but no subscribers, even though our subscription service was running. We restarted the service: nothing. No subscribers. Using the admin tool, we purged all messages and restarted the service. Nothing. When we refreshed the admin tool, the purged messages re-appeared!

We tried deleting the offending queues. Normally, this operation is sub-second; in this case, it took 20+ seconds before we tried again. Something was amiss. We tried creating a new queue from the admin tool. Again, it took an abnormally long time, but this time it worked. We could then delete queues without issue. We restarted the service and voila, the queues were subscribed to. We re-instituted the commented out code and now all was working fine.

So what really happened?

A short time ago, we had run out of space on our staging server. No big deal; we freed up the space promptly and all was seemingly good. It was during this time, I believe, that our message queue repository, which persists messages to disk, got corrupted, and this started the issue. What is apparent is that we need to release resources more aggressively so that this kind of issue is not so detrimental to the rest of the application.

So there you have it. All in all that took about 2-3 days.

CacheAdapter minor update to version 2.5.1
Wednesday, January 23, 2013 1:42 PM

For information on previous releases, please see here.

My cache adapter library has been updated to version 2.5.1. This one contains only minor changes which are:

  • Use of the Windows Azure Nuget package instead of the assemblies directly referenced.
  • Use of the ‘Put’ verb instead of ‘Add’ when using AppFabric to prevent errors when calling ‘Add’ more than once.
  • Minor updates to the example code to make it clearer that the code is indeed working as expected.

Thanks to contributor Darren Boon for forking and providing the Azure updates for this release.

Writing an ASP.Net Web based TFS Client
Tuesday, November 27, 2012 11:53 AM

So one of the things I needed to do was write an ASP.Net MVC based application for our senior execs to manage a set of arbitrary attributes against stories, bugs etc., to be able to attribute whether an item was related to research and development, and if so, what kind.

We are using TFS Azure and don’t have the option of custom templates. I have decided on using a string based field within the template that is not very visible and which we don’t otherwise use, and writing a small set of custom rules against that field to determine the research and development association.

However, this string munging on the field is not very user friendly so we need a simple tool that can display attributes against items in a simple dropdown list or something similar.
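To make that concrete, here is one purely hypothetical encoding and parser for such a field (the "RND:" marker and the RndFieldParser class are my own inventions for this sketch, not anything TFS defines):

```csharp
using System;

public static class RndFieldParser
{
    // Hypothetical convention: the spare string field holds a marker such as
    // "RND:Applied" when the work item relates to Research and Development.
    public static string GetRndCategory(string fieldValue)
    {
        const string marker = "RND:";
        if (string.IsNullOrEmpty(fieldValue)) return null;

        var index = fieldValue.IndexOf(marker, StringComparison.OrdinalIgnoreCase);
        if (index < 0) return null;

        // Everything after the marker is treated as the R&D category.
        return fieldValue.Substring(index + marker.Length).Trim();
    }
}
```

The web app described below would then surface this parsed value in a dropdown instead of asking execs to edit the raw string.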

Enter a custom web app that accesses our TFS items in Azure (Note: We are also using Visual Studio 2012)

Now TFS Azure uses your Live ID and it is not really possible to easily do this in a server based app where no interaction is available. Even if you capture the Live ID credentials yourself and try to submit them to TFS Azure, it won't work.

Bottom line is that it is not straightforward nor obvious what you have to do. In fact, it is a real pain to find and there are some answers out there which don’t appear to be answers at all given they didn’t work in my scenario.

So for anyone else who wants to do this, here is a simple breakdown on what you have to do:

  • Go here and get the “TFS Service Credential Viewer”. Install it, run it and connect to your TFS instance in azure and create a service account. Note the username and password exactly as it presents it to you. This is the magic identity that will allow unattended, programmatic access.
      • Without this step, don’t bother trying to do anything else.
  • In your MVC app, reference the following assemblies from “C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\ReferenceAssemblies\v2.0”:
    • Microsoft.TeamFoundation.Client.dll
    • Microsoft.TeamFoundation.Common.dll
    • Microsoft.TeamFoundation.VersionControl.Client.dll
    • Microsoft.TeamFoundation.VersionControl.Common.dll
    • Microsoft.TeamFoundation.WorkItemTracking.Client.DataStoreLoader.dll
    • Microsoft.TeamFoundation.WorkItemTracking.Client.dll
    • Microsoft.TeamFoundation.WorkItemTracking.Common.dll
  • If hosting this in Internet Information Server, for the application pool this app runs under, you will need to enable 32 Bit support.


  • You also have to allow the TFS client assemblies to store a cache of files on your system. If you don’t do this, you will authenticate fine, but then get an exception saying that it is unable to access the cache at some directory path when you query work items. You can set this up by adding the following to your web.config, in the <appSettings> element as shown below:
<appSettings>
   <!-- Add reference to TFS Client Cache -->
   <add key="WorkItemTrackingCacheRoot" value="C:\windows\temp" />
</appSettings>
  • With all that in place, you can write the following code:

var token = new Microsoft.TeamFoundation.Client.SimpleWebTokenCredential("{your-service-account-name}", "{your-service-acct-password}");
var clientCreds = new Microsoft.TeamFoundation.Client.TfsClientCredentials(token);
var currentCollection = new TfsTeamProjectCollection(new Uri("https://{yourdomain}/defaultcollection"), clientCreds);
currentCollection.EnsureAuthenticated();


In the above code, note that the URL contains “defaultcollection” at the end. Obviously, replace {yourdomain} with whatever is defined for your TFS in Azure instance. In addition, make sure the service account username and password generated in the first step are substituted in here.

Note: If something is not right, the “EnsureAuthenticated()” call will throw an exception with a message saying you are not authorised. If you forget the “defaultcollection” on the URL, it will still fail, also with a message saying you are not authorised. That is, a similar but subtly different exception message.

And that is it. You can then query the collection using something like:

var service = currentCollection.GetService<WorkItemStore>();

var proj = service.Projects[0];
var allQueries = proj.StoredQueries;
for (int qcnt = 0; qcnt < allQueries.Count; qcnt++)
{
    var query = allQueries[qcnt];
    var queryDesc = string.Format("Query found named: {0}", query.Name);
}

You get the idea.
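Beyond iterating stored queries, you can also run ad-hoc WIQL directly against the WorkItemStore (the WIQL text below is illustrative only; substitute your own fields and criteria):

```csharp
var store = currentCollection.GetService<WorkItemStore>();

// WIQL is TFS's SQL-like work item query language.
var items = store.Query(
    "SELECT [System.Id], [System.Title] FROM WorkItems " +
    "WHERE [System.WorkItemType] = 'Bug' AND [System.State] = 'Active'");

foreach (WorkItem item in items)
{
    Console.WriteLine("{0}: {1}", item.Id, item.Title);
}
```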

If you search around, you will find references to the ServiceIdentityCredentialProvider which is referenced in this article. I had no luck with this method and it all looked too hard since it required an extra KB article and other magic sauce.

So I hope that helps. This article certainly would have helped me save a boat load of time and frustration.

CacheAdapter 2.5–Memcached revised
Thursday, May 17, 2012 10:14 AM

Note: For more information around the CacheAdapter, see my previous posts here, here, here and here

You may have noticed a number of updates to the CacheAdapter package on nuget as of late. These are all related to performance and stability improvements for the memcached component of the library.

However, it became apparent that the memcached support needed more performance to make the library truly useful and to perform as well as can be expected.

I started playing with optimising serialisation processes and really optimising the socket connections used to communicate with the memcached instance. As anybody who has worked with sockets before may tell you, you quickly start to look at pooling your connections to ensure you do not have to go about recreating a connection every time you want to communicate, especially if we are storing or retrieving items in a cache as this can be very frequent.

So I started implementing a pooling mechanism to increase performance. I improved it to a reasonable extent, but then hit a hurdle: performance could be increased further, but it was becoming harder and more complex to retain stability. I was lost in trying to fix a problem for which a solution had already existed for a long time.

It was about then I decided it best to simply take a dependency on the most excellent Enyim memcached client. It is fast. Really fast, and blew away everything I had done in terms of performance. So that is the essence of this update. The memcached support in the cache adapter comes courtesy of Enyim. Enjoy the extra speed that comes with it.

Note: There are no other updates to any of the other caching implementations.

CacheAdapter 2.4 – Bug fixes and minor functional update
Thursday, March 29, 2012 7:22 AM

Note: If you are unfamiliar with the CacheAdapter library and what it does, you can read all about its awesome ability to utilise memory, Asp.Net Web, Windows Azure AppFabric and memcached caching implementations via a single unified, simple to use API here and here.

The CacheAdapter library is receiving an update to version 2.4 and is currently available on Nuget here.

Update: The CacheAdapter has actually just had a minor revision to 2.4.1. This significantly increases the performance and reliability of the memcached scenario under more extreme loads. General to moderate usage won't see any noticeable difference though.


This latest version fixes a bug that is only present in the memcached implementation and only appears at rare, intermittent times (making it particularly hard to find). The bug caused a cache node to be removed from the farm when errors in deserialization of cached objects occurred, due to serialised data not being read from the stream in its entirety.

The code also contains enhancements to better surface serialization exceptions to aid in the debugging process. This is also specifically targeted at the memcached implementation. This is important when moving from something like memory or Asp.Web caching mechanisms to memcached where the serialization rules are not as lenient.

There are a few other minor bug fixes, code cleanup and a little refactoring.

Minor feature addition

In addition to this bug fix, many people have asked for a single setting to either enable or disable the cache. In this version, you can disable the cache by setting the IsCacheEnabled flag to false in the application configuration file. Something like the example below:

    <setting name="CacheToUse" serializeAs="String">
        <value>memcached</value>
    </setting>
    <setting name="DistributedCacheServers" serializeAs="String">
        <value>localhost:11211</value>
    </setting>
    <setting name="IsCacheEnabled" serializeAs="String">
        <value>false</value>
    </setting>

Your reasons to use this feature may vary (perhaps some performance testing or problem diagnosis). At any rate, disabling the cache will cause every attempt to retrieve data from the cache to result in a cache miss, returning null. If you are using the ICacheProvider with the delegate/Func<T> syntax to populate the cache, the delegate method will get executed every single time. For example, when the cache is disabled, the following delegate/Func<T> code will be executed every time:

var data1 = cacheProvider.Get<SomeData>("cache-key", DateTime.Now.AddHours(1), () =>
{
    // With the cache disabled, this data access code is executed on every attempt to
    // get this data via the CacheProvider.
    var someData = new SomeData() { SomeText = "cache example1", SomeNumber = 1 };
    return someData;
});

One final note: If you access the cache directly via the ICache instance, instead of the higher level ICacheProvider API, you bypass this setting and still access the underlying cache implementation. Only the ICacheProvider instance observes the IsCacheEnabled setting.

Thanks to those individuals who have used this library and provided feedback. If you have any suggestions or ideas, please submit them to the issue register on bitbucket (which is also where you can grab all the source code).

ASP.NET Web Api–Request/Response/Usage Logging
Sunday, February 26, 2012 8:26 PM

For part 1 of this series of blog posts on ASP.Net Web Api – see here.


In Part 2 of this blog post series, we deal with the common requirement of logging, or recording the usage of your Web Api. That is, recording when an Api call was made, what information was passed in via the Api call, and what response was issued to the consumer.

My new Api is awesome – but is it being used?

So you have a shiny new Api all built on the latest bits from Microsoft, the ASP.Net Web Api that shipped with the Beta of MVC 4. A common need for a lot of Apis is to log the usage of each call: to record what came in as part of the request, and also what response was sent to the consumer. This kind of information is really handy for debugging. Not just for you, but for your customers as well. Being able to backtrack over the history of Api calls to determine the full context of some problem for a consumer can save a lot of time and guesswork.

So, how do we do this with the Web Api?

Glad you asked.

Determining what kind of injection point to utilise

We have a few options when it comes to determining where to inject our custom classes/code to best intercept incoming and outgoing data. To log all incoming and outgoing data, the most applicable interception point is a System.Net.Http.DelegatingHandler. These classes are message handlers that apply to all requests and can be chained together, so multiple handlers can be registered within the message handling pipeline. For example, in addition to Api usage logging, you may want to provide a generic authentication handler that checks for the presence of some authentication key. I could have chosen to use filters, however these are typically scoped to the action itself. I could potentially use model binders, but these come relatively late in the processing cycle and are not generic enough (plus it would take some potentially unintuitive code to make them work as we would want).

Enough with theory, show me some code

So, our initial Api usage logger will inherit from the DelegatingHandler class (in the System.Net.Http namespace) and provide its own implementation.

public class ApiUsageLogger : DelegatingHandler
{
    protected override System.Threading.Tasks.Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, System.Threading.CancellationToken cancellationToken)
    {
        // your stuff goes here...
    }
}

We only really need to override one method, the ‘SendAsync’ method, which returns a Task. Task objects play heavily in the new Web Api and allow asynchronous processing to be used as the primary processing paradigm, allowing better scalability and processor utilisation.

Since everything is asynchronous via the use of Tasks, we need to ensure we play by the right rules and utilise the asynchronous capabilities of the Task class. A more fleshed out version of our method is shown below:

protected override System.Threading.Tasks.Task<HttpResponseMessage> SendAsync(HttpRequestMessage request, System.Threading.CancellationToken cancellationToken)
{
    // Extract the request logging information
    var requestLoggingInfo = ExtractLoggingInfoFromRequest(request);

    // Execute the request, this does not block
    var response = base.SendAsync(request, cancellationToken);

    // Log the incoming data to the database

    // Once the response is processed asynchronously, log the response data
    // to the database
    response.ContinueWith((responseMsg) =>
    {
        // Extract the response logging info then persist the information
        var responseLoggingInfo = ExtractResponseLoggingInfo(requestLoggingInfo, responseMsg.Result);
    });

    return response;
}

To step through the code:

  1. First, we extract relevant information from the request message.
  2. We execute the request asynchronously via the base class SendAsync method passing in what we received.
  3. We log the request information (perhaps to a database). Note that this operation could be performed asynchronously as well.
  4. We then define an anonymous function that the asynchronous request task should execute once it has completed. This is done using the ‘ContinueWith’ construct, passing in an instance of the response message.
  5. We return the ‘response’, which is really just a Task object that will process the request asynchronously and execute our response logging code once the request has completed processing.

Registering our Handler

We have created our custom DelegatingHandler, so we need to tell the framework about it.

The registration is done at application startup through access to the GlobalConfiguration object provided by the framework. In the global.asax.cs file, you will find the Application_Start method looking something like this:

protected void Application_Start()
{
    RegisterApis(GlobalConfiguration.Configuration);

    // ... other startup registration code elided ...
}

You can see the GlobalConfiguration.Configuration object (which is of type System.Web.Http.HttpConfiguration) being passed to the RegisterApis method. We then simply register our DelegatingHandler into the list of MessageHandlers on the configuration object, as in the following code:

public static void RegisterApis(HttpConfiguration config)
{
    config.MessageHandlers.Add(new ApiUsageLogger());
}

Note: If you have dependency injection set up for your Web Api project, you can either pass in the service resolver as a constructor argument when registering this handler (in the global.asax.cs, for example) like:

config.MessageHandlers.Add(new ApiUsageLogger(_myContainer));

or you can use that same resolver to resolve an instance of this DelegatingHandler when adding it to the MessageHandler collection, for example (assuming your container exposes a generic Resolve method):

config.MessageHandlers.Add(_myContainer.Resolve<ApiUsageLogger>());
I shall address some of the nuances of DI in a separate post.

And that is it. Well, that is all that is required to register a new handler that will get called for each request coming into your Api. Actually logging the information requires a little more work to ensure we play correctly in this nice asynchronous world of the WebApi.

But wait, there’s more

Playing nice in the asynchronous world of the WebApi requires a little bit of care. For example, to extract out request information, particularly the body of the request message, you might write a method like the following:

private ApiLoggingInfo ExtractLoggingInfoFromRequest(HttpRequestMessage request)
{
    var info = new ApiLoggingInfo();
    info.HttpMethod = request.Method.Method;
    info.UriAccessed = request.RequestUri.AbsoluteUri;
    info.IPAddress = HttpContext.Current != null ? HttpContext.Current.Request.UserHostAddress : "";

    ExtractMessageHeadersIntoLoggingInfo(info, request.Headers.ToList());
    if (request.Content != null)
    {
        var byteResponse = request.Content.ReadAsByteArrayAsync().Result;
        info.BodyContent = System.Text.UTF8Encoding.UTF8.GetString(byteResponse);
    }
    return info;
}

Notice the line that reads:

var byteResponse = request.Content.ReadAsByteArrayAsync().Result;

Here we are accessing the Result property from an asynchronous task in an attempt to make this code procedural and work in a synchronous manner. It is an easy thing to do, and looks like it makes sense. However, do not do this.

Generally, you should never access the ‘Result’ property of a Task unless you know that the task has completed, as this can cause deadlocks in ASP.Net. Yes, it is true. Sometimes this code may work, but if you don’t want to waste hours debugging deadlock issues in ASP.Net, I would advise against it.

So how do we know when it is complete? With the ‘ContinueWith’ construct of course. So your code may change to look more like this:

private ApiLoggingInfo ExtractLoggingInfoFromRequest(HttpRequestMessage request)
{
    var info = new ApiLoggingInfo();
    info.HttpMethod = request.Method.Method;
    info.UriAccessed = request.RequestUri.AbsoluteUri;
    info.IPAddress = HttpContext.Current != null ? HttpContext.Current.Request.UserHostAddress : "";

    ExtractMessageHeadersIntoLoggingInfo(info, request.Headers.ToList());
    if (request.Content != null)
    {
        request.Content.ReadAsByteArrayAsync()
            .ContinueWith((task) =>
            {
                info.BodyContent = System.Text.UTF8Encoding.UTF8.GetString(task.Result);
            });
    }

    return info;
}

In the above example, we can safely use the ‘Result’ property of the task since the method within the ‘ContinueWith’ block is only executed once the initial task is completed.

Wrap up

The above code fragments provide a simple and basic way of registering a custom interception point within the WebApi pipeline to perform a custom task, in this case usage logging.

The next series of posts will look at items such as the use of model binding, security and much more.

MVC4 and Web Api– make an Api the way you always wanted–Part 1
Saturday, February 18, 2012 11:33 AM

Update: Part 2 of this series is available here.

ASP.NET MVC has been a huge success as a framework, and just recently, ASP.NET MVC4 Beta was released. While there are many changes in this release, I want to specifically focus on the WebApi portion of it.

In previous flavours of MVC, many people wanted to develop REST Apis and didn’t really want to use WCF for this purpose. The team at Microsoft was progressing along with a library called WcfWebApi. This library used a very WCF’esque way of defining an Api. This meant defining an interface and decorating it with the appropriate WCF attributes to constitute your Api endpoints.

However, a lot of people don’t like to use a WCF style of programming, and are really comfortable in the MVC world. Especially when you can construct similar REST endpoints in MVC with extreme ease. This is exactly what a lot of people who wanted a REST Api did. They simply used ASP.NET MVC to define a route and handled the payload themselves via standard MVC controllers.

What the WcfWebApi did quite well though was things like content negotiation (do you want XML or json?), auto help generation, message/payload inspection, transformation, parameter population and a lot of other things.

Microsoft have recognised all this and decided to mix it all together: take the good bits of WcfWebApi and the Api development approach of MVC, and create an Api framework to easily expose your REST endpoints while retaining the MVC ease of development. This is the WebApi, and it supersedes the WcfWebApi (which will not continue to be developed).

So how do you make a REST sandwich now?

Well, best to start with the code.

Firstly, let’s define a route for our REST Api.

In the Global.asax, we may have something like this:

public static void RegisterApis(HttpConfiguration config)
{
    config.MessageHandlers.Add(new ApiUsageLogger());

    config.Routes.MapHttpRoute("contacts-route", "contacts", new { controller = "contacts" });
}

Ignore the MessageHandler line for now, we will get back to that in a later post.

You can see we are defining/mapping a Http route. The first parameter is the unique route name, and the second is the route template to use for processing these route requests. This means that when I get a request like http://MyHost/Contacts, this route will be matched. The third parameter specifies the default properties. In this case, I am simply stating that the controller to use is the ContactsController. This is shown below.

public class ContactsController : ApiController
{
    public List<ContactSummary> GetContacts()
    {
        // do stuff....
    }
}

You can see that we have a class that inherits from a new ApiController class. We have a simple controller action that returns a list of ContactSummary objects.

This is (while overly simplistic) a fully fledged REST Api that supports content negotiation and as such will respond with json or XML if it is requested by the client.

The WebApi fully supports your typical Http verbs such as GET, PUT, POST, DELETE etc… and in the example above, the fact that I have an ApiController action method prefixed with a ‘Get’ means it will support the GET verb. If I had a method prefixed with ‘Post’ then that action method, by convention, would support the POST verb. You can optionally decorate the methods with [System.Web.Http.HttpGet], [System.Web.Http.HttpPost] attributes to achieve the same effect.
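As a quick sketch of that convention (the PostContact method, its parameter and return value are hypothetical, added only to illustrate verb matching):

```csharp
public class ContactsController : ApiController
{
    // Bound to the GET verb purely by the 'Get' method name prefix.
    public List<ContactSummary> GetContacts()
    {
        return new List<ContactSummary>();
    }

    // Bound to POST by the 'Post' prefix; the attribute is the explicit
    // equivalent and is redundant here, shown only for illustration.
    [System.Web.Http.HttpPost]
    public HttpResponseMessage PostContact(ContactSummary contact)
    {
        return new HttpResponseMessage(System.Net.HttpStatusCode.OK);
    }
}
```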

And OData as well?

Want to support OData? Then simply return an IQueryable<T>. Your Web Api will support querying via the OData URL conventions.
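A minimal sketch of this (the static list is an assumption, standing in for your real data source):

```csharp
public class ContactsController : ApiController
{
    private static readonly List<ContactSummary> _contacts = new List<ContactSummary>();

    // Returning IQueryable<T> lets consumers use the OData URL conventions,
    // e.g. http://MyHost/Contacts?$top=5&$orderby=Name
    public IQueryable<ContactSummary> GetContacts()
    {
        return _contacts.AsQueryable();
    }
}
```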

So how do I access this as a consumer?

Accessing your shiny new WebApi can now be done via the HttpClient class which really emphasises everything asynchronous, and easily supports the popular Http verbs GET, PUT, POST, DELETE. Let’s take a look.

HttpClient client = new HttpClient();
client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/json"));
var responseMsg = client.GetAsync("http://SomeHost/Contacts").Result;

The responseMsg variable will contain a list of contacts, as shown in the Api method described earlier. You can see we are requesting the data in JSON format. This could also be XML by using:

HttpClient client = new HttpClient();
client.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/xml"));
var responseMsg = client.GetAsync("http://SomeHost/Contacts").Result;

The Http client has methods for using http verbs explicitly. Namely, GetAsync, PutAsync, PostAsync and DeleteAsync.
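For example, a POST and a DELETE in the same style as the GET calls above (the payload and URLs are illustrative only; System.Net.Http and System.Text namespaces are assumed):

```csharp
HttpClient client = new HttpClient();

// POST a JSON payload to the contacts endpoint.
var content = new StringContent("{\"Name\":\"Jane\"}", Encoding.UTF8, "application/json");
var postResponse = client.PostAsync("http://SomeHost/Contacts", content).Result;

// DELETE a specific resource.
var deleteResponse = client.DeleteAsync("http://SomeHost/Contacts/1").Result;
```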


Microsoft have recognised the value of the MVC programming model and incorporated the best of WcfWebApi into MVC for a really nice way of exposing your Api. You get a lot of features for free, making great Apis a hell of a lot easier to build.

You can get further information on the overall MVC4 release from this post by Jon Galloway.

In the next part to this post, I will explore how you can do model binding, dependency injection, and insert custom handlers into the MVC/Api processing pipeline.

ScriptHelper now a Nuget package, and managing your Javascript
Saturday, January 21, 2012 1:45 PM

For a while now I have been looking at different ways of managing javascript inclusion in web pages, as well as the dependencies that each script inclusion requires. Furthermore, when working with ASP.NET MVC and partial views, dealing with the ever increasing number of dependencies, and ensuring that each partial view has the scripts it requires, can be a little challenging. At the very least it is messy and tedious.

Ideally, I’d like every page or partial view to be able to express what scripts it requires and let the framework take care of removing duplication, minification etc.

So, after looking at the needs of our application, as well as the needs of others, I developed the ScriptHelper component, which is now available as a Nuget package. I had released an initial version some time ago; details are here.

This latest version has the following features:

  • Support for RequiresScriptsDeferred and RenderDeferred methods to allow you to specify script requirements as many times as you like, where ever you like and to have these requirements only rendered to the page when the RenderDeferred method is called. This makes it easy to include the RenderDeferred at the bottom of your master page so all deferred scripts are rendered then. These scripts can be minified and combined at this time as well.

So you can call RequiresScriptsDeferred in a page, and again somewhere else in the same page or in another partial view, and when RenderDeferred is eventually called, all the previously deferred scripts are combined, minified and rendered as one single file inclusion.
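A sketch of what that usage might look like (the exact helper syntax below is my approximation of the API; check the package ReadMe for the real signatures):

```csharp
// In any view or partial view: register requirements by friendly name
// (friendly names map to script files in ScriptDependencies.xml).
ScriptHelper.RequiresScriptsDeferred("jQuery");
ScriptHelper.RequiresScriptsDeferred("jQueryValidate");

// At the bottom of the master page/layout: render everything once,
// combined, minified and with duplicates removed.
ScriptHelper.RenderDeferredScripts();
```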

  • Support for .Less, so you can have variables and functions in your CSS. There is no need to use a .less extension, as .Less processing is automatically invoked, if it is enabled, the first time the file is required.

Note: Although the project source is titled as an MVC script helper, it can also be used within Webforms without issue as it is a simple static method with no reliance on anything from MVC. I just happened to start coding it with an MVC project in mind.

Additionally, the helper fully supports CSS and .less CSS semantics and syntax.

There are other frameworks that do similar things but I created this one for a few reasons:

  • The code is pretty small and simple. Very easy to modify and extend to suit your own purposes. It currently uses the Microsoft Ajax minifier. If you don’t like it, implement the “IScriptProcessingFilter” interface and replace the current minifier component with your own.
  • I liked the ability to express dependencies explicitly. The number of JS libraries to use grows every day and it can get tricky in large apps to easily see what is required and needed.
  • I didn’t find one that easily implemented the deferred loading scenario and took care of duplicates, minification, and .less support. Maybe there is one now, but I got started on it anyway. Or maybe there is one, but the implementation looked ugly. Either way, I wasn’t satisfied with the current offerings.
  • I like coding stuff. :)

Download the Nuget package from here.

Download the source from here.

For full documentation on ScriptHelper (minus the new features here), see this post.

A ReadMe.txt file is included in the package with all configuration details.

But .Net 4.5 will include a bundling facility to address some of this. Why would I use this?

See Scott Gu’s blog post for more info on this feature.

Well, of course you don't have to, and if you are happy with the default way bundling works, it is probably worth sticking with that. I mean, you don’t want to include extra libraries if you don’t have to.

However, the bundling facility expresses dependencies programmatically in code. It is ok, but I prefer the XML file; it is easier to read and define IMO. Sure, the code isn’t that hard to read, but it could be anywhere, and it takes a bit more analysis to see all of the dependencies any one component may need.

There is no support for .less files out of the box. I am sure this won’t be far off, as adding .less support is pretty trivial. However, it is not there by default.

Bundling will include what you tell it to when you tell it to. This is good, but I love the idea of views or pages being able to express their dependencies without worry of script duplication. The deferred script loading feature of the ScriptHelper was one of my main reasons for developing this. If you don’t use deferred script loading/inclusion, then ScriptHelper will assume you know what you are doing and include the script file immediately as per bundling.

The bundling feature requires .Net 4.5. If you are bound to lower versions of the framework, this is a blocker. ScriptHelper is compiled against .Net 4, however there is nothing in it that really relies on .Net 4 specifics. If you were targeting .Net 1 it could prove a little tricky, since generics and a few other features are used, but migrating/compiling against lower versions should not be very hard.


Managing your Javascript

So how does this help manage your javascript in your projects?


Normally, partial views and other pages rely on knowledge of what scripts are included via the master page, and what is also included on the content page itself. A partial view may include its own scripts, provided those scripts are not included elsewhere, to prevent duplicate script files being loaded.

Using ScriptHelper and deferred scripts, your partial view can include all the dependencies it needs. The partial view uses the “RequiresScriptsDeferred” method to express its dependencies. At some point, typically at the bottom of your master page/layout, you call “RenderDeferredScripts”, at which point all the required scripts are rendered. Typically this is used with script combination and minification so that only one include is rendered. All duplicate inclusions are ignored, all files are combined and minified, and the .less filter is run over them if required. A single script include is generated, and you’re done. If the master page/layout has already expressed a dependency (such as jQuery, for example) and the partial view also expresses that dependency, the duplicate is ignored.

This means each partial view, component or page can express all its dependencies that it requires without fear of duplication.

Script Dependencies

The way ScriptHelper works is by using a friendly name to group one or more script files together in the ScriptDependencies.xml file. How these are grouped is up to you. You can group by component (jQuery, validation, etc…), by page, or anything else. There only needs to be one reference to the explicit name of your script file; everything else is by friendly name. So when a version number of a script changes, potentially changing the name of the script file, you only need to change the file name in one place.

In addition, the dependencies for scripts in the XML file are listed explicitly. There is no guesswork as to which scripts require what in terms of dependencies, since it is all listed for easy identification.

Finally, if you like having version identifiers on your query strings to assist with browser caching, this can also be expressed in the ScriptDependencies.xml file. This means you can have a script with http://host/script.js?v=123 where the v=123 is the version identifier. The ‘v’ and the ‘123’ are all declared in the XML file which is easily updated manually or via a build script.


So if you find yourself in Javascript inclusion and dependency hell, give this library a shot, it may help.
