March 2011 - Posts

In my last post, I showed how the new WCF Web APIs can be integrated with AppFabric to push custom events into the AppFabric tracking database. A great thing about the monitoring infrastructure in AppFabric is that it uses ETW as the mechanism for publishing the events, so your services are not hitting the database directly with all the performance penalties that database calls imply.

If you look at how the monitoring database for AppFabric is implemented in SQL Server, there is a view, “ASWcfEvents”, that we can use to get all the tracking information for the events generated by WCF (this also includes any custom event we injected in the service). As part of those events, we can get trace information or details about any exception generated in the service.


When looking at the data returned by the view, there are two columns we will find particularly interesting, “EventSourceId” and “E2EActivityId”. The first column identifies the service that generated the entry, and the second one the specific instance of that service.

This information is really valuable for developers when troubleshooting issues, but the problem is that they need access to the server where the AppFabric database is hosted in order to retrieve it. This is not always possible for many reasons (the service is hosted on a remote server, or the devs don’t have permissions on that database, for example). Something we can do to make life easier for developers is to expose this data as a read-only OData feed that can be queried. I stuck to WCF Data Services for this implementation rather than the new “QueryComposition” support in the WCF Web APIs because WCF Data Services offers some useful features out of the box. One of them is server paging, which limits the number of entries returned in the OData feed when the client application does not specify any paging requirements (we don’t want to return the whole database with a simple query).

The first thing we need to do is to create an EF model for exposing the database views we want to use.


I renamed “ASWcfEvents” to “TraceEvent” and “ASEventSources” to “EventSource” (the latter contains information about the WCF service that generated the entry, such as the name, the computer, and the virtual path where the service is hosted).

Once we have the EF model, we can expose it as an OData feed with a WCF Data Service, as shown below:

public class TraceDataService : DataService&lt;TraceDataSource&gt;
{
    public static void InitializeService(DataServiceConfiguration config)
    {
        config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
        config.SetServiceOperationAccessRule("*", ServiceOperationRights.AllRead);
        config.SetEntitySetPageSize("TraceEvents", 50);
        config.DataServiceBehavior.AcceptCountRequests = true;
        config.DataServiceBehavior.AcceptProjectionRequests = true;
        config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
        config.UseVerboseErrors = true;
    }

    [WebGet]
    public IQueryable&lt;TraceEvent&gt; all()
    {
        var virtualPath = HostingEnvironment.ApplicationVirtualPath;

        var eventSources = this.CurrentDataSource.EventSources
            .Where(e =&gt; e.ApplicationVirtualPath.StartsWith(virtualPath))
            .Select(e =&gt; e.Id);

        var events = this.CurrentDataSource.TraceEvents
            .Where(t =&gt; eventSources.Contains(t.EventSourceId));

        return events;
    }
}

As you can see, I configured the data service to make all the entity sets read-only and set the server paging for “TraceEvents” to 50 records. I also implemented a service operation to filter the events down to the services running in the same virtual directory as the data service.

So far so good: we have exposed all the AppFabric monitoring data as an OData feed. We still need to give developers a way to correlate their service calls with the generated entries in the monitoring database. As I said before, “E2EActivityId” is the column that identifies the service instance, and fortunately that identifier is easy to get from code by using the Trace Correlation Manager in the .NET Diagnostics API.
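As a quick standalone illustration (a minimal sketch, not the service code itself), the identifier can be read through Trace.CorrelationManager; the helper below creates a new id when none has been set, so the example also runs outside a WCF call:

```csharp
using System;
using System.Diagnostics;

class ActivityIdExample
{
    // Reads the current end-to-end activity id from the Trace Correlation
    // Manager. Inside a WCF call the runtime sets this for you; here we
    // create one when it is empty so the snippet is self-contained.
    public static Guid GetCurrentActivityId()
    {
        if (Trace.CorrelationManager.ActivityId == Guid.Empty)
        {
            Trace.CorrelationManager.ActivityId = Guid.NewGuid();
        }
        return Trace.CorrelationManager.ActivityId;
    }

    static void Main()
    {
        Console.WriteLine(GetCurrentActivityId());
    }
}
```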


We can now inject that identifier into any service response as an HTTP header so developers can use it to query the OData feed and get the information they need.

[WebGet(UriTemplate = "")]
public IEnumerable&lt;Order&gt; Get(HttpResponseMessage response)
{
    this.logger.WriteInformation("Orders Requested", "All Orders");
    response.AddActivityHeader(this.logger.ActivityId.ToString());
    return repository.All;
}

AddActivityHeader in the code above is an extension method I added to the HttpResponseMessage class. That extension method simply adds a custom HTTP header to the response.

public static class HttpResponseExtensions
{
    public static void AddActivityHeader(this HttpResponseMessage response, string activityId)
    {
        response.Headers.AddWithoutValidation("X-Trace-ActivityId", activityId);
    }
}

The logger implementation is the same one I used in my previous post to inject custom events into the AppFabric monitoring database. I added a new ActivityId property to that implementation so we can easily test our services.

After running a service with that code, a new HTTP header, “X-Trace-ActivityId”, will be added to the response:

HTTP/1.1 200 OK
Cache-Control: private
Content-Length: 609
Content-Type: text/xml
Server: Microsoft-IIS/7.5
X-Trace-ActivityId: 4e12a0d0-f722-4082-b09d-47219f3c43bf
X-AspNet-Version: 4.0.30319
Set-Cookie: ASP.NET_SessionId=kx5ey2pgsxr3qmm4fymkuxjg; path=/; HttpOnly
X-Powered-By: ASP.NET
Date: Fri, 18 Mar 2011 16:16:03 GMT

We can use that identifier against the OData feed to retrieve all the events for that running instance, as shown below:

http://localhost/WebApisMonitoring/traces/TraceEvents()?$filter=E2EActivityId eq '4e12a0d0-f722-4082-b09d-47219f3c43bf'

The complete example is available to download from here.

Posted by cibrax

There is no doubt that the MVVM pattern offers a clean separation of concerns for building testable user interfaces with WPF and Silverlight. This pattern relies on the data binding support in those two technologies for mapping an existing model class (the view model) to the different parts of the UI, or view.

Some would say that if you want to develop more testable and maintainable applications that can evolve in the long run, MVVM is definitely the way to go in Silverlight. But wait, is that a good decision to make when something that should take a couple of hours takes a day with MVVM instead?

In my case, I always want to do things right, so MVVM was the approach I decided to use in some ongoing developments with Silverlight. I had used MVVM in the past with WPF with no problems, so I thought it would be the same on Silverlight.

After having worked around six months with the technology, I have to admit that I got frustrated with many of the limitations I found when implementing MVVM with many of the existing controls. The way I see it, many of them were developed to support simple RAD scenarios with code-behind, but not scenarios with data templates and bindings.

I wrote a couple of posts in the past with some workarounds to support data binding in some of the existing controls (TreeView and ContextMenu), which definitely were not trivial to find and required many hours of research and testing to understand how things worked under the hood. To give an example of how simple things can become really hard to accomplish with MVVM, I recently came across this thread about implementing MVVM with the Tab control while trying to do the same on my side. Using a converter for binding a model to the control was not the answer I expected to hear, and it sounded like a dirty hack to me. The same thing can be achieved in WPF with data templates, which is a more natural way to do things in this model.

The only thing I can say is that MVVM works well for simple scenarios and common controls like a dropdown list, but you will probably have a hard time trying to use this pattern in Silverlight unless you know exactly what the limitations are and the different workarounds you can use.

Posted by cibrax | 1 comment(s)

One of the key aspects of how the web works today is content negotiation. The idea behind content negotiation is that a single resource can have multiple representations, so user agents (or clients) and servers can work together to choose one of them.

The HTTP specification defines several “Accept” headers that a client can use to negotiate content with a server, and among those there is one for restricting the set of natural languages that are preferred as a response to a request, “Accept-Language”. For example, a client can send “es” in this header to indicate that it prefers to receive the content in Spanish, or “en” for English.
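To illustrate the client side of that negotiation, here is a minimal sketch using the System.Net.Http types (assuming the released API shape, which may differ slightly from the 2011 preview bits); it builds, but does not send, a request asking for Spanish content:

```csharp
using System;
using System.Linq;
using System.Net.Http;
using System.Net.Http.Headers;

class AcceptLanguageExample
{
    // Builds a GET request for the product resource with the given
    // preferred language in the Accept-Language header.
    public static HttpRequestMessage BuildRequest(string language)
    {
        var request = new HttpRequestMessage(HttpMethod.Get,
            "http://localhost/ProductCatalog/Products/1");
        request.Headers.AcceptLanguage.Add(new StringWithQualityHeaderValue(language));
        return request;
    }

    static void Main()
    {
        var request = BuildRequest("es");
        Console.WriteLine(request.Headers.AcceptLanguage.First().Value); // es
    }
}
```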

However, there are certain scenarios where the “Accept-Language” header is just not enough, and you might want a way to pass the “accepted” language as part of the resource URL, as an extension. For example, http://localhost/ProductCatalog/Products/” returns all the descriptions for the product with id “1” in Spanish. This is useful for scenarios in which you want to embed the link somewhere, such as a document, an email, or a page.

Supporting both scenarios, the header and the URL extension, is really simple with the new WCF programming model. You only need to provide a processor implementation for each of them.

Let’s say I have a resource implementation as part of a product catalog I want to expose with the WCF Web APIs.

public class ProductResource
{
    IProductRepository repository;

    public ProductResource(IProductRepository repository)
    {
        this.repository = repository;
    }

    [WebGet(UriTemplate = "{id}")]
    public Product Get(string id, HttpResponseMessage response)
    {
        var product = repository.GetById(int.Parse(id));
        if (product == null)
        {
            response.StatusCode = HttpStatusCode.NotFound;
            response.Content = new StringContent(Messages.OrderNotFound);
        }
        return product;
    }
}

The Get method implementation in this resource assumes the desired culture will be attached to the current thread (Thread.CurrentThread.CurrentCulture). Another option is to pass the desired culture as an additional argument of the method, so my processor implementation will handle both options. This method also uses an auto-generated class for handling string resources, Messages, which is available in the different cultures that the service implementation supports. For example,

Messages.resx contains “OrderNotFound”: “Order Not Found”
Messages.es.resx contains “OrderNotFound”: “No se encontro orden”
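To make the lookup mechanism concrete, here is a dictionary-backed stand-in for the resx-generated class (the member names mirror the article; the real generated code uses a ResourceManager under the hood to pick the resource set matching the Culture property):

```csharp
using System;
using System.Collections.Generic;
using System.Globalization;

// Dictionary-backed stand-in for the resx-generated Messages class;
// the dictionaries below fake the per-culture resource lookup.
public static class Messages
{
    public static CultureInfo Culture { get; set; }

    static readonly Dictionary<string, string> En =
        new Dictionary<string, string> { { "OrderNotFound", "Order Not Found" } };
    static readonly Dictionary<string, string> Es =
        new Dictionary<string, string> { { "OrderNotFound", "No se encontro orden" } };

    public static string OrderNotFound
    {
        get
        {
            var culture = Culture ?? CultureInfo.CurrentCulture;
            // Fall back to English for any culture we have no resources for.
            var set = culture.TwoLetterISOLanguageName == "es" ? Es : En;
            return set["OrderNotFound"];
        }
    }
}

class MessagesDemo
{
    static void Main()
    {
        Messages.Culture = new CultureInfo("es");
        Console.WriteLine(Messages.OrderNotFound);
    }
}
```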

The processor implementation below tackles the first scenario, in which the desired language is passed as part of the “Accept-Language” header.

public class CultureProcessor : Processor&lt;HttpRequestMessage, CultureInfo&gt;
{
    string defaultLanguage = null;

    public CultureProcessor(string defaultLanguage = "en")
    {
        this.defaultLanguage = defaultLanguage;
        this.InArguments[0].Name = HttpPipelineFormatter.ArgumentHttpRequestMessage;
        this.OutArguments[0].Name = "culture";
    }

    public override ProcessorResult&lt;CultureInfo&gt; OnExecute(HttpRequestMessage request)
    {
        CultureInfo culture = null;
        if (request.Headers.AcceptLanguage.Count &gt; 0)
        {
            var language = request.Headers.AcceptLanguage.First().Value;
            culture = new CultureInfo(language);
        }
        else
        {
            culture = new CultureInfo(defaultLanguage);
        }

        Thread.CurrentThread.CurrentCulture = culture;
        Messages.Culture = culture;

        return new ProcessorResult&lt;CultureInfo&gt;
        {
            Output = culture
        };
    }
}
As you can see, the processor initializes a new CultureInfo instance with the value provided in the “Accept-Language” header, and sets that instance on the current thread and on the auto-generated resource class with all the messages. In addition, the CultureInfo instance is returned as an output argument called “culture”, making it possible to receive that argument in any method implementation.

The following code shows the implementation of the processor for handling languages as URL extensions.
public class CultureExtensionProcessor : Processor&lt;HttpRequestMessage, Uri&gt;
{
    public CultureExtensionProcessor()
    {
        this.OutArguments[0].Name = HttpPipelineFormatter.ArgumentUri;
    }

    public override ProcessorResult&lt;Uri&gt; OnExecute(HttpRequestMessage httpRequestMessage)
    {
        // Uri.Query includes the leading "?" when a query string is present
        var query = httpRequestMessage.RequestUri.Query;
        var requestUri = httpRequestMessage.RequestUri.OriginalString;
        if (query.Length &gt; 0)
        {
            requestUri = requestUri.Substring(0, requestUri.Length - query.Length);
        }

        var extensionPosition = requestUri.LastIndexOf(".");
        if (extensionPosition &gt; -1)
        {
            var extension = requestUri.Substring(extensionPosition + 1);
            var uri = new Uri(requestUri.Substring(0, extensionPosition) + query);
            httpRequestMessage.Headers.AcceptLanguage.Add(new StringWithQualityHeaderValue(extension));

            var result = new ProcessorResult&lt;Uri&gt;();
            result.Output = uri;
            return result;
        }
        return new ProcessorResult&lt;Uri&gt;();
    }
}
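The rewriting step above is plain string manipulation, so it can be unit-tested in isolation. Here is a hypothetical standalone version (the helper name is mine, not part of the WCF API) that also guards against dots appearing outside the last path segment, such as in the host name:

```csharp
using System;

class UrlExtensionExample
{
    // Splits a trailing ".xx" language extension off a request URL, returning
    // the language and the rewritten URL with the query string preserved.
    public static bool TryStripLanguageExtension(string url, out string language, out string rewritten)
    {
        var query = new Uri(url).Query; // includes the leading "?" when present
        var withoutQuery = query.Length > 0
            ? url.Substring(0, url.Length - query.Length)
            : url;

        var extensionPosition = withoutQuery.LastIndexOf('.');
        // The "." must sit in the last path segment, otherwise a dot in the
        // host name (or an earlier segment) would be mistaken for an extension.
        if (extensionPosition > withoutQuery.LastIndexOf('/'))
        {
            language = withoutQuery.Substring(extensionPosition + 1);
            rewritten = withoutQuery.Substring(0, extensionPosition) + query;
            return true;
        }

        language = null;
        rewritten = url;
        return false;
    }

    static void Main()
    {
        string language, rewritten;
        TryStripLanguageExtension("http://localhost/ProductCatalog/products/1.es?full=true",
            out language, out rewritten);
        Console.WriteLine(language + " -> " + rewritten);
    }
}
```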

The last step is to inject both processors as part of the service configuration, as shown below:

public void RegisterRequestProcessorsForOperation(HttpOperationDescription operation, IList&lt;Processor&gt; processors, MediaTypeProcessorMode mode)
{
    processors.Insert(0, new CultureExtensionProcessor());
    processors.Add(new CultureProcessor());
}

Once you have configured the two processors in the pipeline, your service will start speaking different languages :).

Note: URL extensions don’t seem to work in the current bits when you use them on a base address. As far as I could see, ASP.NET intercepts the request first and tries to route it to a registered ASP.NET HTTP handler for that extension. For example, “http://localhost/ProductCatalog/” does not work, but “http://localhost/ProductCatalog/products/” does.

Posted by cibrax

I am happy to announce today a new addition to our SO-Aware service repository toolset: SO-Aware Test Workbench, a WPF desktop application for doing functional and load testing against existing WCF services.

This tool is completely integrated with the SO-Aware service repository, which makes configuring new load and functional tests for WCF SOAP and REST services a breeze. From now on, the service repository can play a very important role in an organization by facilitating collaboration between developers and testers.


Developers can create and register new services in the repository with all the related artifacts, like configuration. Testers, on the other hand, can just pick one of the existing services in the repository and create functional or load tests from there, with no need to deal with specific details of the service implementation, location, or configuration settings. Developers and testers can later use the results of those tests to modify the services or adjust different settings on the tests or the service configuration.

Gustavo Machado, one of the developers behind this project, has written an excellent post describing all the functionality you can find today in the tool. You can also see the tool in action in this Endpoint TV episode with Jesus and Ron Jacobs.

Posted by cibrax

The other day, Ron Jacobs published a template in the Visual Studio Gallery for adding monitoring capabilities to any existing WCF HTTP service hosted in Windows AppFabric. I thought it would be a cool idea to reuse some of that for doing the same thing on the new WCF Web HTTP stack.

Windows AppFabric provides a dashboard that you can use to dig into some metrics about service usage, such as the number of calls, errors, or information about different events during a service call. Those events not only include information about the WCF pipeline, but also custom events that any developer can inject and that make sense for troubleshooting issues.


These monitoring capabilities can be enabled on any specific IIS virtual directory by using the AppFabric configuration tool or by adding the following configuration sections to your existing web app:

<system.serviceModel>
    <serviceHostingEnvironment aspNetCompatibilityEnabled="true" multipleSiteBindingsEnabled="true" />
    <diagnostics etwProviderId="3e99c707-3503-4f33-a62d-2289dfa40d41">
        <endToEndTracing propagateActivity="true" messageFlowTracing="true" />
    </diagnostics>
    <behaviors>
        <serviceBehaviors>
            <behavior name="">
                <etwTracking profileName="EndToEndMonitoring Tracking Profile" />
            </behavior>
        </serviceBehaviors>
    </behaviors>
</system.serviceModel>
<microsoft.applicationServer>
    <monitoring>
        <default enabled="true" connectionStringName="ApplicationServerMonitoringConnectionString" monitoringLevel="EndToEndMonitoring" />
    </monitoring>
</microsoft.applicationServer>

The bad news is that none of the configuration above can easily be set in code with the new configuration model for the WCF Web stack. The good news is that you can easily disable it in the configuration when you no longer need it, and that it uses ETW, a general-purpose and high-speed tracing facility provided by the operating system (it’s part of the Windows kernel).

By adding that configuration section, AppFabric will automatically start monitoring your service and providing some basic event information about the service calls. You need some custom code to inject custom events into the monitoring data.

What I did here is to copy and refactor the “WCFUserEventProvider” class provided as a sample in Ron’s template to make it more TDD friendly when using IoC. I created a simple interface, “ILogger”, that any service (or resource) can use to inject custom events or monitoring information into the AppFabric database.

public interface ILogger
{
    bool WriteError(string name, string format, params object[] args);
    bool WriteWarning(string name, string format, params object[] args);
    bool WriteInformation(string name, string format, params object[] args);
}

The “WCFUserEventProvider” class implements this interface, making it possible to send the events to the AppFabric monitoring database. The service or resource implementation can receive an “ILogger” as part of its constructor.

public class OrderResource
{
    IOrderRepository repository;
    ILogger logger;

    public OrderResource(IOrderRepository repository, ILogger logger)
    {
        this.repository = repository;
        this.logger = logger;
    }

    [WebGet(UriTemplate = "{id}")]
    public Order Get(string id, HttpResponseMessage response)
    {
        var order = this.repository.All.FirstOrDefault(o =&gt; o.OrderId == int.Parse(id, CultureInfo.InvariantCulture));
        if (order == null)
        {
            response.StatusCode = HttpStatusCode.NotFound;
            response.Content = new StringContent("Order not found");
        }
        this.logger.WriteInformation("Order Requested", "Order Id {0}", id);
        return order;
    }
}

The example above uses MEF as the IoC container for injecting a repository and the logger implementation into the service. You can also see how the logger is used to write an information event to the monitoring database.
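Since the resource depends only on the ILogger interface, tests can swap the ETW-backed implementation for an in-memory one. Here is a hypothetical sketch of such a test double (it is mine, not part of the sample code; the interface is repeated so the snippet compiles on its own):

```csharp
using System;
using System.Collections.Generic;

// The ILogger contract from the article.
public interface ILogger
{
    bool WriteError(string name, string format, params object[] args);
    bool WriteWarning(string name, string format, params object[] args);
    bool WriteInformation(string name, string format, params object[] args);
}

// In-memory implementation for unit tests: it records the events in a list
// instead of pushing them to ETW, so test assertions can inspect them.
public class InMemoryLogger : ILogger
{
    public readonly List<string> Entries = new List<string>();

    bool Write(string level, string name, string format, object[] args)
    {
        Entries.Add(string.Format("{0}|{1}|{2}", level, name, string.Format(format, args)));
        return true;
    }

    public bool WriteError(string name, string format, params object[] args)
    {
        return Write("Error", name, format, args);
    }

    public bool WriteWarning(string name, string format, params object[] args)
    {
        return Write("Warning", name, format, args);
    }

    public bool WriteInformation(string name, string format, params object[] args)
    {
        return Write("Information", name, format, args);
    }
}

class LoggerDemo
{
    static void Main()
    {
        var logger = new InMemoryLogger();
        logger.WriteInformation("Order Requested", "Order Id {0}", 1);
        Console.WriteLine(logger.Entries[0]); // Information|Order Requested|Order Id 1
    }
}
```

A test would pass this logger to the OrderResource constructor and assert on the recorded entries after calling Get.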

The following image illustrates how the custom event is injected and the information becomes available for any user in the dashboard.


An issue that you might run into, and that I hope the WCF and AppFabric teams fix soon, is that any WCF service that uses friendly URLs with ASP.NET routing does not get listed as an available service in the WCF services tab of the AppFabric console.

The complete example is available to download from here.

Posted by cibrax | 2 comment(s)