Apache Cordova is one of those projects that recently caught my attention for developing mobile applications. This project, previously known as PhoneGap, was donated by Adobe to the Apache Foundation to become a new and attractive open source alternative for developing mobile applications. If you haven’t heard of it before, it basically provides the required infrastructure to run native applications on different mobile platforms such as iOS, Android, and Windows Phone 7 using a hybrid approach with an embedded browser. There is a thin native layer that provides access to different native features on the phone through a standard object model in JavaScript, so developers can write their applications using HTML 5 and access those features through that model. Therefore, the two most important components in this project are the native layer and the JavaScript component model, which are supported across the different platforms.

The Microsoft Interoperability team collaborated as part of the project to support Windows Phone 7 as another available platform, which was announced by Abu Obeida in December last year. Another major announcement these days was the support for a Metro theme in jQuery Mobile, which gives applications written in HTML 5 for this platform the same look and feel as native applications.

If you want to know more about the platform, there is an excellent introductory article written by Colin Eberhardt for MSDN Magazine, “Develop HTML Windows Phone Apps with Apache Cordova”.

One of the things I noticed in the project is that a project template for Visual Studio is available for creating the initial application. However, you need to manually deploy that template into the right folder to make it available in Visual Studio, which might be an error-prone task. I decided to make a little contribution to the project and write a Visual Studio extension (a vsix package) to automatically deploy the template. I am not sure yet whether it is going to be approved or not, but I think it is a nice thing to have for easing adoption. It is part of the fork associated to my user in case you want to use it.

Posted by cibrax

A common scenario for many web applications running in the cloud is to integrate with existing systems through web services (no matter the messaging style they use). Although an SLA is typically used in these scenarios as an agreement between the two parties to assure a certain level of availability, many things can still fail. Therefore, it is always a good idea to have a mechanism in place to handle any possible error condition and retry the execution when that is possible.

As an example, you could have a web application that calls an online CRM system (like Salesforce.com or MS Dynamics) to allow users to report incidents. In that scenario, we cannot assume every call to the CRM system will succeed. On the other hand, this kind of call does not require an immediate response for the user, so it can be scheduled for later execution and retried if something unexpected happens.

Persistent storage for the messages, like a queue, is always a good choice for decoupling clients from services, and it also accomplishes the goal previously discussed. When you move to Windows Azure, there are two different queue offerings: Queues as part of the Storage service, and Queues (or Topics) as part of the Service Bus.

The Service Bus SDK provides a programming model on top of WCF for consuming or sending messages to the queues, which makes this solution very appealing for this scenario. The Service Bus also provides Topics, which are a specific kind of queue that supports subscriptions.
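
Creating one of these queues from code with the NamespaceManager class in the Service Bus SDK could look like the following sketch (the namespace, queue name and credentials are placeholders),

var serviceUri = ServiceBusEnvironment.CreateServiceUri("sb", "yournamespace", string.Empty);
var tokenProvider = TokenProvider.CreateSharedSecretTokenProvider("owner", "yourIssuerSecret");
var namespaceManager = new NamespaceManager(serviceUri, tokenProvider);
 
// Create the input queue used by the web application if it does not exist yet
if (!namespaceManager.QueueExists("crminput"))
{
    namespaceManager.CreateQueue("crminput");
}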

By using Service Bus Queues, the calls to any third-party service can eventually be wrapped up in WCF services that are invoked by the web application. The following image illustrates the possible architecture,

[Image: queue-based architecture in which the web application and the WCF services exchange messages through Service Bus Queues]

One of the core classes in the WCF programming model for Service Bus Queues is BrokeredMessage. This class represents a message that can be sent to or received from an existing queue, and contains a lot of standard properties such as MessageId, ReplyTo, SessionId or Label, to name a few. It also contains a dictionary for custom properties that an application can assign to the message.

In the example above, the MessageId property can be used to correlate the input and output messages in the queue and update the corresponding result in the web application. The WCF service can also use the ReplyTo property, which represents the queue name to which the response should go. Sessions are another concept supported in Service Bus Queues; they are useful for correlating a set of messages as a batch for processing. This is optional and requires the queue to support sessions when it is created.
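
If you want a queue to support sessions, that has to be specified when the queue is created; a minimal sketch, reusing the namespaceManager instance from the previous snippet,

// Sketch: create the queue with session support enabled
namespaceManager.CreateQueue(new QueueDescription("crminput") { RequiresSession = true });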

In WCF, the BrokeredMessage is assigned to the service call with a message property, BrokeredMessageProperty, as illustrated below,

var channelFactory = new ChannelFactory<ICRMServiceClient>("crminput");
var clientChannel = channelFactory.CreateChannel();
 
// Use the OperationContextScope to create a block within which to access the current OperationContext
using (var scope = new OperationContextScope((IContextChannel)clientChannel))
{
    // Create a new BrokeredMessageProperty object
    var property = new BrokeredMessageProperty();
 
    // Use the BrokeredMessageProperty object to set the BrokeredMessage properties
    property.Label = "Incident";
    property.MessageId = Guid.NewGuid().ToString();
    property.ReplyTo = "sb://xxx.servicebus.windows.net/input";
 
    // Add BrokeredMessageProperty to the OutgoingMessageProperties bag provided 
    // by the current Operation Context 
    OperationContext.Current.OutgoingMessageProperties.Add(BrokeredMessageProperty.Name, property);
    
    //Do the service call here
 
 
}

On the service side, the BrokeredMessageProperty instance can be retrieved in a similar way

var incomingProperties = OperationContext.Current.IncomingMessageProperties;
var property = incomingProperties[BrokeredMessageProperty.Name] as BrokeredMessageProperty;

Another important feature in the programming model for supporting execution retries in our example is the ReceiveContext. By decorating the WCF service contract with the ReceiveContextEnabled attribute, we can manually specify whether the operation was successfully completed or not. If the operation was not completed, the message will remain in the queue and the operation will be executed again next time.

[ServiceContract()]
public interface ICRMService
{
    [OperationContract(IsOneWay=true)]
    [ReceiveContextEnabled(ManualControl = true)]
    void CreateCustomer(Customer customer);
}
 
The following code shows what the operation implementation looks like,
 
var incomingProperties = OperationContext.Current.IncomingMessageProperties;
var property = incomingProperties[BrokeredMessageProperty.Name] as BrokeredMessageProperty;
 
//Complete the Message
ReceiveContext receiveContext;
if (ReceiveContext.TryGet(incomingProperties, out receiveContext))
{
   //Do Something                
   receiveContext.Complete(TimeSpan.FromSeconds(10.0d));
}
else
{
   throw new InvalidOperationException("...");
}

As you can see, the ReceiveContext instance is explicitly marked as complete, or implicitly left incomplete when an exception is thrown.
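
The ReceiveContext also exposes an Abandon method in case you prefer to explicitly return the message to the queue instead of relying on an exception bubbling up. A small sketch of that variation (the try/catch structure and time-outs are assumptions, not part of the original sample),

ReceiveContext receiveContext;
if (ReceiveContext.TryGet(OperationContext.Current.IncomingMessageProperties, out receiveContext))
{
    try
    {
        //Do Something
        receiveContext.Complete(TimeSpan.FromSeconds(10.0d));
    }
    catch (Exception)
    {
        // Explicitly return the message to the queue so the operation can be retried later
        receiveContext.Abandon(TimeSpan.FromSeconds(10.0d));
        throw;
    }
}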

The assemblies for the Service Bus SDK are available as part of a NuGet package, “Windows Azure Service Bus”. As part of the package registration, all the required configuration extensions for WCF, such as custom bindings and behaviors, are also added to the application configuration file.

NetMessagingBinding is the one you need to use for sending or receiving messages from a queue. That binding constantly polls the queue to detect new messages, and activates the WCF service when a new message arrives. For that reason, you need to keep the web application’s app pool running all the time. This can be accomplished in Windows Azure with a simple approach like the one mentioned by Christian Weyer in this post.
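
In case you prefer wiring up the client channel in code rather than through the configuration added by the NuGet package, a sketch could look like this (the namespace, queue path and credentials are placeholders),

var binding = new NetMessagingBinding();
var address = new EndpointAddress("sb://yournamespace.servicebus.windows.net/crminput");
var tokenBehavior = new TransportClientEndpointBehavior
{
    TokenProvider = TokenProvider.CreateSharedSecretTokenProvider("owner", "yourIssuerSecret")
};
 
var channelFactory = new ChannelFactory<ICRMServiceClient>(binding, address);
channelFactory.Endpoint.Behaviors.Add(tokenBehavior);
 
var clientChannel = channelFactory.CreateChannel();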

Posted by cibrax

Moving to the cloud can represent a big challenge for many organizations when it comes to reusing existing infrastructure. For applications that drive existing business processes in the organization, reusing IT assets like Active Directory represents a good part of that challenge. Think, for example, of a new mobile web application that sales representatives can use for interacting with an existing CRM system in the organization.

In the case of Windows Azure, the Access Control Service (ACS) already provides some integration with ADFS through WS-Federation. That means any organization can create a new trust relationship between the STS running in the ACS and the STS running in ADFS. As the following image illustrates, the ADFS instance running in the organization has to be somehow exposed outside the network boundaries to talk to the ACS. This is usually accomplished through an ADFS proxy running in a DMZ.

[Image: federation between the organization’s ADFS (through an ADFS proxy in a DMZ) and the ACS]

This is the official story for authenticating existing domain users with the ACS. Getting ADFS up and running in the organization, talking to a proxy and also trusting the ACS, can represent a painful experience. It basically requires advanced knowledge of ADFS and exhaustive testing to get everything right.

However, if you want to get an infrastructure ready for authenticating your domain users in the cloud in a matter of minutes, you will probably want to take a look at the sample I wrote for talking to an existing Active Directory using a regular WCF service through the Service Bus Relay Binding.

You can use WCF’s ability to self-host the authentication service within any program running in the domain (typically a Windows service). The service does not require opening any inbound port, as it establishes an outbound connection to the cloud through the Relay Service. In addition, the service is protected from being invoked by any unauthorized party by the ACS, which acts as a firewall between any client and the service. In that way, we can get a very safe solution up and running almost immediately.
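
To give a rough idea of what that looks like, the following sketch self-hosts the service behind the relay (the namespace, path and credentials are placeholders, this is not necessarily how the actual sample wires it up, and ProxyService and IAuthenticationService are the implementation and contract shown below),

var host = new ServiceHost(typeof(ProxyService));
 
var endpoint = host.AddServiceEndpoint(
    typeof(IAuthenticationService),
    new NetTcpRelayBinding(),
    ServiceBusEnvironment.CreateServiceUri("sb", "yournamespace", "authentication"));
 
// Credentials used to open the outbound connection against the Relay Service
endpoint.Behaviors.Add(new TransportClientEndpointBehavior
{
    TokenProvider = TokenProvider.CreateSharedSecretTokenProvider("owner", "yourIssuerSecret")
});
 
host.Open();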

To make the solution even more convenient, I implemented an STS in the cloud that internally invokes the service running on premises for authenticating the users. Any existing web application in the cloud can just establish a trust relationship with this STS and authenticate the users via the WS-Federation passive profile with regular HTTP calls, which makes this very attractive for mobile web scenarios, for example.

[Image: a cloud STS trusted by the web application, invoking the on-premises authentication service through the Service Bus relay]

This is what the WCF service running on premises looks like,

[ServiceBehavior(Namespace = "http://agilesight.com/active_directory/agent")]
public class ProxyService : IAuthenticationService
{
    IUserFinder userFinder;
    IUserAuthenticator userAuthenticator;
 
    public ProxyService()
        : this(new UserFinder(), new UserAuthenticator())
    {
    }
 
    public ProxyService(IUserFinder userFinder, IUserAuthenticator userAuthenticator)
    {
        this.userFinder = userFinder;
        this.userAuthenticator = userAuthenticator;
    }
 
    public AuthenticationResponse Authenticate(AuthenticationRequest request)
    {
        if (userAuthenticator.Authenticate(request.Username, request.Password))
        {
            return new AuthenticationResponse
            {
                Result = true,
                Attributes = this.userFinder.GetAttributes(request.Username)
            };    
        }
 
        return new AuthenticationResponse { Result = false };
    }
}

Two external dependencies are used by this service for authenticating users (IUserAuthenticator) and for retrieving user attributes from the user’s directory (IUserFinder). The UserAuthenticator implementation is just a wrapper around the LogonUser Win32 API.
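
As a rough idea of what that wrapper could look like, here is a sketch (not the exact code from the sample),

public class UserAuthenticator : IUserAuthenticator
{
    private const int LOGON32_LOGON_NETWORK = 3;
    private const int LOGON32_PROVIDER_DEFAULT = 0;
 
    public bool Authenticate(string username, string password)
    {
        // Passing null as the domain lets Windows resolve it from the user name or use the local machine
        IntPtr token;
        var result = LogonUser(username, null, password,
            LOGON32_LOGON_NETWORK, LOGON32_PROVIDER_DEFAULT, out token);
 
        if (token != IntPtr.Zero)
        {
            CloseHandle(token);
        }
 
        return result;
    }
 
    [DllImport("advapi32.dll", SetLastError = true, CharSet = CharSet.Unicode)]
    private static extern bool LogonUser(string lpszUsername, string lpszDomain,
        string lpszPassword, int dwLogonType, int dwLogonProvider, out IntPtr phToken);
 
    [DllImport("kernel32.dll", SetLastError = true)]
    private static extern bool CloseHandle(IntPtr handle);
}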

The UserFinder implementation relies on Directory Services in .NET for searching the user attributes in an existing directory service like Active Directory or the local user store.

public UserAttribute[] GetAttributes(string username)
{
    var attributes = new List<UserAttribute>();
 
    var identity = UserPrincipal.FindByIdentity(new PrincipalContext(this.contextType, this.server, this.container), IdentityType.SamAccountName, username);
    if (identity != null)
    {
        var groups = identity.GetGroups();
        
        foreach(var group in groups)
        {
            attributes.Add(new UserAttribute { Name = "Group", Value = group.Name });
        }
        
        if(!string.IsNullOrEmpty(identity.DisplayName))
            attributes.Add(new UserAttribute { Name = "DisplayName", Value = identity.DisplayName });
        
        if(!string.IsNullOrEmpty(identity.EmailAddress))
            attributes.Add(new UserAttribute { Name = "EmailAddress", Value = identity.EmailAddress });
    }
 
    return attributes.ToArray();
}

As you can see, the code is simple and uses the existing infrastructure in Azure to simplify a problem that looks very complex at first glance with ADFS.

All the source code for this sample is available to download (or change) in this GitHub repository,

https://github.com/AgileSight/ActiveDirectoryForCloud

Posted by cibrax

In case you are developing a new web application with Node.js for Windows Azure, you might notice there is no easy way to debug the application unless you are developing in an integrated IDE like Cloud9. For those who develop applications locally using a text editor (or WebMatrix) and Windows Azure PowerShell for Node.js, it requires some steps that are not documented anywhere at the moment.

I spent a few hours on this the other day and practically got nowhere until I received some help from Tomek and the rest of the team. The IISNode version that currently ships with the Windows Azure SDK for Node.js does not support debugging by default, so you need to install the full IISNode version available in the GitHub repository.

Once you have installed the full version, you need to enable debugging for the web application by modifying the web.config file

<iisnode 
     debuggingEnabled="true"
     loggingEnabled="true"
     devErrorsEnabled="true"
   />

The xml above needs to be inserted within the existing “<system.webServer/>” section.

The last step is to open a WebKit browser (e.g. Chrome) and navigate to the URL where your application is hosted, adding the segment “/debug” at the end. The full URL to the Node.js application must be used, for example, http://localhost:81/myserver.js/debug

That should open a new instance of Node Inspector in the browser, so you can debug the application from there.

Enjoy!!

Posted by cibrax

Hypermedia is one of those concepts that is really hard to grasp when building HTTP-aware APIs (or Web APIs). As human beings, we are constantly dealing with hypermedia on the existing web by following links or posting data from forms that take us to the next step.

We typically remember the entry point or URL for retrieving the home page of a web site, and we can move from there to different sections using hypermedia artifacts. Those URLs usually tend to be nice and easy to remember, but they don’t have to be, as it is not something HTTP by itself mandates. We could rely on a search engine like Google or Bing to find those URLs for us. You could even use cryptographic URLs for the web pages in your website and still provide a nice experience for the user by providing the right links to browse the content.

When you build HTTP APIs to be consumed by other systems, the story is not any different. There is no reason for client applications to remember all the possible resources and their locations (URLs) if the server can provide them. Hardcoding URLs on the client side is a bad thing for two main reasons,

  1. The server can change the location of a specific resource, so all the clients having a hardcoded URL for that resource will break.
  2. It pushes some knowledge about the application state workflow to the client side. What happens if an action on a resource is only available for a given state? Should we put that logic in every possible API consumer? Definitely not; the server should always mandate what can or cannot be done with a resource. For example, if the state of a purchase order is canceled, the client application shouldn’t be allowed to submit that PO. If we have a user in front of a UI using that API, he shouldn’t see the submit button enabled (that logic for enabling or disabling the button could be driven by the server using links).

This is one of the gray areas that typically differentiates a regular Web API from a RESTful API, but there are some other constraints that also apply, so the discussion about RESTful services probably doesn’t make sense in most cases. What matters in the end is that the API uses HTTP correctly as an application protocol and leverages hypermedia where possible. By enabling hypermedia, you can create self-discoverable APIs, which is not an excuse for not providing documentation as I usually hear, but makes them more flexible in terms of updatability.

Many of the media types we use nowadays for building Web APIs, such as JSON or XML, don’t have a built-in concept for representing hypermedia links as HTML does with links and forms. You can leverage those media types by defining a way to express hypermedia, but again, it requires clients to understand how the hypermedia semantics are defined on top of them. Other media types like XHTML or Atom already support some of the hypermedia concepts like links or forms.
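
For example, before adopting a formal media type you could define your own ad hoc convention with a link model embedded in your representations. The classes below are only an illustration of that idea, not part of any existing framework,

public class Link
{
    public string Rel { get; set; }
    public string Href { get; set; }
}
 
public class OrderRepresentation
{
    public string Id { get; set; }
    public decimal Total { get; set; }
 
    // Links the server includes to tell the client what can be done next with this resource
    public List<Link> Links { get; set; }
}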

By moving forward with this idea of a hypermedia API on top of JSON or XML, Mike Kelly wrote a draft for a specific media type called “HAL”. HAL extends JSON and XML with all the semantics required to express resources and their links. This fragment from the spec draft illustrates how an order can be modeled using the XML variant of HAL,

<resource href="/orders">
  <link rel="next" href="/orders?page=2" />
  <link rel="search" href="/orders?id={order_id}" />
  <resource rel="order" href="/orders/123">
    <link rel="customer" href="/customer/bob" title="Bob Jones &lt;bob@jones.com>" />
    <resource rel="basket" href="/orders/123/basket">
      <item>
        <sku>ABC123</sku>
        <quantity>2</quantity>
        <price>9.50</price>
      </item>
      <item>
        <sku>GFZ111</sku>
        <quantity>1</quantity>
        <price>11.00</price>
      </item>
    </resource>
    <total>30.00</total>
    <currency>USD</currency>
    <status>shipped</status>
    <placed>2011-01-16</placed>
  </resource>
  <resource rel="order" href="/orders/124">
    <link rel="customer" href="/customer/jen" title="Jen Harris &lt;jen@internet.com>" />    
    <resource rel="basket" href="/orders/124/basket">
      <item>
        <sku>KLM222</sku>
        <quantity>1</quantity>
        <price>9.00</price>
      </item>
      <item>
        <sku>HHI50</sku>
        <quantity>1</quantity>
        <price>11.00</price>
      </item>
    </resource>
    <total>20.00</total>
    <currency>USD</currency>
    <status>processing</status>
    <placed>2011-01-16</placed>
  </resource>
</resource>

You can see that the different entities in your system can be modeled and represented as resources, which are linked, and attributes like “rel” or “href” are used to express the role of the entity and its location. Steve Michelotti created a specific formatter for HAL in WCF Web API, available here, but it hasn’t been updated to the latest ASP.NET Web API version yet.

I have been able to see two different kinds of hypermedia APIs over the years. The first kind returns links representing state transitions or linked resources, which place some out-of-band assumptions on the consumer about how the transition should be executed. As an example, consider the following representation of a PO.

<purchaseOrder>
  <id>90</id>
  <sku>AJJ34</sku>  
  <link rel="customer" href="/customers/foo"/>
  <link rel="approve" href="/po/90/approve"/>
  <link rel="cancel" href="/po/90/cancel"/>
</purchaseOrder>

The representation contains a link for retrieving the representation of the associated customer (which assumes the client should send an HTTP GET to that URL to get the representation), and two additional links for approving or canceling the PO (which also assume the client knows how to post a message to those URLs). You see, the consumer is still coupled to the Web API implementation with some out-of-band details, but the API is still self-discoverable and the consumer can determine which actions can be executed over that resource (see the associated customer details, approve it or cancel it).

There is also another kind of API that uses forms, as you would find them in the HTML or XHTML media types. A good part of these APIs is driven by form submissions. Let’s see an example to illustrate the concept.

<ul id="purchaseOrder">
  <li id="id">90</li>
  <li id="sku">AJJ34</li>  
</ul>
<a href="/customers/foo" rel="customer"/>
<a href="/po/90/approve_form" rel="approve"/>
<a href="/po/90/cancel_form" rel="cancel"/>

The purchase order is returned this time as an XHTML representation and contains links that basically support the HTTP GET semantics. An HTTP GET to the “customer” link would return the associated customer representation. However, an HTTP GET to the “approve” or “cancel” links would this time return an HTTP form for approving or canceling the order.

<form action="/po/90/approve" method="POST">
  <input type="hidden" id="id">90</input>
  <input type="hidden" id="___forgeryToken">XXXXXXXX</input>
</form>

The consumer has now been decoupled from certain details, such as how to approve the order. It only needs to submit this form using an HTTP POST to the URL specified in the action attribute. The server can also include additional information in the form, such as a forgery token, to avoid “Cross-Site Request Forgery” (CSRF) attacks.

Posted by cibrax

ASP.NET Web API provides a very similar model to MVC for resolving dependencies using a service locator pattern. What you basically do is to provide the implementation of that service locator to return any of the requested dependencies, and that implementation is typically tied to a DI container. 

The service locator can be injected into the Web API runtime using the ServiceResolver entry in the global configuration object (GlobalConfiguration.Configuration.ServiceResolver), whose SetResolver method supports several overloads.

public void SetResolver(IDependencyResolver resolver);
public void SetResolver(object commonServiceLocator);
public void SetResolver(Func<Type, object> getService, Func<Type, IEnumerable<object>> getServices);

The first overload receives an instance of a IDependencyResolver implementation, which provides two methods for resolving one or multiple dependencies.

public interface IDependencyResolver
{
    object GetService(Type serviceType);
    IEnumerable<object> GetServices(Type serviceType);
}

GetService should return null if the dependency cannot be resolved. GetServices should return an empty IEnumerable if the same thing happens.

The second overload uses reflection for invoking the same two methods, GetService and GetServices.

The third overload receives a set of Func delegates for doing the same thing.

Autofac already provides very smooth integration with ASP.NET MVC through a set of assemblies available in the “Autofac ASP.NET MVC integration” NuGet package. The bad news is that you cannot use the DependencyResolver implementation in that package for ASP.NET Web API, as it is not compatible with the one Web API requires.

The AutofacDependencyResolver class in that package implements System.Web.Mvc.IDependencyResolver, while ASP.NET Web API expects a System.Web.Http.Services.IDependencyResolver implementation. Nothing we cannot fix with a few lines of code. We can reuse the existing implementation through the third overload, the one that receives a set of delegates.

The following code snippet illustrates how you can register all dependencies in the Autofac container and use that for resolving all the Web API dependencies,

ContainerBuilder builder = new ContainerBuilder();
 
builder.RegisterType<ContactRepository>()
    .As<IContactRepository>()
    .InstancePerHttpRequest();
 
builder.RegisterApiControllers(Assembly.GetExecutingAssembly()); 
IContainer container = builder.Build(); 
 
var resolver = new AutofacDependencyResolver(container);
 
GlobalConfiguration.Configuration.ServiceResolver.SetResolver(
    t => resolver.GetService(t),
    t => resolver.GetServices(t));

One of the things you can do with Autofac is set the lifetime of the dependency instances to be equal to an HTTP request lifetime by using the “InstancePerHttpRequest” method. I also created a simple extension method, “RegisterApiControllers”, to automatically register all the existing API controllers in the project into the DI container.

public static class RegistrationExtensions
{
    public static IRegistrationBuilder<object, ScanningActivatorData, DynamicRegistrationStyle> RegisterApiControllers(this ContainerBuilder builder, params Assembly[] controllerAssemblies)
    {
        return
            from t in builder.RegisterAssemblyTypes(controllerAssemblies)
            where typeof(IHttpController).IsAssignableFrom(t) && t.Name.EndsWith("Controller")
            select t;
    }
}

Using the code above, you should be able to use an API controller that looks like this (the contact repository is injected as a dependency),

public class ContactController : ApiController
{
    IContactRepository repository;
 
    public ContactController(IContactRepository repository)
    {
        this.repository = repository;
    }
 
    // Action methods using the injected repository would go here
}
Posted by cibrax

The HTTP status codes for reporting errors to clients can mainly be categorized into two groups: client errors and server errors. Error status codes in the 4xx range indicate an issue with the request message sent by the client. For example, 404 for resource not found, 400 for bad request (some invalid data in the request message) or 403 for forbidden (an unauthorized operation) are some of the most well-known client errors. On the other hand, codes in the 5xx range indicate a problem on the server side, such as 500 for internal server error or 503 for service unavailable. This kind of error means that something unexpected happened on the server side while processing the request, but it is not the client’s fault.

It’s always a good practice when implementing a Web API to use the correct HTTP status codes for every situation. While it’s relatively easy to return a response with a specific status code in a controller action using the new HttpResponseMessage class, you might end up with a lot of repetitive code for handling all the possible exceptions in the different execution branches. All the dependencies of a controller, like repositories or domain services, are usually unaware of HTTP details. For example, you might call a method in a repository that throws an exception when something is wrong, but it would be the responsibility of the calling code in the Web API controller to map that exception to an HTTP status code.

As with any cross-cutting concern, exception handling can also be implemented in a centralized manner using a filter. This was the way it was implemented in MVC as well, with the HandleErrorAttribute filter. ASP.NET Web API is not any different in that aspect, and you can also implement a custom filter for mapping an exception to an HTTP status code.

This is what the filter implementation looks like,

public class ExceptionHandlerFilter : ExceptionFilterAttribute
{
    public ExceptionHandlerFilter()
    {
        this.Mappings = new Dictionary<Type, HttpStatusCode>();
        this.Mappings.Add(typeof(ArgumentNullException), HttpStatusCode.BadRequest);
        this.Mappings.Add(typeof(ArgumentException), HttpStatusCode.BadRequest);
    }
 
    public IDictionary<Type, HttpStatusCode> Mappings
    {
        get;
        private set;
    }
 
    public override void OnException(HttpActionExecutedContext actionExecutedContext)
    {
        if (actionExecutedContext.Exception != null)
        {
            var exception = actionExecutedContext.Exception;
 
            if (actionExecutedContext.Exception is HttpException)
            {
                var httpException = (HttpException)exception;
                actionExecutedContext.Result = new HttpResponseMessage<Error>(
                    new Error { Message = exception.Message },
                    (HttpStatusCode)httpException.GetHttpCode());
            }
            else if (this.Mappings.ContainsKey(exception.GetType()))
            {
                var httpStatusCode = this.Mappings[exception.GetType()];
                actionExecutedContext.Result = new HttpResponseMessage<Error>(
                    new Error { Message = exception.Message }, httpStatusCode);
            }
            else
            {
                actionExecutedContext.Result = new HttpResponseMessage<Error>(
                    new Error { Message = exception.Message }, HttpStatusCode.InternalServerError);
            }
        }
    }
}

As you can see, the filter derives from a built-in class, ExceptionFilterAttribute, which provides a virtual method, OnException, for implementing our exception handling code. The ExceptionFilterAttribute does nothing by default.

The exception handling logic in this implementation mainly addresses three different scenarios,

  1. If an HttpException was raised anywhere in the Web API controller code, the status code and message in that exception will be reused and set in the response message.
  2. If the exception type is associated with a specific status code through a custom mapping, that status code will be used. For example, the filter automatically maps ArgumentNullException to a “Bad Request” status code. The developer can customize this mapping when the filter is registered.
  3. Any other exception not found in the mapping is considered a server error and the generic “InternalServerError” status code is used.

In all the cases, a model representing the exception is set in the response message, so the framework will take care of serializing it using the wire format expected by the client. The HttpResponseMessage class also contains a string property, “ReasonPhrase”, which could be used to send the exception message back to the client. However, this property does not seem to be sent correctly when everything is hosted in IIS.
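
The Error class used by the filter is just a simple serializable model; something along these lines (a sketch, the actual class in your project may carry more properties),

public class Error
{
    public string Message { get; set; }
}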

Let’s see a few different cases of how this filter works in action.

public Contact Get(int id)
{
    var contact = repository.Get(id);
    if (contact == null)
        throw new HttpException((int)HttpStatusCode.NotFound, "Contact not found");
 
    //Do stuff
 
    return contact;
}

A contact was not found, so a new HttpException with status code 404 is thrown in the controller action. This will be automatically mapped by the exception filter to a response message with status code 404.

public void Delete(int id)
{
    bool canBeDeleted = this.repository.CanDelete(id);
 
    if (!canBeDeleted)
    {
        throw new NotAuthorizedException("The contact can not be deleted");
    }
 
    this.repository.Delete(id);
}

Assuming the client did not have permission to delete an existing contact, a custom exception, “NotAuthorizedException”, is thrown in the controller action. The default behavior in the filter will be to set an internal server error (500) in the response message. However, that logic can be overridden by adding a new mapping for that exception when the filter is registered.

var exceptionHandler = new ExceptionHandlerFilter();
 
exceptionHandler.Mappings.Add(typeof(NotAuthorizedException), HttpStatusCode.Forbidden);
 
GlobalConfiguration.Configuration.Filters.Add(exceptionHandler);

In that way, the client will receive a more meaningful status code representing a forbidden action.

Posted by cibrax

One of the nice things about having a single extensibility model between ASP.NET MVC and Web API is that you can get many of the great MVC features for free. Model binding and validation is one of them.

A simple action in a Web API controller is typically associated with an HTTP verb and can optionally receive or return a model (or a message if you want to have better control over the HTTP messaging details). For example, the following implementation illustrates a simple case for creating a contact when an HTTP POST is received,

public class ContactController : ApiController
{
    IContactRepository repository;
 
    public ContactController(IContactRepository repository)
    {
        this.repository = repository;
    }
 
    public HttpResponseMessage Post(Contact contact)
    {
        int id = this.repository.Create(contact);
 
        var response = new HttpResponseMessage(HttpStatusCode.Created);
        response.Headers.Location = new Uri("/api/Contact/" + id);
 
        return response;
    }
}

All the details for deserializing the request message into a Contact model are automatically handled by the Web API infrastructure. It actually uses the model binding feature found in MVC, but it also adds content negotiation on top of it.

Content negotiation in Web API is implemented through formatters that know how to serialize/deserialize a specific payload associated with a content type (JSON or XML, for example) into a model.

As you probably know, the model binding infrastructure in MVC also allows validations on model classes decorated with data annotations. That means you can decorate your models with different validation attributes, as shown below,

public class Contact
{
    [Required]
    public string FullName { get; set; }
 
    [Email]
    public string Email { get; set; }
}

The model binding infrastructure will validate the model, but Web API will not do anything with the result by default. If you want to reject a message based on the result of the validations, you need to implement a custom filter in the current bits.

That can be done by extending an ActionFilterAttribute and overriding the OnActionExecuting method.

public class ValidationActionFilter : ActionFilterAttribute
{
    public override void OnActionExecuting(System.Web.Http.Controllers.HttpActionContext actionContext)
    {
        if (!actionContext.ModelState.IsValid)
        {
            var errors = actionContext.ModelState
                .Where(e => e.Value.Errors.Count > 0)
                .Select(e => new Error
                {
                    Name = e.Key,
                    Message = e.Value.Errors.First().ErrorMessage
                }).ToArray();
 
            actionContext.Response = new HttpResponseMessage<Error[]>(errors, HttpStatusCode.BadRequest);
        }
    }
}

As you can see, the implementation is very simple. It only checks whether the model state is valid and returns a list of validation errors. The execution pipeline will be automatically interrupted and the response will be sent back to the client with the list of errors. I am also using a model for representing an error so Web API can send the expected wire format to the client application (if the client is expecting json, it will receive a list of errors formatted as json, for example).
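
The Error class used here is again nothing more than a small DTO for the wire representation; a possible sketch (the actual class in the sample may differ),

public class Error
{
    public string Name { get; set; }
    public string Message { get; set; }
}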

This validation attribute can be injected into the execution pipeline either by decorating the API controller with it or by adding it to the global filters collection.

public static void RegisterGlobalFilters(GlobalFilterCollection filters)
{
    filters.Add(new HandleErrorAttribute());
    filters.Add(new ValidationActionFilter());
}

Now, let’s see how this works in action with Fiddler. If we send a json message with an invalid email,

POST http://localhost:16913/api/Contact HTTP/1.1
Content-Type: application/json
Host: localhost:16913
Content-Length: 30
 
{"fullname":"re","email":"re"}

We will receive a response also formatted as json,

HTTP/1.1 400 Bad Request
Content-Type: application/json; charset=utf-8
 
[{"Message":"The Email field is not a valid e-mail address.","Name":"Email"}]

 

If we change the request message a little bit to expect xml as response

POST http://localhost:16913/api/Contact HTTP/1.1
Accept: text/xml
Content-Type: application/json
Host: localhost:16913
Content-Length: 31
 
{"fullname":"re","email":"foo"}

We will get a list of errors formatted as xml this time

HTTP/1.1 400 Bad Request
Content-Type: text/xml; charset=utf-8
 
<?xml version="1.0" encoding="utf-8"?><ArrayOfError xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema"><Error><Name>Email</Name><Message>The Email field is not a valid e-mail address.</Message></Error></ArrayOfError>

Posted by cibrax

In case you did not see the latest news, what we used to know as WCF Web API was recently rebranded and included in ASP.NET MVC 4 as ASP.NET Web API. While both frameworks are similar in essence, with a focus on HTTP, the latter was primarily designed for building HTTP services that don’t typically require user intervention; for example, AJAX endpoints or a Web API for a mobile application. While you could use ASP.NET MVC to implement those kinds of services, it would require some extra work to get things right, like content negotiation, documentation, versioning, etc. What really matters is that both frameworks share many of the same extensibility points, like model binders, filters or routing, to name a few.

If you don’t know what Backbone.js is, this quote from the main page in the framework website will give you some idea,

“Backbone.js gives structure to web applications by providing models with key-value binding and custom events, collections with a rich API of enumerable functions, views with declarative event handling, and connects it all to your existing API over a RESTful JSON interface. “

You see, the framework will help you organize your JavaScript code on the client side using an MVC pattern, and it will also provide an infrastructure for connecting your models with an existing Web API. (I wouldn’t use the term RESTful here, as it usually implies your services adhere to all the constraints discussed in Roy Fielding’s dissertation about REST.)

That means you should be able to connect the backbone.js models with your Web API in a more natural way. As part of this post, I will use the “Todos” example included as part of the backbone.js code, but I will connect it to a Web API built using the new ASP.NET Web API framework.

On the client side, a model is defined for representing a Todo item.

// Our basic **Todo** model has  `id`, `text`, `order`, and `done` attributes.
window.Todo = Backbone.Model.extend({
 
    idAttribute: 'Id',
 
    // The rest of model definition goes here
});

There is also a collection for representing the Todo list,

// The collection of todos is backed by *localStorage* instead of a remote
// server.
window.TodoList = Backbone.Collection.extend({
 
    // Reference to this collection's model.
    model: Todo,
 
    url: function () {
        return 'api/todos';
    },
 
    // The rest of the collection definition goes here
});

I modified the code for this collection to override the “url” setting. This new url will point to the route of our Web API controller. Backbone will try to synchronize all the changes in the models by sending http requests with the corresponding verb (GET, POST, PUT or DELETE) to that url.

Now, the interesting part is how we can define the Web API controller using the new ASP.NET Web Api framework.

public class TodosController : ApiController
{
    public IEnumerable<TodoItem> Get()
    {
        return TodoRepository.Items;
    }
 
    public IdModel Post(TodoItem item)
    {
        item.Id = Guid.NewGuid().ToString();
        
        TodoRepository.Items.Add(item);
 
        return new IdModel{ Id = item.Id };
    }
 
    public HttpResponseMessage Put(string id, TodoItem item)
    {
        var existingItem = TodoRepository.Items.FirstOrDefault(i => i.Id == id);
        if (existingItem == null)
        {
            return new HttpResponseMessage(HttpStatusCode.NotFound);
        }
 
        existingItem.Text = item.Text;
        existingItem.Done = item.Done;
 
        return new HttpResponseMessage(HttpStatusCode.OK);
    }
 
    public void Delete(string id)
    {
        TodoRepository.Items.RemoveAll(i => i.Id == id);
    }
}

As you can see, the code for implementing the controller is very clean and neat. A naming convention based on the HTTP verbs is used to route the requests to the right controller method. Therefore, I have different methods for retrieving the list of items (GET), creating a new item (POST), updating an existing item (PUT) or deleting one (DELETE).

A few things to comment about the code above,

  1. I haven’t specified the wire format anywhere in the code. The framework will be smart enough to do content negotiation based on the headers sent by the client side (in this case text/json).
  2. You can return any serializable class, or an HttpResponseMessage if you want to have better control over the returned message (to set the status code, for example).
  3. The framework will take care of serializing or deserializing the message on the wire to the right model (TodoItem in this case).
  4. A hardcoded repository is being used, which is not a good practice at all (a minimal sketch of it is shown below), but you can easily inject any dependency into the controllers as you would do with traditional ASP.NET MVC controllers.
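
Since the repository implementation is not shown in the post, here is a minimal sketch of what the hardcoded TodoRepository could look like (an in-memory static list with assumed names, for illustration only),

public static class TodoRepository
{
    // Hardcoded, non thread-safe in-memory storage used only for the demo
    public static readonly List<TodoItem> Items = new List<TodoItem>();
}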

Here is the class definition for the TodoItem model,

public class TodoItem : IdModel
{
    public int Order { get; set; }
    public string Text { get; set; }
    public bool Done { get; set; }
}

The IdModel is just a workaround for returning only an Id property to the client side when a new item is created. This is what backbone needs to associate an id with a new model. A more elegant solution would be to return an anonymous object with the id (such as new { Id = “xxx” }), but the default serializer for Json in the current bits (DataContractJsonSerializer) cannot serialize anonymous types. Extending the built-in formatter to use a different serializer is something I am going to show in a different post.

public class IdModel
{
   public string Id { get; set; }
}

The default routing rule in the global.asax file is configured like this,

routes.MapHttpRoute(
    name: "DefaultApi",
    routeTemplate: "api/{controller}/{id}",
    defaults: new { id = RouteParameter.Optional }
);

All the calls to an API controller must be prefixed with “api”. That’s why I set the url to “api/todos” in the backbone model.

Posted by cibrax

Although jQuery provides very good support for caching responses from AJAX calls in the browser, it is always good to know how you can use HTTP as a protocol to make effective use of it.

The first thing you need to do on the server side is to support HTTP GETs, and identify your resources with different URLs for retrieving the data (the resource in this case could be just an MVC action). If you use the same URL for retrieving different resource representations, you are doing it wrong. An HTTP POST will not work either, as the response cannot be cached. Many devs typically use HTTP POSTs for two reasons: they want to make explicit that the data cannot be cached, or they use it as a workaround for avoiding JSON hijacking attacks, which can be avoided anyway in HTTP GETs by not returning a bare JSON array (wrapping the data in an object instead).

The ajax method in the jQuery global object provides a few options for supporting caching and conditional GETs,

$.ajax({
    ifModified: [true|false],
    cache: [true|false],
});

The “ifModified” flag specifies whether we want to support conditional GETs in the ajax calls. jQuery will automatically handle everything for us by picking up the last received “Last-Modified” header from the server and sending that as “If-Modified-Since” in all the subsequent requests. This requires that our MVC controller implements conditional GETs. A conditional GET in the context of HTTP caching is used to revalidate an expired entry in the cache. If jQuery determines an entry is expired, it will first try to revalidate that entry using a conditional GET against the server. If the response returns a status code 304 (Not Modified), jQuery will reuse the entry in the cache. In that way, we can save some of the bandwidth required to download the complete payload associated with that entry from the server.

The “cache” option basically overrides all the caching settings sent by the server as HTTP headers. By setting this flag to false, jQuery will append an auto-generated timestamp to the URL to make it different from any previously used URL, so the browser will not be able to reuse previously cached responses.

Let’s analyze a couple of scenarios.

The server sets a No-Cache header on the response

The server is the king. If the server explicitly states the response cannot be cached, jQuery will honor that. The “cache” option on the ajax call will be completely ignored.

$('#nocache').click(function () {
    $.ajax({
        url: '/Home/NoCache',
        ifModified: false,
        cache: true,
        success: function (data, status, xhr) {
            $('#content').html(data.count);
        }
    });
});

public ActionResult NoCache()
{
   Response.Cache.SetCacheability(HttpCacheability.NoCache);
   return Json(new { count = Count++ }, JsonRequestBehavior.AllowGet);
}

The server sets an Expiration header on the response

Again, the server is the one in a position to set an expiration time for the data it returns. The entry will be cached on the client side using that expiration setting.

$('#expires').click(function () {
    $.ajax({
        url: '/Home/Expires',
        ifModified: false,
        cache: true,
        success: function (data, status, xhr) {
            $('#content').html(data.count);
        }
    });
});

public ActionResult Expires()
{
    Response.Cache.SetExpires(DateTime.Now.AddSeconds(5));
    return Json(new { count = Count++ }, JsonRequestBehavior.AllowGet);
}

The client never caches the data

The client side specifically states that the data must always be fresh and the cache must not be used. This means the “cache” option is set to false. No matter what the server specifies, jQuery will always generate a unique URL, so the response will never be reused from the cache.

$('#expires_nocache').click(function () {
    $.ajax({
        url: '/Home/Expires',
        ifModified: false,
        cache: false,
        success: function (data, status, xhr) {
            $('#content').html(data.count);
        }
    });
});

public ActionResult Expires()
{
    Response.Cache.SetExpires(DateTime.Now.AddSeconds(5));
    return Json(new { count = Count++ }, JsonRequestBehavior.AllowGet);
}

The client and server use conditional gets for validating the cached data.

The client puts a new entry in the cache, which will be revalidated after it expires. The server side must implement a conditional GET using either ETags or the Last-Modified header.

$('#expires_conditional').click(function () {
    $.ajax({
        url: '/Home/ExpiresWithConditional',
        ifModified: true,
        cache: true,
        success: function (data, status, xhr) {
            $('#content').html(data.count);
        }
    });
});

public ActionResult ExpiresWithConditional()
{
    if (Request.Headers["If-Modified-Since"] != null && Count % 2 == 0)
    {
        return new HttpStatusCodeResult((int)HttpStatusCode.NotModified);
    }
    
    Response.Cache.SetExpires(DateTime.Now.AddSeconds(5));
    Response.Cache.SetLastModified(DateTime.Now);
 
    return Json(new { count = Count++ }, JsonRequestBehavior.AllowGet);
}

The MVC action in the example above is only illustrative. In a real implementation, the server should be able to know whether the data has changed since the last time it was served.

Posted by cibrax