March 2012 - Posts - Jon Galloway


Thoughts on ASP.NET MVC, ASP.NET Web API, and ASP.NET Web Pages (Razor) open source announcements

I'm really excited by the big announcements earlier this week:

  • The source for ASP.NET Web API and ASP.NET Web Pages (Razor) was released under the Apache v2 license
  • ASP.NET Web API, ASP.NET Web Pages, and ASP.NET MVC (which was already released under the Ms-PL license) will accept community contributions

I'm a big fan of both ASP.NET and open source, so it's exciting to see two of my favorite techy things hanging out at the same party. By now, you've likely read a few posts with the actual details, very likely by one or more people named Scott, which contain some actual facts and some of the history. So here are a few recommended posts, followed by some additional commentary.

Must Read Posts

Scott Guthrie: ASP.NET MVC, Web API, Razor and Open Source

Scott Guthrie's announcement explains what's been released and how to get involved.

Scott Hanselman: ASP.NET MVC 4, ASP.NET Web API and ASP.NET Web Pages v2 (Razor) now all open source with contributions

Scott Hanselman's post is worth reading just for the animated GIFs, but he also describes some of the history and answers some common / likely questions, like

  • Why are you doing this?
  • Are you going to open source more things in ASP.NET?
  • Why isn’t ASP.NET Web Forms open sourced?
  • What about Mono?
  • Why not on GitHub?

Phil Haack: ASP.NET MVC Now Accepting Pull Requests

Phil gives some good background on what was involved in making it possible for these products to accept external contribution. More on that later.

Jimmy Bogard: ASP.NET MVC, Web API, Razor and Open Source and what it means

I really like Jimmy's post. He starts off by clarifying that open source has never required accepting external contributions, then asks and answers some good questions:

  • Will the quality of the product be adversely affected because the team accepts contributions?
  • Will some yahoo be able to sidestep my current feedback channel and take the product in some other (wrong) direction?
  • Will my pull request get accepted?
  • Will someone else’s changes be supported?
  • Are we in store for human sacrifice, dogs and cats living together, mass hysteria?
  • Can we trust these products now?
  • Is this a Good Thing?
  • Why CodePlex and not GitHub?

I'm going to assume you're familiar with the above for the rest of this post. So here's what I think about this - and as a reminder, these are my own opinions, not Microsoft position or policy or whatever.

Source visibility - What's really changed

The .NET Framework has never been obfuscated. We all take that for granted, but it's been really useful to me throughout my .NET development career. Come to think of it, the Visual Basic 6 (and previous versions) source code wasn't available to me - and even if it had been, it wasn't written in Visual Basic, so I wouldn't have been able to read it. So there are two nice things there:

  • The fact that .NET and the frameworks built on top of it have always been available to decompile
  • Just about all of .NET and the frameworks built on it, such as the ASP.NET assemblies, are written in .NET, so they can be decompiled into something .NET developers can readily understand

Several times a week, I pop open ILSpy to either figure out or verify something about how ASP.NET works, often to answer a question that'd be really difficult to answer otherwise. It's not that there isn't plenty of documentation available - there is - but often the best way to learn how code works is to look at the code.

[Screenshot: inspecting ASP.NET assemblies in ILSpy]

While the best decompilers out there have been third-party tools (e.g. Reflector, ILSpy), Microsoft has shipped ildasm.exe since back in the .NET 1.1 days (maybe earlier?).

We've had symbols available, too, so we could debug directly into .NET source code as needed. You can point Visual Studio at Microsoft's public symbol servers and debug right into .NET source code in a couple of minutes.

[Screenshot: Visual Studio debugging into .NET Framework source via the public symbol servers]

And that doesn't even get into what IntelliTrace does if you've got a Visual Studio Ultimate license.

So, the point is that we've always been able to see the source code for pretty much anything most developers would care about. ASP.NET MVC moved things along by actually releasing the ASP.NET MVC code under an open source license, but other than maybe the Mono project, I'm not aware of anyone really taking heavy advantage of it. I'd personally be really happy to hear that I'm wrong here, but just haven't heard of people doing things like building custom versions of ASP.NET MVC.

So what's the point, then?

For me, as a consumer, code that's released under an open license gives me options and peace of mind. It lets me know that, regardless of what happens to the project, the source is available for me and other users to carry it forward. Several of my favorite open source applications are forks, and that's something I consider when I invest my time in learning a new framework or application.

So, back to the earlier question... if we've always been able to view the code, what's really changed?

Two big things:

  1. Much more visibility into the development process
  2. Acceptance of community contributions

Instant Visibility

You might think that, as a Microsoft employee, I have free run of the Microsoft source code. Or at least the .NET framework code. Certainly the ASP.NET code repository, right? No, I don't. I subscribe to the source control check-in e-mails, but I've historically kept up with what was going on by talking with the team and keeping up with announcements on the ASP.NET Insiders list.

But over time, that's become less of a thing. As Scott Koon said after the last MVP Summit, he hadn't really heard a single big surprise on what the ASP.NET team was working on... and that's a very good thing. The team's been very open as to what they're up to, what they're thinking about doing in the future, etc.

And this new open source change takes things to a new level there, because they're working in a public repository. So rather than waiting on big releases - or even pre-releases - anyone can watch the checkins as they happen. That's a big deal.

Accepting community contributions

Phil Haack's written some interesting posts recently on open source. His post listed earlier highlighted the difference between code that's released under an open source license and an open source project. As Jimmy Bogard points out, there's nothing about any accepted open source definition that requires accepting community contributions. Nobody would dispute that there are huge potential benefits to code and product quality if you can accept community contributions, but it's a decision that needs to be made by those who run the project.

But many of the ASP.NET devs run or participate in open source projects, so why wouldn't they just open things up?

Oh, it's stupid company politics and blasted lawyers. Well, that's the simplistic thinking, anyhow. Okay, fine, the devs would like things to be open, but pointy headed bosses ruin it. Oh, and it takes a long time to change minds and steer big ships or something. It's not that simple, though.

[Disclaimer: Personal opinions following. My whole blog is generally my personal opinions unless I say otherwise, but I want to make that really clear.]

Stepping back a bit - and speaking completely for myself, and not Microsoft or any future or previous employers here - businesses that sell some kind of intellectual property (as in they paid someone's salary to create something like digital media and software) need to be careful in giving things away, in ways you wouldn't expect. I have several relatives and close friends who make or have made their livings as musicians, and they've explained that it's possible to do something wrong and suddenly your hit becomes public domain and your kids don't go to college. It's the same kind of thing with software - companies that want to give things away need to navigate a complex (bizarre?) legal system that handles intellectual property in non-intuitive ways.

Phil Haack's blog post (ASP.NET MVC Now Accepting Pull Requests) spells this out well:

I also want to take a moment and credit the lawyers, who are often vilified, for their work in making this happen.

One of my favorite bits of wisdom Scott Guthrie taught me is that the lawyers’ job is to protect the company and reduce risk. If lawyers had their way, we wouldn’t do anything because that’s the safest choice.

But it turns out that the biggest threat to a company’s long term well-being is doing nothing. Or being paralyzed by fear. And fortunately, there are some lawyers at Microsoft who get that. And rather than looking for reasons to say NO, they looked for reasons to say YES! And looked for ways to convince their colleagues.

I spent a lot of time with these lawyers poring over tons of legal documents and such. Learning more about copyright and patent law than I ever wanted to. But united with a goal of making this happen.

These are the type of lawyers you want to work with.

Five years ago (long before I worked at Microsoft) I wrote a blog post titled Why Microsoft can't ship open source code which talks about some of the risks a company takes on in shipping code they didn't write:

To understand the code pedigree problem, let's talk about the nightmare scenario. Let's say Microsoft took my advice and shipped Paint.NET as a Windows Vista Ultimate Extra. Unbeknownst to Microsoft - or even the Paint.NET project leads - a project contributor had copied some GPL code and included it in a patch submission (either out of ignorance or with malice aforethought). Two years later, a competitor runs a binary scan for GPL code and serves Microsoft with a lawsuit for copyright infringement. Microsoft is forced to pay eleventy bajillion dollars and damages. Perhaps even worse, they're hit with an injunction which prevents selling the offending application, which requires recalling shrinkwrapped boxes and working with computer vendors who've got the software pre-installed on computers in their inventory. All for shipping a simple paint program.

So, the risk is too great to justify the small reward.

The point is that, while there are obvious benefits to a company in shipping source code they didn't write, there are risks and costs. There's a good chance that your company doesn't open source all (or any?) of the software you write for these reasons. It's not necessarily because the company leadership / bosses / lawyers are stupid or bad, it's because it's a hard problem.

This doesn't even get into all the other costs, like the time and effort required to support (or continually explain why you don't support) unreleased code, review pull requests, move from systems that work really well internally to systems that are externally visible and accessible, move from source practices that have been optimized for a centralized team to an open source model, etc.

So I for one am profoundly thankful to all those who have done the work to get things worked out, both legally and technically.

Where Next?

Lurking

The simplest way to take advantage of this is to watch changes on the live repository. It's easy to see what's being worked on by watching public pages:

Using

If you're interested in using code from the live repository, there are some directions here: http://aspnetwebstack.codeplex.com/documentation. A couple notes:

  • The directions show how to get the source code using a git client, which is a good idea if you're going to contribute back. If you just want to use the code, you can download a zip of the latest source (or any changeset) using the Download link on the Source Control tab.
  • Make sure you note the SkipStrongNames step which lets you run unit tests against delay-signed binaries.
  • If you're using Visual Studio 2010 SP1, the Runtime.sln handles package restore. If not, you'll need to follow the build.cmd directions to handle that.

Contributing

The first thing that often comes to mind when talking about contributing to an open source project is a surprise delivery of random code. I've found - after participating in a lot of open source projects - that it's not the best way to get involved in any project. The team's listed some good ways to get started, including bug reports and contributing tests. I try to do this kind of thing on projects I'm getting started with - it's a good way to get familiar with the source code and the way the project's run, which is an important first step.

Miguel de Icaza has a great post on Open Source Contribution Etiquette which is great background on how to get started in contributing to a project.

If you want to contribute code, the team has listed some important steps. The whole list is important, but I want to point out one thing - the Contributor License Agreement. It's a pretty short form (11-12 blanks to fill in, including date and signature) that essentially (I am not a lawyer) says who you are and that you're granting the rights to the code you're contributing.

Oh, and if you're interested in contributing to Mono, they're looking for some help in integrating this new code.

Exciting times!

ASP.NET Web API - Screencast series Part 6: Authorization

We're concluding a six part series on ASP.NET Web API that accompanies the getting started screencast series. This is an introductory screencast series that walks through from File / New Project to some more advanced scenarios like Custom Validation and Authorization. The screencast videos are all short (3-5 minutes) and the sample code for the series is both available for download and browsable online. I did the screencasts, but the samples were written by the ASP.NET Web API team.

In Part 1 we looked at what ASP.NET Web API is, why you'd care, did the File / New Project thing, and did some basic HTTP testing using browser F12 developer tools.

In Part 2 we started to build up a sample that returns data from a repository in JSON format via GET methods.

In Part 3, we modified data on the server using DELETE and POST methods.

In Part 4, we extended our simple querying methods from Part 2, adding in support for paging and querying.

In Part 5, we added support for Data Annotation based validation using an Action Filter.

In Part 6, we'll require authentication using the built-in Authorization Filter.

[Video and code on the ASP.NET site]

Requiring Authorization using the Authorization Filter

In Part 5, we used a custom global Action Filter to enforce validation on every action. In this (final) part, we'll use the built-in Authorize filter. As with other filters, it can be applied at the action, controller, or global level. In this case, we'll add it at the controller level by applying the [Authorize] attribute.

[Authorize] 
public class CommentsController : ApiController  
{ 
    ...
}

That's it from the server side. If a client makes an unauthorized request, the Authorize filter does the only thing that makes sense for an HTTP API - it returns an HTTP 401 (Unauthorized) status code. Again, we're back to the value of using HTTP for an API - we don't need to arrange anything; any client on any platform will know what an HTTP 401 response means.

Handling Redirection on the Client

Many websites (and web frameworks) handle authorization by doing a server-side redirection to a login page. I wrote an in-depth post about how ASP.NET MVC handles authorization redirection - internally, an HttpUnauthorizedResult (HTTP 401) is intercepted by the FormsAuthenticationModule, which redirects to the login URL specified in web.config.

None of that makes sense from an HTTP API perspective, though. HTTP API's return HTTP Responses to clients, which include things like Status Codes, Response Body, and Headers. It's up to the client to decide what to do when they get a 401. In this JavaScript / browser based sample, we'll just redirect to the login page on the client.

$(function () { 
    $("#getCommentsFormsAuth").click(function () { 
        viewModel.comments([]); 
        $.ajax({ url: "/api/comments", 
            accepts: "application/json", 
            cache: false, 
            statusCode: { 
                200: function(data) { 
                    viewModel.comments(data); 
                }, 
                401: function(jqXHR, textStatus, errorThrown) { 
                    self.location = '/Account/Login/'; 
                } 
            } 
        }); 
    }); 
});

In this case, logging in gives you a valid forms auth cookie, so your next request will pass authorization.

What about other validation scenarios?

If you want to do something different with authorization in an ASP.NET Web API controller, usually the best approach is to create a custom authorization filter which derives from the base AuthorizationFilterAttribute and overrides the OnAuthorization method. Here are a few blog posts showing how to extend authentication in ASP.NET Web API:

On the client, it's up to you. You may want to show a login form in a desktop application, handle things programmatically when accessing the service via code, etc. You'll follow the same pattern, though - handle the HTTP 401 status code and login to the server, either by posting to a login action or following the service's documented login API.
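Whatever the client, the dispatch logic is the same idea as jQuery's statusCode map; here's a minimal plain-JavaScript sketch of that pattern (hypothetical helper and handler names, not from the sample):

```javascript
// A tiny status-code dispatcher mirroring jQuery's statusCode map.
// handlers is an object keyed by HTTP status code; unmatched codes
// fall through to the fallback handler.
function dispatchByStatus(status, handlers, fallback) {
    var handler = handlers[status] || fallback;
    return handler(status);
}

// Example handlers for an API client: render on 200, start a login
// flow on 401, and surface anything else as an error.
var result = dispatchByStatus(401, {
    200: function () { return "render-data"; },
    401: function () { return "login-then-retry"; }
}, function (status) { return "error-" + status; });
// result is "login-then-retry"
```

The point is that the 401 branch is just one case in a table of status-code reactions, which keeps the "what do we do on unauthorized?" decision in the client where it belongs.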

The End... and Where To Next

That wraps up this series. Some ideas of where to go next:

Posted by Jon Galloway | 4 comment(s)

ASP.NET Web API - Screencast series Part 5: Custom Validation

We're continuing a six part series on ASP.NET Web API that accompanies the getting started screencast series. This is an introductory screencast series that walks through from File / New Project to some more advanced scenarios like Custom Validation and Authorization. The screencast videos are all short (3-5 minutes) and the sample code for the series is both available for download and browsable online. I did the screencasts, but the samples were written by the ASP.NET Web API team.

In Part 1 we looked at what ASP.NET Web API is, why you'd care, did the File / New Project thing, and did some basic HTTP testing using browser F12 developer tools.

In Part 2 we started to build up a sample that returns data from a repository in JSON format via GET methods.

In Part 3, we modified data on the server using DELETE and POST methods.

In Part 4, we extended our simple querying methods from Part 2, adding in support for paging and querying.

In Part 5, we'll add on support for Data Annotation based validation using an Action Filter.

[Video and code on the ASP.NET site]

Adding Validation Rules using Data Annotations

To start with, we add some validation rules to our model class using Data Annotations - Required and StringLength in this case.

[Required] 
public string Text { get; set; } 
 
[Required] 
[StringLength(10, ErrorMessage = "Author is too long! This was validated on the server.")] 
public string Author { get; set; } 
 
[Required] 
public string Email { get; set; } 

Writing an Action Filter that Enforces Validation Rules

In ASP.NET MVC, that would be it - the validation rules are enforced in controller actions, and automatically passed along as HTML5 data- attributes where they're handled via unobtrusive jQuery validation in the browser. ASP.NET Web API doesn't directly enforce those validation rules without a little more work, though. I think that might be because handling validation errors in an HTTP API isn't something you'd want to do automatically. Who knows, maybe they'll add that in later just to make me look dumb(er). But for now, it takes a bit of work to enforce those validation rules.

Fortunately, by a little more work I mean about 10-15 lines of code. It's easy to hook this kind of thing up using an ASP.NET MVC action filter. Action filters are really powerful, as they allow you to modify how controller actions work using the following methods:

  • OnActionExecuting – This method is called before a controller action is executed.
  • OnActionExecuted – This method is called after a controller action is executed.
  • OnResultExecuting – This method is called before a controller action result is executed.
  • OnResultExecuted – This method is called after a controller action result is executed.

You can apply an action filter attribute on specific actions, on an entire controller class, or globally. There are a few built-in action filters to handle common cases like custom authorization or error handling, and you can extend them if you need to handle a custom scenario.

ASP.NET Web API uses that same extensibility model via filters. If you want ASP.NET Web API to do something that's not built in, the hook for that is very often an action filter. As with ASP.NET MVC action filters, you can either extend any of the built-in filters (ResultLimit, AuthorizationFilter, ExceptionFilter) or write a custom ActionFilterAttribute.

public class ValidationActionFilter : ActionFilterAttribute 
{ 
    public override void OnActionExecuting(HttpActionContext context) 
    { 
        var modelState = context.ModelState; 
        if (!modelState.IsValid) 
        { 
            dynamic errors = new JsonObject(); 
            foreach (var key in modelState.Keys) 
            { 
                var state = modelState[key]; 
                if (state.Errors.Any()) 
                { 
                    errors[key] = state.Errors.First().ErrorMessage; 
                } 
            } 

            context.Response = new HttpResponseMessage<JsonValue>(errors, HttpStatusCode.BadRequest); 
        } 
    } 
}

This is an OnActionExecuting filter, so it executes before the Action. It takes in the context (note - this is a System.Web.Http.Controllers.HttpActionContext, not a System.Web.HttpContext), which gives it access to ModelState as well as other contextual information about the request. It runs any custom logic, and has the option of directly returning the response - as it does here when validation fails.

So, at a high level, we just need to check if the model fails validation, and if so, return an appropriate response.

1. Checking Data Annotation validation rules

As mentioned earlier in the series, ASP.NET Web API uses the same model binding system that's been in ASP.NET MVC for a while, so it already knows how to check validation rules from data annotations. That makes this step really easy - we just check the context.ModelState to see if it's valid. If it is, we're done - let the Action do its work and return the result. If it fails, go on to step 2.

2. Packaging up validation errors for the client

A single request can fail multiple validation rules, so to return useful information to the client we need to package up the error information. We could build up a structured, custom error results object, but we'd be serializing it to JSON when we were done, so why not just start there? ASP.NET Web API includes some very useful JSON classes, including JsonObject. JsonObject allows us to work with a C# dynamic that will be serialized as a JSON object very easily.
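For instance, a comment that fails both the Author and Email rules comes back to the client as a flat JSON object keyed by property name. The shape below is inferred from the filter code above; the Email text is the default Data Annotations message, not taken from the screencast:

```javascript
// The serialized ModelState errors arrive as a flat JSON object:
// one property per invalid field, value = the first error message.
var body = '{"Author":"Author is too long! This was validated on the server.",' +
           '"Email":"The Email field is required."}';

var errors = JSON.parse(body);
var fields = Object.keys(errors); // ["Author", "Email"]
```

That flat { field: message } shape is trivial for any client to consume - no custom error envelope to document.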

3. Returning a useful error response

Finally, when we're done wrapping up all the validation errors, we return a response to the client. We can use an HttpResponseMessage<JsonValue> to both return the results and set the HTTP Status Code (HTTP 400 Bad Request) in one line:

context.Response = new HttpResponseMessage<JsonValue>(errors, HttpStatusCode.BadRequest); 

You'll remember from earlier in the series that HTTP Status Codes are very important to HTTP APIs, and that by always setting the appropriate status code our clients (be they JavaScript, .NET desktop clients, iOS devices, or aliens who natively speak HTTP) will be able to understand responses without reading a bunch of API documentation. If a client makes a bad request, we'll tell them it was a bad request, and send along validation errors in JSON format in case they want specifics.

Registering a Global Filter

As with ASP.NET MVC filters, ASP.NET Web API filters can be applied at whatever level of granularity you'd like - action, controller, or globally. In this case, we'd like the rules to be enforced on all actions in the application. We can do that by registering a global filter. That's a one-line change - we add a call in our Global.asax.cs Configure method to register the new filter:

public static void Configure(HttpConfiguration config) 
{ 
    config.Filters.Add(new ValidationActionFilter()); 
 
    var kernel = new StandardKernel(); 
    kernel.Bind<ICommentRepository>().ToConstant(new InitialData()); 
    config.ServiceResolver.SetResolver( 
        t => kernel.TryGet(t), 
        t => kernel.GetAll(t)); 
} 

Handling Validation Responses in a JavaScript Client

In this sample, we're working with a JavaScript client. Just for the sake of beating a dead horse a bit more, I'll remind you that JavaScript in a browser is just one possible client.

To handle validation failures, we need to include a case for HTTP Status Code 400 in our $.ajax() jQuery call.

$.ajax({ 
    url: '/api/comments', 
    cache: false, 
    type: 'POST', 
    data: json, 
    contentType: 'application/json; charset=utf-8', 
    statusCode: { 
        201 /*Created*/: function (data) { 
            viewModel.comments.push(data); 
        }, 
        400 /* BadRequest */: function (jqxhr) { 
            var validationResult = $.parseJSON(jqxhr.responseText); 
            $.validator.unobtrusive.revalidate(form, validationResult); 
        } 
    } 
}); 
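If you're not using jQuery unobtrusive validation, that same 400 payload is easy to display by hand. A sketch, assuming the flat { field: message } shape the filter produces (the helper name is mine, not from the sample):

```javascript
// Flatten a { field: message } validation payload into display strings,
// e.g. for appending to a list of error messages in the page.
function formatValidationErrors(validationResult) {
    var messages = [];
    for (var field in validationResult) {
        if (validationResult.hasOwnProperty(field)) {
            messages.push(field + ": " + validationResult[field]);
        }
    }
    return messages;
}
```

You'd call this from the 400 handler with `$.parseJSON(jqxhr.responseText)` and render the resulting strings however your UI likes.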

Break the rules? Talk to the HTTP 400.

So here's how this looks in action - filling out the form with an Author name that exceeds 10 characters now gets an HTTP 400 response.

[Screenshot: the form submission failing with an HTTP 400 Bad Request response]

The error message shown above - "Author is too long! This was validated on the server." - is perhaps a little too smug, but is derived from the JSON error information in the response body:

[Screenshot: the validation error messages returned as JSON in the response body]

Onward!

That wraps up our look at supporting Data Annotation based validation using an Action Filter. In Part 6, we'll finish off the series with a look at Authorization using the built-in Authorize filter.

Posted by Jon Galloway | 4 comment(s)

ASP.NET Web API - Screencast series Part 4: Paging and Querying

We're continuing a six part series on ASP.NET Web API that accompanies the getting started screencast series. This is an introductory screencast series that walks through from File / New Project to some more advanced scenarios like Custom Validation and Authorization. The screencast videos are all short (3-5 minutes) and the sample code for the series is both available for download and browsable online. I did the screencasts, but the samples were written by the ASP.NET Web API team.

In Part 1 we looked at what ASP.NET Web API is, why you'd care, did the File / New Project thing, and did some basic HTTP testing using browser F12 developer tools.

In Part 2 we started to build up a sample that returns data from a repository in JSON format via GET methods.

In Part 3, we modified data on the server using DELETE and POST methods.

In Part 4, we'll extend our simple querying methods from Part 2, adding in support for paging and querying.

[Video and code on the ASP.NET site]

This part shows two approaches to querying data (paging really just being a specific querying case) - you can do it yourself using parameters passed in via querystring (as well as headers, other route parameters, cookies, etc.). You're welcome to do that if you'd like.

What I think is more interesting here is that Web API actions that return IQueryable automatically support OData query syntax, making it really easy to support some common query use cases like paging and filtering. A few important things to note:

  • This is just support for OData query syntax - you're not getting back data in OData format. The screencast demonstrates this by showing the GET methods are continuing to return the same JSON they did previously. So you don't have to "buy in" to the whole OData thing, you're just able to use the query syntax if you'd like.
  • This isn't full OData query support - full OData query syntax includes a lot of operations and features - but it is a pretty good subset: filter, orderby, skip, and top.
  • All you have to do to enable this OData query syntax is return an IQueryable rather than an IEnumerable. Often, that could be as simple as using the AsQueryable() extension method on your IEnumerable.
  • Query composition support lets you layer queries intelligently. If, for instance, you had an action that showed products by category using a query in your repository, you could also support paging on top of that. The result is an expression tree that's evaluated on-demand and includes both the Web API query and the underlying query.

So with all those bullet points and big words, you'd think this would be hard to hook up. Nope, all I did was change the return type from IEnumerable<Comment> to IQueryable<Comment> and convert the Get() method's IEnumerable result using the .AsQueryable() extension method.

public IQueryable<Comment> GetComments() 
{ 
    return repository.Get().AsQueryable(); 
} 

You still need to build up the query to provide the $top and $skip on the client, but you'd need to do that regardless. Here's how that looks:

$(function () { 
    //--------------------------------------------------------- 
    // Using Queryable to page 
    //--------------------------------------------------------- 
    $("#getCommentsQueryable").click(function () { 
        viewModel.comments([]); 
 
        var pageSize = $('#pageSize').val(); 
        var pageIndex = $('#pageIndex').val(); 
 
        var url = "/api/comments?$top=" + pageSize + '&$skip=' + (pageIndex * pageSize); 
 
        $.getJSON(url, function (data) { 
            // Update the Knockout model (and thus the UI) with the comments received back  
            // from the Web API call. 
            viewModel.comments(data); 
        }); 
 
        return false; 
    }); 
});

And the neat thing is that - without any modification to our server-side code - we can modify the above jQuery call to request the comments be sorted by author:

$(function () { 
    //--------------------------------------------------------- 
    // Using Queryable to page 
    //--------------------------------------------------------- 
    $("#getCommentsQueryable").click(function () { 
        viewModel.comments([]); 
 
        var pageSize = $('#pageSize').val(); 
        var pageIndex = $('#pageIndex').val(); 
 
        var url = "/api/comments?$top=" + pageSize + '&$skip=' + (pageIndex * pageSize) + '&$orderby=Author'; 
 
        $.getJSON(url, function (data) { 
            // Update the Knockout model (and thus the UI) with the comments received back  
            // from the Web API call. 
            viewModel.comments(data); 
        }); 
 
        return false; 
    }); 
});

So if you want to make use of OData query syntax, you can. If you don't like it, you're free to hook up your filtering and paging however you think is best. Neat.
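If you do go the OData query syntax route and find yourself building these query strings in several places, the assembly is easy to factor into a small helper (hypothetical; not part of the sample code):

```javascript
// Build an OData-style query string from paging/sorting options.
// pageIndex is zero-based; orderby is optional.
function odataQuery(pageSize, pageIndex, orderby) {
    var query = "?$top=" + pageSize + "&$skip=" + (pageIndex * pageSize);
    if (orderby) {
        query += "&$orderby=" + encodeURIComponent(orderby);
    }
    return query;
}

var url = "/api/comments" + odataQuery(10, 2, "Author");
// "/api/comments?$top=10&$skip=20&$orderby=Author"
```

That keeps the paging arithmetic ($skip = pageIndex * pageSize) in one place instead of repeated in every click handler.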

In Part 5, we'll add on support for Data Annotation based validation using an Action Filter.

Posted by Jon Galloway | with no comments

ASP.NET Web API - Screencast series Part 3: Delete and Update

We're continuing a six part series on ASP.NET Web API that accompanies the getting started screencast series. This is an introductory screencast series that walks through from File / New Project to some more advanced scenarios like Custom Validation and Authorization. The screencast videos are all short (3-5 minutes) and the sample code for the series is both available for download and browsable online. I did the screencasts, but the samples were written by the ASP.NET Web API team.

In Part 1 we looked at what ASP.NET Web API is, why you'd care, did the File / New Project thing, and did some basic HTTP testing using browser F12 developer tools.

In Part 2 we started to build up a sample that returns data from a repository in JSON format via GET methods.

In Part 3, we'll start to modify data on the server using DELETE and POST methods.

[Video and code on the ASP.NET site]

So far we've been looking at GET requests, where the difference between standard browsing in a web browser and calling an HTTP API isn't all that clear. Delete is where the difference becomes more obvious. With a "traditional" web page, to delete something you'd probably have a form that POSTs a request back to a controller. That controller has to know that it's really supposed to be deleting something (even though POST was really designed to create things), do the work, and then return some HTML to the client saying whether or not the delete succeeded. There's a good amount of plumbing involved in communicating between client and server.

That gets a lot easier when we just work with the standard HTTP DELETE verb. Here's how the server side code works:

public Comment DeleteComment(int id)  
{ 
    Comment comment; 
    if (!repository.TryGet(id, out comment)) 
        throw new HttpResponseException(HttpStatusCode.NotFound); 
    repository.Delete(id); 
    return comment; 
} 

If you look back at the GET /api/comments code in Part 2, you'll see that the two methods start out exactly the same, because the use cases are similar - we're looking up an item by id and either returning it or deleting it. The only difference is that this method deletes the comment once it finds it. We don't need to do anything special to handle cases where the id isn't found - the same HTTP 404 handling works fine here, too.

Pretty much all "traditional" browsing uses just two HTTP verbs - GET and POST - so you might not be used to making DELETE requests and assume they're hard. Not so! Here's the jQuery code that calls /api/comments with the DELETE verb:

$(function() { 
    $("a.delete").live('click', function () { 
        var id = $(this).data('comment-id'); 
 
        $.ajax({ 
            url: "/api/comments/" + id, 
            type: 'DELETE', 
            cache: false, 
            statusCode: { 
                200: function(data) { 
                    viewModel.comments.remove( 
                        function(comment) {  
                            return comment.ID == data.ID;  
                        } 
                    ); 
                } 
            } 
        }); 
 
        return false; 
    }); 
});

So in order to use the DELETE verb instead of GET, we're just using $.ajax() and setting the type to DELETE. Not hard.

But what's that statusCode business? Well, an HTTP status code of 200 is an OK response. Unless our Web API method sets another status (such as by throwing the Not Found exception we saw earlier), the default response status code is HTTP 200 - OK. That keeps the jQuery code pretty simple - it calls the Delete action, and if it gets back an HTTP 200, the server-side delete was successful, so the comment can be removed from the client-side view model.
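The status-code decision in that callback can be factored out into a plain function, which makes the logic easier to see. This helper is a hypothetical illustration of mine, not part of the official sample, and it works on a plain array rather than the Knockout observable:

```javascript
// Given the HTTP status code and the deleted comment the server sent back,
// remove the matching comment from a client-side array on success (HTTP 200).
// Returns true if a comment was removed, false otherwise.
function removeDeletedComment(statusCode, deleted, comments) {
    if (statusCode !== 200) {
        return false; // the server-side delete failed; leave the UI alone
    }
    for (var i = 0; i < comments.length; i++) {
        if (comments[i].ID === deleted.ID) {
            comments.splice(i, 1);
            return true;
        }
    }
    return false;
}
```

The point is the same one the jQuery code makes: the status code alone tells the client whether to update its state - no custom success flag in the payload needed.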

Adding a new comment uses the POST verb. It starts out looking like an MVC controller action, using model binding to get the new comment from JSON data into a C# model object to add to the repository, but there are some interesting differences.

public HttpResponseMessage<Comment> PostComment(Comment comment)  
{ 
    comment = repository.Add(comment); 
    var response = new HttpResponseMessage<Comment>(comment, HttpStatusCode.Created); 
    response.Headers.Location = new Uri(Request.RequestUri, "/api/comments/" + comment.ID.ToString()); 
    return response; 
} 

First off, the POST method is returning an HttpResponseMessage<Comment>. In the GET methods earlier, we were just returning a JSON payload with an HTTP 200 OK, so we could just return the model object and let Web API wrap it in an HttpResponseMessage with that HTTP 200 for us (much as ASP.NET MVC controller actions can return strings, which are automatically wrapped in a ContentResult). When we're creating a new comment, though, we want to follow standard REST practices and return the URL of the newly created comment in the Location header, and we can do that by explicitly creating the HttpResponseMessage and then setting the header information.

And here's a key point - by using HTTP standard status codes and headers, our response payload doesn't need to explain any context - the client can see from the status code that the POST succeeded, the location header tells it where to get it, and all it needs in the JSON payload is the actual content.
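As a rough sketch of what that looks like from the client's side, here's how a client might interpret the POST response. The response object shape and function name here are my own illustration (a real jQuery client would read the header via xhr.getResponseHeader('Location')):

```javascript
// Interpret a POST response the way an HTTP-aware client would:
// 201 Created means the comment was created, and the Location header
// tells us where to GET it - the JSON body carries only the content.
function interpretPostResponse(response) {
    if (response.status === 201) {
        return { created: true, location: response.headers['Location'] };
    }
    return { created: false, location: null };
}

// For a comment created with ID 7, the PostComment action above would
// respond with status 201 and Location "/api/comments/7".
```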

Note: This is a simplified sample. Among other things, you'll need to consider security and authorization in your Web APIs, especially in methods that create or delete data. We'll look at authorization in Part 6. As for security, you'll want to consider things like mass assignment if you're binding directly to model objects, etc.

In Part 4, we'll extend the simple query methods from Part 2, adding in support for paging and querying.

Posted by Jon Galloway | with no comments

ASP.NET Web API - Screencast series Part 2: Getting Data

We're continuing a six part series on ASP.NET Web API that accompanies the getting started screencast series. This is an introductory screencast series that walks through from File / New Project to some more advanced scenarios like Custom Validation and Authorization. The screencast videos are all short (3-5 minutes) and the sample code for the series is both available for download and browsable online. I did the screencasts, but the samples were written by the ASP.NET Web API team.

In Part 1 we looked at what ASP.NET Web API is, why you'd care, did the File / New Project thing, and did some basic HTTP testing using browser F12 developer tools.

This second screencast starts to build out the Comments example - a JSON API that's accessed via jQuery. This sample uses a simple in-memory repository. At this early stage, the GET /api/comments action just returns an IEnumerable<Comment>. In Part 4 we'll add on paging and filtering, and it gets more interesting.


[Video and code on the ASP.NET site]

The get by id case (e.g. GET /api/values/5) is a little more interesting. The method just returns a Comment if the Comment ID is valid, but if it's not found we throw an HttpResponseException with the correct HTTP status code (HTTP 404 Not Found). This is an important thing to get - HTTP defines common response status codes, so there's no need to implement any custom messaging here - we tell the requestor that the resource they requested wasn't there.

public Comment GetComment(int id) 
{ 
    Comment comment; 
    if (!repository.TryGet(id, out comment)) 
        throw new HttpResponseException(HttpStatusCode.NotFound); 
    return comment; 
} 

This is great because it's standard, and any client should know how to handle it. There's no need to invent custom messaging here, and we can talk to any client that understands HTTP - not just jQuery, and not just browsers.

But it's crazy easy to consume an HTTP API that returns JSON via jQuery. The example uses Knockout to bind the JSON values to HTML elements, but the thing to notice is that calling into /api/comments is really simple, and the result of the $.get() call is just JSON data, which is really easy to work with in JavaScript (JSON stands for JavaScript Object Notation, and it's JavaScript's native serialization format).

$(function() { 
    $("#getComments").click(function () { 
        // We're using a Knockout model. This clears out the existing comments. 
        viewModel.comments([]); 
 
        $.get('/api/comments', function (data) { 
            // Update the Knockout model (and thus the UI) with the comments received back  
            // from the Web API call. 
            viewModel.comments(data); 
        }); 
 
    }); 
});

That's it! Easy, huh? In Part 3, we'll start modifying data on the server using POST and DELETE.

Posted by Jon Galloway | 2 comment(s)

ASP.NET Web API - Screencast series with downloadable sample code - Part 1

There's a lot of great ASP.NET Web API content on the ASP.NET website at http://asp.net/web-api. I mentioned my screencast series in the original announcement post, but we've since added the sample code, so I thought it was worth pointing the series out specifically.

This is an introductory screencast series that walks through from File / New Project to some more advanced scenarios like Custom Validation and Authorization. The screencast videos are all short (3-5 minutes) and the sample code for the series is both available for download and browsable online. I did the screencasts, but the samples were written by the ASP.NET Web API team.

So - let's watch them together! Grab some popcorn and pay attention, because these are short. After each video, I'll talk about what I thought was important. I'm embedding the videos using HTML5 (MP4) with Silverlight fallback, but if something goes wrong or your browser / device / whatever doesn't support them, I'll include the link to where the videos are more professionally hosted on the ASP.NET site. Note also, if you're following along with the samples, that since Part 1 just looks at the File / New Project step, the screencast part numbers are one ahead of the sample part numbers - so screencast 4 matches sample code demo 3.

Note: I started this as one long post for all 6 parts, but as it grew over 2000 words I figured it'd be better to break it up.

Part 1: Your First Web API

[Video and code on the ASP.NET site]

This screencast starts with an overview of why you'd want to use ASP.NET Web API:

  • Reach more clients (thinking beyond the browser to mobile clients, other applications, etc.)
  • Scale (who doesn't love the cloud?!)
  • Embrace HTTP (a focus on HTTP both on client and server really simplifies and focuses service interactions)

Next, I start a new ASP.NET Web API application and show some of the basics of the ApiController. We don't write any new code in this first step, just look at the example controller that's created by File / New Project.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Net.Http;
using System.Web.Http;

namespace NewProject_Mvc4BetaWebApi.Controllers
{
    public class ValuesController : ApiController
    {
        // GET /api/values
        public IEnumerable<string> Get()
        {
            return new string[] { "value1", "value2" };
        }

        // GET /api/values/5
        public string Get(int id)
        {
            return "value";
        }

        // POST /api/values
        public void Post(string value)
        {
        }

        // PUT /api/values/5
        public void Put(int id, string value)
        {
        }

        // DELETE /api/values/5
        public void Delete(int id)
        {
        }
    }
}

Finally, we walk through testing the output of this API controller using browser tools. There are several ways you can test API output, including Fiddler (as described by Scott Hanselman in this post) and built-in developer tools available in all modern browsers. For simplicity I used Internet Explorer 9 F12 developer tools, but you're of course welcome to use whatever you'd like.

A few important things to note:

  • This class derives from an ApiController base class, not the standard ASP.NET MVC Controller base class. The two are similar in the places where the needs of API controllers and HTML-returning controllers overlap, and different where they diverge.
  • A good example of where they differ is in the routing conventions. In an ApiController, there's no need for an "action" to be specified in the route, since the HTTP verbs are the actions. We don't need to do anything to map verbs to actions; when a request comes in to /api/values/5 with the DELETE HTTP verb, it's automatically handled by the Delete method in the ApiController.
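The verb-to-action convention above can be sketched in a few lines. This is an illustrative simplification of mine, not Web API's actual dispatcher (which also matches method name prefixes, so GetComment handles GET too):

```javascript
// ApiController routing convention, sketched: the HTTP verb picks the action.
// An incoming request's verb maps straight to a controller method name.
function resolveAction(httpVerb) {
    var conventions = { GET: 'Get', POST: 'Post', PUT: 'Put', DELETE: 'Delete' };
    return conventions[httpVerb] || null;
}

// So a DELETE to /api/values/5 dispatches to the Delete(int id) method.
```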

The comments above the API methods show sample URL's and HTTP verbs, so we can test out the first two GET methods by browsing to the site in IE9, hitting F12 to bring up the tools, and entering /api/values in the URL:

[Screenshot: the GET /api/values response in the IE9 F12 developer tools]

That sample action returns a list of values. To get just one value back, we'd browse to /api/values/5:

[Screenshot: the GET /api/values/5 response in the IE9 F12 developer tools]

That's it for Part 1. In Part 2 we'll look at getting data (beyond hardcoded strings) and start building out a sample application.

Posted by Jon Galloway | 10 comment(s)