After recently migrating an important new website to use Windows Azure “Web Roles” I wanted an easier way to deploy new versions to the Azure Staging environment as well as a reliable process to rollback deployments to a certain “known good” source control commit checkpoint.  By configuring our JetBrains’ TeamCity CI server to utilize Windows Azure PowerShell cmdlets to create new automated deployments, I’ll show you how to take control of your Azure publish process.

Step 0: Configuring your Azure Project in Visual Studio

Before we can start looking at automating the deployment, we should make sure manual deployments from Visual Studio are working properly.  Detailed information for setting up deployments can be found at http://msdn.microsoft.com/en-us/library/windowsazure/ff683672.aspx#PublishAzure or by doing some quick Googling, but the basics are as follows:

  1. Install the prerequisite Windows Azure SDK
  2. Create an Azure project by right-clicking on your web project and choosing “Add Windows Azure Cloud Service Project” (or by manually adding that project type)
  3. Configure your Role and Service Configuration/Definition as desired
  4. Right-click on your Azure project and choose “Publish,” create a publish profile, and push to your web role

You don’t actually have to do step #4 and create a publish profile, but it’s a good exercise to make sure everything is working properly.  Once your Windows Azure project is set up correctly, we are ready to move on to understanding the Azure publish process.

Understanding the Azure Publish Process

The actual Windows Azure project is fairly simple at its core—it builds your dependent roles (in our case, a web role) against a specific service and build configuration, and outputs two files:

  • ServiceConfiguration.Cloud.cscfg: This is just the file containing your package configuration info, for example Instance Count, OsFamily, ConnectionString and other Setting information.
  • ProjectName.Azure.cspkg: This is the package file that contains the guts of your deployment, including all deployable files.

When you package your Azure project, these two files will be created within the directory ./[ProjectName].Azure/bin/[ConfigName]/app.publish/.  If you want to build your Azure Project from the command line, it’s as simple as calling MSBuild on the “Publish” target:

msbuild.exe /target:Publish
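In a CI context you will typically point MSBuild at your solution file and pass the build configuration explicitly.  A minimal sketch (the solution name and configuration below are placeholders for your own):

msbuild.exe MySolution.sln /target:Publish /property:Configuration=Release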

Windows Azure PowerShell Cmdlets

The last pieces of the puzzle that make CI automation possible are the Azure PowerShell Cmdlets (http://msdn.microsoft.com/en-us/library/windowsazure/jj156055.aspx).  These cmdlets are what will let us create deployments without Visual Studio or other user intervention.

Preparing TeamCity for Azure Deployments

Now we are ready to get our TeamCity server set up so it can build and deploy Windows Azure projects, which we now know requires the Azure SDK and the Windows Azure PowerShell Cmdlets.

Once the SDK is installed, I recommend running a test build to make sure your project is building correctly.  You’ll want to set up your build step using MSBuild with the “Publish” target against your solution file.  Mine looks like this:

image

Assuming the build was successful, you will now have the two *.cspkg and *.cscfg files within your build directory.  If the build was red (failed), take a look at the build logs and keep an eye out for “unsupported project type” or other build errors, which will need to be addressed before the CI deployment can be completed.

With a successful build we are now ready to install and configure the Windows Azure PowerShell Cmdlets:

  • Follow the instructions at http://msdn.microsoft.com/en-us/library/windowsazure/jj554332 to install the Cmdlets and configure PowerShell
  • After installing the Cmdlets, you’ll need to get your Azure Subscription Info using the Get-AzurePublishSettingsFile command. Store the resulting *.publishsettings file somewhere you can get to easily, like C:\TeamCity, because you will need to reference it later from your deploy script.
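For example, a quick sanity check on the build server might look like this (the .publishsettings file name below is a placeholder):

# Opens a browser window to download your *.publishsettings file
Get-AzurePublishSettingsFile
# Import the downloaded file
Import-AzurePublishSettingsFile "C:\TeamCity\MySubscription.publishsettings"
# Verify the subscription information was imported
Get-AzureSubscription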

Scripting the CI Deploy Process

Now that the cmdlets are installed on our TeamCity server, we are ready to script the actual deployment using a TeamCity “PowerShell” build runner.  Before we look at any code, here’s a breakdown of our deployment algorithm:

  1. Set up your variables, including the location of the *.cspkg and *.cscfg files produced in the earlier MSBuild step (remember, the folder is something like [ProjectName].Azure/bin/[ConfigName]/app.publish/)
  2. Import the Windows Azure PowerShell Cmdlets
  3. Import and set your Azure Subscription information (this is basically your authentication/authorization step, so protect your settings file)
  4. Now look for a current deployment, and if you find one Upgrade it, else Create a new deployment

Pretty simple and straightforward.  Now let’s look at the code (also available as a gist here: https://gist.github.com/3694398):

$subscription = "[Your Subscription Name]"
$service = "[Your Azure Service Name]"
$slot = "staging" #staging or production
$package = "[ProjectName]\bin\[BuildConfigName]\app.publish\[ProjectName].cspkg"
$configuration = "[ProjectName]\bin\[BuildConfigName]\app.publish\ServiceConfiguration.Cloud.cscfg"
$timeStampFormat = "g"
$deploymentLabel = "ContinuousDeploy to $service v%build.number%"
 
Write-Output "Running Azure Imports"
Import-Module "C:\Program Files (x86)\Microsoft SDKs\Windows Azure\PowerShell\Azure\*.psd1"
Import-AzurePublishSettingsFile "C:\TeamCity\[PSFileName].publishsettings"
Set-AzureSubscription -CurrentStorageAccount $service -SubscriptionName $subscription
 
function Publish(){
 $deployment = Get-AzureDeployment -ServiceName $service -Slot $slot -ErrorVariable a -ErrorAction silentlycontinue 
 
 if ($a[0] -ne $null) {
    Write-Output "$(Get-Date -f $timeStampFormat) - No deployment is detected. Creating a new deployment. "
 }
 
 if ($deployment.Name -ne $null) {
    #Update deployment inplace (usually faster, cheaper, won't destroy VIP)
    Write-Output "$(Get-Date -f $timeStampFormat) - Deployment exists in $servicename.  Upgrading deployment."
    UpgradeDeployment
 } else {
    CreateNewDeployment
 }
}
 
function CreateNewDeployment()
{
    write-progress -id 3 -activity "Creating New Deployment" -Status "In progress"
    Write-Output "$(Get-Date -f $timeStampFormat) - Creating New Deployment: In progress"
 
    $opstat = New-AzureDeployment -Slot $slot -Package $package -Configuration $configuration -label $deploymentLabel -ServiceName $service
 
    $completeDeployment = Get-AzureDeployment -ServiceName $service -Slot $slot
    $completeDeploymentID = $completeDeployment.deploymentid
 
    write-progress -id 3 -activity "Creating New Deployment" -completed -Status "Complete"
    Write-Output "$(Get-Date -f $timeStampFormat) - Creating New Deployment: Complete, Deployment ID: $completeDeploymentID"
}
 
function UpgradeDeployment()
{
    write-progress -id 3 -activity "Upgrading Deployment" -Status "In progress"
    Write-Output "$(Get-Date -f $timeStampFormat) - Upgrading Deployment: In progress"
 
    # perform Update-Deployment
    $setdeployment = Set-AzureDeployment -Upgrade -Slot $slot -Package $package -Configuration $configuration -label $deploymentLabel -ServiceName $service -Force
 
    $completeDeployment = Get-AzureDeployment -ServiceName $service -Slot $slot
    $completeDeploymentID = $completeDeployment.deploymentid
 
    write-progress -id 3 -activity "Upgrading Deployment" -completed -Status "Complete"
    Write-Output "$(Get-Date -f $timeStampFormat) - Upgrading Deployment: Complete, Deployment ID: $completeDeploymentID"
}
 
Write-Output "Create Azure Deployment"
Publish

 

Creating the TeamCity Build Step

The only thing left is to create a second build step, after your MSBuild “Publish” step, with the build runner type “PowerShell”.  Then set your script to “Source Code,” the script execution mode to “Put script into PowerShell stdin with “-Command” arguments” and then copy/paste in the above script (replacing the placeholder sections with your values).  This should look like the following:

image

 

Wrap Up

After combining the MSBuild /target:Publish step (which creates the necessary Windows Azure *.cspkg and *.cscfg files) and a PowerShell script step which utilizes the Azure PowerShell Cmdlets, we have a fully deployable build configuration in TeamCity.  You can configure this step to run whenever you’d like using build triggers – for example, you could even deploy whenever a new commit comes in on the master branch and passes all required tests.

In the script I’ve hardcoded that every deployment goes to the Staging environment on Azure, but you could deploy straight to Production if you want to, or even set up a deployment configuration variable and set it as desired, as in the sketch below.
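A minimal sketch of that approach, assuming a TeamCity configuration parameter named env.DeploySlot (the parameter name is hypothetical):

$slot = "%env.DeploySlot%" #"staging" or "production", set per build configuration in TeamCity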

After your TeamCity Build Configuration is complete, you’ll see something that looks like this:

image

Whenever you click the “Run” button, all of your code will be compiled, published, and deployed to Windows Azure!

One additional enormous benefit of automating the process this way is that you can easily deploy any specific source control changeset by clicking the little ellipsis button next to "Run.”  This will bring up a dialog like the one below, where you can select the last change to use for your deployment.  Since Azure Web Role deployments don’t have any rollback functionality, this is a critical feature.

image

 

Enjoy!

Windows Azure recently introduced their “Websites” concept which allows quick and easy deployment of ASP.NET, Node.js and PHP websites to the Azure cloud.  One of the most exciting developments was the ability to deploy websites rapidly using “Git Publishing,” meaning if you are using Git you just need to add a new remote and do “git push azure” to deploy your code on demand.
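For reference, wiring up that remote looks roughly like this (the user name, site name and URL below are placeholders; grab the exact Git URL from your site’s dashboard):

git remote add azure https://deployuser@mysite.scm.azurewebsites.net/mysite.git
git push azure master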

As I began to utilize this feature I stumbled quickly upon a problem – how do we configure web.config “secrets,” like app settings and connection string passwords?  Since git publishing just pushes your current repository directly to Azure, I was worried that I would have to check my secrets into source control, which is an obviously undesirable solution, especially on collaborative teams.

It turns out there are a few aspects of Azure which allow you to keep configuration secrets and still deploy your website using the awesome git publishing method.  In this blog post I’m going to show you how easy it is to configure appSettings, and I’ll show you a workaround for keeping your connectionStrings and other information secret as well.

AppSettings are easy!

Configuring AppSettings is very simple thanks to the Windows Azure “Portal” (http://windows.azure.com).  All you have to do is go into the Dashboard for your website and click “Configure.”

image

Now scroll down and you will see a section for app settings and connection strings.

image

Any app setting key-value pair you set here will either overwrite an existing value in your web.config or add a new value if there is no match.  This is the expected behavior, and it makes it very easy to set up your configuration from the web portal.

The app settings here will persist across deployments, so you can keep your local web.config checked into source control and free of secret configuration info.
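For example, the web.config you check in can carry a harmless placeholder that the portal value overrides at runtime (the key name here is made up):

<appSettings>
  <!-- Real value is set in the Azure portal and overrides this at runtime -->
  <add key="TwitterApiKey" value="placeholder" />
</appSettings>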

ConnectionStrings Cannot Be Modified

Unfortunately, as you can see in the above picture, connection strings cannot be set through the admin interface.  Every time we deploy, the values in your web.config (possibly merged with your transformation file) will be used.  At this point it doesn’t seem like we can store connection string information outside of source control; however, with a little knowledge of the deployment process, you will see this is not an insurmountable issue.

The Git Publishing Process

The engine behind the git deployment process in Windows Azure is “Kudu,” an open source project hosted on GitHub.  The relevant details we need to know about Kudu are:

  1. Currently Kudu will run the build configuration “Release” for any deployment
  2. Kudu will NOT remove a file if there is no matching file in your deployment

Point #2 here is the key—if you have a file sitting on the server, let’s say “robots.txt,” and then you remove that file from your git repository and deploy using your latest change, that robots.txt file will still exist on the remote.

With this in mind, and knowing that Kudu uses the “Release” build configuration, let’s try to keep our secret information outside of source control and attempt a deployment.

ConnectionStrings and Other Web.Config Transformations

First we have to make sure web.release.config is not part of our repository.  Add web.release.config to your .gitignore file and, if necessary, use “git rm” to remove the file from your repository.  Now that web.release.config will not be sent up as part of our deployment, we just have to figure out how to get our secret web.release.config file onto the server, knowing that Kudu will not replace the file.
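In practice that first step might look like the following (using --cached so your local copy of the file stays on disk):

echo Web.Release.config >> .gitignore
git rm --cached Web.Release.config
git commit -m "Stop tracking Web.Release.config"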

Here our old friend FTP comes to the rescue. Under your website in the Azure portal, go to “Dashboard” and down the right side you will see some basic information about your setup, including “FTP HOSTNAME.”

image

We are going to use that FTP site to drop in our desired web.release.config file, and when we do future deployments it will be used to transform our base web.config with whatever magical values we choose.
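Here is a sketch of what such a web.release.config might contain, using a standard xdt transform to swap in the real connection string (the connection string name and value are placeholders):

<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <connectionStrings>
    <add name="DefaultConnection"
         connectionString="Server=myserver;Database=mydb;User Id=app;Password=secret"
         xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
  </connectionStrings>
</configuration>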

I’m just going to use Windows Explorer for FTP to keep things simple.  On connecting you get the following prompt:

image

Make sure to enter your username exactly as shown in the dashboard’s “DEPLOYMENT USER” field, which takes the form ‘{sitename}\{username}’.

Once you are in, you’ll see two folders, “Log Files” and “site.”  You’ll want to choose “site” and then “repository,” so the full path is ftp://[yourftp].ftp.azurewebsites.windows.net/site/repository/

Within this folder, navigate to your web.config location and drop in your web.release.config file.  This file will persist across deployments and automatically transform your web.config.

image

As you can see above, towards the middle of this deployment Kudu reported to us “Transformed Web.config using Web.Release.config.”  This is exactly what we were trying to accomplish, and now we have the ability to put anything we want into our Web.Release.config, keep that information outside of source control, and have the transformations persist across deployments.

Conclusion

It is very simple to configure “AppSettings” using the Azure Portal, but setting other web.config values requires a bit more effort, especially when you don’t want your configuration values to be stored and versioned in source control.  Using FTP, we’ve shown that we can directly place our configuration transform file and have it automatically merge on deployment, allowing us to set virtually any option we desire.

Enjoy!

Many git users may not be aware that you can set up git aliases for either providing shortcuts to existing commands (like typing ‘git co’ instead of ‘git checkout’) or even for creating entirely new commands (‘git hist’ for printing your git history formatted how you like it).  Git aliases can be local to a single repository, or (more commonly) global.

To manage git aliases you need to use the ‘git config’ command (see http://git-scm.com/docs/git-config), and for making your aliases global use ‘git config --global’.  Let’s make a quick alias now:

git config --global alias.co 'checkout'

This command tells git to create a global alias ‘co’ which will fire off the ‘checkout’ command when typed.  This saves a few keystrokes, so now ‘git co master’ does a checkout of master, or ‘git co mybranch’ will checkout the branch ‘mybranch’.

My Favorite Aliases

Here’s a list of my favorite aliases that I make sure to add to any new git install.

git config --global alias.go 'checkout'
git config --global alias.in 'commit -a'
git config --global alias.up 'push origin'
git config --global alias.down 'pull origin'

I particularly like these aliases since they reduce typing on the most common commands, but also because they make me feel just a little bit like James Brown singing:

uh uh git down! git in!  huh, yea.  git up!  ooohhh owwwww, git in now, yea git up.

Delusional rock fantasy aside, the above is actually a regular git flow I  use every day, using the above aliases to pull down the latest, check in changes, then push back up to origin.

Other Useful Aliases

Here are a couple of other useful aliases to save you some finger and brain strain:

git config --global alias.st 'status'
git config --global alias.br 'checkout -b' #For creating branches
git config --global alias.a 'add .' #Shortcut to add all changes to the index
git config --global alias.hist 'log --pretty --graph --date=short'

You can get pretty complex with aliases, having them call out to shell scripts or running multiple commands, but I’ll stick with the basics here & point you at some advanced usage examples on other pages at the end of this post.
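As a small taste, prefixing an alias with ‘!’ tells git to run it as a shell command, which lets a single alias chain multiple commands (the alias name and remotes below are purely illustrative):

git config --global alias.publish '!git push origin master && git push backup master'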

Viewing Your Stored Aliases

Your aliases are stored in two places: global aliases are stored in ~/.gitconfig (.gitconfig in your home directory), and local aliases are stored in .git/config (the config file within your local git directory).  However, you don’t ever need to look in those files—instead use one of the two following commands:

git config --get-regexp alias #Shows all of your aliases
git config --global --get-regexp alias #Just global aliases

Now that you have a list of your aliases, you can add more or remove/replace them using ‘git config’ (for example, to remove alias.test, use ‘git config --unset alias.test’).

Conclusion and Links

Git aliases can save you lots of typing and can prevent dangerous typos of difficult commands (for example, you could alias rebases or fetch/merge).  They are incredibly flexible, powerful, and easy to setup (and, since they live in your .gitconfig, quite portable). 

I’ll leave you with some links to useful git alias information:

Enjoy!

Now that I have my source code being checked into GitHub and have TeamCity doing automatic builds (and running tests), I thought it was about time to take the last big step and automatically deploy the latest version of an application to a live site (either for testing or just straight to production, it’s up to you) whenever a successful build has taken place.

The best resource I’ve found for deploying directly from TeamCity (note: the steps would be the same for any CI server) can be found in Troy Hunt’s excellent 5 part blog series here: http://www.troyhunt.com/2010/11/you-deploying-it-wrong-teamcity.html.  It is very comprehensive and well worth a read, but I had some issues configuring deployment permissions and wanted to detail them here in case it helps someone else down the road.

An Overview Of the CI Deployment Process

Here’s a basic idea of how to get started—if you’ve ever used the Visual Studio 2010 “Publish” wizard then you are halfway there.  If you have any questions on these steps, refer to the blog series mentioned above or Google with Bing for help.

  • Install Microsoft Web Deploy 2.0 from http://www.iis.net/download/webdeploy on the server you wish to publish to.
  • Setup Web.Config transformations for your desired build configurations (see http://msdn.microsoft.com/en-us/library/dd465318.aspx)
  • Inside Visual Studio, right click –> properties on your Web project, and under Package/Publish Web setup the IIS Web Application name to match your IIS server
    • This is the virtual directory inside IIS that your deployment will be pushed to.  Set it up for each desired build configuration.
  • Setup Permissions on your IIS Server to allow web deployment from your CI agent or service account (detailed below).
  • Craft an MSBuild command that will build AND deploy your web application, and run it during your CI Server build process (also detailed below).

Deploying Via MSBuild

That last point requires a bit of explanation as it’s the crux of the problem (other than permissions, which I’ll get to later).  What we want MSBuild to do here is to build your project/solution using the desired build configuration (like Debug/Release) and then to use the MS Web Deploy service to push your auto-created deployment package to IIS.  Here’s an example of MSBuild parameters from Troy’s blog that will accomplish this:

/P:Configuration=%env.Configuration%
/P:DeployOnBuild=True
/P:DeployTarget=MSDeployPublish
/P:MsDeployServiceUrl=https://%env.TargetServer%/MsDeploy.axd
/P:AllowUntrustedCertificate=True
/P:MSDeployPublishMethod=WMSvc
/P:CreatePackageOnPublish=True
/P:UserName=AutoDeploy\Administrator
/P:Password=Passw0rd

Basically the above parameters use the provided env.Configuration (Debug/Release) to set the build configuration and then tell MSBuild to deploy using WMSvc (the web management service) to the service url located at env.TargetServer (you can set the defaults on your CI server).

This is pretty handy, but unfortunately for me (and maybe for you) there are two problems here:

#1: The user (AutoDeploy\Administrator) needs to be an admin on your IIS, and you need to have allowed Administrators to bypass rules in the “Management Service Delegation” area of IIS (see ScottGu’s post here: http://weblogs.asp.net/scottgu/archive/2010/09/13/automating-deployment-with-microsoft-web-deploy.aspx).  For me, this is too much by itself.

#2: The username and password are passed in plain text through your configuration, as shown above. Not good.

The rest of this post will detail how to solve these issues, and in the end we will set up a service account that has non-admin access and doesn’t require passing credentials manually.

Setting Up the Service Account

The first step is to create a non-admin service account that will be used on both your build server and IIS boxes. I’ll be using a no-login domain account called “mydomain\builderservice”, but any account would work.

On your CI server (in my case, TeamCity), you’ll want to run your build agent as your new service account, as shown here:

image

Note you might also have to give this service access to certain files. For example, in TeamCity the build agent needs access to C:\TeamCity\buildAgent to do its work.
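One way to grant that access from an elevated command prompt (using the example account name from above; adjust the path and rights as needed):

icacls "C:\TeamCity\buildAgent" /grant mydomain\builderservice:(OI)(CI)M /T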

Changing the MSBuild Script to Run As “Current User”

This part was fairly tricky since simply removing the UserName/Password parameters from the msbuild command will not work as expected.  Instead you must specify the AuthType to be NTLM (or it will default to Basic) and leave UserName blank so it will impersonate the current user (in our case, the build agent’s service account).  The relevant parameters are /P:UserName= /P:AuthType=NTLM.  This makes the final MSBuild parameters:

/P:Configuration=%env.Configuration%
/P:DeployOnBuild=True 
/P:DeployTarget=MSDeployPublish 
/P:MsDeployServiceUrl=https://%env.TargetServer%/MsDeploy.axd
/P:AllowUntrustedCertificate=True 
/P:MsDeployPublishMethod=WMSvc 
/P:CreatePackageOnPublish=True 
/P:UserName= 
/P:AuthType=NTLM

Now that we have TeamCity running MSBuild with these parameters while running under a service account, we need to grant permission on the IIS server that has MS Deploy installed (%env.TargetServer%).

IIS Server Permission For Deployment

Your mileage may vary, but in order to get a successful web deployment from a non-administrator account, I had to perform the following steps, each of which I will explain in detail.

  1. Enable Windows Authentication for the Web Management Server (WMSvc)
  2. Grant Management Service Delegation rights at the Site level to the necessary providers
  3. Grant IIS Manager Permissions to your virtual application
  4. Grant file/folder permissions to the web directory

#1: Enable Windows Authentication for the Web Management Server (WMSvc)

In order to get the WMSvc to accept my build agent credentials, I had to enable Windows Authentication inside the IIS Management Service. You can do this either from the UI or the Windows Registry as described in this stackoverflow post.  For convenience, here is the registry command (make sure to restart the WMSvc after making the change): ‘reg add HKLM\Software\Microsoft\WebManagement\Server /v WindowsAuthenticationEnabled /t REG_DWORD /d 1’

#2: Grant Management Service Delegation rights at the Site level to the necessary providers

Next you must delegate to your service account the rights to perform certain actions through the management service providers. If you forget any of these, you will get a helpful error message saying that you don’t have rights to run a certain provider (ex: “createApp”), so you’d then need to come back in and add those.  I needed the following providers: “contentPath, iisApp, createApp, setAcl.”

To set a delegation rule go to your site root in IIS and under Management choose “Management Service Delegation.”  Now choose “Add Rule” in the upper-right and choose the “Deploy Applications with Content” template (this has the best defaults) and then add in your desired providers. Mine looked like this:

image

Click ok, and now that the rule is defined you just have to select the newly created rule and choose “Add User To Role.”  In the ensuing dialog enter the name of your service account and press OK.

image

#3: Grant IIS Manager Permissions to your virtual application

Now that you granted site-level provider access within the {userScope} path, you must give the account IIS permission on the site/application that you will be deploying to.  Choose your site and then under Management choose “IIS Manager Permissions.”  Now click “Allow User” and enter the account name again to grant access.

image

#4: Grant file/folder permissions to the web directory

This one is pretty obvious, but you still have to remember to give full control to your service account at the OS-folder level (ex: C:\inetpub\SiteName).  Right-click –> Properties –> Security –> Edit, and then choose Add.  Now type in your service account one more time, grant it full control, and you are finished.

image

Celebrate!

That was certainly a lot of info, and I hope it makes someone’s deployment configuration easier.  Integrating your build, test, and deployments with a CI Server like TeamCity has many advantages, including the ability to handle “deploy automatically only when all tests pass” and “oops, deploy the version from last week” scenarios.

May all your Deployments be green.

image

Enjoy!

Last week I gave a presentation at the 2011 UC Davis IT Security Symposium that covered input validation features in HTML5.  I mostly discussed the following three topics:

  1. New Html5 Input Types (like <input type="email" />)
  2. Html5 Constraints (like <input type="text" required maxlength="8" />)
  3. Polyfills

The slides only cover part of the story since there are a few “live demos.”  You can find all of the demo code in my github repository https://github.com/srkirkland/ITSecuritySymposium.  You’ll need ASP.NET MVC 3 installed to run them.

The slides are also available in my GitHub repository, but I’ve added them to SlideShare as well because that’s what the cool kids do: http://www.slideshare.net/srkirkland/data-validation-in-web-applications.

I believe the presentation was well received and most people learned something, so I just wanted to share.  When loading up the Html5 demo just click on the Html5 tab and go through each example. Enjoy!

 

[Examples from the Slides and Demos]

 

image

image

image

One great benefit of having a Continuous Integration server like TeamCity (http://www.jetbrains.com/teamcity/) building your code is that you can hook into the build process to have it also handle tedious or time-consuming tasks for you, such as running all of your unit tests, code coverage analyses, etc.

Now that NuGet (http://nuget.org/) has arrived and simplified package management in .NET, wouldn’t it be nice to have your CI server build an updated NuGet package for you along with every build?  And why not go one step further and have your build process automatically push the newest version of your package to the central NuGet server (or even your own hosted NuGet server)?

It turns out this is actually very easy to do – let’s see how.

Super-Quick NuGet Package Introduction

If you would like to familiarize yourself with what NuGet is and how to use it, take a look at the project homepage at http://nuget.codeplex.com/

For this post it’s really only necessary to know that a NuGet package starts out as a specification file (*.nuspec) and usually some deliverable content (Dlls, script files, etc).  Once you have a specification file, you use the NuGet.exe command line utility to turn it into a package (*.nupkg) that can then be uploaded to the masses via http://nuget.org/.  [Full details are available at http://nuget.codeplex.com/documentation?title=Creating%20a%20Package].
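For reference, a bare-bones *.nuspec looks something like this (all values are placeholders; the version element gets overridden by the -Version switch we’ll pass to NuGet.exe later):

<?xml version="1.0"?>
<package>
  <metadata>
    <id>MyProject</id>
    <version>1.0.0</version>
    <authors>Your Name</authors>
    <description>A short description of the package.</description>
  </metadata>
</package>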

Building and Deploying Your Package

You’ll need to have your *.nuspec file checked into source control, and I also find it useful to check in the nuget.exe command-line utility as well (although this is not necessary as long as it is on your build server).  I tend to put my specifications in a /Build/NuGet folder, so it’s outside of the main /src/ folder.

Note: The following steps are TeamCity specific, but any build process with the ability to run a batch file should be able to do the same thing.

1. Open up your project’s build configuration and select #3, “Build Steps.”

image

2. Click “Add Build Step.”

3. Under Runner Type choose “Command Line,” and then under Run choose “Custom Script.”  You can optionally specify a working directory to have your script run in that directory, which I find to be quite helpful.

image

4.  Now you are ready to write your custom script, which should be able to perform the following steps: (1) clean up old *.nupkg files, (2) create your newest package with the correct version number, and (3) push that package up to nuget.org or another NuGet server.

My script is as follows:

del *.nupkg
 
NuGet.exe pack Project\Project.nuspec -Version %system.build.number%
 
forfiles /m *.nupkg /c "cmd /c NuGet.exe push @FILE <your-key>"

5. That’s all there is to it, just replace <your-key> with your access key from nuget.org (look under MyAccount)

image

Script Breakdown:

In the second command, notice how I use TeamCity’s %system.build.number% to inject the build number into the generated package.  This is very important because NuGet pays close attention to your package version number (as David Ebbo describes in detail in his NuGet versioning blog series http://blog.davidebbo.com/2011/01/nuget-versioning-part-3-unification-via.html).

The final command is pretty fun: for every file that ends in *.nupkg, it calls ‘NuGet.exe push <filename> <your-key>’.  I really like this approach, and it could even be adapted to build and push multiple NuGet packages during every build run, as in the sketch below.
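A sketch of that multi-package variation, following the same forfiles pattern (assuming each package has its own *.nuspec file in the working directory):

del *.nupkg
 
forfiles /m *.nuspec /c "cmd /c NuGet.exe pack @FILE -Version %system.build.number%"
 
forfiles /m *.nupkg /c "cmd /c NuGet.exe push @FILE <your-key>"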

Wrap Up

Overall there wasn’t too much work to do: we just created a command line build step and added a few lines of script to automatically build and push versioned NuGet packages.

Now that I’m using NuGet a lot more, with one OSS project on nuget.org (http://dataannotationsextensions.org/) and several hosted on my own internal company NuGet server, I find this automatic build and deploy process the perfect way to keep my packages up to date. 

Enjoy!

I was writing conventions for FluentNHibernate the other day and ran into the need to pluralize a given string, which immediately made me think of the Ruby on Rails Inflector.  It turns out there is a .NET library out there also capable of doing word inflection, originally written (I believe) by Andrew Peters, though the link I had no longer works.  The entire Inflector class is only a little over 200 lines long and can be easily included into any project, and it contains the Pluralize() method along with a few other helpful methods (like Singularize(), Camelize(), Capitalize(), etc).

The Inflector class is available in its entirety from my github repository https://github.com/srkirkland/Inflector.  In addition to the Inflector.cs class I added tests for every single method available so you can gain an understanding of what each method does.  Also, if you are wondering about a specific test case feel free to fork my project and add your own test cases to ensure Inflector does what you expect.
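Direct usage is about as simple as it gets; here’s a quick sketch assuming the static Inflector class from the repository:

string plural = Inflector.Pluralize("quiz");     // "quizzes"
string singular = Inflector.Singularize("mice"); // "mouse"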

Here is an example of some test cases for pluralize:

TestData.Add("quiz", "quizzes");
TestData.Add("perspective", "perspectives");
TestData.Add("ox", "oxen");
TestData.Add("buffalo", "buffaloes");
TestData.Add("tomato", "tomatoes");
TestData.Add("dwarf", "dwarves");
TestData.Add("elf", "elves");
TestData.Add("mouse", "mice");
 
TestData.Add("octopus", "octopi");
TestData.Add("vertex", "vertices");
TestData.Add("matrix", "matrices");
 
TestData.Add("rice", "rice");
TestData.Add("shoe", "shoes");

Pretty smart stuff.

ASP.NET MVC 3 includes a new unobtrusive validation strategy that utilizes HTML5 data-* attributes to decorate form elements.  Using a combination of jQuery validation and an unobtrusive validation adapter script that comes with MVC 3, those attributes are then turned into client side validation rules.

A Quick Introduction to Unobtrusive Validation

To quickly show how this works in practice, assume you have the following Order.cs class (think Northwind) [If you are familiar with unobtrusive validation in MVC 3 you can skip to the next section]:

public class Order : DomainObject
{
    [DataType(DataType.Date)]
    public virtual DateTime OrderDate { get; set; }
 
    [Required]
    [StringLength(12)]
    public virtual string ShipAddress { get; set; }
 
    [Required]
    public virtual Customer OrderedBy { get; set; }
}

Note the System.ComponentModel.DataAnnotations attributes, which provide the validation and metadata information used by ASP.NET MVC 3 to determine how to render out these properties.  Now let’s assume we have a form which can edit this Order class; specifically, let’s look at the ShipAddress property:

@Html.LabelFor(x => x.Order.ShipAddress)
@Html.EditorFor(x => x.Order.ShipAddress)
@Html.ValidationMessageFor(x => x.Order.ShipAddress)

Now the Html.EditorFor() method is smart enough to look at the ShipAddress attributes and write out the necessary unobtrusive validation html attributes.  Note we could have used Html.TextBoxFor() or even Html.TextBox() and still retained the same results.

If we view source on the input box generated by the Html.EditorFor() call, we get the following:

<input type="text" value="Rua do Paço, 67" name="Order.ShipAddress" id="Order_ShipAddress" 
data-val-required="The ShipAddress field is required." data-val-length-max="12" 
data-val-length="The field ShipAddress must be a string with a maximum length of 12." 
data-val="true" class="text-box single-line input-validation-error">

As you can see, we have data-val-* attributes for both required and length, along with the proper error messages and additional data as necessary (in this case, we have the length-max="12").

And of course, if we try to submit the form with an invalid value, we get an error on the client:

image

Working with MvcContrib’s Fluent Html

The MvcContrib project offers a fluent interface for creating Html elements which I find very expressive and useful, especially when it comes to creating select lists.  Let’s look at a few quick examples:

@this.TextBox(x => x.FirstName).Class("required").Label("First Name:")
@this.MultiSelect(x => x.UserId).Options(ViewModel.Users)
@this.CheckBox("enabled").LabelAfter("Enabled").Title("Click to enable.").Styles(vertical_align => "middle")
 
@(this.Select("Order.OrderedBy").Options(Model.Customers, x => x.Id, x => x.CompanyName)
                    .Selected(Model.Order.OrderedBy != null ? Model.Order.OrderedBy.Id : "")
                    .FirstOption(null, "--Select A Company--")
                    .HideFirstOptionWhen(Model.Order.OrderedBy != null)
                    .Label("Ordered By:"))

These fluent html helpers create the normal html you would expect, and I think they make life a lot easier and more readable when dealing with complex markup or select list data models (look ma: no anonymous objects for creating class names!).

Of course, the problem we have now is that MvcContrib’s fluent html helpers don’t know about ASP.NET MVC 3’s unobtrusive validation attributes and thus don’t take part in client validation on your page.  This is not ideal, so I wrote a quick helper method to extend fluent html with the knowledge of what unobtrusive validation attributes to include when they are rendered.

Extending MvcContrib’s Fluent Html

Before posting the code, there are just a few things you need to know.  The first is that all Fluent Html elements implement the IElement interface (MvcContrib.FluentHtml.Elements.IElement), and the second is that the base System.Web.Mvc.HtmlHelper has been extended with a method called GetUnobtrusiveValidationAttributes which we can use to determine the necessary attributes to include.  With this knowledge we can make quick work of extending fluent html:

public static class FluentHtmlExtensions
{
    public static T IncludeUnobtrusiveValidationAttributes<T>(this T element, HtmlHelper htmlHelper) 
        where T : MvcContrib.FluentHtml.Elements.IElement
    {
        IDictionary<string, object> validationAttributes = htmlHelper
            .GetUnobtrusiveValidationAttributes(element.GetAttr("name"));
 
        foreach (var validationAttribute in validationAttributes)
        {
            element.SetAttr(validationAttribute.Key, validationAttribute.Value);
        }
 
        return element;
    }
}

The code is pretty straightforward – basically we use the passed HtmlHelper to get a list of validation attributes for the current element and then add each of the returned attributes to the element to be rendered.

The Extension In Action

Now let’s get back to the earlier ShipAddress example and see what we’ve accomplished.  First we will use a fluent html helper to render out the ship address text input (this is the ‘before’ case):

@this.TextBox("Order.ShipAddress").Label("Ship Address:").Class("class-name")

And the resulting HTML:

<label id="Order_ShipAddress_Label" for="Order_ShipAddress">Ship Address:</label>
<input type="text" value="Rua do Paço, 67" name="Order.ShipAddress"
 id="Order_ShipAddress" class="class-name">

Now let’s do the same thing except here we’ll use the newly written extension method:

@this.TextBox("Order.ShipAddress").Label("Ship Address:")
.Class("class-name").IncludeUnobtrusiveValidationAttributes(Html)

And the resulting HTML:

<label id="Order_ShipAddress_Label" for="Order_ShipAddress">Ship Address:</label>
<input type="text" value="Rua do Paço, 67" name="Order.ShipAddress"
 id="Order_ShipAddress" data-val-required="The ShipAddress field is required."
 data-val-length-max="12"
 data-val-length="The field ShipAddress must be a string with a maximum length of 12."
 data-val="true" class="class-name">

Excellent!  Now we can continue to use unobtrusive validation and have the flexibility to use ASP.NET MVC’s Html helpers or MvcContrib’s fluent html helpers interchangeably, and every element will participate in client side validation.

image

Wrap Up

Overall I’m happy with this solution, although in the best case scenario MvcContrib would know about unobtrusive validation attributes and include them automatically (when it is enabled in the web.config file, of course).  I know that MvcContrib allows you to author global behaviors, but that requires changing the base class of your views, which I am not willing to do.

Enjoy!

Validation of user input is integral to building a modern web application, and ASP.NET MVC offers us a way to enforce business rules on both the client and server using Model Validation.  The recent release of ASP.NET MVC 3 has improved these offerings on the client side by introducing an unobtrusive validation library built on top of jquery.validation.  Out of the box MVC comes with support for Data Annotations (that is, System.ComponentModel.DataAnnotations) and can be extended to support other frameworks.  Data Annotations Validation is becoming more popular and is being baked into many other Microsoft offerings, including Entity Framework, though the MVC implementation only contains four validators: Range, Required, StringLength and Regular Expression.  The Data Annotations Extensions project attempts to augment these validators with additional attributes while maintaining the clean integration Data Annotations provides.

A Quick Word About Data Annotations Extensions

The Data Annotations Extensions project can be found at http://dataannotationsextensions.org/, and currently provides 11 additional validation attributes (ex: Email, EqualTo, Min/Max) on top of Data Annotations’ original four.  You can find a current list of the validation attributes on the aforementioned website.

The core library provides server-side validation attributes that can be used in any .NET 4.0 project (no MVC dependency). There is also an easily pluggable client-side validation library which can be used in ASP.NET MVC 3 projects using unobtrusive jquery validation (only MVC3 included javascript files are required).

On to the Preview

Let’s say you had the following “Customer” domain model (or view model, depending on your project structure) in an MVC 3 project:

public class Customer
{
    public string  Email { get; set; }
    public int Age { get; set; }
    public string ProfilePictureLocation { get; set; }
}

When it comes time to create/edit this Customer, you will probably have a CustomerController and a simple form that just uses one of the Html.EditorFor() methods that the ASP.NET MVC tooling generates for you (or you can write yourself).  It should look something like this:

image

With no validation, the customer can enter nonsense for an email address, and then can even report their age as a negative number!  With the built-in Data Annotations validation, I could do a bit better by adding a Range to the age, adding a RegularExpression for email (yuck!), and adding some required attributes.  However, I’d still be able to report my age as 10.75 years old, and my profile picture could still be any string.  Let’s use Data Annotations along with this project, Data Annotations Extensions, and see what we can get:

public class Customer
{
    [Email]
    [Required]
    public string  Email { get; set; }
 
    [Integer]
    [Min(1, ErrorMessage="Unless you are benjamin button you are lying.")]
    [Required]
    public int Age { get; set; }
 
    [FileExtensions("png|jpg|jpeg|gif")]
    public string ProfilePictureLocation { get; set; }
}

Now let’s try to put in some invalid values and see what happens:

image

That is very nice validation, all done on the client side (will also be validated on the server).  Also, the Customer class validation attributes are very easy to read and understand.

Another bonus: Since Data Annotations Extensions can integrate with MVC 3’s unobtrusive validation, no additional scripts are required!

Now that we’ve seen our target, let’s take a look at how to get there within a new MVC 3 project.

Adding Data Annotations Extensions To Your Project

First we will File->New Project and create an ASP.NET MVC 3 project.  I am going to use Razor for these examples, but any view engine can be used in practice. 

Now go into the NuGet Extension Manager (right-click on References and select Add Library Package Reference) and search for “DataAnnotationsExtensions.”  You should see the following two packages:

image

The first package is for server-side validation scenarios, but since we are using MVC 3 and would like comprehensive server and client validation support, click on the DataAnnotationsExtensions.MVC3 package and then click Install.  This will install the Data Annotations Extensions server and client validation DLLs along with David Ebbo’s WebActivator (which enables the validation attributes to be registered with MVC 3).
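If you prefer the Package Manager Console, the equivalent install command is:

PM> Install-Package DataAnnotationsExtensions.MVC3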

Now that Data Annotations Extensions is installed you have all you need to start doing advanced model validation.  If you are already using Data Annotations in your project, just making use of the additional validation attributes will provide client and server validation automatically.  However, assuming you are starting with a blank project I’ll walk you through setting up a controller and model to test with.

Creating Your Model

In the Models folder, create a new User.cs file with a User class that you can use as a model.  To start with, I’ll use the following class:

public class User
{
    public string Email { get; set; }
    public string Password { get; set; }
    public string PasswordConfirm { get; set; }
    public string HomePage { get; set; }
    public int Age { get; set; }
}

Next, create a simple controller with at least a Create method, and then a matching Create view (note, you can do all of this via the MVC built-in tooling).  Your files will look something like this:

UserController.cs:

public class UserController : Controller
{
    public ActionResult Create()
    {
        return View(new User());
    }
 
    [HttpPost]
    public ActionResult Create(User user)
    {
        if (!ModelState.IsValid)
        {
            return View(user);
        }
 
        return Content("User valid!");
    }
}

Create.cshtml:

@model NuGetValidationTester.Models.User
 
@{
    ViewBag.Title = "Create";
}
 
<h2>Create</h2>
 
<script src="@Url.Content("~/Scripts/jquery.validate.min.js")" type="text/javascript"></script>
<script src="@Url.Content("~/Scripts/jquery.validate.unobtrusive.min.js")" type="text/javascript"></script>
 
@using (Html.BeginForm()) {
    @Html.ValidationSummary(true)
    <fieldset>
        <legend>User</legend>
        
        @Html.EditorForModel()
        
        <p>
            <input type="submit" value="Create" />
        </p>
    </fieldset>
}

In the Create.cshtml view, note that we are referencing jquery validation and jquery unobtrusive (jquery is referenced in the layout page).  These MVC 3 included scripts are the only ones you need to enjoy both the basic Data Annotations validation as well as the validation additions available in Data Annotations Extensions.  These references are added by default when you use the MVC 3 “Add View” dialog on a modification template type.

Now when we go to /User/Create we should see a form for editing a User

image

Since we haven’t yet added any validation attributes, this form is valid as shown (including no password, email and an age of 0).  With the built-in Data Annotations attributes we can make some of the fields required, and we could use a range validator of maybe 1 to 110 on Age (of course we don’t want to leave out supercentenarians), but let’s go further and validate our input comprehensively using Data Annotations Extensions.  Here is the new and improved User.cs model class:

public class User
{
    [Required]
    [Email]
    public string Email { get; set; }
 
    [Required]
    public string Password { get; set; }
 
    [Required]
    [EqualTo("Password")]
    public string PasswordConfirm { get; set; }
 
    [Url]
    public string HomePage { get; set; }
 
    [Integer]
    [Min(1)]
    public int Age { get; set; }
}

Now let’s re-run our form and try to use some invalid values:

image

All of the validation errors you see above occurred on the client, without ever even hitting submit.  The validation is also checked on the server, which is a good practice since client validation is easily bypassed.

That’s all you need to do to start a new project and include Data Annotations Extensions, and of course you can integrate it into an existing project just as easily.

Nitpickers Corner

ASP.NET MVC 3 Futures defines four new data annotations attributes which this project has as well: CreditCard, Email, Url and EqualTo.  Unfortunately referencing MVC 3 Futures necessitates taking a dependency on MVC 3 in your model layer, which may be inadvisable in a multi-tiered project.  Data Annotations Extensions keeps the server and client side libraries separate, so using the project’s validation attributes doesn’t require you to take any additional dependencies in your model layer while still allowing for the rich client validation experience if you are using MVC 3.

Custom Error Messages and Globalization: Since the Data Annotations Extensions are built on top of Data Annotations, you have the ability to define your own static error messages and even to use resource files for very customizable error messages.
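For example, a resource-based message might look like this (the ValidationMessages resource class and key name are hypothetical):

[Email(ErrorMessageResourceType = typeof(ValidationMessages), ErrorMessageResourceName = "InvalidEmail")]
public string Email { get; set; }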

Available Validators: Please see the project site at http://dataannotationsextensions.org/ for an up-to-date list of the new validators included in this project.  As of this post, the following validators are available:

  • CreditCard
  • Date
  • Digits
  • Email
  • EqualTo
  • FileExtensions
  • Integer
  • Max
  • Min
  • Numeric
  • Url

Conclusion

Hopefully I’ve illustrated how easy it is to add server and client validation to your MVC 3 projects, and how easily you can extend the available validation options to meet real world needs.

The Data Annotations Extensions project is fully open source under the BSD license.  Any feedback would be greatly appreciated.  More information than you require, along with links to the source code, is available at http://dataannotationsextensions.org/.

Enjoy!

The System.ComponentModel.DataAnnotations namespace contains a validation attribute called DataTypeAttribute, which takes an enum specifying what data type the given property conforms to.  Here are a few quick examples:

public class DataTypeEntity
{
    [DataType(DataType.Date)]
    public DateTime DateTime { get; set; }
 
    [DataType(DataType.EmailAddress)]
    public string EmailAddress { get; set; }
}

This attribute comes in handy when using ASP.NET MVC, because the type you specify will determine what “template” MVC uses.  Thus, for the DateTime property, if you create a partial in Views/[loc]/EditorTemplates/Date.ascx (or .cshtml for Razor), that view will be used to render the property when using any of the Html.EditorFor() methods.
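For example, a minimal Views/Shared/EditorTemplates/Date.cshtml might look like this (the css class is just a hook for a date picker and is my own addition):

@model DateTime
@Html.TextBox("", Model.ToShortDateString(), new { @class = "date-picker" })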

One thing that the DataType() validation attribute does not do is any actual validation.  To see this, let’s take a look at the EmailAddress property above.  It turns out that regardless of the value you provide, the entity will be considered valid:

//valid
new DataTypeEntity {EmailAddress = "Foo"};

image

Hmmm.  Since DataType() doesn’t validate, that leaves us with two options: (1) Create our own attributes for each datatype to validate, like [Date], or (2) add validation into the DataType attribute directly. 

In this post, I will show you how to hookup client-side validation to the existing DataType() attribute for a desired type.  From there adding server-side validation would be a breeze and even writing a custom validation attribute would be simple (more on that in future posts).

Validation All The Way Down

Our goal will be to leave our DataTypeEntity class (from above) untouched, requiring no reference to System.Web.Mvc.  Then we will make an ASP.NET MVC project that allows us to create a new DataTypeEntity and hook up automatic client-side date validation using the suggested “out-of-the-box” jquery.validate bits that are included with ASP.NET MVC 3.  For simplicity I’m going to focus only on the DateTime field, but the concept is generally the same for any other DataType.

image

Building a DataTypeAttribute Adapter

To start we will need to build a new validation adapter that we can register using ASP.NET MVC’s DataAnnotationsModelValidatorProvider.RegisterAdapter() method.  This method takes two Type parameters: the first is the attribute we are looking to validate with and the second is an adapter that should subclass System.Web.Mvc.ModelValidator.

Since we are extending DataAnnotations we can use the subclass of ModelValidator called DataAnnotationsModelValidator<>.  This takes a generic argument of type DataAnnotations.ValidationAttribute, which lucky for us means the DataTypeAttribute will fit in nicely.

So starting from there and implementing the required constructor, we get:

public class DataTypeAttributeAdapter : DataAnnotationsModelValidator<DataTypeAttribute>
{
    public DataTypeAttributeAdapter(ModelMetadata metadata, ControllerContext context, DataTypeAttribute attribute)
        : base(metadata, context, attribute) { }
}

Now you have a full-fledged validation adapter, although it doesn’t do anything yet.  There are two methods you can override to add functionality: IEnumerable<ModelValidationResult> Validate(object container) and IEnumerable<ModelClientValidationRule> GetClientValidationRules().  Adding logic to the server-side Validate() method is pretty straightforward (a rough sketch follows below), and for this post I’m going to focus on GetClientValidationRules().
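To give a flavor of the server side, here is a rough sketch of what a Validate() override inside this same adapter class could look like.  The date-parsing check on string-backed properties is my own assumption (and it needs a using for System.Linq), not something the DataType attribute does on its own:

public override IEnumerable<ModelValidationResult> Validate(object container)
{
    // Only handle DataType.Date on string properties; defer everything else to the base class.
    var value = Metadata.Model as string;
    if (Attribute.DataType == DataType.Date && value != null)
    {
        DateTime parsed;
        if (!DateTime.TryParse(value, out parsed))
        {
            return new[] { new ModelValidationResult { Message = Attribute.FormatErrorMessage(Metadata.GetDisplayName()) } };
        }
        return Enumerable.Empty<ModelValidationResult>();
    }
 
    return base.Validate(container);
}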

Adding a Client Validation Rule

Adding client validation is now incredibly easy because jquery.validate is very powerful and already comes with a ton of validators (including date and regular expressions for our email example).  Teamed with the new unobtrusive validation javascript support we can make short work of our ModelClientValidationDateRule:

public class ModelClientValidationDateRule : ModelClientValidationRule
{
    public ModelClientValidationDateRule(string errorMessage)
    {
        ErrorMessage = errorMessage;
        ValidationType = "date";
    }
}

If your validation has additional parameters you can use the ValidationParameters IDictionary<string, object> to include them.  There is a little bit of conventions magic going on here, but the distilled version is that we are defining a “date” validation type, which will be included as html5 data-* attributes (specifically data-val-date).  Then jquery.validate.unobtrusive takes this attribute and basically passes it along to jquery.validate, which knows how to handle date validation.
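For instance, a hypothetical rule that also carries a minimum and maximum date would add entries to ValidationParameters, which end up as additional data-val-* attributes on the element (a matching client-side adapter would still be required):

public class ModelClientValidationDateRangeRule : ModelClientValidationRule
{
    public ModelClientValidationDateRangeRule(string errorMessage, DateTime min, DateTime max)
    {
        ErrorMessage = errorMessage;
        ValidationType = "daterange"; //hypothetical rule name
        ValidationParameters["min"] = min.ToShortDateString();
        ValidationParameters["max"] = max.ToShortDateString();
    }
}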

Finishing our DataTypeAttribute Adapter

Now that we have a model client validation rule, we can return it in the GetClientValidationRules() method of our DataTypeAttributeAdapter created above.  Basically I want to say if DataType.Date was provided, then return the date rule with a given error message (using ValidationAttribute.FormatErrorMessage()).  The entire adapter is below:

public class DataTypeAttributeAdapter : DataAnnotationsModelValidator<DataTypeAttribute>
{
    public DataTypeAttributeAdapter(ModelMetadata metadata, ControllerContext context, DataTypeAttribute attribute)
        : base(metadata, context, attribute) { }
 
    public override System.Collections.Generic.IEnumerable<ModelClientValidationRule> GetClientValidationRules()
    {
        if (Attribute.DataType == DataType.Date)
        {
            return new[] { new ModelClientValidationDateRule(Attribute.FormatErrorMessage(Metadata.GetDisplayName())) };
        }
 
        return base.GetClientValidationRules();
    }
}

Putting it all together

Now that we have an adapter for the DataTypeAttribute, we just need to tell ASP.NET MVC to use it.  The easiest way to do this is to use the built in DataAnnotationsModelValidatorProvider by calling RegisterAdapter() in your global.asax startup method.

DataAnnotationsModelValidatorProvider.RegisterAdapter(typeof(DataTypeAttribute), typeof(DataTypeAttributeAdapter));

Show and Tell

Let’s see this in action using a clean ASP.NET MVC 3 project.  First make sure to reference the jquery, jquery.validate and jquery.validate.unobtrusive scripts that you will need for client validation.
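In a Razor view those references look like this (the same MVC 3 scripts used in the earlier examples; jquery itself is typically referenced in the layout page):

<script src="@Url.Content("~/Scripts/jquery.validate.min.js")" type="text/javascript"></script>
<script src="@Url.Content("~/Scripts/jquery.validate.unobtrusive.min.js")" type="text/javascript"></script>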

Next, let’s make a model class (note we are using the same built-in DataType() attribute that comes with System.ComponentModel.DataAnnotations).

public class DataTypeEntity
{
    [DataType(DataType.Date, ErrorMessage = "Please enter a valid date (ex: 2/14/2011)")]
    public DateTime DateTime { get; set; }
}

Then we make a create page with a strongly-typed DataTypeEntity model, the form section is shown below (notice we are just using EditorForModel):

@using (Html.BeginForm()) {
    @Html.ValidationSummary(true)
    <fieldset>
        <legend>Fields</legend>
 
        @Html.EditorForModel()
 
        <p>
            <input type="submit" value="Create" />
        </p>
    </fieldset>
}

The final step is to register the adapter in our global.asax file:

DataAnnotationsModelValidatorProvider.RegisterAdapter(typeof(DataTypeAttribute), typeof(DataTypeAttributeAdapter));

Now we are ready to run the page:

image

Looking at the datetime field’s html, we see that our adapter added some data-* validation attributes:

<input type="text" value="1/1/0001" name="DateTime" id="DateTime" 
   data-val-required="The DateTime field is required." 
   data-val-date="Please enter a valid date (ex: 2/14/2011)" data-val="true" 
   class="text-box single-line valid">

Here data-val-required was added automatically because DateTime is non-nullable, and data-val-date was added by our validation adapter.  Now if we try to add an invalid date:

image

Our custom error message is displayed via client-side validation as soon as we tab out of the box.  If we didn’t include a custom validation message, the default DataTypeAttribute “The field {0} is invalid” would have been shown (of course we can change the default as well).  Note we did not specify server-side validation, but in this case we don’t have to because an invalid date will cause a server-side error during model binding.

Conclusion

I really like how easy it is to register new data annotations model validators, whether they are your own or, as in this post, supplements to existing validation attributes.  I’m still debating whether adding the validation directly in the DataType attribute is the correct place to put it versus creating a dedicated “Date” validation attribute, but it’s nice to know either option is available and, as we’ve seen, simple to implement.

I’m also working through the nascent stages of an open source project that will create validation attribute extensions to the existing data annotations providers using similar techniques as seen above (examples: Email, Url, EqualTo, Min, Max, CreditCard, etc).  Keep an eye on this blog and subscribe to my twitter feed (@srkirkland) if you are interested for announcements.
