MVC 3 Preview 1 was released earlier this week. It’s packed with new features such as dependency injection, global filters and new tooling to support multiple view engines, including the new Razor view engine for C# (see the end of this post for a list of related posts). Support for Visual Basic will be included in a future release of MVC. For now, our focus has been on completing as much of the C# support as possible, including new project types, item templates and T4 files. In this post I will focus on the new tooling support we included for this release.

Adding Custom View Templates to a Project

We’ve retained the support we introduced in MVC 1 that allows users to customize the T4 templates used by the Add View dialog. You can add a new template for the ASPX (C#) view engine by doing the following:

  1. Create a new MVC 3 (ASPX) project.
  2. In your project, create a new folder named CodeTemplates.
  3. Create a folder named AddView under the CodeTemplates folder.
  4. Create a folder named AspxCSharp under the AddView folder. The name of this folder must match the name of the view engine. In MVC 1 and 2, this folder did not exist since we did not support multiple view engines. If you are converting a project from an earlier version of MVC and you had local T4 files, you will need to add an additional folder and move your files one level deeper.
  5. Place your T4 files in the AspxCSharp folder. Remember to clear the Custom Tool property of each T4 file: right-click the template, select Properties and delete the Custom Tool value.
  6. At this point your solution should look like the example below.
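In text form, the resulting layout looks roughly like this (the .tt file names are illustrative; they are the defaults MVC installs, and yours may differ):

```
MyMvcApplication/
└── CodeTemplates/
    └── AddView/
        └── AspxCSharp/
            ├── Create.tt
            ├── Details.tt
            ├── Edit.tt
            └── List.tt
```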


You can now use your custom template when adding a view by doing the following:

  1. Build your solution. This ensures that all the types in the solution become available when you want to create a strongly typed view.
  2. Right click on the Views folder, or any of its subfolders and select Add View from the context menu.
  3. Check the Create a strongly-typed view checkbox and select a view data class. This will enable the View content dropdown.
  4. Select your new template.


Global Templates

Chances are that if you’ve customized templates in one project, you may want to reuse the templates in other projects. Instead of placing the template within the CodeTemplates folder of the project, you can place your new template alongside the default files that MVC installs under C:\Program Files\Microsoft Visual Studio 10.0\Common7\IDE\ItemTemplates\CSharp\Web\MVC 3\CodeTemplates\AddView\AspxCSharp

First-Class Support for Third Party View Engines

Apart from supporting the new Razor view engine, we also added support for other view engines such as Spark and NHaml to register their templates in MVC 3 projects. Although MVC does not ship templates for these view engines, we included some extensibility points to make it easier for developers to benefit from the tooling support we have. The support is still limited and we plan on expanding this in a future release, but we believe that we have a fairly solid foundation to work from right now.

Let’s look at an example of how you might add support for Spark. In this example, I will be making the view engine globally visible, but you can also limit the scope to a single project.

  1. Create a new folder named SparkViewEngine under C:\Program Files\Microsoft Visual Studio 10.0\Common7\IDE\ItemTemplates\CSharp\Web\MVC 3\CodeTemplates\AddView
  2. Place your T4 files for Spark inside the SparkViewEngine folder.
  3. Create an XML file in the SparkViewEngine folder named ViewEngine.xml that contains the following markup:
<?xml version="1.0" encoding="utf-8" ?>
<ViewEngine DisplayName="Spark" 
            ViewFileExtension=".spark" 
            DefaultLayoutPage="~/Views/Shared/Application.spark" 
            PartialViewFileExtension=".spark" />

When you select Add View, the dialog will now offer Spark as an option.


A few things to note about the markup:

  • The DisplayName attribute determines the value in the View engine dropdown. Recall that in step 1 you created a folder named SparkViewEngine. Note that the dropdown simply says Spark, not SparkViewEngine.
  • The ViewFileExtension attribute is required and will be used for normal views, partial views and layout pages.
  • The DefaultLayoutPage attribute is used to specify a default value for the master page textbox.
  • The PartialViewFileExtension attribute is used to specify a different extension if partial views require a different one. If this attribute is not specified then we use the value of the ViewFileExtension attribute.

Beyond Preview 1

The XML for the view engine definition is not 100% supported today. To fully support third party view engines we need to expose additional extensibility points. Below is a list of some areas that we are currently investigating.

  • Users need to be able to hook into the browse functionality to select a different layout (master) page and filter the selection based on the view engine’s supported file extensions.
  • The ContentPlaceHolder ID textbox is specific to ASPX pages. Spark, for example, has a similar concept. To ensure that the templating host correctly identifies placeholders, we need to enable authors of view engines to provide us with a parser that can extract the content sections from a layout page based on the view engine’s syntax.
  • We haven’t exposed all the attributes in the XML yet, since some of them tie directly into how layout pages are processed and also control the state of certain UI elements.

These are just ideas that we are considering and things may change significantly in the next release. Hopefully the functionality we provided in this preview is enough to get people started and have a better understanding of the direction we are taking with MVC 3 tooling support.

If you are developing a view engine or you wish to integrate into our tooling support, please leave some feedback so that we can start a discussion on what extensibility we should enable.


Other Resources

Even though we announced the release of MVC 2 (for Visual Studio 2008) on March 12, we are still working on getting a few more goodies out. And although not as awesome as canned unicorn meat (I don’t think anything can beat that), it is still very exciting. Today we released the source code of MVC 2 to the Microsoft source and symbol servers. Those of you using Visual Studio 2008 can now debug your applications and step into the MVC 2 RTM source code. If you are using one of the preview versions of MVC that shipped with prerelease versions of Visual Studio 2010 then you will need to be a bit more patient until the final product ships since the assembly you are using needs to match the RTM version.

If you haven’t configured Visual Studio for source stepping then I suggest you take a look at Shawn Burke's post. It should only take you a few minutes to get up and running.

A full list of products that are on the reference servers can be found here. As some of you may recall, we had a small incident when we released the sources for MVC 1.0. Whenever you stepped into the source, the debugger would be off by two lines. The reason for this is that the copyright headers injected by the tool we use to extract the source code from our source control system were placed at the top of each file instead of at the bottom. We’ve made sure not to repeat that mistake for MVC 2.

After configuring Visual Studio to use the Microsoft reference server for source stepping you can create a new MVC 2 application (or use an existing MVC 2 application) and set a breakpoint.


Hit F5 and wait for the debugger to hit the breakpoint, then open the Call Stack window.


If the entries for System.Web.Mvc are grayed out then right click and select Load Symbols. Once the entries light up, double click on one of the MVC methods in the call stack to view the source. It might take a while the first time to download the source, but once it’s done, Visual Studio will cache it locally on your machine.




But Wait, There’s More…

Over the last couple of weeks we also wrapped up the localized releases of MVC 2 for Japanese, German and French and we continue to work on the other official VS languages (Spanish, Italian, Russian, Korean and Chinese).


The localized versions of MVC will only be available in Visual Studio 2010 and include localized versions of IntelliSense (including jQuery), tooling, templates (except T4) and the runtime. Since the runtime component of MVC 2 is shared between 2008 and 2010, you will be able to make use of the localized runtime resources in your application when developing in Visual Studio 2008 if you have a side-by-side installation of Visual Studio 2008 and a localized version of Visual Studio 2010.


Posted by jeloff | 1 comment(s)

Today we announced the release candidate of ASP.NET MVC 2. Apart from some bug fixes, client validation has undergone significant changes that, among other things, make it easier to work with custom validators. You can find more details about all the changes in the readme. As usual, Phil Haack also made an announcement.

Having spent some time in the electronic funds transfer (EFT) industry before joining Microsoft, I thought it might be fun combining my experience from both worlds by showing how to validate a credit card number using the validation support provided by MVC 2 RC.


Anatomy of a PAN

The number printed on your credit card is called a primary account number (PAN) and is based on the ISO/IEC 7812 standard. Let’s start by looking at a contrived PAN: 1234 5678 1234 5678.


  • The first six digits of the PAN are called the bank identification number (BIN) or issuer identification number (IIN). The first digit of the IIN is called the major industry identifier (MII). For the above number, the BIN/IIN is 123456 and the MII is 1. The IIN is primarily used to route transactions from the acquirer to the card issuer. The acquirer is the owner of the device from which the transaction originated, such as an ATM or POS device.
  • All the digits following the BIN, except the last digit, make up the account number that the issuer assigned to your card. The account number of the card holder in the above example is 781234567 and is the account that will be debited or credited as a result of a transaction.
  • The rightmost digit of the PAN is called the check digit. When an issuer creates a new card it concatenates the BIN and account number and then calculates the value of the check digit using the Luhn algorithm to form the PAN.
  • The maximum length of a PAN may not exceed 19 digits.


Luhn algorithm

The Luhn algorithm, also known as the modulo-10 algorithm, can be used to verify that the check digit used in a PAN is valid and can also be used to generate a check digit. The algorithm is fairly simple:

  1. Enumerate all the digits in the PAN (including the check digit) from right to left and calculate their sum as follows:
    • Every odd digit is taken as-is, starting with the check digit.
    • Multiply every second digit by 2. If the product is greater than 9, the individual digits of the product are added to the sum. You can simplify this by subtracting 9 from the product. For example, if the current digit being evaluated is 6, the product is 12 and the sum will be increased by 3 (1+2).
  2. If the final total is divisible by 10, the PAN is valid.

The sum for a PAN equal to 1234 5678 1234 5678 is calculated as: 8+(1+4)+6+(1+0)+4+6+2+2+8+(1+4)+6+(1+0)+4+6+2+2=68. Since 68 is not divisible by 10 the PAN is invalid.
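To make the steps concrete, here is a minimal standalone sketch of the algorithm (the class and method names are made up for illustration; the full MVC validation attribute appears later in the post):

```csharp
using System;
using System.Linq;

static class LuhnDemo {
    // Returns true when the PAN's check digit satisfies the Luhn (modulo-10) test.
    static bool IsLuhnValid(string pan) {
        int sum = 0;
        bool doubleIt = false; // the rightmost digit (check digit) is taken as-is
        foreach (char ch in pan.Reverse()) {
            int product = (ch - '0') * (doubleIt ? 2 : 1);
            sum += (product / 10) + (product % 10); // e.g. 12 contributes 1 + 2 = 3
            doubleIt = !doubleIt;
        }
        return sum % 10 == 0; // valid when the total is divisible by 10
    }

    static void Main() {
        Console.WriteLine(IsLuhnValid("1234567812345678")); // the worked example above sums to 68, so False
    }
}
```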


PAN Validation on the Server

Given the ISO/IEC 7812 specification, we need to perform two checks in our application to verify the validity of a PAN:

  1. Check that the PAN does not exceed 19 characters and that it meets a predefined minimum length. The majority of card issuers use PANs that are between 13 and 19 digits long. Strictly speaking though, the minimum length is 8 digits: 6 digits for the BIN, 1 digit for the account number and 1 for the check digit. Of course, this does limit the issuer to only 10 accounts.
  2. Verify that the check digit is correct.

The model I will be using is very simple and for brevity only contains a single property.

public class CreditCardModel {
    [CreditCard]
    [DisplayName("Credit Card Number")]
    public string CreditCardNumber { get; set; }
}

The CreditCard attribute we applied to the CreditCardModel is responsible for doing the bulk of the work. The MinLength property defaults to 13 and specifies the minimum length of the PAN we’d like to validate. The validation work is divided into two parts. First, we use a regular expression to verify that the string only contains digits and meets the required minimum and maximum length criteria. If this check passes we proceed to apply the Luhn algorithm and verify the check digit.

public class CreditCardAttribute : ValidationAttribute {
    private const int _defaultMinLength = 13;
    private const int _maxLength = 19;
    private int _minLength = _defaultMinLength;
    private string _regex = @"^\d{{{0},{1}}}$";
    private const int _zero = '0';

    public CreditCardAttribute()
        : base() {
        ErrorMessage = "Please enter a valid credit card number";
    }

    public int MinLength {
        get {
            return _minLength;
        }
        set {
            if ((value < 8) || (value > _maxLength)) {
                _minLength = _defaultMinLength;
            }
            else {
                _minLength = value;
            }
        }
    }

    public override bool IsValid(object value) {
        string pan = value as string;

        if (String.IsNullOrEmpty(pan)) {
            return false;
        }

        Regex panRegex = new Regex(String.Format(_regex,
            MinLength.ToString(CultureInfo.InvariantCulture.NumberFormat),
            _maxLength.ToString(CultureInfo.InvariantCulture.NumberFormat)));
        if (!panRegex.IsMatch(pan)) {
            return false;
        }

        // Validate the check digit using the Luhn algorithm
        string reversedPan = new string((pan.ToCharArray()).Reverse().ToArray());
        int sum = 0;
        int multiplier = 0;
        foreach (char ch in reversedPan) {
            int product = (ch - _zero) * (multiplier + 1);
            sum = sum + (product / 10) + (product % 10);
            multiplier = (multiplier + 1) % 2;
        }

        return sum % 10 == 0;
    }
}

We can write a simple view that allows a user to enter a credit card number.

    <% using (Html.BeginForm()) { %>
        <div class="editor-label">
          <%= Html.LabelFor(m => m.CreditCardNumber) %>
        </div>
        <div class="editor-field">
          <%= Html.TextBoxFor(m => m.CreditCardNumber) %>
          <%= Html.ValidationMessageFor(m => m.CreditCardNumber) %>
          <input type="submit" value="Place Order"/>
        </div>
    <% } %>

If the user enters an invalid number and submits the form they are greeted with the server side response as shown below.


Client Side Support

The first step to support client side validation is to implement a ModelValidator that can be associated with the CreditCardAttribute.

public class CreditCardValidator : DataAnnotationsModelValidator<CreditCardAttribute> {

    public CreditCardValidator(ModelMetadata metadata,
        ControllerContext controllerContext, CreditCardAttribute attribute)
        : base(metadata, controllerContext, attribute) {
    }

    public override IEnumerable<ModelClientValidationRule> GetClientValidationRules() {
        var rule = new ModelClientValidationRule {
            ValidationType = "creditcardnumber",
            ErrorMessage = Attribute.FormatErrorMessage(Metadata.PropertyName)
        };
        rule.ValidationParameters["minLength"] = Attribute.MinLength;

        return new[] { rule };
    }
}

Next, the validator is registered in Global.asax.

protected void Application_Start() {
    DataAnnotationsModelValidatorProvider.RegisterAdapter(typeof(CreditCardAttribute), typeof(CreditCardValidator));
}

We also need to provide a custom validation rule that can be used on the client to validate the card number. Note that we can access the MinLength property of the server attribute via rule.ValidationParameters. This keeps the client and server side validation in sync.

<script type="text/javascript">
    Sys.Mvc.ValidatorRegistry.validators["creditcardnumber"] = function(rule) {
        var zero = "0".charCodeAt(0);

        return function(value, context) {
            var pan = value.toString();
            var re = new RegExp("^\\d{" + rule.ValidationParameters.minLength + ",19}$");
            var valid = re.test(pan);
            if (valid) {
                var reversedPan = pan.split("").reverse().join("");
                var sum = 0;
                var multiplier = 0;
                for (var i = 0; i < reversedPan.length; i++) {
                    var product = (reversedPan.charCodeAt(i) - zero) * (multiplier + 1);
                    sum = sum + Math.floor(product / 10) + (product % 10);
                    multiplier = (multiplier + 1) % 2;
                }
                valid = sum % 10 == 0;
            }
            return valid;
        };
    };
</script>

All that’s left is to enable client validation by making a call to Html.EnableClientValidation in the view just before calling Html.BeginForm.
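In the view, that amounts to something like the following (the surrounding markup is abbreviated):

```aspx
<% Html.EnableClientValidation(); %>
<% using (Html.BeginForm()) { %>
    <%= Html.TextBoxFor(m => m.CreditCardNumber) %>
    <%= Html.ValidationMessageFor(m => m.CreditCardNumber) %>
    <input type="submit" value="Place Order"/>
<% } %>
```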


Further Extensions

Apart from using the IIN to route transactions, it can also be used to identify branded cards. All the major card organizations and issuers, such as Visa, American Express and MasterCard, have predefined IIN ranges. These are more commonly known as BIN prefixes. By applying a mask of BIN prefixes you can limit your application to accepting specific card types. For example, the PAN on Visa branded cards always begins with 4. Of course, it would be prudent to check with all the organizations that you’d like to support to ensure you have the correct prefixes.
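As a sketch of the idea (the prefix list below is deliberately incomplete and purely illustrative; real IIN ranges must come from the card organizations):

```csharp
using System;
using System.Linq;

static class BinFilterDemo {
    // Hypothetical whitelist of BIN prefixes the application accepts.
    static readonly string[] AcceptedPrefixes = { "4" /* Visa */, "51", "52", "53", "54", "55" /* MasterCard */ };

    static bool IsAcceptedCardType(string pan) {
        return AcceptedPrefixes.Any(p => pan.StartsWith(p, StringComparison.Ordinal));
    }

    static void Main() {
        Console.WriteLine(IsAcceptedCardType("1234567812345678")); // False: no accepted prefix matches
    }
}
```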

Happy validating!

Posted by jeloff | 197 comment(s)

Today, the release of ASP.NET MVC 2 Beta was announced. It’s packed with new features including client validation, an empty MVC project, and the asynchronous controller to name a few. You can visit the links below to learn more about the new features.

The Beta is only available for Visual Studio 2008. If you are using Visual Studio 2010 Beta 2 then you will need to be a bit more patient to explore these features. The Visual Studio 2010 release cycle is different from MVC and we only update the MVC bits when a new version of 2010 is released.

Apart from adding new features, we also try to improve existing ones when making a new prerelease version available. One such improvement we made in the Beta was around the behavior of TempData. TempData is used to store information that can be consumed in subsequent requests. Conceptually, TempData is MVC’s equivalent of the Flash in Ruby on Rails (RoR), barring a few behavioral differences that will be pointed out. Basic scenarios using the Post Redirect Get (PRG) pattern are well supported in MVC today, but a number of shortcomings were identified and addressed in the latest release. The code snippet below is an example of a scenario that’s supported in both MVC 1 and 2.

[AcceptVerbs(HttpVerbs.Post)]
public ActionResult Update(Person person) {
    try {
        /* Do some work */
        TempData["Message"] = "Success";
        return RedirectToAction("Result");
    }
    catch {
        TempData["Message"] = "Update Failed";
        return RedirectToAction("Result");
    }
}

public ActionResult Result() {
    return View();
}

Assuming that the Result view simply renders <%= TempData["Message"] %>, the message will disappear when the user hits F5 to refresh the view, irrespective of whether or not the Update action succeeded. No problem here; that’s the expected behavior.


Old Behavior

Before examining the problematic scenarios, let’s look at how TempDataDictionary behaved prior to MVC 2 Beta.

  1. When an action method is invoked, the controller calls TempData.Load() to retrieve the TempDataDictionary using the current provider (SessionStateTempDataProvider by default). All the initial keys that are present in the dictionary are stored in a HashSet, X.
  2. A separate HashSet, Y, is used by TempDataDictionary to track the keys of new items that are inserted into TempData. It also tracks existing items (items that were created in the previous request) when they are updated.
  3. The controller calls TempData.Save() at the end of the request once the action method has completed. Keys in X that are not in Y are removed and the dictionary is persisted to storage using the provider.

There are two problems with the aforementioned behavior. First, items can be removed from TempData before they are consumed. Second, it is possible for items to be retained too long in the dictionary. The three scenarios described below should help clarify how these issues manifest.

Scenario 1: PRG

This is similar to the PRG scenario described earlier, except that when an error occurs, the action method directly renders a view instead of using RedirectToAction.

[AcceptVerbs(HttpVerbs.Post)]
public ActionResult Update(Person person) {
    try {
        /* Do some work */
        TempData["Message"] = "Success";
        return RedirectToAction("Update");
    }
    catch {
        TempData["Message"] = "Update Failed";
        return View();
    }
}

The Update view correctly displays the contents of TempData when an error occurs. However, refreshing the page results in the value being rendered for a second time.

Scenario 2: Multiple Redirects

This scenario relates to actions that perform multiple redirects. TempData["Foo"] is set by Action1, but will be removed at the end of Action2. Consequently, the view rendered in Action3 will not be able to display the contents of TempData["Foo"].

public ActionResult Action1() {
    TempData["Foo"] = "Bar";
    return RedirectToAction("Action2");
}

public ActionResult Action2() {
    /* Do some more work */
    return RedirectToAction("Action3");
}

public ActionResult Action3() {
    return View();
}

Scenario 3: Interleaved Requests

This is a variation on the basic PRG scenario where another request is processed before an action redirects.

public ActionResult Action1() {
    TempData["Foo"] = "Bar";
    return RedirectToAction("Action2");
}

public ActionResult Action2() {
    return View();
}

The expectation for the code above is that it should always work, but actually it can fail when another request is processed before the redirect in Action1 occurs. The rogue request could be the result of an AJAX call or something simple such as the user opening a new tab inside the browser (in which case SessionState is shared). The net result is that values in TempData can become lost.


TempData Changes

RoR provides a wrapper for the flash called now that addresses some of the problem scenarios highlighted earlier. Values written to flash.now can be read in the current action, but will not survive until the next request, as illustrated by the snippet below:

flash.now[:floyd] = "Goodbye cruel world"  # Only available in the current action
flash[:arnie] = "I'll be back"             # Next action can still access this value

Early during the design we considered adding a similar mechanism to TempDataDictionary. In the end we opted for something simpler that addressed the scenarios we wanted to solve while limiting the number of changes to the existing API. The outcome of the changes we made resulted in the following rules that govern how TempData operates:

  1. Items are only removed from TempData at the end of a request if they have been tagged for removal.
  2. Items are only tagged for removal when they are read.
  3. Items may be untagged by calling TempData.Keep(key).
  4. RedirectResult and RedirectToRouteResult always call TempData.Keep().

API Changes

The only API change for TempDataDictionary was the introduction of the Keep() method and one overload.

public void Keep();
public void Keep(string key);

Calling Keep() from within an action method ensures that none of the items in TempData are removed at the end of the current request, even if they were read. The second overload can be used to retain specific items in TempData.
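As a sketch of how the overloads might be used inside an action method (the action and key names here are made up for illustration):

```csharp
public ActionResult ShowStatus() {
    // Reading the value tags "Message" for removal at the end of this request...
    string message = TempData["Message"] as string;

    // ...so untag it if the next request should still be able to read it.
    TempData.Keep("Message");

    // Or retain every item in TempData, read or not:
    // TempData.Keep();

    return View();
}
```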

Side Effects

Beware when debugging an application and adding a watch that references an item in TempData. Reading a value from the dictionary will result in it being deleted at the end of the request, so it can potentially interfere with your debugging effort.


One of the many new features in the Beta release is the set of helper methods used to render actions. Actions that are executed using these helpers are considered to be child actions, so loading and saving TempData is deferred to the parent action. The child actions merely operate on the same instance of TempData that their parents have.


The default provider for TempData, SessionStateTempDataProvider, can be replaced with a custom provider. Although this has been supported since the introduction of the ITempDataProvider interface in MVC 1.0, we decided to make it a bit easier in MVC 2. We’ve introduced a new method in the Controller class that’s responsible for instantiating a provider, aptly named CreateTempDataProvider. For example, if you want to use the CookieTempDataProvider (part of MvcFutures), you only need to do the following in your controller:

public class CookieController : Controller {
    protected override ITempDataProvider CreateTempDataProvider() {
        return new CookieTempDataProvider(HttpContext);
    }
}



Have fun with the latest release of MVC. As always, comments and feedback are appreciated, so please visit the MVC forum if you have any questions or run into problems with the latest release.

Posted by jeloff | 13 comment(s)

Today, MVC for Visual Studio 2010 Beta 1 was announced. You can find more details about it on Phil’s post. For the better part of the last month I’ve done little except churn out new installers for MVC, so I’m really happy to finally see one of them going out the door instead of heading to my recycle bin. The first thing you will probably notice when running the new installer, apart from the version number being 1.1, is that it is now a self-extracting EXE as opposed to just being a single MSI.

This change came about as we started to integrate MVC into the setup code of Visual Studio 2010. Including MVC required us to make changes to our own setup code and as a result we ended up with two MSIs; one for the runtime and another for the tooling components. This solved the problem of including MVC in post Beta 1 versions, but we still had to provide an out of band (OOB) installer for Beta 1 while keeping the CTP releases for the next version of MVC in our sights as well.

Chaining two MSIs so that the first installer calls msiexec to install the second is impossible because Windows Installer prohibits running installs in parallel (at least on versions prior to 4.5). We examined various options to provide a simple user experience for installing the two MSIs on Beta 1. Downloading separate MSIs was never really an option because as we start making CTP releases of the next version we don’t want users to end up with the runtime component of one preview and the tooling component of another release.

The first attempt produced a chained installer (bootstrapper) using the setupbld.exe utility provided by WiX 3.0. It works really well, but did not quite fit our needs. For those of you that are using WiX to author setup programs, the roadmap for version 3.5 includes a new utility called Burn that will provide rich functionality for creating chained setup programs. Even though we ended up throwing this installer away it did have some benefits and we upgraded our build scripts to the latest version of WiX.

Attempt number two saw the creation of the self-extracting EXE using the GenerateBootstrapper task, which relies on some of the ClickOnce functionality used to describe the products and packages that comprise the installation manifest. To keep things simple, our EXE does not download the individual MSIs from a server. Instead, we just included them within the EXE.

I could have completed the second installer much sooner. In December when we were wrapping up RC1 we began looking at servicing scenarios for MVC using Windows Update (WU) and Microsoft Update (MU). As part of the investigation I split the MSI and created a self-extracting EXE. Eventually the effort was abandoned since we felt confident that we could service MVC by releasing a patch should the need arise and it was a better fit for MVC’s OOBness. I packed the code away just in case something changed for RTM, but soon after 1.0 was released I deleted it to get my machine cleaned out in preparation for the next version of MVC (note to self: Never throw away code).

Because our installation methodology has changed, troubleshooting failed installations is slightly different compared to MVC 1.0, but I hope to provide some useful tips on where to find information when things do go wrong.

What does the new installer do?

When you run AspNetMVC1.1_VS2010.exe it will extract a number of files to a randomly named folder under %temp% (usually located at AppData\Local\Temp). In my case the files were extracted to AppData\Local\Temp\VSDC164.tmp\. Within this folder you will find subfolders that are created for each MSI in the bootstrapper’s manifest. The following files should be present in either the root folder or one of the subfolders:

  • AspNetMVC1.1.msi
  • VS2010ToolsMVC1.1.msi
  • Setup.exe
  • Eula.rtf

Once the files are extracted, Setup.exe simply installs each MSI sequentially by calling msiexec using the /i and /q options.

Crash, Boom, Bang

When the EXE fails you should see a dialog similar to the one below. Note that I expanded the Details section (and sanitized some information).


Take note of the log file being mentioned at the bottom of the dialog. This file is generated by the bootstrapper, not msiexec. The file is not too interesting, but does tell a sad tale about how msiexec is invoked:

Installing using command 'C:\Windows\SysWOW64\msiexec.exe' and parameters ' -I "C:\Users\*\AppData\Local\Temp\VSDC164.tmp\MvcRuntime\AspNetMVC1.1.msi" -q '

As you can see, the EXE simply performs a silent install and does not create a verbose log file. The default MSI log file generated by the bootstrapper contains very little information. In fact, if the install fails, you probably won’t see more than the excerpt below. This error was induced by running an unsigned installer on Windows Server 2008 and the installation failed when it tried to run NGEN on the unsigned System.Web.Mvc assembly.

Error 1937. An error occurred during the installation of assembly 'System.Web.Mvc,version="",culture="neutral",publicKeyToken="31BF3856AD364E35",processorArchitecture="MSIL"'. The signature or catalog could not be verified or is not valid. HRESULT: 0x80131045. assembly interface: IAssemblyCacheItem, function: Commit, component: {32E5FFC3-0CDC-43C1-A5F8-62CAF64F3064}
=== Logging stopped: 6/5/2009  10:03:51 ===

Can you get more verbose logs? Yes. To do that, simply add the entry below to your registry and run the EXE again. For more details on the various options you can consult this KB article.

Key: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\Installer

Value Name: Logging

Type: REG_SZ

Data: voicewarmupx

The MSI log file that’s produced by this registry entry is extremely verbose. Remember to remove the registry entry when you are done because Windows will continue to produce log files whenever you install a product and this may impact install times for large products. One drawback of using this approach to create log files is that you end up with a totally random file name. For example, when I ran the EXE, the log that was produced was located in AppData\Local\Temp\MSI4ced1.LOG. The easiest way to find the log is to look at the timestamps of the files. For the more adventurous amongst us, delete all the MSI*.LOG files in %temp% before running the setup program.

Analyzing Install Logs

The Windows Installer 4.5 SDK provides a utility, wilogutl.exe, that can be used to analyze install logs. Simply launch the EXE, open the log file and click the Analyze button. It won’t always solve your installation problems, but it’s good to have another tool at the ready for troubleshooting. If nothing else, it helps you quickly step through a log and identify all the points of failure. The utility provided the following feedback for the log file from my failed installation.


Ultima Ratio

There is one final option available to troubleshoot a failed installation. Recall that when run, the EXE first extracts all the MSI files to your %temp% folder. Simply install the MSIs separately; runtime first, then the tooling components. Although this will not make any failures disappear, it should make it easier for you to determine which component (runtime or tools) is responsible for the installation failure.
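Since you are running msiexec yourself at that point, you can also pass /l*v to get a verbose log per MSI without touching the registry. The commands might look roughly like this; the temp folder and runtime MSI name come from the bootstrapper log shown earlier (yours will differ), and the tooling MSI path is a placeholder you will need to read from your own %temp% folder:

```bat
rem Paths are illustrative; check your own %temp% folder for the real names.
cd /d %temp%\VSDC164.tmp
msiexec /i MvcRuntime\AspNetMVC1.1.msi /l*v "%temp%\mvc-runtime.log"
rem The tooling MSI name below is a placeholder, not the real file name.
msiexec /i MvcTooling\Tooling.msi /l*v "%temp%\mvc-tools.log"
```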

Known Issues and Improvements

As much as I hate to admit it, there are some problems with the installer. While testing it, we discovered that even if you don’t have Visual Studio installed you will still get an entry under ARP (Add or Remove Programs) for Microsoft Visual Studio 2010 Tools for ASP.NET MVC 1.1. The reason is that the bootstrapper does not check whether Visual Studio is installed. The MSI, on the other hand, does perform this check; however, when it fails to detect a valid version of Visual Studio, it only disables some features inside the MSI, so the MSI still completes and the ARP entry still appears. Kudos to our testers for discovering this. We have a fix ready, but schedules prevented it from going into the build we released.

Noticed the double “Setup” in the error dialog I showed earlier? I’ve rewarded myself by opening a bug just for this.

I’m also not too happy about requiring users to tweak the registry to get verbose setup logs. Suffice it to say that I am looking at ways to introduce a static location, or at the very least a static log filename, so that users will not need to fiddle with the registry in upcoming CTP releases of MVC. My initial investigation seems to indicate that we might run into OS dependencies for this, but let’s see what can be done.

Have fun with the installer and let us know if you run into problems.

Posted by jeloff | 44 comment(s)

Once you are done writing your MVC application you will probably start looking at deploying it. In many instances, creating a simple web deployment project should suffice, but if you decide to distribute your application using an MSI then you will most likely need to determine whether MVC is installed on the target system. The simplest way to do this with WiX is to specify a launch condition. For MVC 1.0 there are two options at your disposal for creating a property to use in a launch condition.

Option 1: Use the registry

When MVC is installed it creates a value named InstallPath in the registry under the key HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\ASP.NET\ASP.NET MVC 1.0. This value can be read using the <RegistrySearch> element and stored in a property. The value won’t be present unless the installation of MVC completed successfully.

All that remains to be done is to define a launch condition that uses this property. Remember that a launch condition specifies the condition under which an installation should continue, not the condition under which it should fail.

   <?xml version='1.0' encoding='windows-1252'?>
   <Wix xmlns=''>
     <Product Name='Foo 1.0' Id='8A94F114-6F67-4728-8B5E-B4EC115AF3AF'
       UpgradeCode='B60B3A2A-C7BB-47F7-97C4-7D04332519D3'
       Language='1033' Codepage='1252' Version='1.0.0' Manufacturer='Bar'>

       <Package Keywords='Installer' Description='Foo' Manufacturer='Bar'
         InstallerVersion='100' Languages='1033' Compressed='yes' SummaryCodepage='1252' />

       <Condition Message='ASP.NET MVC 1.0 is required to proceed with the installation.'>
         Installed OR ASP_NET_MVC_1_0
       </Condition>

       <Property Id='ASP_NET_MVC_1_0'>
         <RegistrySearch Id='MVC_InstallDir' Type='directory' Root='HKLM'
                         Key='SOFTWARE\Microsoft\ASP.NET\ASP.NET MVC 1.0'
                         Name='InstallPath'/>
       </Property>

     </Product>
   </Wix>

Option 2: Use the assembly file version

Apart from installing into the GAC and creating a native image for System.Web.Mvc, we also drop the DLL along with the XML comments under Program Files\Microsoft ASP.NET\ASP.NET MVC 1.0\Assemblies. If you don’t want to use the registry option you can define your property using a <FileSearch> element. You can use the code from the first example and just replace the <Condition> and <Property> elements with the code below.

   <Condition Message='ASP.NET MVC 1.0 is required to proceed with the installation.'>
     Installed OR ASP_NET_MVC_1_0_DLL
   </Condition>

   <Property Id='ASP_NET_MVC_1_0_DLL'>
     <DirectorySearch Id='MVC_DLL_DIR' Path='[ProgramFilesFolder]\Microsoft ASP.NET\ASP.NET MVC 1.0\Assemblies'>
       <FileSearch Id='MVC_DLL_FILE' Name='System.Web.Mvc.dll' MinVersion='1.0.40309'/>
     </DirectorySearch>
   </Property>

VERY IMPORTANT: Notice that the code above expects a DLL with a minimum version of 1.0.40309. If you examine the properties of System.Web.Mvc.dll in Explorer, you’ll notice that the version is in fact 1.0.40310. This is just a quirk in how the MinVersion attribute works. The explanation from MSDN states the following:


The minimum version of the file, with a language comparison. If this field is specified, then the file must have a version that is at least equal to MinVersion. If the file has an equal version to the MinVersion field value but the language specified in the Languages column differs, the file does not satisfy the signature filter criteria.
Note  The language specified in the Languages column is used in the comparison and there is no way to ignore language. If you want a file to meet the MinVersion field requirement regardless of language, you must enter a value in the MinVersion field that is one less than the actual value. For example, if the minimum version for the filter is 2.0.2600.1183, use 2.0.2600.1182 to find the file without matching the language information.


Compile and Run

The source code I’ve provided can be compiled and linked using candle and light. I’ve tested this under WiX 2.0, but haven’t tried it using WiX 3.0. You may also see the following ICE warnings when linking the wixobj file:

  • warning LGHT1076 : ICE40: Error Table is missing. Only numerical error messages will be generated.
  • warning LGHT1076 : ICE71: The Media table has no entries.

You can safely ignore these warnings. The installer code I provided is stripped down to the bare minimum needed to produce an MSI that does absolutely nothing except try to detect whether or not MVC 1.0 is installed. When you launch the MSI on a system without MVC you should see the message below.
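For reference, building the MSI with the WiX 2.0 command line tools looks roughly like this; mvccheck is just an illustrative file name for whatever you saved the source as:

```bat
rem Compile the source into a wixobj, then link it into an MSI.
candle mvccheck.wxs
light -out mvccheck.msi mvccheck.wixobj
```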



Posted by jeloff | 7 comment(s)

About six months ago Phil Haack wrote a post on how to use the DefaultModelBinder to bind a form to a list. He concluded by asking how this functionality would be used. In this post I'm going to show how a dynamic form that uses the model binder’s ability to work with lists can be created using MVC and jQuery. The example I’m going to use was inspired by an application I'm working on that provides a web interface to the bug tracking system we used during the development of MVC. The application is not intended to become an internal tool, but rather to explore the capabilities of MVC and hopefully identify areas that can be improved in future releases. It also afforded me a chance to explore jQuery.


The system we use for tracking bugs in MVC contains over 3000 databases spanning multiple products and product families. Apart from using the system to create and resolve bugs, users can create and save queries that are executed against a specific database. For example, I could create a query to find all the work items that were assigned to me for RC1 of MVC. The interface provided by the application to accomplish this is fairly simple, as shown below. jQuery and MVC provided all the necessary tools to design a web application that could mimic this behavior.


Sample Application

I’ve made the source code of the sample application available for download rather than posting everything here; I’m just going to highlight some aspects of the application. The design can definitely be improved.

  • The JavaScript is embedded within the Create view. This prevents the script from being cached by the development server in Visual Studio (I got tired of remembering to hit Ctrl+F5 every time I launched the application after modifying the JavaScript). The downside is that it complicates script debugging in Visual Studio.
  • The only validation being performed by the controller is to check whether the form is empty (no rows were inserted by the user) and that any rows that are present have a value inside the corresponding textbox.


Once the page for the Create view has completed loading it immediately makes a request to the controller to retrieve a list of all the field definitions. The definitions are stored on the client side to avoid going to the server every time a new row is added to the form.

   var fieldDefinitions = null;

   $(document).ready(function() {
       // Retrieve all the field definitions once the page is loaded.
       $.getJSON("/Query/Fields", null, function(data) {
           fieldDefinitions = data;
       });
   });

The Fields action simply returns a list of definitions that are created when the application starts. In a real application you might retrieve this data from a file, a web service or a database. The definitions are serialized and consumed on the client when a new row is inserted into the form. If you go through the source code you’ll notice that the field definitions are stored inside a dictionary, hence the reference to the Values property in the code below. Using a dictionary makes accessing a field definition on the server easier during validation.

   public ActionResult Fields() {
       return Json(FieldDefinitions.Fields.Values);
   }

The Create action takes two parameters. The first is a string containing a comma separated list of the field names that appear in the query; it is used when the form needs to be regenerated after a validation failure. The second is a list of fields, with each entry representing a single row of the submitted form. The action doesn’t do much right now: it performs some basic validation and, if successful, creates a string representation of the user’s query that is echoed back in the Results view.

   public ActionResult Create() {
       return View();
   }

   [AcceptVerbs(HttpVerbs.Post)]
   public ActionResult Create(string queryFields, IList<Field> query) {
       if (!ValidateQuery(query)) {
           ViewData["queryFields"] = queryFields;
           return View();
       }

       StringBuilder queryString = new StringBuilder();

       foreach (Field field in query) {
           queryString.AppendFormat("{0} {1} {2} {3}", field.AttachWith, field.Name, field.Operator, field.Value);
           queryString.AppendLine();
       }

       ViewData["queryString"] = queryString.ToString();

       return View("Results");
   }

The Field model used to represent a single query row is very simple.

   public class Field {
       public string AttachWith { get; set; }
       public string Name { get; set; }
       public string Operator { get; set; }
       public string Value { get; set; }
   }


Query View

Initially the user is presented with a simple form that contains only two buttons: one to add a new row and another to submit the form.


I’ve used a table for the form layout since each row contains exactly the same elements and this makes it a bit easier to keep the form organized.

   <form action="/Query/Create" method="post">
     <input id="queryFields" name="queryFields" type="hidden" value="" />
     <table id="queryTable">
       <thead>
         <tr>
           <th>Attach With</th>
           <th>Field</th>
           <th>Operator</th>
           <th>Value</th>
         </tr>
       </thead>
       <tbody>
       </tbody>
     </table>
     <p>
       <input type="button" value="Add Field" onclick="addQueryField()" />
       <input type="submit" value="Submit Query" onclick="updateQueryFields()" />
     </p>
   </form>

When a user clicks on the Add Field button the addQueryField function will insert a new row into the table. The row contains three dropdown lists, a text field and a button to remove the row from the form.


Since the form will be bound to an IList<Field> we need to ensure that the indices generated for the name attribute in the various HTML elements remain sequential. If we end up with non-sequential indices then the form fields will not be bound properly to our model. The HTML for the newly added row in the example above will look like this:
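The screenshot of the generated markup isn’t reproduced here, but the naming convention is easy to sketch. As a hypothetical helper (the real addQueryField below builds the names inline rather than calling a function like this), row index and property name combine as follows:

```javascript
// Hypothetical helper illustrating the naming scheme the DefaultModelBinder
// expects for list binding: "query[0].Value", "query[1].Value", … with no
// gaps in the index sequence.
function fieldName(index, property) {
    return "query[" + index + "]." + property;
}

// e.g. the value textbox in the third row gets name="query[2].Value"
```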


Determining the value of the index used by the various name attributes is quite easy using jQuery’s selectors as shown below on line 3.

   1: function addQueryField() {
   2:     // Determine the index of the next row to insert
   3:     var index = $("tr[id^=queryRow]").size();
   4:     // Create DOM element for table row
   5:     var oTr = $(document.createElement("tr")).attr("id", "queryRow" + index);
   6:     // Create DOM element for value textbox
   7:     var oValueTextBox = $(document.createElement("input")).attr("name", "query[" + index + "].Value").attr("id", "Value"+index).attr("type", "text");
   8:     // Create DOM element for Name select list
   9:     var oSelectListName = createSelectListForName(index);
  10:     // Create DOM element for Remove button to delete the row from the table
  11:     var oButtonRemove = $(document.createElement("input")).attr("type", "button").attr("value", "Remove").attr("id", "Remove"+index).click(function() {
  12:         removeRow(index);
  13:     });
  14:     // Create <td> elements
  15:     oTr.append($(document.createElement("td")).append(createSelectListForAttachWith(index)));
  16:     oTr.append($(document.createElement("td")).append(oSelectListName));
  17:     oTr.append($(document.createElement("td")).append(createSelectListForOperator(oSelectListName.val(), index)));
  18:     oTr.append($(document.createElement("td")).append(oValueTextBox).append(oButtonRemove));
  19:     // Insert the row into the table
  20:     $("#queryTable").append(oTr);
  21: }

On line 12 we bind the removeRow function to the onclick event of the Remove button. This function is responsible for two tasks:

  1. It needs to remove the row from the table.
  2. It needs to update the remaining rows to keep the indices sequential.

Updating the rows is not a difficult task, but the syntax required by the DefaultModelBinder to specify a property and index for the model conflicts with the syntax used by jQuery's selectors: the ‘[’ and ‘]’ characters are used to specify attribute values inside a selector. To work around this, the id attributes of the elements that are inserted into the DOM contain only alphanumeric characters, which allows the removeRow function to select and update each row using jQuery selectors. On lines 8, 9, 13, and 14 we need to unbind our handlers before rebinding them since their index parameters have changed. Without the .unbind() calls, jQuery will simply chain the event handlers and you’ll end up with some really funny behavior.

   1: function removeRow(index) {
   2:     // Delete the row
   3:     $("#queryRow" + index).remove();
   4:     // Search through the table and update all the remaining rows so that indices remain sequential
   5:     $("tr[id^=queryRow]").each(function(i) {
   6:         $(this).attr("id", "queryRow" + i);
   7:         $("td select[id^=AttachWith]", $(this)).attr("name", "query[" + i + "].AttachWith").attr("id", "AttachWith" + i);
   8:         $("td select[id^=Name]", $(this)).attr("name", "query[" + i + "].Name").attr("id", "Name" + i).unbind("change").change(function() {
   9:             updateOperator(i);
  10:         });
  11:         $("td select[id^=Operator]", $(this)).attr("name", "query[" + i + "].Operator").attr("id", "Operator" + i);
  12:         $("td input[id^=Value]", $(this)).attr("name", "query[" + i + "].Value").attr("id", "Value" + i);
  13:         $("td input[id^=Remove]", $(this)).attr("id", "Remove" + i).unbind("click").click(function() {
  14:             removeRow(i);
  15:         });
  16:     });
  17: }


Performing client side validation on a dynamic form makes perfect sense and jQuery provides the necessary tools to do this. Depending on how your application works it’s reasonable to expect that some elements can only be validated on the server. The only problem that needs to be solved is displaying all the original fields that the user added when redirecting back to the form. To solve this, I included a hidden input named queryFields. When the user hits the submit button it executes a function to update the hidden input with a comma separated list of fields. When validation fails the string is placed into ViewData and the form the user submitted can be generated using the HTML helpers.
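The updateQueryFields helper itself isn’t shown in the post. A minimal sketch, assuming the markup above, might separate the string building from the jQuery plumbing roughly like this (function names here are illustrative, not taken from the sample):

```javascript
// Join the selected field names into the comma separated list that the
// Create action splits apart again on the server.
function buildFieldList(names) {
    return names.join(",");
}

// Browser-only part (sketch): collect the value of every Name dropdown
// and stash the list in the hidden queryFields input before the post.
//
// function updateQueryFields() {
//     var names = $("select[id^=Name]").map(function() {
//         return $(this).val();
//     }).get();
//     $("#queryFields").val(buildFieldList(names));
// }
```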

   <%
      string queryFields = ViewData["queryFields"] as string;
      if (!String.IsNullOrEmpty(queryFields)) {
          int i = 0;
          foreach (string field in queryFields.Split(new[] { ',' })) {
              string queryPrefix = "query[" + Convert.ToString(i) + "].";
              string attachWithName = queryPrefix + "AttachWith";
              string fieldName = queryPrefix + "Name";
              string operatorName = queryPrefix + "Operator";
              string valueName = queryPrefix + "Value";
              string trId = "queryRow" + Convert.ToString(i); %>
          <tr id="<%= trId %>">
            <td><%= Html.DropDownList(attachWithName, FieldDefinitions.AttachWith, new { id = "AttachWith" + Convert.ToString(i) }) %></td>
            <td><%= Html.DropDownList(fieldName, FieldDefinitions.FieldNames, new { id = "Name" + Convert.ToString(i), onchange = "updateOperator(" + Convert.ToString(i) + ")" }) %></td>
            <td><%= Html.DropDownList(operatorName, FieldDefinitions.Fields[field].Operators, new { id = "Operator" + Convert.ToString(i) }) %></td>
            <td>
              <%= Html.TextBox(valueName, null, new { id = "Value" + Convert.ToString(i) }) %>
              <input type="button" value="Remove" onclick="removeRow(<%= Convert.ToString(i) %>)" />
              <%= Html.ValidationMessage(valueName) %>
            </td>
          </tr>
      <%      i++;
          }
      } %>

Once the system passes validation you should see a screen that echoes the query back to you. Consider the following query:


Hitting the Submit Query button will produce the result below.


IE8 Quirks

While working on the application mentioned at the beginning of this post I discovered a small bug in the developer tools of IE8 (open IE8 and hit F12 to open the tools). When you create a new element in the DOM and set its name attribute, the toolbar displays the attribute as propdescname instead of name. The DOM is still correct, though, and the problem does not occur if you specify the name attribute explicitly in the HTML.


When I wrote the first version of my application I avoided using jQuery. The result was that my code only worked in IE; getting it to work in Safari and Firefox took another day. Now I understand why, when talking to JavaScript developers, you sometimes see little drops of blood welling up in their eyes. jQuery, on the other hand, takes care of all the browser compatibility issues. Apart from that, using jQuery made the code much easier to maintain and modify. I’m going to attribute that to two things: selectors and chaining. If you have any suggestions on how to improve the JavaScript in this example, I’d love to hear from you.

Posted by jeloff | 11 comment(s)