MVC's Html.DropDownList and "There is no ViewData item of type 'IEnumerable<SelectListItem>' that has the key '...'"

ASP.NET MVC's HtmlHelper extension methods take out a lot of the HTML-by-hand drudgery to which MVC re-introduced us former WebForms programmers. Another thing to which MVC re-introduced us is poor documentation, after the excellent documentation for most of the rest of ASP.NET and the .NET Framework which I now realize I'd taken for granted.

I'd come to regard using HtmlHelper methods instead of writing HTML by hand as a best practice. When I upgraded a project from MVC 3 to MVC 4, several hidden fields with boolean values broke, because MVC 3 called ToString() on those values implicitly, and MVC 4 threw an exception until you called ToString() explicitly. Fields that used HtmlHelper weren't affected. I then went through dozens of views and manually replaced hidden inputs that had been coded by hand with Html.Hidden calls.

So for a dropdown list I was rendering on the initial page as empty, then populating via JavaScript after an AJAX call, I tried to use a HtmlHelper method:

@Html.DropDownList("myDropdown")

which threw an exception:

System.InvalidOperationException: There is no ViewData item of type 'IEnumerable<SelectListItem>' that has the key 'myDropdown'.

That's funny--I made no indication I wanted to use ViewData. Why was it looking there? Just render an empty select list for me. When I populated the list with items, it worked, but I didn't want to do that:

@Html.DropDownList("myDropdown", new List<SelectListItem>() { new SelectListItem() { Text = "", Value = "" } })

I removed this dummy item in JavaScript after the AJAX call, so this worked fine, but I shouldn't have to give it a list with a dummy item when what I really want is an empty select.

A bit of research with JetBrains dotPeek (helpfully recommended by Scott Hanselman) revealed the problem. Html.DropDownList requires some sort of data to render or it throws an error. The documentation hints at this but doesn't make it very clear. Behind the scenes, it checks if you've provided the DropDownList method any data. If you haven't, it looks in ViewData. If it's not there, you get the exception above.
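For what it's worth, that lookup also means you can satisfy the helper from the controller instead of in the view; here's a minimal sketch, with the key matching the example above and the action name made up:

using System.Linq;
using System.Web.Mvc;

public class MyController : Controller
{
    public ActionResult Edit()
    {
        // An empty IEnumerable<SelectListItem> under the same key keeps
        // @Html.DropDownList("myDropdown") from throwing, and renders an empty <select>.
        ViewData["myDropdown"] = Enumerable.Empty<SelectListItem>();
        return View();
    }
}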

In my case, the helper wasn't doing much for me anyway, so I reverted to writing the HTML by hand (I ain't scared), and amended my best practice: When an HTML control has an associated HtmlHelper method and you're populating that control with data on the initial view, use the HtmlHelper method instead of writing by hand.


How to create a new WCF/MVC/jQuery application from scratch

As a corporate developer by trade, I don't get much opportunity to create from-the-ground-up web sites; usually it's tweaks, fixes, and new functionality to existing sites. And with hobby sites, I often don't find the challenges I run into with enterprise systems; usually it's starting from Visual Studio's boilerplate project and adding whatever functionality I want to play around with, rarely deploying outside my own machine. So my experience creating a new enterprise-level site was a bit dated; the technologies to do so have come a long way and are much more ready to go out of the box. My intention with this post isn't so much to provide any groundbreaking insights as to tie together a lot of information in one place and make it easy to create a new site from scratch.

Architecture

One site I created earlier this year had an MVC 3 front end and a WCF 4-driven service layer. Using Visual Studio 2010, these project types are easy enough to add to a new solution. I created a third Class Library project to store common functionality the front end and services layers both needed to access, for example, the DataContract classes that the front end uses to call services in the service layer. By keeping DataContract classes in a separate project, I avoided the need for the front end to have an assembly/project reference directly to the services code, making for a cleaner and more flexible SOA implementation.
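For illustration, a data contract living in that common project might look something like this (the names here are placeholders, not the project's actual classes):

using System.Runtime.Serialization;

namespace MySite.Common.DataContracts
{
    // Shared between the MVC front end and the WCF service layer, so neither
    // project needs an assembly reference to the other.
    [DataContract]
    public class Widget
    {
        [DataMember]
        public int Id { get; set; }

        [DataMember]
        public string Name { get; set; }
    }
}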

Consuming the service

Even at this point, VS has given you a lot. You have a working web site and a working service, neither of which does much, but both are great starting points. To wire up the front end and the services, I needed to create proxy classes and WCF client configuration information. I decided to use the SvcUtil.exe utility provided as part of the Windows SDK, which you should have installed if you installed VS. VS has also provided an Add Service Reference command since the .NET 1.x ASMX days (then called Add Web Reference), which I've never really liked; it creates several .cs/.disco/etc. files, some of which contain hardcoded URLs, and adds duplicate files (*1.cs, *2.cs, etc.) without doing a good job of cleaning up after itself. I've found SvcUtil much cleaner: it outputs one C# file (containing several proxy classes) and a config file with settings, it's easier to use to regenerate the proxy classes when the service changes, and it lets you maintain all your configuration in one place (your Web.config, instead of the Service Reference files). I provided it a reference to a copy of my common assembly so it wouldn't try to recreate the data contract classes, had it use the type List<T> for collections, and modified the output files' names and .NET namespace, ending up with a command like:

svcutil.exe /l:cs /o:MyService.cs /config:MyService.config /r:MySite.Common.dll /ct:System.Collections.Generic.List`1 /n:*,MySite.Web.ServiceProxies http://localhost:59999/MyService.svc

I took the generated MyService.cs file and dropped it in the web project, under a ServiceProxies folder, matching the namespace and keeping it separate from classes I coded manually. Integrating the config file took a little more work, but only needed to be done once, as these settings didn't change often. A great thing Microsoft improved with WCF 4 is configuration; namely, you can use all the default settings and not have to specify them explicitly in your config file. Unfortunately, SvcUtil doesn't generate its config file this way. If you just copy & paste MyService.config's contents into your front end's Web.config, you'll copy a lot of settings you don't need, and it will get unwieldy if you add more services in the future, each with its own custom binding. Really, since the only mandatory settings are the endpoint's ABCs (address, binding, and contract), you can get away with just this:

<system.serviceModel>
  <client>
    <endpoint address="http://localhost:59999/MyService.svc" binding="wsHttpBinding" contract="MySite.Web.ServiceProxies.IMyService" />
  </client>
</system.serviceModel>

By default, the services project uses basicHttpBinding. As you can see, I switched it to wsHttpBinding, a more modern standard. Using something like netTcpBinding would probably be faster and more efficient since the client & service are both written in .NET, but it requires additional server setup and open ports, whereas switching to wsHttpBinding is much simpler.

From an MVC controller action method, I instantiated the client, and invoked the method for my operation. As with any object that implements IDisposable, I wrapped it in C#'s using() statement, a tidy construct that ensures Dispose gets called no matter what, even if an exception occurs. Unfortunately there are problems with that, as WCF's ClientBase<TChannel> class doesn't implement Dispose according to Microsoft's own usage guidelines. I took an approach similar to Technology Toolbox's fix, except using partial classes instead of a wrapper class to extend the SvcUtil-generated proxy, making the fix more seamless from the controller's perspective, and theoretically, less code I have to change if and when Microsoft fixes this behavior.
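Here's a minimal sketch of what that partial-class fix can look like, assuming the SvcUtil-generated proxy is named MyServiceClient (yours may differ) in the MySite.Web.ServiceProxies namespace used above:

using System;
using System.ServiceModel;

namespace MySite.Web.ServiceProxies
{
    // Re-implements IDisposable on the generated proxy so using() blocks don't
    // throw when the channel is faulted; Close() only runs on a healthy channel.
    public partial class MyServiceClient : IDisposable
    {
        void IDisposable.Dispose()
        {
            if (State == CommunicationState.Faulted)
            {
                Abort();
            }
            else
            {
                try
                {
                    Close();
                }
                catch
                {
                    Abort();
                    throw;
                }
            }
        }
    }
}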

User interface

The MVC 3 project template includes jQuery and some other common JavaScript libraries by default. I updated the ones I used to the latest versions using NuGet, available in VS via the Tools > Library Package Manager > Manage NuGet Packages for Solution... > Updates. I also used this dialog to remove packages I wasn't using. Given that it's smart enough to know the difference between the .js and .min.js files, I was hoping it would be smart enough to know which to include during build and publish operations, but this doesn't seem to be the case. I ended up using Cassette to perform the minification and bundling of my JavaScript and CSS files; ASP.NET 4.5 includes this functionality out of the box.

The web client to web server link via jQuery was easy enough. In my JavaScript function, unobtrusively wired up to a button's click event, I called $.ajax against a URL corresponding to an action method that returns a JsonResult (accomplished by passing my model object to the Controller.Json() method), which jQuery helpfully translates from JSON into a JavaScript object.
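The action method end of that exchange looks roughly like the sketch below; the controller, action, and service operation names are placeholders:

using System.Web.Mvc;
using MySite.Web.ServiceProxies;

public class WidgetController : Controller
{
    [HttpPost]
    public JsonResult GetWidget(int id)
    {
        using (var client = new MyServiceClient())
        {
            // The proxy call returns a model/data contract instance;
            // Json() serializes it for the jQuery caller.
            var model = client.GetWidget(id);
            return Json(model);
        }
    }
}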

$.ajax calls weren't perfectly straightforward. I tried using the simpler $.post method instead, but ran into trouble without specifying the contentType parameter, which $.post doesn't have. The url parameter is simple enough, though for flexibility in how the site is deployed, I used MVC's Url.Action method to get the URL, then sent this to JavaScript in a JavaScript string variable. If the request needed input data, I used the JSON.stringify function to convert a JavaScript object with the parameters into a JSON string, which MVC then parses into strongly-typed C# parameters. I also specified "json" for dataType, and "application/json; charset=utf-8" for contentType. For success and error, I provided my success and error handling functions, though success is a bit hairier. "Success" in this context indicates whether the HTTP request succeeds, not whether what you wanted the AJAX call to do on the web server was successful. For example, if you make an AJAX call to retrieve a piece of data, the success handler will be invoked for any 200 OK response, and the error handler will be invoked for failed requests, e.g. a 404 Not Found (if the server rejected the URL you provided in the url parameter) or 500 Internal Server Error (e.g. if your C# code threw an exception that wasn't caught). If an exception was caught and handled, or if the data requested wasn't found, this would likely go through the success handler, which would need to do further examination to verify it did in fact get back the data for which it asked. I discuss this more in the next section.

Logging and exception handling

At this point, I had a working application. If I ran into any errors or unexpected behavior, debugging was easy enough, but of course that's not an option on public web servers. Microsoft Enterprise Library 5.0 filled this gap nicely, with its Logging and Exception Handling functionality. First I installed Enterprise Library; NuGet as outlined above is probably the best way to do so. I needed a total of three assembly references--Microsoft.Practices.EnterpriseLibrary.ExceptionHandling, Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.Logging, and Microsoft.Practices.EnterpriseLibrary.Logging. VS integrates with the handy Enterprise Library 5.0 Configuration Console, accessible by right-clicking your Web.config and choosing Edit Enterprise Library V5 Configuration. In this console, under Logging Settings, I set up a Rolling Flat File Trace Listener to write to log files but not let them get too large, using a Text Formatter with a simpler template than the one provided by default. Logging to a different (or additional) destination is easy enough, but a flat file suited my needs. At this point, I verified it wrote as expected by calling the Microsoft.Practices.EnterpriseLibrary.Logging.Logger.Write method from my C# code.
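That verification can be as simple as this; the "General" category name is just an assumption--use whichever category you configured:

using Microsoft.Practices.EnterpriseLibrary.Logging;

public static class LoggingSmokeTest
{
    public static void Run()
    {
        // Writes an entry through whatever trace listeners the category maps to--
        // in this case, the rolling flat file configured above.
        Logger.Write("Verifying the rolling flat file trace listener works.", "General");
    }
}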

With those settings verified, I went on to wire up Exception Handling with Logging. Back in the EntLib Configuration Console, under Exception Handling, I used a LoggingExceptionHandler, setting its Logging Category to the category I already had configured in the Logging Settings. Then, from code (e.g. a controller's OnException method, or any action method's catch block), I called the Microsoft.Practices.EnterpriseLibrary.ExceptionHandling.ExceptionPolicy.HandleException method, providing the exception and the exception policy name I had configured in the Exception Handling Settings.
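From code, the call looks something like the following sketch; the "Log Only Policy" name and the DoSomethingRisky method are made up for illustration:

using System;
using Microsoft.Practices.EnterpriseLibrary.ExceptionHandling;

public class WorkerExample
{
    public void DoWork()
    {
        try
        {
            DoSomethingRisky();
        }
        catch (Exception ex)
        {
            // Runs the configured policy (here, logging via the LoggingExceptionHandler);
            // HandleException returns true if the policy says to rethrow.
            if (ExceptionPolicy.HandleException(ex, "Log Only Policy"))
                throw;
        }
    }

    private void DoSomethingRisky()
    {
        throw new InvalidOperationException("Something went wrong.");
    }
}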

Before I got this configured correctly, when I tried it out, nothing was logged. In working with .NET, I'm used to seeing an exception if something doesn't work or isn't set up correctly, but instead working with these EntLib modules reminds me more of JavaScript (before the "use strict" v5 days)--it just does nothing and leaves you to figure out why, I presume due in part to the listener pattern Microsoft followed with the Enterprise Library. First, I verified logging worked on its own. Then, verifying/correcting where each piece wires up to the next resolved my problem. Your C# code calls into the Exception Handling module, referencing the policy you pass the HandleException method; that policy's configuration contains a LoggingExceptionHandler that references a logCategory; that logCategory should be added in the loggingConfiguration's categorySources section; that category references a listener; that listener should be added in the loggingConfiguration's listeners section, which specifies the name of the log file.

One final note on error handling, as the proper way to handle WCF and MVC errors is a whole other very lengthy discussion. For AJAX calls to MVC action methods, depending on your configuration, an exception thrown here will result in ASP.NET's Yellow Screen Of Death being sent back as a response, which is at best unnecessarily and uselessly verbose, and at worst a security risk as the internals of your application are exposed to potential hackers. I mitigated this by overriding my controller's OnException method, passing the exception off to the Exception Handling module as above. I created an ErrorModel class with as few properties as possible (e.g. an Error string), sending as little information to the client as possible, to both minimize bandwidth usage and mitigate risk. I then return an ErrorModel in JSON format for AJAX requests:

if (filterContext.HttpContext.Request.IsAjaxRequest())
{
    filterContext.Result = Json(new ErrorModel(...));
    filterContext.ExceptionHandled = true;
}
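For reference, the ErrorModel behind that snippet can be as bare-bones as this (the exact shape is an assumption; the point is to expose as little as possible):

public class ErrorModel
{
    public ErrorModel(string error)
    {
        Error = error;
    }

    // A short, generic message--no stack traces or other internals.
    public string Error { get; set; }
}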

My $.ajax calls from the browser get a valid 200 OK response and go into the success handler. Before assuming everything is OK, I check if it's an ErrorModel or a model containing what I requested. If it's an ErrorModel, or null, I pass it to my error handler. If the client needs to handle different errors differently, ErrorModel can contain a flag, error code, string, etc. to differentiate, but again, sending as little information back as possible is ideal.

Summary

As any experienced ASP.NET developer knows, this is a far cry from where ASP.NET started when I began working with it 11 years ago. WCF services are far more powerful than ASMX ones, MVC is in many ways cleaner and certainly more unit test-friendly than Web Forms (if you don't consider the code/markup commingling you're doing again), the Enterprise Library makes error handling and logging almost entirely configuration-driven, AJAX makes a responsive UI more feasible, and jQuery makes JavaScript coding much less painful. It doesn't take much work to get a functional, maintainable, flexible application, though having it actually do something useful is a whole other matter.

Oracle 64-bit assembly throws BadImageFormatException when running unit tests

We recently upgraded to the 64-bit Oracle client. Since then, Visual Studio 2010 unit tests that hit the database (I know, unit tests shouldn't hit the database--they're not perfect) all fail with this error message:

Test method MyProject.Test.SomeTest threw exception:
System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> System.BadImageFormatException: Could not load file or assembly 'Oracle.DataAccess, Version=4.112.3.0, Culture=neutral, PublicKeyToken=89b483f429c47342' or one of its dependencies. An attempt was made to load a program with an incorrect format.

I resolved this by changing the test settings to run tests in 64-bit. From the Test menu, go to Edit Test Settings, and pick your settings file. Go to Hosts, and change the "Run tests in 32 bit or 64 bit process" dropdown to "Run tests in 64 bit process on 64 bit machine". Now your tests should run.

This fix makes me a little nervous. Visual Studio 2010 and earlier seem to change that file for no apparent reason, add more settings files, etc. If you're not paying attention, you could have TestSettings1.testsettings through TestSettings99.testsettings sitting there and never notice the difference. So it's worth making a note of how to change it in case you have to redo it, and being vigilant about files VS tries to add.

I'm not entirely clear on why this was even a problem. Isn't that the point of an MSIL assembly, that it's not specific to the hardware it runs on? An IL disassembler can open the Oracle.DataAccess.dll in question, and in its Runtime property, I see the value "v4.0.30319 / x64". So I guess the assembly was specifically built to target 64-bit platforms only, possibly due to a 64-bit-specific difference in the external Oracle client upon which it depends. Most other assemblies, especially in the .NET Framework, list "msil", and a couple list "x86". So I guess this is another entry in the long list of ways Oracle refuses to play nice with Windows and .NET.

If this doesn't solve your problem, you can read others' research into this error, and where to change the same test setting in Visual Studio 2012.

MVC 4 and the App_Start folder

I've been delving into ASP.NET MVC 4 a little since its release last month. One thing I was chomping at the bit to explore was its bundling and minification functionality, for which I'd previously used Cassette and been fairly happy with it. MVC 4's functionality seems very similar to Cassette's; the latter's CassetteConfiguration class corresponds to the former's BundleConfig class, which lives in a new directory called App_Start.

At first glance, this seems like another special ASP.NET folder, like App_Data, App_GlobalResources, App_LocalResources, and App_Browsers. But Visual Studio 2010's lack of knowledge about it (no Solution Explorer option to add the folder, nor a fancy icon for it) made me suspicious. I found the MVC 4 project template has five classes there--AuthConfig, BundleConfig, FilterConfig, RouteConfig, and WebApiConfig. Each of these is called explicitly in Global.asax's Application_Start method. Why create separate classes, each with a single static method? Maybe they anticipate a lot more code being added there for large applications, but for small ones, it seems like overkill. (And they seem hastily implemented--some declared as static and some not, in the base namespace instead of an App_Start/AppStart one.) Even for a large application I work on with a substantial amount of code in Global.asax.cs, a RouteConfig might be warranted, but the other classes would remain tiny.
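For reference, the template's Global.asax.cs calls them roughly like this (quoted from memory, so treat it as approximate rather than the exact template code):

protected void Application_Start()
{
    AreaRegistration.RegisterAllAreas();

    // One call per App_Start class, each doing a single piece of startup configuration.
    WebApiConfig.Register(GlobalConfiguration.Configuration);
    FilterConfig.RegisterGlobalFilters(GlobalFilters.Filters);
    RouteConfig.RegisterRoutes(RouteTable.Routes);
    BundleConfig.RegisterBundles(BundleTable.Bundles);
    AuthConfig.RegisterAuth();
}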

More importantly, it appears App_Start has no special magic like the other folders--it's just convention. I found it first described in the MVC 3 timeframe by Microsoft architect David Ebbo, for the benefit of NuGet and WebActivator; apparently some packages will add their own classes to that directory as well. One of the first appears to be Ninject, as most mentions of that folder mention it, and there's not much information elsewhere about this new folder.


Unity throws SynchronizationLockException while debugging

I've found Unity to be a great resource for writing unit-testable code, and tests targeting it. Sadly, not all those unit tests work perfectly the first time (TDD notwithstanding), and sometimes it's not even immediately apparent why they're failing. So I use Visual Studio's debugger. I then see SynchronizationLockExceptions thrown by Unity calls, when I never did while running the code without debugging. I hit F5 to continue past these distractions, the line that had the exception appears to have completed normally, and I continue on to what I was trying to debug in the first place.

In settings where Unity isn't used extensively, this is just one amongst a handful of annoyances in a tool (Visual Studio) that overall makes my work life much, much easier and more enjoyable. But in larger projects, it can be maddening. Finally it bugged me enough that it was worth researching.

Amongst the first and most helpful Google results was, of course, one at Stack Overflow. The first couple answers were extensive but seemed a bit more involved than I could pull off at this stage in the product's lifecycle. A bit more digging showed that the Microsoft team knows about this bug but hasn't prioritized it into any released build yet. SO users jaster and alex-g proposed workarounds that relieved my pain--just go to Debug|Exceptions..., find the SynchronizationLockException, and uncheck it. As others warned, this will skip over SynchronizationLockExceptions in your code that you want to catch, but that wasn't a concern for me in this case. Thanks, guys; I've used that dialog before, but it's been so long I'd forgotten about it.

Now if I could just do the same for Microsoft.CSharp.RuntimeBinder.RuntimeBinderException... Until then, F5 it is.


MVC's IgnoreRoute syntax

I've had an excuse to mess around with custom route ignoring code in ASP.NET MVC, and am surprised how poorly the IgnoreRoute extension method on RouteCollection (technically it lives in RouteCollectionExtensions, but the same goes for RouteCollection.Add and for RouteCollection.Ignore, which was added in .NET 4) is documented, both in the official docs by Microsoft and by the various bloggers and forum participants who have been using it, some for years.

We all know these work. The first is in Microsoft code; it and the second are about the only examples out there:

routes.IgnoreRoute("{resource}.axd/{*pathInfo}");
routes.Ignore("{*allaspx}", new {allaspx=@".*\.aspx(/.*)?"});

I understand the first ignores .axd requests regardless of what comes after the .axd part, and the second uses a regex custom constraint for more control over what's blocked. But why is pathInfo a special name that we don't have to define? What other special names could we use without defining? What does .* mean in a regex, if that really is a regex? . is exactly one instance, and * is zero or more instances, so putting them together doesn't make sense to me.

Phil Haack provides this equally syntactically confusing example to prevent favicon.ico requests from going through routing:

routes.IgnoreRoute("{*favicon}", new {favicon=@"(.*/)?favicon.ico(/.*)?"});

I also tried these in my code:

routes.IgnoreRoute("myroute");
routes.IgnoreRoute("myroute/{*pathInfo}");

The first blocked URLs that ended in /myroute and /myroute/whatever, but not /myroute/whatever?stuff=23. The second blocked all three of those, but not /somethingelse?stuff=myroute. Why does this work without putting the constant "myroute" in {} like the resource.axd example above? Is resource really a constant in that example, or just a placeholder for which we could have used any name? Do string constants need curly brace delimiters in some cases and not others? An example I found on Steve Smith's blog and on others shows the same thing:

routes.IgnoreRoute("{Content}/{*pathInfo}");

This prevents any requests to the standard Content folder from going through routing. Why the curly braces?

Just to keep things interesting, when I tried to type my IgnoreRoute call above, I accidentally left out the slash:

routes.IgnoreRoute("myroute{*pathInfo}");

which threw "System.ArgumentException: A path segment that contains more than one section, such as a literal section or a parameter, cannot contain a catch-all parameter." OK, so no slash means more than one section, and including a slash means it's only one section?

Has anyone else had more luck using this method, or found any way to go about it other than trial and error?

SQL Server 2005/2008's TRY/CATCH and constraint error handling

I was thrilled that T-SQL finally got the TRY/CATCH construct that many object-oriented languages have had for ages. I had been writing error handling code like this:


BEGIN TRANSACTION TransactionName

...

-- Core of the script - 2 lines of error handling for every line of DDL code
ALTER TABLE dbo.MyChildTable DROP CONSTRAINT FK_MyChildTable_MyParentTableID
IF (@@ERROR <> 0)
    GOTO RollbackAndQuit

...

COMMIT TRANSACTION TransactionName
GOTO EndScript

-- Centralized error handling for the whole script
RollbackAndQuit:
    ROLLBACK TRANSACTION TransactionName
    RAISERROR('Error doing stuff on table MyChildTable.', 16, 1)

EndScript:

GO


...which gets pretty ugly when you have a script that does 5-10 or more such operations and it has to check for an error after every one. With TRY/CATCH, the above becomes:


BEGIN TRANSACTION TransactionName;

BEGIN TRY

...

-- Core of the script - no additional error handling code per line of DDL code
ALTER TABLE dbo.MyChildTable DROP CONSTRAINT FK_MyChildTable_MyParentTableID;

...

COMMIT TRANSACTION TransactionName;

END TRY

-- Centralized error handling for the whole script
BEGIN CATCH
    ROLLBACK TRANSACTION TransactionName;

    DECLARE @ErrorMessage NVARCHAR(4000) = 'Error creating table dbo.MyChildTable. Original error, line [' + CONVERT(VARCHAR(5), ERROR_LINE()) + ']: ' + ERROR_MESSAGE();
    DECLARE @ErrorSeverity INT = ERROR_SEVERITY();
    DECLARE @ErrorState INT = CASE ERROR_STATE() WHEN 0 THEN 1 ELSE ERROR_STATE() END;
    RAISERROR (@ErrorMessage, @ErrorSeverity, @ErrorState)
END CATCH;

GO

Much cleaner in the body of the script; we have to do more work in the CATCH (wouldn't it be nice if T-SQL had a THROW statement like C# to rethrow the exact same error that was caught?), but the core of the script that makes DDL changes is cleaner and more readable. The only serious downside I've found so far is when dropping a constraint fails; without TRY/CATCH, you get a message like this:


Msg 3728, Level 16, State 1, Line 1
'FK_MyChildTable_MyParentTableID' is not a constraint.
Msg 3727, Level 16, State 0, Line 1
Could not drop constraint. See previous errors.

(In this case I'd check for the constraint before trying to drop it; this is just for illustration. One I've seen more often is trying to drop a primary key and recreate it on a parent table when there are foreign keys on child tables that reference the parent.)

TRY/CATCH shortens the above error to only "Could not drop constraint. See previous errors." with no previous errors shown. Now, some research will reveal why it couldn't drop the constraint, but TRY/CATCH is supposed to make error handling easier and more straightforward, not more obscure. The "See previous errors" line has always struck me as a lazy error message--I bet there's a story amongst seasoned SQL Server developers at Microsoft as to why this throws two error messages instead of one--so I imagine the real problem is in that dual error message more than the TRY/CATCH construct itself as it's implemented in T-SQL.

If anyone has a slick way to get that initial Msg 3728 part of the error, I'm all ears; I've seen several folks ask this question and not one answer yet.

Windows Workflow Foundation (WF) and things I wish were more intuitive

I've started using Windows Workflow Foundation, and so far have run into a few things that aren't incredibly obvious. Microsoft did a good job of providing a ton of samples, which is handy because you need them to get anywhere with WF. The docs are thin, so I've been bouncing between samples and downloadable labs to figure out how to implement various activities in a workflow.

Code separation or not? You can create a workflow and activity in Visual Studio with or without code separation, i.e. just a .cs "Component" style object with a Designer.cs file, or a .xoml XML markup file with code behind (beside?) it. Absent any obvious advantage to one or the other, I used code separation for workflows and any complex custom activities, and went without it for custom activities that just inherit from the Activity class and thus don't have anything special in the designer. So far, so good.

Workflow Activity Library project type - What's the point of this separate project type? So far I don't see much advantage to keeping your custom activities in a separate project. I prefer to have as few projects as needed (and no fewer). The Designer's Toolbox window seems to find your custom activities just fine no matter where they are, and the debugging experience doesn't seem to be any different.

Designer Properties - This is about the designer, and not specific to WF, but nevertheless something that's hindered me a lot more in WF than in Windows Forms or elsewhere. The Properties window does a good job of showing you property values when you hover the mouse over them. But it doesn't do the same for a control's type. So maybe if I named all my activities "x1" and "x2" instead of helpful self-documenting names like "listenForStatusUpdate", then I could easily see enough of the type to determine what it is, but with any names longer than those, all I get of the type is "System.Workflow.Act" or "System.Workflow.Compone". Even hitting the dropdown doesn't expand any wider, like the debugger quick watch "smart tag" popups do when you scroll through members. The only way I've found around this in VS 2008 is to widen the Properties dialog, losing precious designer real estate, then shrink it back down when you're done to see what you were doing. Really?

WF Designer - This is about the designer, and I believe is specific to WF. I should be able to edit the XML in a .xoml file, or drag and drop using the designer. With WPF (at least in VS 2010 Ultimate), these are side by side, and changes to one instantly update the other. With WF, I have to right-click on the .xoml file, choose Open With, and pick XML Editor to edit the text. It looks like this is one way where WF didn't get the same attention WPF got during .NET Fx 3.0 development.

Service - In the WF world, this is simply a class that talks to the workflow about things outside the workflow, not to be confused with how the term "service" is used in every other context I've seen in the Windows and .NET world, i.e. an executable that waits for events or requests from a client and services them (Windows service, web service, WCF service, etc.).

ListenActivity - Such a great concept, yet so unintuitive. It seems you need at least two branches (EventDrivenActivity instances), one for your positive condition and one for a timeout. The positive condition has a HandleExternalEventActivity, and the timeout has a DelayActivity followed by however you want to handle the delay, e.g. a ThrowActivity. The timeout is simple enough; wiring up the HandleExternalEventActivity is where things get fun. You need to create a service (see above), and an interface for that service (this seems more complex than should be necessary--why not have activities just wire to a service directly?). You also need to create a custom EventArgs class that inherits from ExternalDataEventArgs--you can't create an ExternalDataEventArgs event handler directly, even if you don't need to add any more information to the event args. ExternalDataEventArgs isn't marked abstract, and there's no compiler error or warning or any other indication that you're doing something wrong; you only find out when you run it, the workflow always times out, and you get to check every place mentioned here to see why. Your interface and service need an event that consumes your custom EventArgs class, and a method to fire that event--but fire it passing null for the sender parameter; if you pass in this, as one normally does when firing events, for some reason the HandleExternalEventActivity won't see that the event has fired. Then you need to call that method from somewhere. Then you get to hope that you did everything just right, or that you can step through code in the debugger before your Delay timeout expires. Yes, it's as much fun as it sounds.
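To make that concrete, here's a minimal sketch of the pieces involved (all names are made up, and registering the service with the WorkflowRuntime's ExternalDataExchangeService isn't shown):

using System;
using System.Workflow.Activities;

// The custom EventArgs--required even though nothing is added to ExternalDataEventArgs.
[Serializable]
public class StatusUpdateEventArgs : ExternalDataEventArgs
{
    public StatusUpdateEventArgs(Guid instanceId) : base(instanceId) { }
}

// The interface the HandleExternalEventActivity binds to.
[ExternalDataExchange]
public interface IStatusUpdateService
{
    event EventHandler<StatusUpdateEventArgs> StatusUpdated;
    void RaiseStatusUpdated(Guid workflowInstanceId);
}

// The service implementation; the host calls RaiseStatusUpdated to signal the workflow.
public class StatusUpdateService : IStatusUpdateService
{
    public event EventHandler<StatusUpdateEventArgs> StatusUpdated;

    public void RaiseStatusUpdated(Guid workflowInstanceId)
    {
        EventHandler<StatusUpdateEventArgs> handler = StatusUpdated;
        if (handler != null)
        {
            // Pass null for sender--HandleExternalEventActivity misses the event otherwise.
            handler(null, new StatusUpdateEventArgs(workflowInstanceId));
        }
    }
}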

TransactionScopeActivity - I had the bright idea of putting one in as a placeholder, then filling in the database updates later. That caused this error:


The workflow hosting environment does not have a persistence service as required by an operation on the workflow instance "[GUID]".

...which is about as helpful as "Object reference not set to an instance of an object" and even more fun to debug. Google led me to this Microsoft Forums hit, and from there I figured out it didn't like that the activity had no children. Again, a Validator on TransactionScopeActivity would have pointed this out to me at design time, rather than handing me a nearly useless error at runtime. Easily enough, I disabled the activity and that fixed it.

I still see huge potential in my work where WF could make things easier and more flexible, but there are some seriously rough edges at the moment. Maybe I'm just spoiled by how much easier and more intuitive development elsewhere in the .NET Framework is.


Migrating from VS 2005 to VS 2008

I recently helped migrate a ton of code from Visual Studio 2005 to 2008, and from .NET 2.0 to 3.5. Most of it went very smoothly; the conversion touches every .sln, .csproj, and .Designer.cs file, and puts a bunch of junk in Web.config files, but we rarely encountered errors. One thing I didn't expect was that even for a project running in VS 2008 but still targeting .NET Framework 2.0, VS will use the v3.5 C# compiler, which behaves a bit differently than the 2.0 compiler even when targeting the 2.0 Framework.

One piece of code used an internal custom EventArgs class that was consumed via a public delegate. This code compiled fine using the 2.0 C# compiler, but the 3.5 compiler threw this error:


error CS0059: Inconsistent accessibility: parameter type 'MyApp.Namespace.MyEventArgs' is less accessible than delegate 'MyApp.Namespace.MyEventHandler'

It's a goofy situation, the error makes perfect sense, and it was easy to correct (I made both internal), but I expected VS 2008 would use the compiler to match whatever the target .NET Framework version was. I wouldn't have expected any compilation errors it didn't have before conversion, not until I changed the targeted Framework version.
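The offending pattern boils down to something like this (names are made up):

using System;

namespace MyApp.Namespace
{
    internal class MyEventArgs : EventArgs
    {
    }

    // The v3.5 compiler rejects this with CS0059: the delegate is public,
    // but its parameter type is only internal.
    public delegate void MyEventHandler(object sender, MyEventArgs e);
}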

Another funny error happened around code analysis. Code analysis ran fine in VS 2005, but in VS 2008, it threw this error (compilation error, not a code analysis warning):


Running Code Analysis...
C:\Program Files\Microsoft Visual Studio 9.0\Team Tools\Static Analysis Tools\FxCop\FxCopCmd.exe /outputCulture:1033 /out:"bin\Debug\MyApp.Namespace.MyProject.dll.CodeAnalysisLog.xml" /file:"bin\Debug\MyApp.Namespace.MyProject.dll" /directory:"C:\MyStuff\MyApp.Namespace.MyProject\bin\Debug" /directory:"c:\WINDOWS\Microsoft.NET\Framework\v2.0.50727" /directory:"..\..\..\Lib" /rule:"C:\Program Files\Microsoft Visual Studio 9.0\Team Tools\Static Analysis Tools\FxCop\Rules" /ruleid:-Microsoft.Design#CA1012 /ruleid:-Microsoft.Design#CA2210 ... /searchgac /ignoreinvalidtargets /forceoutput /successfile /ignoregeneratedcode /saveMessagesToReport:Active /targetframeworkversion:v2.0 /timeout:120 MSBUILD : error : Invalid settings passed to CodeAnalysis task. See output window for details.
Code Analysis Complete -- 1 error(s), 0 warning(s)
Done building project "MyApp.Namespace.MyProject.csproj" -- FAILED.

I especially like the "See output window for details," which 1. screams of a Visual Studio hack as it is, and 2. doesn't actually give me any more details in this particular case, though Google tells me that other people do get more information in the output window.

I noticed Debug and Release modes both had code analysis enabled (I think switching Framework versions swapped them on me and I accidentally enabled it in Release mode), and Release mode wasn't erroring out but Debug was. I looked at the difference in the csproj file and in the FxCopCmd.exe calls, and the key seemed to be the /ruleid parameters, of which there were a ton in Debug but not Release. Presumably this is because I disabled some of the rules in the project properties, so I tried enabling them all. The number of /ruleid params went down, but it still gave the same error. The Code Analysis tab in project properties looked the same between Debug and Release.

Finally I unloaded the project, edited the csproj file (I'm glad I found out how to do this within VS, instead of exiting VS and editing it in Notepad), and removed this line, which was present in the Debug PropertyGroup element but not the Release one:

<CodeAnalysisRules>-Microsoft.Design#CA1012;-Microsoft.Design#CA2210...</CodeAnalysisRules>

Code analysis then ran successfully. I imagine this solution isn't ideal for everyone, if you want to enable/disable particular rules, and it's not ideal for us long-term, but it did allow us to keep code analysis enabled without the build failing.

Windows 7 rocks!

I bought my current PC almost three years ago. I've had my own PC for 15 years or so, and, aside from my first desktop and a laptop I only use when traveling, that was the only time I've bought a whole PC, rather than buying parts and assembling my own (a Frankenputer, as former coworkers affectionately referred to them). Like many of my colleagues who work in Microsoft technologies, I looked into buying a Dell, and they had a fine deal, and more importantly, they had finally started selling AMD processors, which I can proudly say without qualification is the only CPU brand in any computer I've owned. I configured one with a dual core, 64-bit processor, and all sorts of new technologies I'd never heard of but were (and appear to still be) the latest and the greatest. ("What's SATA? We use PCI for video again?" I asked myself.)

Windows Vista had RTM'd and was weeks from retail availability, and my PC included a deal to upgrade once Microsoft let Dell send upgrade discs. My PC had a rather small (160 GB, I think) hard drive, which I intended to replace with a 500 GB or so once Vista came out, installing it there fresh instead of trying to upgrade Windows--Windows upgrades have never worked so well for me, whereas fresh installs are fine. Then I heard all the complaining about Vista, and decided to hold off. I ran low on space before Vista SP1 came out, so got that second hard drive anyway and kept my photos there. From then on, Windows XP Professional worked "well enough" so I stuck with it.

Things got bad a couple months before Windows 7 came out. First, Norton AntiVirus misbehaved. To be fair, the program was about 6-7 years old; I kept it around because it seemed to work well enough, I got it free as a student, and virus definition upgrades were free. Then I noticed the dates on the definitions went from a few days ago to the middle of 1999. The date still changed every week, and it still found upgrades, so I'm guessing it was just a bug in how it displayed the definitions date, but it still made me nervous, as did the prospect of uninstalling the old virus scanner and installing a new one.

Roxio was the next to act up. After Microsoft paved the way with Windows Update, suddenly every software manufacturer was convinced their product was just as important to check at least every week for updates, and the updates, just as urgent. Eventually I got Apple to quit bugging me to install Bonjour and Safari, but I couldn't get Roxio (or, perhaps more accurately, InstallShield Update Manager which came with Roxio) to quit prompting me to check for updates on the 0 products I had told it to check. I googled and finally found a tool I could use to uninstall that piece of it, without uninstalling Roxio, on InstallShield's support site.

That was a mistake. It stopped prompting me, but it added about three minutes between when Windows came up after I started my PC and when my computer was actually usable, and in the meantime, Norton was disabled, Windows Firewall was disabled, and programs wouldn't start. Add to this a nagging problem where my SD/CompactFlash card reader intermittently thinks it's USB 1.x, the ugly way Windows Search was grafted onto Windows XP, and the fact that XP (and earlier versions of Windows--not sure about 7 yet) just slows down after a couple years, and I knew it was time to upgrade once Windows 7 came out.

The more I learned about Windows 7 (and, to be fair, much of it was new in Vista and largely unchanged in 7, but I'd barely ever used Vista), the more I liked it. The way search worked much faster, more efficiently, and was integrated into everything, even the Start Menu (no more reorganizing each program's 20 or so icons so I could find the ones I actually wanted! no more sorting alphabetically every time I install a new program!)... An overhauled Windows Explorer including a new Libraries feature (not in Vista) that didn't force you to keep everything in your profile for Windows to like it... and finally getting to install 4 GB of memory and take advantage of my 64-bit processor!

After Microsoft decided one day the long-activated Windows XP installation on my laptop was no longer valid, with no explanation why, I wasn't going to chance them deciding the same thing on my main PC, so I broke down and got the full version of Windows 7. I opted for Home Premium after finding little difference between Home Premium and Professional that I cared about, since Home Premium should be able to run IIS, and if it can't, Visual Studio 2008's web server should be enough for what I need on this PC. I installed it not quite a month ago, replacing NAV and Ad-Aware with the new free and highly-rated Microsoft Security Essentials, and Roxio with--well, either what's built into Windows 7 or my favorite CD ripper Easy CD-DA Extractor, and it's great. I can work the way I want to, customize things as much as I need (you're close, iPhone, but not quite there), and boy is Aero pretty. I'm a sucker for eye candy (I do have an iPhone).

The search works great. I was leery about using Windows Search (installed against your will by Office 2007) or Google Desktop Search (must be unchecked in order to not install with every Adobe program and tons of others) add-ons for Windows XP; that sort of thing just seems like too core a functionality to get some freebie add-on to handle. Windows XP's built-in search might suck, but it usually found what I wanted, and didn't take too long. Sure enough, Windows 7 search works instantly, and if you copy over a ton of files it hasn't indexed, as I did when I wiped my hard drive to install 7 then copied back from my backup, it might not find everything right away, but it will after it spends a few minutes indexing them. And, as advertised, it doesn't slow me down while I'm using the PC; I haven't heard my hard drive crank once while I've been writing this long post. By default, it only indexes in your Libraries, and MS suggests anything you want indexed, you put in a Library. That put me off at first--what if I need to search for a system file or something?--but really, the vast majority of stuff I search for slots fine in either the Documents, Music, or Pictures Libraries. Moving from XP to 7 takes some adjustment, but I gave it a chance, and I'm quite happy with it. And if I do ever need to do a search for a certain DLL, it's easy enough to add folders to the list of what's indexed. I don't even have to dig into Administrative Tools and various Control Panel applets and System Tools folders, wondering where Microsoft has hidden that options screen in this version, thanks to the Start Menu search feature. I just click the Windows key, type "index", and it's the first option:

[Screenshot: the Start Menu search results for "index"]

Just like searching on the web, the content is what matters; its physical location is now much less important. You just type in a word or two and it figures out what you want, no matter where it is.

Another great and long-overdue improvement is the Windows Explorer dialogs for long-running file operations--deleting, copying, moving. It gives you more of the path, and best of all, an accurate estimate of how long it will take, and the rate at which it's copying files! No more operations where it takes 45 seconds for the first part and 23987105 minutes for the last part.

The most annoying thing I've found so far is based on principle, and not any difference I've observed. Occasionally, programs crash. Two that I use often, Mozilla Firefox and IrfanView, have each crashed once. (Amusingly, both when they tried to start up the Apple QuickTime plug-in; I have a few things to say about my iTunes migration experience, but that'll have to wait for another post.) But once is all it takes in Windows 7, before Program Compatibility Assistant (PCA) kicks in and applies some sort of mysterious sanctions to the offender.

[Screenshot: the Program Compatibility Assistant dialog]

Obviously I was curious about what those "settings" it applied were, and how to grant amnesty for first-time offenses. That link has a help article with those exact questions. And the answer? "It depends," and "go to TechNet and teach yourself about group policy," respectively. (In their words, "Adjustments to program compatibility features can be made by using Group Policy. For advanced information on how to use Group Policy, go to the Microsoft website for IT professionals.") Seriously? A little more googling and I found out that you can dig into it, or disable PCA altogether, using Group Policy Editor, which... doesn't come with Home Premium. So it sounds like my only choice is manually editing the registry. I can't even find out what PCA changed about how those programs run; a few articles allude to ominous performance degradations in order to ensure stability. Windows 7 addresses so many things that bugged me about XP and earlier versions; it's a shame they dropped the ball on this one. To be fair, it seems that this is how Vista functioned, and Windows 7 didn't make it worse, but didn't improve it, either.

But, to summarize, I'm thrilled with Windows 7, and with 64-bit computing, though I'm a little surprised more programs aren't 64-bit (Firefox and Flash Player, I'm looking at you). Oh well--we had the same problem switching from 16-bit to 32-bit, but I'm glad enough software and hardware is there, that I can upgrade and work just fine until the rest of it makes it.
