February 2004 - Posts
When Microsoft released the Enterprise Instrumentation Framework last year, I investigated it as a replacement for the Paul Bunyan logger we are currently using under our ASP.NET application. I liked EIF a lot, but we weren't able to incorporate it at the time for a variety of reasons. One of them was the lack of a decent viewing tool. There is a simple little "Trace Viewer" sample app that comes with EIF, but when you hold it up against the Paul Bunyan Message Viewer that our operations people use today, it's just a toy. The Paul Bunyan viewer can view messages across many machines with very powerful filtering (the filtering interface is horrible, but I've watched people familiar with it make it work).
I expected that as EIF was more widely adopted, we'd see some advanced viewing tools get written that could be handed to our operations folks, who are partial to Paul Bunyan, without having to apologize. Maybe I'm not looking in the right place, but I haven't seen anything yet. Once there was a better tool, I figured, I'd bring up EIF again.
Then...at the PDC, I attended the tail end of a session on ASP.NET diagnostics in Whidbey, which confused me a little more. It was WSV351, “ASP.NET: Troubleshooting, Auditing and Tracing ASP.NET "Whidbey" Applications on IIS,” presented by Erik Olson, and here is the PowerPoint:
I went into this session fully expecting to see or hear about EIF, but unless I missed it, the presenter never mentioned it once. He talked about the Windows Tracing technology that underlies EIF and showed a command-line tool for viewing the logs (yeah, the ops folks will love that), but he promoted System.Diagnostics.Trace as the one and only way to log messages. I'm very curious about the future of EIF; it seems as though bits of EIF are moving down into the diagnostics system of Whidbey.
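For comparison, the path promoted in the session is plain System.Diagnostics tracing. Here's a minimal sketch of that style; the switch name "MyApp" and the console listener are my own illustration, not anything from the session:

```csharp
using System;
using System.Diagnostics;

class TraceExample
{
    // the switch's level is controlled from config, not code,
    // which is the closest Trace gets to EIF-style reconfiguration
    private static TraceSwitch traceSwitch =
        new TraceSwitch("MyApp", "Example trace switch");

    static void Main()
    {
        // send trace output somewhere visible for this sketch;
        // a real app would configure listeners in its .config file
        Trace.Listeners.Add(new TextWriterTraceListener(Console.Out));

        Trace.WriteLine("Always written when TRACE is defined");
        Trace.WriteLineIf(traceSwitch.TraceVerbose, "Only written at verbose level");
    }
}
```

It works, but compared to EIF there is no event-source granularity and no request correlation, which is why the session left me wondering where EIF fits.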
Here is an example (from the EIF docs) of how EIF is used, notice the static definition of an EventSource. All events are raised from this object.
private static EventSource myEventSource = new EventSource("Explicit Trace");

static void Main(string[] args)
{
    // costly explicit event: create an instance, populate it, then raise it
    TraceMessageEvent e1 = new TraceMessageEvent();
    e1.Message = "Costly Hello";
    myEventSource.Raise(e1);

    // slightly less resource-intensive explicit event
    TraceMessageEvent e2 = new TraceMessageEvent();
    e2.Message = "Less Costly Hello";
    myEventSource.Raise(e2);

    // static one-liner (which wraps the above code sequence)
    TraceMessageEvent.Raise(myEventSource, "Static One Liner Hello");
}
What I like about EIF:
- In flight reconfiguration: The ability to reconfigure logging levels without having to restart the web server.
- Fine-grained control: You are able to easily turn on just the messages from a single event source.
- Fast (?): Windows Tracing runs in kernel mode, so it's bound to be fast (although I never profiled it).
- IsEnabledForType: The ability to skip costly logging messages with the IsEnabledForType() call (This avoids unnecessary string concatenation or String.Format() calls to build strings that aren't going to be logged anyway.)
- Request tracing: You can correlate a single request across tiers.
- Custom Sinks: The ability to write custom sinks (I wrote a Paul Bunyan one for backward compatibility)
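The IsEnabledForType guard mentioned above looks roughly like this; this is a sketch from memory of the EIF API, so the exact signature may differ:

```csharp
// guard against building an expensive message nobody will see:
// IsEnabledForType asks whether any sink is listening for this event type
if (myEventSource.IsEnabledForType(typeof(TraceMessageEvent)))
{
    // the String.Format only runs when the message will actually be logged
    TraceMessageEvent.Raise(myEventSource,
        String.Format("Processed {0} orders in {1} ms", orderCount, elapsedMs));
}
```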
What I didn't like about EIF:
- No decent viewing tool.
- The static EventSource usage pattern made writing a drop-in facade difficult. We currently have a static diagnostics wrapper that follows the System.Diagnostics.Trace.Write pattern, where no source class has to be defined in each class; you just call MyTraceFacade.Write("My message"). If I'm thinking about it correctly, you almost need to put a static EventSource in every class that needs to send messages, or else you'd have to keep a hashtable of EventSources behind the facade and pass an EventSource name with every Write().
- I also read that many people had deployment problems, although we didn't experience this first-hand because we never got that far.
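To make the hashtable-of-EventSources idea concrete, the facade might look something like the sketch below. This is my own speculation, not anything from the EIF docs; the class and method names are invented:

```csharp
using System.Collections;
using Microsoft.EnterpriseInstrumentation;

public sealed class MyTraceFacade
{
    // cache one EventSource per name so callers never have to
    // declare a static EventSource in their own class
    private static Hashtable sources = Hashtable.Synchronized(new Hashtable());

    public static void Write(string sourceName, string message)
    {
        EventSource source = (EventSource)sources[sourceName];
        if (source == null)
        {
            // benign race: two threads may briefly create duplicate
            // EventSources for the same name; the last one wins
            source = new EventSource(sourceName);
            sources[sourceName] = source;
        }
        TraceMessageEvent.Raise(source, message);
    }
}
```

The annoyance is exactly what the bullet above describes: every call site now has to carry a source name, which the System.Diagnostics.Trace.Write pattern never required.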
I'm interested in hearing what type of experiences others have had using EIF.
We are in the process of developing a common role-based authorization layer for several of our applications and stumbled across Microsoft's Authorization Manager. I had originally passed over it because I thought it was tied to Windows Server 2003, but it turns out that it runs on Windows 2000, which we are using in production, and on Windows XP, which developers use. On XP, just install the Windows Server 2003 Administration Tools Pack and you'll get it.
The idea seems great. It allows you to define logical operations like "CanApproveExpenseReports", map those to logical roles in your organization like "Manager", and then assign users to the roles. (The model is extremely flexible...I recommend Dave McPherson's article.) There is a COM runtime and a .NET interop assembly for use from .NET, which allows your application to quickly check whether the current user is allowed to perform some operation. The following is not the actual API; I just wanted to give you a feel for how you'd use it in an app:
if (azMan.CheckAccess(user, "CanApproveExpenseReport"))
{
    // Approve-expense-report code goes here
}
It also provides an MMC snap-in tool to manage everything. If it works out, it promises to save us a ton of time.
I see the Patterns and Practices Authorization and Profile Application Block has a provider that can use it, and there is a RoleManager Provider in Whidbey that uses a part of it too. So Microsoft seems to think it fits the problem.
Actually, while I'm on the subject of Whidbey, I was surprised that it doesn't seem to provide a general-purpose authorization mechanism other than IsInRole. Roles are good for some things, but they are too coarse-grained for deciding whether a button is visible or not, or whether a user can delete items from a table. There can potentially be hundreds of these fine-grained secured operations scattered throughout your application, and hard-coding role names (if (User.IsInRole(“Manager“)...) throughout your app is not an acceptable solution in my opinion, because it limits the ability to redefine the permissions of each role later on without touching code.
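One way to keep role names out of application code is to check named operations and let a data store decide which roles grant which operations. A minimal sketch (the class, the operation-to-roles map, and all names here are hypothetical; in a real app the map would come from AzMan or a config store rather than being hard-coded):

```csharp
using System.Collections;
using System.Security.Principal;

public sealed class OperationAuthorizer
{
    // operation name -> roles that may perform it; loading this from
    // an external store is what lets roles be redefined without code changes
    private static Hashtable operationToRoles = new Hashtable();

    static OperationAuthorizer()
    {
        operationToRoles["CanApproveExpenseReport"] = new string[] { "Manager" };
    }

    public static bool CheckAccess(IPrincipal user, string operation)
    {
        string[] roles = (string[])operationToRoles[operation];
        if (roles == null)
            return false;    // unknown operations are denied by default

        foreach (string role in roles)
        {
            if (user.IsInRole(role))
                return true;
        }
        return false;
    }
}
```

Application code then asks CheckAccess(user, "CanApproveExpenseReport") and never mentions "Manager", which is essentially the shape AzMan gives you out of the box.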
Authorization Manager (or AzMan for short) seems to fit the bill perfectly. However I hesitate to jump in too quickly because I don't see very much buzz about this technology on the web. I'd like to read other people's experience with it. Has anyone tried to use it in a .NET Application?
We finally have a first cut at continuous integration builds going for a large ASP.NET application. We are using CruiseControl.NET and NAnt with great success. I wanted to comment on an unexpected psychological effect I experienced: continuous integration builds are addictive. OK, I'm not even currently on the development team for this project, just helping them out with their builds, but I still feel the need to check the status of the builds throughout the day. I feel stressed when the build is broken, and relaxed when it is fixed.
Checking the status is made easier by CruiseControl.NET's nifty little notification tray applet that indicates build status by showing green or red. One of the developers on the project is going to buy an Ambient Orb and set it up above his cube to glow green or red based on the build status. CruiseControl.NET also provides a web app that shows you the build results and a log of all check-ins since the last build (Here's an example from CruiseControl.NET's own live build server).
In the past, we only did a full build nightly at 11:00 PM. I'm telling you, it is _so_ much better to know the build is broken in “nearly real time” than to wait until the next morning, when you come in to find an e-mail from the build machine telling you that something you checked in (or forgot to check in) yesterday broke the nightly build.
At first, I wasn't so sure that CI was a good fit for our project because it's fairly large (about 30 solutions) and can take so long to build (20 minutes or more on a dual-proc build box). Plus, we are using Visual SourceSafe, which doesn't have atomic check-ins and can therefore lead to premature build triggering. (By the way, CruiseControl.NET has an elegant low-tech way to reduce the chance of this with its <modificationDelaySeconds> setting.)
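In the project configuration it's just one element. This is my recollection of the schema, so check the CruiseControl.NET docs for your version; the project name and delay value are placeholders:

```xml
<cruisecontrol>
  <project name="MyApp">
    <!-- wait until the repository has been quiet this long before building,
         so a multi-file VSS check-in isn't caught half-committed -->
    <modificationDelaySeconds>10</modificationDelaySeconds>
    <sourcecontrol type="vss">
      <!-- VSS connection details go here -->
    </sourcecontrol>
  </project>
</cruisecontrol>
```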
Think about continuous integration builds if you aren't doing them. I'm sure lots has been written on why they work, but I can honestly say that I didn't fully “get” it until we actually started doing it. Had I known, I would have gotten them going a long time ago.