Archives / 2003 / November
  • XHTML and Accessibility in ASP.NET Whidbey

    Folks on my team set up a good set of reviews for me last Friday to go over our final plans on Accessibility and XHTML compliance for ASP.NET Whidbey.  I left the meetings feeling really good about where we'll end up when Whidbey ships.

    XHTML Compliance

    I've been pushing my team hard over the last few months to come up with a plan whereby we can definitively say that all output generated by the built-in ASP.NET Server Controls is XHTML (and HTML 4.01) compliant by default.

    Identifying all of the various XHTML rules (and the varying levels of support: XHTML 1.0, 1.1 and strict/transitional) and how these impacted our controls took some time.  Thankfully we had the help of Stephen Walther (www.aspworkshops.com), who did a great job working with us part-time to come up with a detailed list of issues with our current markup implementations, and to identify the changes necessary to support full compliance.

    At our meeting on Friday we signed off on making the appropriate code changes.  We spent some time debating the level of XHTML we'd end up supporting (I confess I went into the meeting ready to push only for XHTML 1.0 support).  In the end, though, we walked away with a plan that will enable us to output XHTML 1.1 by default.

    There are about 15 or so classes of changes we need to make throughout the framework.  The most interesting/unusual is the requirement to wrap our hidden viewstate tags within a hidden div -- since hidden inputs are apparently no longer allowed immediately under a <form> tag.  The good news is that we have time on our schedule to get all of these done in the next few weeks before our final feature milestone ends.
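
    To illustrate the kind of change involved (this is just a sketch of the idea, not necessarily the exact markup the shipping controls will emit), the hidden __VIEWSTATE field moves from being a direct child of the form element into a non-visual div:

    <!-- Before: hidden input directly under <form>, which XHTML 1.1 does not allow -->
    <form method="post" action="page.aspx">
        <input type="hidden" name="__VIEWSTATE" value="..." />
    </form>

    <!-- After: the hidden input is wrapped in a div, which is valid XHTML -->
    <form method="post" action="page.aspx">
        <div>
            <input type="hidden" name="__VIEWSTATE" value="..." />
        </div>
    </form>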

    We'll also have appropriate device specific adaptation in place for our server controls, so that if a page gets hit with an older non-XHTML friendly browser (for example: one that barfs on xhtml conventions) -- we'll be able to optionally render markup that works around its short-comings while still maintaining visual fidelity on the page (giving us a great compatibility story).

    To complement these runtime changes, we are also adding support inside Visual Studio to better validate XHTML markup for static elements.  Some of this work is already in the VS Whidbey Alpha (you can change the target schema to XHTML Transitional or Strict).  We'll then provide real-time IntelliSense (red squigglies and task list errors) every time you try to author non-XHTML compliant markup on the page (along with friendly error messages that help identify how to fix it).  Below is a screen-shot of what this looks like on a recent build:

    In the meeting Friday we also decided to provide new “Add Item” templates that enable developers to add pages to their applications with an XHTML namespace and DOCTYPE value set (we'll probably have this be a drop-down on the new item dialog to enable devs to easily pick between the different standards when adding either dynamic or static pages to their sites).
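
    For reference, here is roughly what a page created from an XHTML 1.1 template might start out as (the DOCTYPE and namespace are the standard W3C values; the rest of the skeleton is just my guess at the template contents):

    <%@ page language="C#" %>

    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN"
        "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
    <html xmlns="http://www.w3.org/1999/xhtml">
    <head>
        <title>Untitled Page</title>
    </head>
    <body>
        <form runat="server">
        </form>
    </body>
    </html>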

    Accessibility Compliance

    Accessibility compliance is another key feature (and now a requirement for all government work) that I've been pushing the team on to make sure we'll be able to say it “just works” out of the box.

    In Whidbey, all ASP.NET Server Controls will now generate Section 508 compliant markup by default -- no coding or config changes required to enable it.
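
    As a simple illustration of what that means in practice (this is my own example, not an excerpt from the compliance work), the Image control's AlternateText property renders as the alt attribute that Section 508 requires on images:

    <!-- Missing alternate text -- the kind of thing the 508 rules call out -->
    <asp:image id="Logo" runat="server" imageurl="logo.gif" />

    <!-- Compliant: AlternateText renders as the img tag's alt attribute -->
    <asp:image id="Logo" runat="server" imageurl="logo.gif"
               alternatetext="Company logo" />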

    Our controls will also support automatically emitting WCAG compliant markup.  The one issue with making WCAG output the default is WCAG's requirement that the site work without client scripting.  Some of our controls use client-side script for up-level browsers (while optionally emitting script-less downlevel versions for older browsers).  Rather than disable script support by default for all browsers, we'll have a WCAG option you can set at either the page or site level that will let users turn off uplevel client scripting for accessibility compliance reasons.

    To further help developers check compliance on a site, we have also built a compliance-checking tool into Visual Studio that can automatically validate both 508 and WCAG (level 1 and 2) compliance within a web site.  This can be run either manually, or configured to run automatically as part of the Solution Build and/or F5 process.  The accessibility checker will verify both static HTML markup, as well as attributes on ASP.NET Server Controls (for example: if you use an Image control but don't specify an “AlternateText” property).  Below is a screen-shot of it working in a recent build:

    Hopefully the net result of both the XHTML and Accessibility work will be a platform and tool that enables web developers to build accessible, standards-compliant web applications more easily than ever.

  • Performance Tuning and ASP.NET Whidbey

    We are now in the final mini-milestone of Whidbey feature development -- which is the time when we start to really push hard on performance, even if it means cutting features to hit the necessary performance goals (which is one of the hardest things to go through when shipping a product -- and thankfully one that we haven't had to do yet with Whidbey).

    We've had a dedicated performance developer and tester working on perf throughout the Whidbey product cycle, which has helped tremendously in getting to where we currently are.  Dedicated performance resources are a best practice I've always been a big proponent of, and the only surefire way I've found to ensure that performance always stays top-of-mind (otherwise new feature work always seems to creep in -- with performance too often getting pushed down the priority stack).  When we formed the ASP.NET team from scratch back in 1997/1998 we actually dedicated the second developer we hired on the team to working full-time on performance (no feature work whatsoever) -- and it paid huge dividends in terms of the final product we shipped.  Hopefully we'll see a repeat dividend pay off for Whidbey.

    We have about 100 micro-benchmark scenarios that we run on a weekly basis for ASP.NET, each on 1P, 2P and 4P boxes.  This gives us a good ability to track in real-time where the product currently stands, as well as to better pin-point checkins that cause regressions (we can basically isolate the checkin to a one-week window and investigate further from there).  To better prevent perf regressions we also have a few simple performance checkin suites that all developers need to run and pass before any checkin is allowed (we do a clever trick where we time and run through some scenarios in both classic ASP and ASP.NET, and fail the test if the multiplier difference between the two falls below a certain level -- which gives us a good way to measure regardless of the memory/processor speeds on the system).
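
    A rough sketch of that ratio check (illustrative pseudo-code only -- the helper method and threshold below are made up, and this is not our actual checkin suite):

    // Time the same scenario against a classic ASP page and an ASP.NET page.
    // "TimeRequests" is a hypothetical helper that issues N requests against a
    // URL and returns the total elapsed seconds.
    double aspTime  = TimeRequests("http://localhost/perf/scenario.asp", 1000);
    double aspxTime = TimeRequests("http://localhost/perf/scenario.aspx", 1000);

    // The absolute timings vary with processor and memory speed, but the ratio
    // between the two stays roughly constant on a given box -- so we fail the run
    // if ASP.NET's advantage over classic ASP drops below an expected multiplier.
    double multiplier = aspTime / aspxTime;
    double requiredMultiplier = 2.0;   // illustrative threshold only

    if (multiplier < requiredMultiplier)
        throw new Exception("Perf checkin test failed: ASP.NET/ASP ratio = " + multiplier);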

    Of the 100 or so benchmark suites that we measure, there are 4 that I really care about right now.  Basically they are:

    1) Hello World Page.  This is a simple page that literally runs a tiny amount of code that just spits out “Hello World” and has no server controls on it.  Not terribly real-world -- but it does do a good job measuring our core code-path execution length and timings.  It is also very sensitive to our http header and request/response buffer management (code that executes on every request regardless of scenario).  If it sucks perf-wise, nothing on top will look good.

    2) MSN Home Page.  This isn't technically the MSN Home Page today -- but rather one from about 6 years ago (when we first started getting serious about performance).  It does a pretty complex rendering of a page that has about 45k of HTML, and executes a lot of rendering code logic.  It doesn't hit a database (instead all the values are read from disk once and then cached in-memory), which gives us a good benchmark that enables us to evaluate just core ASP.NET and CLR execution performance.  It also doesn't use any ASP.NET Server Controls (it is a straight port of an old .asp version of the site -- and so uses <%= %> rendering-style logic).  But it does give us a good test that enables us to evaluate how ASP.NET handles rendering “real-world“ size pages (as opposed to tiny hello world apps), and specifically measure how our async i/o system performs under high throughput and output loads (our buffering code here has been carefully tuned with literally thousands of hours of hard work).  It also does a good job evaluating how the CLR is doing in terms of both JIT code performance, and specifically with GC (garbage collection) memory allocation and collection.  If it regresses, no real-world app on the server will do well (regardless of whether there are server controls or not).

    3) WebForm Common Page.  This is a page with about 150 server controls declared on it, and which measures both rendering and post-back performance costs.  It doesn't do any user logic (no database access, page_load, or event handler code), which enables it to serve as a great benchmark to measure and highlight the core ASP.NET control framework performance costs.  We measure both throughput on the page (requests/sec) as well as ViewState and overall rendering sizes in terms of bytes (example: we watch to make sure we don't blow out ViewState serialization size, which would increase the overall response payload size and increase browser latency).  We have individual performance tests that measure each individual ASP.NET control (to watch for regressions on each control), but this core WebForm Common Page test does a good job making sure we keep an eye on fixed page framework costs.  If it regresses, every page with a server control regresses.

    4) Data Page Scenario with a DataGrid and ADO.NET SqlDataReader Connection.  This measures a simple database query (grabbing 100 or so rows from a database using ADO.NET) and then binds the results to a grid to display the values (a rough sketch of this kind of page appears after this list).  Again, not a super-complex page -- but one that does a good job measuring our data controls framework, as well as the performance of ADO.NET in server scenarios.  It is heavily dependent on the control framework in terms of performance (if test #3 above sucks, this test will be bad too since it ends up generating hundreds of controls at runtime), as well as on raw ADO.NET (which the ADO.NET team perf tests separately).
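
    To make the shape of that last scenario concrete, here is a minimal page of the sort the benchmark exercises (my own sketch against a made-up connection string and table -- not the actual test page):

    <%@ page language="C#" %>
    <%@ import namespace="System.Data.SqlClient" %>

    <script runat="server">
        void Page_Load(object sender, EventArgs e) {
            // Illustrative connection string and query -- not the real benchmark's.
            SqlConnection connection = new SqlConnection("server=localhost;database=OrdersDb;trusted_connection=yes");
            connection.Open();

            SqlCommand command = new SqlCommand("SELECT TOP 100 * FROM Orders", connection);
            SqlDataReader reader = command.ExecuteReader();

            // Binding the reader to the grid generates a row of controls per record,
            // which is why this test leans so heavily on the control framework.
            DataGrid1.DataSource = reader;
            DataGrid1.DataBind();

            reader.Close();
            connection.Close();
        }
    </script>

    <html>
    <body>
        <form runat="server">
            <asp:datagrid id="DataGrid1" runat="server" />
        </form>
    </body>
    </html>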

    Where we are at with Whidbey

    Right now we are in decent shape with the above tests.  Adam Smith did an absolutely awesome job tuning our core execution request code earlier this milestone, which has enabled us to see improvements in ASP.NET Whidbey performance of approximately 20-25% over ASP.NET Everett for test scenarios #1 and #2 above (the Hello World and MSN application scenarios). 

    Scenario #3 above still needs more work, although we've made a lot of progress since the Alpha (where it was way, way, way off from our previous Everett release).  The challenge is that we have lots of new features and code in the Whidbey page framework, which necessitates multiple tuning passes in order to bring performance to par or better levels.  Our general model is to spend a few days profiling the heck out of a scenario, open 20-25 performance bugs to fix each of the identified issues, assign them out, and then drive folks to check in fixes.  We then repeat the profiling process, find and fix the next set of optimizations, and then repeat again.  Over time the “performance onion“ slowly gets peeled back more and more, and additional optimization opportunities reveal themselves.

    Right now we are still about 15% off from where we want to be when we ship on test #3, but have identified some significant optimizations that we think will get us close.  The biggest optimization remaining will be to take a hard look at our memory allocation within the page framework.  Our new Whidbey features have added several extra fields to the core System.Web.UI.Control base class, which currently end up generating an additional 36 bytes per control per page request.  This doesn't sound like much at first glance, but adds up quickly when you realize that every control in the system has this extra 36 byte memory allocation added onto it (regardless of whether they need it or not -- for example: the literal control for static content), and that you can have hundreds of controls instantiated on each page.  When doing a few hundred requests a second, you can end up allocating several megabytes of additional memory per second on the system.  This ends up increasing GC pressure on the box, which can significantly impact server throughput.
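
    To put rough, purely illustrative numbers on it: 36 extra bytes times 200 controls is about 7 KB of additional allocation per request, and at 400 requests per second that works out to nearly 3 MB of short-lived garbage per second that the GC has to clean up.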

    The fix we are working on now will be to defer memory allocation as much as possible (only doing it when necessary), and to be more creative in how we store fields (example: don't put them on the core control class -- instead moving them one level deeper within internal classes that the control class then generates on an as-needed basis).  We think that once these changes are made, test #3 will improve dramatically -- and hopefully get us either close or above the bar we've set to ship.
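
    The general pattern (a simplified illustration of the approach -- the class and field names below are made up, not the actual System.Web.UI.Control source) is to gather the rarely-used fields into a helper class that is only allocated the first time one of them is actually set:

    public class Control {
        // Rarely-used state lives in a separate class instead of as fields on every control...
        private sealed class ExtraFields {
            public string SkinId;
            public object ControlState;
        }

        private ExtraFields _extraFields;

        // ...so the additional bytes are only paid for by controls that actually need them.
        private ExtraFields EnsureExtraFields() {
            if (_extraFields == null)
                _extraFields = new ExtraFields();

            return _extraFields;
        }

        public string SkinId {
            get { return (_extraFields == null) ? null : _extraFields.SkinId; }
            set { EnsureExtraFields().SkinId = value; }
        }
    }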

    Scenario #4 is gated right now to some extent on the core control framework test covered by scenario #3.  My hope is that once we get #3 to hit our goal, we'll see #4 come fairly close to hitting its goal too (assuming ADO.NET hits its performance goals).  This should set us up well for the next round of optimizations, as we broaden our focus to look at more tests -- and start benchmarking existing real customer applications on V1.1 (Everett) against V2.0 (Whidbey). 

    Assuming we get the above 4 tests in good shape for the beta (either at goal or within ~5% of them), I'll feel fairly comfortable about where we ultimately end up on performance with Whidbey.  There will still be a lot of work and investigations left to do, but we'll have our core performance work foundations in place, and will be able to finish tuning the product on a steady glide-path to final RTM.

  • ASP.NET Cross Page Postback to Different Applications Now Implemented

    Paul raised some concerns (http://weblogs.asp.net/pwilson/posts/34597.aspx) about the inability to use the new ASP.NET Whidbey cross-page postback functionality with pages that live on other servers and in other applications.  The good news is that it will work in the beta thanks to Ting's checkin yesterday.

    Below is his checkin mail from yesterday:


    From: WEBNETBUDDY [mailto:WEBNETBUDDY]
    Sent: Fri 11/14/2003 12:49 AM
    To: Ting-Hao Yang;
    Subject: FX Checkin (aspnet) by timmcb, tinghaoy [671350-671674]

    FX Checkins 671350 to 671674

    safesync_aspnet: 671674, updated at Fri Nov 14 00:49:46 2003
    Buddy Builder: tinghaoy
    Contributors: timmcb, tinghaoy

    Changes:

  • Legomation and Stop Motion Videos built with Web Cams

    I read an article in the November edition of Wired magazine on how you can build stop-motion animation films using Legos and fairly cheap Web Cams (many of which -- including the Logitech Quickcam -- come with built-in stop motion animation software). 

    The idea with stop-motion animation is that you move the objects on your set a fraction, take a picture, move them a fraction more, take another picture, and so on (the goal being to have 15 frames or pictures per second to simulate smooth movement).

    There is now a whole community built up around doing these types of movies with Legos.  http://www.brickfilms.com/ is a great resource to learn more on how it is done (there are several great articles about the techniques, software to use, etc).  There are also a number of cool (and some downright weird) movies that people have posted.

    I bought a cheap webcam and spent an hour or two over the weekend animating a Duke Blue Devil mascot (complete with soundtrack).  I have to admit it was pretty fun.  Here is my Oscar winning contender: http://www.scottgu.com/movies/bluewithmusic.zip

  • My PDC Keynote Demo

    I was forwarded a few pictures from my PDC keynote demo today.  Keynote demos are always “fun” events, and are great at creating a lot of stress in a compressed period of time.

    This year I did an ASP.NET and Visual Studio Whidbey demo, where I built up a customer-facing insurance website on stage.  Some of the new features I showed off included Master Pages, the Membership System and Login Controls, building a user control that data-bound to a web service (using the authenticated ID of the user) and displayed the results in the new GridView control -- which automatically added sorting and paging support -- and then lastly using the new Web Part server controls in Whidbey and page personalization to add end-user customization (enabling the user to drag/drop the controls on the page -- and then automatically save the changes to the new ASP.NET personalization store).  What was nice was that I was able to build all of the code from scratch in about 12 minutes.  The audience seemed pretty impressed (the drag/drop stuff at the end always takes people's breath away).

    In general, I tend to be fairly comfortable with speaking to large crowds.  I typically don't practice my talks at all for crowds of less than about 1500 people, and don't get butterflies for audiences of less than 3000.

    Keynotes like the PDC, though, are stressful.  Partly because of the audience size (the total number of people seated in a general session keynote is between 7,000 and 8,000), but also because of the overall logistics and mass of people involved (and the fact that the press corps occupies the first few rows, salivating over a slip-up or crashed demo).

    Work on this keynote started about a week before the event, and we ended up building my demo over a few days.  Usually the hardest thing about constructing the demo is making it look pretty (thankfully the product makes the coding part easy!).  In the past I've hired professional designers to help with the graphic look and feel.  This time we actually did something new -- which was to purchase an existing site UI from a cool company: http://www.templatemonster.com.  They charge about $50 for a good-looking site design, and ship all of the Photoshop image files, CSS and HTML needed to let you customize it (which is cheap considering how much you'd pay for a professional designer).

    This got us started on the initial look and feel -- Jeff King from my team then did some awesome work in a short few days to modify the design for our specific scenario (“Woodgrove Insurance“), and built the ASP.NET Master Page that I used in the demo to base all of my pages on.

    We then built up a laptop demo machine with the bits, as well as a backup demo laptop (more on this later).  Some general demo machine tips/tricks I've learned over the years:

    1) Make sure the demo machines are laptops.  You'll want to practice, practice, practice before you get there, which is much easier than with a desktop.  You can also hand-carry them down to the conference.  Don't ever trust anyone else to transport your demo machine for you -- carry it yourself personally to make sure it gets there.  Side-note: it is always fun watching the security people's faces at SeaTac when I pull out 5 laptops to go through the X-Ray machine.

    2) Have both the main and backup machine imaged exactly the same.  Get decent hardware (this is not the time to be cheap).

    3) Have a batch file called “reset.bat” that you put on the desktop (of both the primary and backup machines).  Simply clicking it should reset all files on disk to start the demo from the beginning (no thinking required).  Make it idiot-proof so you know you are in a good working start state.  You don't want to worry 5 minutes before you go on about whether everything is set up OK.  Just click the batch file, watch it say “everything worked”, take a deep breath, and then walk onstage.

    4) Disconnect the laptops from a domain.  Windows sometimes gets funny about re-authenticating SSPI credentials when offline (example: database connections might fail).  Ensure the laptops are totally self-sufficient for credentials.

    5) Disable instant messenger.  It sucks to have someone IM you with a “hey butthead!“ message in the middle of your demo onstage in front of lots of people (yes -- my sister did this to me once by accident.  thankfully it elicited a good laugh).

    6) Turn off wireless access.  Eliminate those annoying wireless up/down messages on the task bar.  Eliminate randomness with how you are connecting to things.

    7) If possible, don't plug into the network -- and design your demo so that it doesn't need to connect to remote machines.  Network access is always highly random in a conference -- call everything locally to avoid having to worry about a router blowing up, DHCP overloaded, proxy machines down, etc, etc.

    To go along with the backup demo machine, you also always want to have a “demo buddy“ that you are willing to absolutely trust with your life.  They have three jobs:

    1) Double-check the demo-machines dozens of times (especially right before you go on stage and are trying to relax and stay calm), and make sure absolutely everything works.

    2) Guard the hardware during the hour before you go on stage and all the time you are on it.  Specifically they need to watch out for someone mistakenly confusing the machine and walking away with it.  At PDC 1996 someone unplugged and started walking away with a server while the demo-er was using the machine onstage (forcing the speaker to riff while his buddies backstage tackled the guy, grabbed the server back, and furiously tried to boot it up again).  A good demo-buddy will stop the guy before he unplugs the machine (another good reason to use a laptop: they will run for at least an hour on battery if someone mistakenly pulls out the power cord).

    3) Follow along with the demo on the backup machine while the main demoer is on stage.  That way if there is a catastrophic failure on the main demo machine (example: hardware crash, blue screen, etc), the speaker can calmly (or not...) switch to the backup machine (there is a button onstage for them to control this) and pick up the demo where they left off -- the demo buddy having previously followed the demo along and got the backup machine in the same location in the script.  Although by no means seamless, this at least allows the demoer to finish the scenario onstage and limp off without looking like a complete goat.

    If you have a good demo scenario and a good product, you are 80% of the way there.  All that is left is making sure you know what you are going to say and can smoothly flow through it.  Generally you want to practice this a lot.  One of the things I usually do on the flight down to the event is spend an hour or so on the plane explicitly writing out a detailed outline and/or script for the talk.  That way I can avoid awkward pauses and “umms“ as I talk, and can make sure I keep a fairly smooth flow.

    The most important thing, I find, is the first 30 seconds of the talk, when you come out from behind the stage, are hit with tons of strong lights, spy a close-up of your face on a 60-foot-tall screen projected behind you (which is a weird “out of body“ experience), and you can't see the back of the room -- just hundreds of rows of chairs fading into the distance.  Having your opening 3 sentences pre-canned and known cold helps a lot at this point.  I find that once you get beyond this and into the flow of the talk, the rest comes naturally.

    One trick I do in big keynote demos is to make sure I have this opening script printed out and on paper with me for immediately before I go onstage (while the stage director is flashing me the 1 minute hand signal).  That way, if I feel myself blanking out I can quickly refresh those opening 3 sentences, feel confident, put down the script, and confidently walk out on stage with a smile (note: always leave the script behind -- don't ever bring it with you... ).

    Thankfully I flew down Saturday afternoon and avoided the travel fun that most people who came in (or were supposed to) on Sunday ran into.  On Sunday, I checked in to the keynote backstage area -- which is somewhat akin to mission control at NASA. 

    It is set up backstage behind the big stage screens, and houses about 30 full-time stage crew members, lots of marketing folks working on slides and final messaging, a makeup artist, various technical support staff, and misc execs wandering about watching everything go.

    There is a public area backstage where everyone with a special backstage pass can go, as well as the official “green room“ tent that has additional security outside and is limited to just speakers and demoers (and the makeup crew).

    The stage and technical crews handling the keynotes are always top-notch and total professionals.  They do a great job setting you up and getting everything ready.  The demo machines are plugged into a sybex (sp?) system backstage so that they can be projected anywhere.  There are then rows of demo dry-run stations where demoers (and their demo-buddies) backstage can check the machines (running through the sybex system) and test and run through everything before going on stage. 

    On Monday night we did a formal dry-run of EricR's keynote to make sure timings were right and the flow of the talk worked.  Thankfully Eric didn't want any changes to my demo so I got off early (tip: when the senior exec says no changes are required, get the hell out of there before they have a chance to change their mind).

    One thing I always try to do before a big talk is to make sure I get a good night's sleep beforehand.  It is much better to be rested and sharp than tired but more prepared (I've learned this the hard way).  If you aren't ready the night before the talk, frankly an extra hour of practicing won't really help.

    On Tuesday (the day of the keynote) I arrived backstage about 90 minutes early -- just in time to see the last minute changes to the slide deck (it is always good to double check what has been changed so that you at least know the 3 slides before you go on stage).  We did a last minute tech check (this is where the “reset.bat” file is a lifesaver -- and lets you leave the machines feeling totally calm).  They then shipped me off to makeup and audio.

    Having makeup put on you always feels a little weird (at least for me), although like everything the person doing it was a total pro.  She said she had worked on Ozzy Osbourne on a previous job, although as I sat there in the chair pondering that statement I wasn't sure whether to feel reassured or worried by it (I ended up not looking in the mirror and just trusting her not to add any dripping blood to my face).

    About 15 minutes before my segment, a stage handler arrived to take me from the green room to some holding chairs right offstage.  Everything is dark offstage (which makes navigating through the maze tricky).  Thankfully there are white strips of tape on the ground marking off safe paths, with giant tape arrows directing traffic to help you find your way.

    As you wait in the holding chairs, you collect yourself and try to get pumped up (while remaining calm).  AriB, who did the demo right before me, was doing Rocky-style air punches to get himself psyched up.  I ended up doing a few arm stretches, and walked back and forth a few times, to get the blood and energy flowing.

    In no time at all the stage director was flashing the 1 minute sign, I did a last minute review of those first three sentences, took a final swig of water, and then walked calmly on as I heard Eric say “....and I'd like to invite Scott Guthrie....”

    P.S. I'm the guy in the blue shirt above.

  • Fun ASP.NET Whidbey Tip/Trick: DataBinding to Generics

    One of the great new features in Whidbey is "Generics" -- which basically provide a mechanism that enables developers to build classes whose signatures and internal datatypes can be templatized.

    For example, rather than use an ArrayList (which is a collection of type Object), or force developers to create their own strongly typed list collection class (e.g., an OrderCollection class), developers using Whidbey can use the new List class implemented within the System.Collections.Generic namespace, and explicitly specify the type of the collection when using or referencing it.

    For example:

    // Use the built-in "List" collection within the System.Collections.Generic namespace
    // to create a collection of type "Order"

    List<Order> orders = new List<Order>();

    // Add Order objects into the list

    orders.Add(new Order(123, "Dell"));
    orders.Add(new Order(345, "Toshiba"));
    orders.Add(new Order(567, "Compaq"));

    // Lookup the "OrderId" of the first item in the list -- note that there is no cast below,
    // because the collection items are each an "Order" object (as opposed to "Object",
    // which they would be with an ArrayList)

    int orderId = orders[0].OrderId;

    // The below statement will generate a compile error, but would have
    // compiled (but generated a runtime exception) if the collection was
    // an ArrayList

    orders.Add("This will not work because it isn't an order object");


    --------------------------------------------------

    Below is a more complete sample of how to use Generics with the new ASP.NET ObjectDataSource control, and then bind the list to a GridView control.

    First is the "OrderSystem.cs" file which should be saved within the "Code" directory immediately underneath the application vroot:

    // OrderSystem.cs: Save within the "code" directory

    using System;
    using System.Collections.Generic;

    public class Order {
        private int _orderId;
        private string _productName;

        public Order(int orderId, string productName) {
            _orderId = orderId;
            _productName = productName;
        }

        public string ProductName { get { return _productName; } }
        public int OrderId { get { return _orderId; } }
    }

    public class OrderSystem {
        public List<Order> GetOrders() {
            List<Order> orders = new List<Order>();

            orders.Add(new Order(123, "Dell"));
            orders.Add(new Order(345, "Toshiba"));
            orders.Add(new Order(567, "Compaq"));

            return orders;
        }
    }

    I can then write a simple .aspx page that uses the ObjectDataSource control to bind against the "GetOrders" method and retrieve a List of Order objects, and then point the GridView at the ObjectDataSource control:

    <%@ page language="C#" %>

    <html>
    <body>
        <form runat="server">

            <asp:gridview id="GridView1" runat="server" datasourceid="ObjectDataSource1"
                          bordercolor="#CC9966" borderstyle="None" borderwidth="1px"
                          backcolor="White" cellpadding="4">
                <headerstyle forecolor="#FFFFCC" backcolor="#990000"
                             font-italic="False" font-bold="True" />
            </asp:gridview>

            <asp:objectdatasource id="ObjectDataSource1" runat="server"
                                  typename="OrderSystem" selectmethod="GetOrders" />

        </form>
    </body>
    </html>

  • The PDC was a Blast...

    I just returned to Seattle from the PDC in LA (I ended up staying an extra day to present to the local .NET User Group on Whidbey Thursday night).

    The conference was a stunning success and a lot of fun.  The ASP.NET Whidbey sessions had great attendance (I had overflow in my overview talk on Monday, even though the room had 2,400 seats).  The evals from the 20 ASP.NET Whidbey talks have been outstanding -- people clearly loved the content.

    The best part of course was talking with developers from all over the world.  The ask the experts session on Tuesday night was my favorite.  I brought along my laptop and spent about 5 hours coding up answers to questions from folks as we all sat around a small table and talked about the various projects/ideas we were working on.  Totally fun -- you can never do activities like that enough. 

    It is informal jam sessions like those, and seeing the excitement in the eyes of people as you use the new product in front of them, that make spending thousands and thousands of hours working on a long multi-year product cycle worthwhile.  I left LA totally amped and energized to finish Whidbey and ship this baby out the door.