Working as a contractor opened my eyes to the developer food chain.  Even though I had similar experiences earlier in my career, the challenges seemed much more vivid this time through.  I thought I’d share a couple of experiences with you, and the lessons that can be taken from them.

Lesson 1: Beware of the “funnel” guy.  The funnel guy is the one who wants you to funnel all thoughts, ideas and code changes through him.  He may say it’s because he wants to avoid conflicts in source control, but the real reason is likely that he wants to hide your contributions.  Here’s an example.  When I finally got access to the code on one of my projects, I was told by the developer that I had to funnel all of my changes through him.  There were 4 of us coding on the project, but only 2 of us working on the UI.  The other 2 were working on a separate application that was part of the overall project.  So I figured, I’ll check my changes into SVN, he’ll review and accept them, then merge them in.  Not even close.  I didn’t even have checkin rights to SVN; I had to email my changes to the developer so he could check them in. 

Lesson 2: If you point out flaws in code to someone supposedly ‘higher’ than you in the developer chain, they’re going to get defensive.  My first task on this project was to review the code and familiarize myself with it.  So of course, that’s what I did.  And in familiarizing myself with it, I saw so many bad practices and code smells that I immediately started coming up with solutions to fix them.  Of course, when I reviewed these changes with the developer (the guy who originally wrote the code), he smiled and nodded and said we couldn’t make those changes now; it was too destabilizing.  I recommended we create a new branch and start working on refactoring, but branching was a new concept for this guy and he was worried we would somehow break SVN.

How about some concrete examples?

I started out by recommending we remove the NUnit dependency and tests from the application project, and create a separate unit testing project.  This was met with a little bit of resistance – “How do I access the private methods?”  As it turned out there weren’t really any private methods that weren’t exposed by public methods, so I quickly calmed this fear.
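For anyone who hits the same objection: the public surface is usually all a test needs, since the private helpers get exercised right along with it.  Here’s a minimal sketch of how the split looks (the OrderCalculator class and its members are hypothetical, just for illustration):

```csharp
using NUnit.Framework;

// Lives in the application assembly, which no longer references NUnit.
public class OrderCalculator
{
    public decimal GetTotal(decimal subtotal, decimal taxRate)
        => RoundToCents(subtotal * (1 + taxRate));

    // Private helper: never called directly by a test,
    // but fully covered through the public GetTotal.
    private decimal RoundToCents(decimal value)
        => decimal.Round(value, 2);
}

// Lives in the separate unit testing project.
[TestFixture]
public class OrderCalculatorTests
{
    [Test]
    public void Total_Includes_Tax()
    {
        var calc = new OrderCalculator();
        Assert.AreEqual(107.00m, calc.GetTotal(100.00m, 0.07m));
    }
}
```

And if a genuinely internal member ever does need direct testing, [assembly: InternalsVisibleTo("MyApp.Tests")] in the application assembly covers that case without making anything public.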

Win 1
Loss 0

Next, I recommended that all of the File IO access be wrapped in using blocks, or at least properly wrapped in try/catch/finally.  This recommendation was accepted... but never implemented.
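The recommendation itself takes only a couple of lines to illustrate.  A minimal sketch (the file path is hypothetical):

```csharp
using System.IO;

class SettingsLoader
{
    static string ReadSettings()
    {
        // The using block guarantees Dispose() runs even if ReadToEnd throws,
        // so the file handle is released on every code path --
        // it compiles down to the same try/finally you'd write by hand.
        using (var reader = new StreamReader(@"C:\app\settings.txt"))
        {
            return reader.ReadToEnd();
        }
    }
}
```

The original code opened streams at the top of a method and closed them at the bottom; one exception in between and the handle leaked until the garbage collector eventually finalized it.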

Win 2 
Loss 1

The next recommendation was to refactor the command pattern implementation.  The command pattern was implemented, but it wasn’t really necessary for the application.  Moreover, the fact that we had 100 different command classes, each with its own specific command parameters class, made maintenance a huge hassle.  The same code repeated over and over and over.  This recommendation was declined; the code was too fragile and this change would destabilize it.  I couldn’t disagree, though in many cases it was the commands themselves that were fragile.
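I don’t have the original code to show, but the shape of the consolidation I was proposing looks roughly like this: one generic command parameterized by its payload, instead of a hand-written class (plus a parameters class) per command.  All names here are hypothetical:

```csharp
using System;

// One reusable command type replaces the class-per-command explosion.
public class RelayCommand<TParam>
{
    private readonly Action<TParam> _execute;

    public RelayCommand(Action<TParam> execute)
    {
        _execute = execute;
    }

    public void Execute(TParam parameter) => _execute(parameter);
}

class Demo
{
    static void Main()
    {
        // Each former command class collapses into a one-liner.
        var saveCommand = new RelayCommand<string>(
            path => Console.WriteLine("Saving to " + path));
        var deleteCommand = new RelayCommand<int>(
            id => Console.WriteLine("Deleting record " + id));

        saveCommand.Execute(@"C:\temp\out.dat");
        deleteCommand.Execute(42);
    }
}
```

The 100 classes of boilerplate reduce to 100 lambdas, and the parameters classes disappear wherever a single value (or a small set of values) will do.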

Win 2
Loss 2

The next recommendation was to aid the performance (and responsiveness) of the application by using asynchronous service calls.  This one was accepted.

Win 2
Loss 3

If you’re paying any attention, you’re wondering why the async service calls were scored as a loss... Let me explain.  The service call was made using the async pattern.  Followed by a Thread.Sleep.  <facepalm>
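To paraphrase what the code did (the service proxy below is a hypothetical stand-in for the generated one): the call was started asynchronously, and then the thread was blocked anyway, which combines the complexity of async with the responsiveness of a synchronous call:

```csharp
using System;
using System.Threading;

// Hypothetical generated service proxy, stubbed just enough to compile.
class MyServiceClient
{
    public event EventHandler<EventArgs> GetDataCompleted;

    public void GetDataAsync()
    {
        // A real proxy raises GetDataCompleted when the response arrives.
        ThreadPool.QueueUserWorkItem(
            _ => GetDataCompleted?.Invoke(this, EventArgs.Empty));
    }
}

class Example
{
    // What the code effectively did:
    static void LoadDataWrong(MyServiceClient client)
    {
        client.GetDataAsync();  // fire the async call...
        Thread.Sleep(5000);     // ...then block the thread anyway and hope
                                // the result showed up in time  <facepalm>
    }

    // What the async pattern intends: return immediately and
    // consume the result in the completion callback.
    static void LoadDataRight(MyServiceClient client)
    {
        client.GetDataCompleted += (s, e) => { /* use the result here */ };
        client.GetDataAsync();  // the UI thread stays responsive
    }
}
```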

Now it’s easy to be harsh on this kind of code, especially if you’re an experienced developer.  But I understood how most of this happened.  One junior guy, working as hard as he can to build his first real world application, with little or no guidance from anyone else.  He had his pattern book and theory of programming to help him, but no real world experience.  He didn’t know how difficult it would be to trace the crashes to the coding issues above, but he will one day. 

The part that amazed me was the management position that “this guy should be a team lead, because he’s worked so hard”.  I’m all for rewarding hard work, but when you reward someone by promoting them past the point of their competence, you’re setting yourself and them up for failure.  And that’s lesson 3.  Just because you’ve got a hard worker doesn’t mean he should be leading a development project. 

If you’re a junior guy busting your ass, keep at it.  I encourage you to try new things, but most importantly to learn from your mistakes.  And correct your mistakes.  And if someone else looks at your code and shows you a laundry list of things that should be done differently, don’t take it personally – they’re really trying to help you. 

And if you’re a senior guy, working with a junior guy, it’s your duty to point out the flaws in the code.  Even if it does make you the bad guy.  And while I’ve used “guy” above, I mean both men and women.  And in some cases mutant dinosaurs. 

If you’ve built a web application in ASP.NET chances are you’ve also consumed or even created your own web service.  Web services are an easy way to share data, and even commit actions across technology lines.  I recently ran into an application that used a web service that captured my curiosity – enough so that I thought I would share my experience.

The app was a game, and at the end of the game a message popped up that said - “reporting information”.  Interesting, I thought.  It was a Silverlight application, which meant the xap file could be downloaded and dissected, which is precisely what I did.  A quick look inside the .xap file and I had a config file which gave me a service location.  In other words, I now had the endpoint of the web service. 

Microsoft makes web services cake (yellow cake with chocolate frosting to be precise), and web services by default have an auto-documentation feature enabled which will show usage scenarios and parameter information when viewed through a web browser (HTTP GET).  This is also an easy way for a curious mind to find out just exactly what a web service can do.  So my next step was to navigate my browser to the web service URL.  And BINGO!  I had a list of all actions the web service could support, including the HTTP POST stubs for calling the web service.

BTW, if you don’t want ASP.NET to automatically generate this documentation (including the more formal WSDL), you need to remove the “Documentation” protocol from the webServices section of your web.config.  Code below; simply paste it into the <system.web> section of your web.config or machine.config file. 

   <webServices>
       <protocols>
           <remove name="Documentation"/> 
       </protocols>
   </webServices>

If you’re building a public SDK and encourage others to call your web service, you’ll want to leave this documentation feature on.  But if you’ve built a web service for your LOB application, and don’t really want others to know about it... turn off the auto documentation.

Once I knew the service URL, I could have gone into Visual Studio and written a quick app that consumed the web service (add web reference…) but I decided to go the more fun route – Telnet.  It had been a while since I used Telnet, and I quickly found out that Telnet isn’t installed by default on a Windows 7 or Windows Vista machine.  But that’s quickly solved by going into Control Panel –> Programs and Features –> Turn Windows features on or off, and checking Telnet Client. 

Why did I choose Telnet?  Well, because the web service documentation so kindly provided me with the exact HTTP POST information that I needed to call the service.  If you’ve never done a manual HTTP POST through Telnet, it’s time you tried.  It’s relatively easy, and a rewarding experience.

Telnet is a command line application that connects to any port you tell it to.  In my case, I wanted to connect to port 80 (http).  I’ve also used Telnet in the past to send SMTP messages (yes, email...)  When running Telnet, you specify the port after the domain.  Remember, you have to use the domain when connecting, not the entire URL.  For example, if you wanted to connect to example.com on port 80 you would do something like

telnet example.com 80

You’ll use the rest of the URL when you execute your POST command.  This is low level HTTP right here, and it’s a great way to better understand what’s happening behind the scenes when you click buttons and fill out checkboxes in your favorite web browser. 

Here’s what you would see from the auto documentation for the fake myservice.asmx web service:

   1: POST /services/myservice.asmx HTTP/1.1
   2: Host: example.com
   3: Content-Type: text/xml; charset=utf-8
   4: Content-Length: length
   5: SOAPAction: "http://tempuri.org/HelloWorld"
   6: 
   7: <?xml version="1.0" encoding="utf-8"?>
   8: <soap:Envelope xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
   9:   <soap:Body>
  10:     <HelloWorld xmlns="http://tempuri.org/">
  11:       <name>string</name>
  12:     </HelloWorld>
  13:   </soap:Body>
  14: </soap:Envelope>

There are a couple of things you’ll need to change in the text above to successfully call this service.  First, you’ll want to set the value for the parameter (<name>string</name>).  By default the documentation will put values in there that don’t make sense, in this case “string”.  Once you have the soap:Body information completed, you need to calculate the size of the data portion, which is the length of the entire XML fragment (lines 7-14 above).  You can either count the characters by hand, or you can paste the XML into MS Word, and use the Word Count feature to show how many characters (including spaces) you have.  Once you have the length, update your Content-Length value (line 4 above).
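If you’d rather not count characters by hand (or trust Word), a few lines of C# will do it.  Content-Length is actually a byte count, so the sketch below uses the UTF-8 byte length; for a plain-ASCII envelope like this one it matches the character count.  The <name> value is just a made-up example:

```csharp
using System;
using System.Text;

class ContentLengthCalc
{
    static void Main()
    {
        // The SOAP envelope exactly as it will be pasted into Telnet.
        string body =
            "<?xml version=\"1.0\" encoding=\"utf-8\"?>\r\n" +
            "<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">\r\n" +
            "  <soap:Body>\r\n" +
            "    <HelloWorld xmlns=\"http://tempuri.org/\">\r\n" +
            "      <name>Nathan</name>\r\n" +
            "    </HelloWorld>\r\n" +
            "  </soap:Body>\r\n" +
            "</soap:Envelope>";

        // Use this number as the Content-Length header value.
        Console.WriteLine(Encoding.UTF8.GetByteCount(body));
    }
}
```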

Calling this web service from a command line using Telnet is pretty basic once you have the text all ready.  You’ll want to paste in the top portion of the text (lines 1-5 above) then hit the return key 2 times.  The first time is to end the SOAPAction header, the second is to end the headers section of the POST (and begin the data section).  At this point, it’s time to enter the POST data, which is lines 7-14 above, followed by another enter key.  Once you hit the enter key you’ll receive the response from the web service.  If it’s an HTTP 200, you know it was successful; otherwise there was a problem with your post.  I was unsuccessful the first few times I tried, because I pasted in the complete text you see above (lines 1-14).  It seems that Telnet doesn’t like the way this text is formatted.  I’m guessing it’s a difference in the newline character that’s used.  Anyway, the trick (if you’re using copy/paste) is to paste in the headers, hit return twice, and then paste in the body.

Aside from feeling cool because you just called a web service from about the lowest practical level, this exercise should open up your mind about web services and security.  For starters, if you put a web service out there, you should expect that others are going to call it, and not necessarily the way you intended.  In the case that started all of this, data was being stored that I had created through the application.  But I didn’t like the data that was created through the application, I wanted to send my own data.  Much like securing a web application, you should not expect that your web service call will always be coming from your application!  This is a classic example of why client-side validation in a web page is not a security feature, though it is certainly a usability feature.  You can’t prevent a user from submitting data on the client side.  If they really wanted to post values that your UI is preventing, they can always turn to Telnet, or another HTTP utility.

Here’s one last trick for you.  Open up an HTTP Listener like Fiddler2, and run the application that’s calling a web service.  If you take a look at the activity that Fiddler has captured, you’ll see an HTTP POST to the web service URL.  Fiddler even has a ‘RAW’ view, where you can get the precise data to post, just like we pulled out of the web service documentation earlier.  In this case, you’re seeing exactly what was posted to the server.  I mention this because you may think, “well if I don’t store my web service URL in the config, no one will ever know where it is and I’ll be safe!”, and you would be wrong.  It also highlights that when you’re not using a secure web service, or SSL on a web application, the data is TRANSPARENT!  Anyone plugged in downstream from you with an ethernet sniffer can see exactly what is being transmitted, in clear text.  Luckily, there are secure ways of calling web services.  If your service is capable of making changes to data, or gives the caller access to potentially private data, you should certainly secure your web service.

If you want to learn more about securing your web services, you can read about this topic on MSDN.

Touch seems to be the latest craze in software, but astonishingly, it’s nothing new.  Most applications out there will work perfectly well in a single touch emulated environment.  Basically, a hardware device turns your finger touch into a mouse action.  Yet, even though those interfaces and devices have been around for years, it really wasn’t until the iPhone, that people got reacquainted with touch.  Here’s the reason why. 

Touch vs. Multi-Touch

Touch in and of itself is pretty basic.  Take a finger or a stylus, and use it to drive the mouse subsystem.  While this type of touch is better than no touch at all, it didn’t do anything special enough to get a user to go from using a mouse, to using their finger.  Then multi-touch came around.  Suddenly, you could do things that would have been difficult or impossible with a classic mouse device.

Zooming with two fingers is now a classic example, and sadly one of the only examples out there.  The reason for the lack of multi-touch interfaces is actually quite simple.  Multi-touch isn’t easy to do, and more importantly, coming up with scenarios where multi-touch is beneficial in an application is a challenging task that requires more of a designer’s brain than a developer’s brain. 

And here’s where it always gets disappointing.  Lack of applications that use the hardware means the hardware isn’t in demand, which means the price of the hardware will remain high, which then curbs the demand for applications that use multi-touch.  And the cycle continues.  But the numbers are growing, and we’re getting closer to critical mass.  Businesses are starting to understand the value of an extremely usable, fluid interface with multi-touch.  As businesses start to invest in these technologies, hardware prices will drop, and everything else will hopefully fall in line. 

In the meantime, developers need to start getting used to the idea of building a touch interface.  If you’re a Silverlight developer and want to get a quick start, take a look at Tim Heuer’s blog.

Start playing around with the idea of using Touch events, and adding an extra layer of functionality to your application.  I’m tired of not having touch capabilities on my computer, and I’m ready to do something about it!  I’m not sure if Windows 7 will be the catalyst that ignites the multi-touch fire, but I sure hope it is.    

When talking about Silverlight, it is only natural to compare it to standard web based applications (ASP.NET, PHP, JSF, etc.).  Silverlight has some clear advantages when dealing with large quantities of data, or speed and performance in say grid operations, but there’s another aspect of Silverlight that may be hard to notice at first.  It’s the ‘little’ things.  I call them little, because alone, each of these features isn’t something that will shock or amaze you.  But combined, these features create a User Experience that is specific to Silverlight.  Let’s take a look at a few examples.


Above is the Infragistics XamWebGrid in GroupBy mode.  The part to notice is the small x that shows up when you mouse over the Group indicator.  It makes it easy for a user to figure out how to ungroup the data, and that’s just the start.  Notice the shape of the Group indicator?  When you group by multiple columns, these fit together like puzzle pieces.  Again, indicating to the user exactly what’s happening as shown below.


The differences are very subtle, like a glow animation when you mouse over a button; the same type of differences you see between Windows Forms and WPF.  But these subtle differences can create a totally different experience for a user.  The application feels more alive, and when these features are put to good use, like in the example above, the UX of the application can be improved.  And here I thought storyboards and animations were only used for 3D rotating textboxes with movies playing in the background..

I’m not sure there’s ever been quite as much activity in the world of Web development as in the past couple of years.  The browser ‘wars’ have been re-ignited, and technology is advancing at a frantic pace.  It’s almost too much to keep up with.  Right now as a web developer you have Silverlight 2 at your disposal, and a beta of Silverlight 3, with talks of Silverlight 4 already taking place.  Then there are the CSS 3 and HTML 5 specs... it’s almost too much to comprehend.  I guess we’re lucky in a way, that there are currently significant barriers keeping us from these technologies.

Let’s take a look at Silverlight first.  With clear advantages in performance and usability over standard HTML/JavaScript, Silverlight looked like a definite winner for many web applications.  But Silverlight suffers from one big disadvantage – the plugin that needs to be installed.  That simple plugin, which only takes a minute to install, is preventing widespread adoption of Silverlight across corporate America.  As with any new software, IT must first test Silverlight on their systems, and then come up with a rollout plan, which as many of you know is not something that happens overnight.  So even with 3 versions of Silverlight in 1 year, many developers are just now getting the nod from corporate to start discussing the option of using Silverlight. 

CSS3 and HTML5 promise to change web development (in a good way), but are both being held back by standards committees and browser support (or lack thereof).  In all likelihood browsers will implement the features of these specs before the specs are actually complete.  But even the implementation will likely happen at different rates for each browser.  The reason most of us write web applications in the first place is so that one page can be viewed in any browser on any computer by any user.  Which brings us back to IT, the guys who will inevitably be forced to hold back the latest versions of these browsers from being installed, so that they can be properly tested and a deployment strategy can be planned. 

While all of these new technologies promise to revolutionize web development, it’s unlikely the revolution will happen as quickly as any of us would like to see.  IE6 is still the corporate standard browser in many organizations.  The good news for developers is that it gives you extra time to learn the new technologies.  And if there’s one thing I know, it’s that you can accelerate rollouts if you prove a valid business case.  Show the IT director a prototype of the app you just put together using Silverlight 3 that took you half the time to write and solved the UX and Performance problems of the current html based application, and a Silverlight rollout is likely to be around the corner. 

Developers are often cast into a group and stereotyped as “visual design challenged”.  The fact is, a true Visual Designer picks colors and creates designs for applications that make my attempts look like grade school arts and crafts projects.  But that doesn’t mean developers don’t care about design.  Actually, I think most developers put a tremendous effort into trying to make their applications look good.  It’s just that the results aren’t always award winning.

I used to get insulted when people would mention that developers weren’t good at styling applications.  From the first day I started coding, I was the only one working on my applications.  The application was my creation, and I was responsible for every aspect of it, including the design.  Hearing generalized statements about how developers are bad at visual design was like being told I didn’t have all the skills I needed to build a successful application. 

Luckily, I’ve grown up a little since then.  While I still don’t like hearing I’m not good at something, I can certainly respect that there are professionals in the visual design world who can do a much better job of making my applications look good.  I can spend 2 years writing the most beautiful code, but at the end of the day, the user only sees the User Interface.  UX is perhaps the most important aspect of an application, because it encompasses all aspects of the application.  If the code was buggy, the UX will be poor.  But more importantly, if the code was perfect and the UI design was not, the UX will be sub-par.  Want proof of just how important the UI/UX is?  Take a look at Facebook vs. MySpace. 

MySpace was the clear favorite two years ago, so what happened?  The MySpace page was ugly by default, and users could do anything they wanted to make it even uglier.  Sure, customizing your page sounds like a good idea, until you realize that you’re not that good at it.  Facebook took a different approach – a fixed UI that looked good out of the box.  Facebook users didn’t need to spend time making their page look good, and more importantly, you didn’t have to look at hundreds of other bad looking sites.  Take another example, the iPhone.  Apple cares about its image so much, that all iPhone applications need to follow design guidelines, and must be approved before they can be made available to iPhone customers.  Still not sold on the importance of UI?  Remember GeoCities?  How could you not?  An entire domain of horrendous designs.  The negative experience associated with those bad designs is something that sticks with you.  I don’t remember if I ever found anything functional on GeoCities – I’m sure I did.  But that’s not what I remember.  I remember the horrible color choices, images with choppy edges (that of course weren’t transparent), and JavaScript errors. 

There’s a point in here somewhere.  Oh yeah – designers are a developer’s best friend.  Designers make developers look better.  After I spent 2 years on a project, of course I want people to think it’s the coolest thing they ever saw.  But my late nights of coding are essentially going to be judged on how good the application looks, not how well I wrote the code.  I remember the days of getting beta feedback like, “It’s not colorful enough, can you add some color to it?”  I would go back to my desk, bummed that no one shared the same excitement as I did that the project was complete.  But in actuality, it wasn’t complete.  It was functional, but that doesn’t make an application “done”.  So, I’d go back to my desk and throw a few different colors around the application (usually different shades of blue) and try again.  I spent weeks just experimenting with different colors to see if I could produce something *slightly* more visually appealing.  Weeks that I could have spent on my next project.  I certainly don’t miss those days.  Today, visual designers take the code that I wrote, and bring it to life, and they do it in far less time than I ever could have, with far better results.  They might hand me back a wireframe or a mock-up that I need to convert to code, or if I’m lucky, they’re just modifying the XAML or CSS directly for me.  And when I hear comments about how cool an application is, I know that without my code, that would have been nothing more than a static image.  I bring designs to life, designers bring my code to life.  It’s amazing that I ever did both the design and development.  And it’s even more amazing that I resisted change at first. 

Working with WebControls and WebForms for the past 8 years has taught me a lot about web development.  The one thing that I learned above everything is that the onus is on the developer to write good code.  Now that may not sound like something revolutionary, but the fact is that ASP.NET WebForms makes building web applications easy by abstracting away some of the difficulties of a stateless protocol.  And it also makes it easy to forget about what’s actually happening behind the scenes to make everything possible.  Does that mean WebForms is flawed?  No.

MVC offers a fix for some of the problems that WebForms has been plagued with.  Problems like bloated ViewState, and that ‘pesky postback model.’  On the surface, this looks like an attractive solution for every web developer out there.  A promise to get rid of ViewState – a web application’s mortal enemy?  It has to be good.  That is, until the application you’re building needs some sort of state mechanism, and you begin re-inventing the wheel.  I think it would have been much easier just to go back to your WebForms application and disable ViewState. 

The problem right now is that there is so much focus over why MVC is better than WebForms, that everyone is forgetting a basic rule of software development: use the right tool for the job.  I’ve said it dozens of times: MVC has a set of scenarios that it fits very nicely, as does WebForms.  One platform does not need to be declared a winner – they’re two equals that should be placed into your metaphorical toolbox.  So who is actually losing right now?  The developer community that is caught in the middle of this faux war.  Developers who think that they have to move to MVC out of pride, or fear that WebForms will be replaced by MVC.  Developers who are only hearing one side of the story right now at local community events and tradeshows because, let’s face it... WebForms isn’t sexy anymore at 8 years old.  Ironically, this same issue of “sexy talks” was discussed last week at NDC with Scott Hanselman behind the camera.

Conferences aren’t picking talks about “the old stuff”, because they’d rather focus on what’s new.  New is sexier, and conferences are all about attendance numbers.  But here’s what these conferences don’t seem to understand – every day there’s a new developer joining the community who needs to start from zero.  Developers gauge what’s important by what’s being talked about, it’s why we have trends and tag clouds.  It’s human nature.  If I hear 100 people talking about a subject, it must be important.  Right now I hear 100 people talking about MVC, and no one talking about WebForms, so I guess MVC is more important, right?  That’s the type of thinking happening right now, and it’s nothing new.

“MVC brings web development back into ASP.NET” is one of the most common arguments I hear for its use.  Yet the whole reason for WebForms’ existence was to free you of the dirty details of building a web application, and let you focus on your business logic.  To give you a set of functionality to build on top of – the same goal as every library and framework out there.  It’s why you don’t have to write browser specific JavaScript when using jQuery or ASP.NET AJAX, and it’s the reason we don’t still program in machine language.  Can I write a tighter loop in assembly than I can in JavaScript?  Sure.  But for the applications I’m writing that’s not going to be an issue.  Maybe I’m writing an application where performance is the top and only concern.  Now writing the application in assembly doesn’t sound like a crazy idea.  If my requirements were to build a spreadsheet-like application that offers filtering, copy and paste, and exporting to Excel, I’m going to jump to WebForms.  That’s the whole point behind using the right tools for the job. 

I read a blog post today that talked about how you should dump WebForms and go with MVC as soon as possible, so you can get back to web development roots.  The argument just didn’t make sense to me.  It’s like asking me to stop coding in C# and move to C++ because C# developers just don’t understand pointers and memory allocation. 

So here’s my call to action.  Rather than talking about why one framework or platform is better than the other, start discussing when one is better than the other.  What scenarios is MVC geared for?  When would you use WebForms (MVP) instead? 

The best advice I’ve seen so far is that WebForms is the platform of choice for building web applications, where MVC is more suited to building web sites.  This is still a bit abstract since there’s no clear definition of web applications, but I think it’s safe to say that if you’re building a web version of a win client application, you’re building a web application.  If you have a ‘grid’ in your page for purposes above that of just layout, you’re building a web application. 

I hear a lot of overused and overloaded terms these days, like “leaky abstraction”, when talking about WebForms.  As people repeat these terms like robot drones, I wonder if they truly understand what they mean, and more importantly how they affect a developer’s ability to build software.  But whether it’s a group of robot drones, or an increasing number of well educated software engineers, we’ll leave as the subject of another debate.  Back to the matter at hand.  Of all of the WebForms complaints, it usually boils down to a few key issues – ViewState, ID generation, and HTML Markup & Postbacks; each of which is undergoing changes in ASP.NET 4.0.


ViewState

To understand the first issue of ViewState, you have to understand the ViewState model.  It works recursively in nature, with each control responsible for initiating the loading and saving of ViewState for its child controls.  The problem comes when you turn off ViewState at a root level.  Since the parent control is no longer storing ViewState, its child controls get ignored.  Basically, you can turn ViewState off selectively for children, but you can’t turn it on selectively if it’s disabled for the parent control.  The solution?  ASP.NET 4.0 introduces the ViewStateMode property, which is separate from the EnableViewState property.  ViewStateMode has 3 values: Enabled, Disabled and Inherit (the default).  When a parent’s ViewStateMode is set to Disabled, any child control that is set to Disabled or Inherit will have its ViewState turned off.  If a parent control has ViewStateMode=Disabled, and the child control has ViewStateMode=Enabled (the important new scenario), the child control will still have its ViewState saved.  This, in essence, is what the feature is all about.  The net gain for a Web developer is a lighter weight page, because you now have the ability to turn off ViewState in places that you didn’t really need it on in the first place.  
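Here’s the important new scenario sketched in markup (the control names are hypothetical):

```aspx
<%-- The parent opts out of ViewState for itself and, via Inherit, its children --%>
<asp:Panel ID="Panel1" runat="server" ViewStateMode="Disabled">

    <%-- Inherits Disabled from the parent: no ViewState saved --%>
    <asp:Label ID="StatusLabel" runat="server" />

    <%-- Explicitly opts back in -- the case EnableViewState alone couldn't express --%>
    <asp:Label ID="SelectionLabel" runat="server" ViewStateMode="Enabled" />

</asp:Panel>
```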

ID generation

Have you ever taken a look at the html markup produced by an ASP.NET WebForms application?  I have.  Actually, I spend a good portion of my time examining html markup and discerning how a page can be made lighter or faster based on what I see.  One of the big offenders happens to be ID strings.  If you want to address an element, you need to give it an ID.  If you want to address it uniquely, you need to come up with a mechanism to make that ID unique.  In ASP.NET WebForms this was done by prepending the parent control’s ID to the ID string of a control.  The problem is that as applications became more complex, so did the control hierarchies, which then caused UniqueID and ClientID strings to become unmanageable.  Looking at an application today, it’s not out of the question to see something like “contentPlaceHolder1:ctl0:ctl0:UpdatePanel1:Panel1:GridView1”.  Now if you use that string a few times to address IDs or create css class names, you’re bloating your HTML.  Even worse, you’re creating a recipe for cascading failures if you ever change the containership or the ID of one of the parent controls.  Suddenly, your element’s ClientID has changed, and you need to go through and update your code. 

In ASP.NET 4.0, you’ll have the ability to set the ClientIDMode to “Static”, which solves most of these problems.  With a Static ClientIDMode, the ID string you set for the control is the same string that will be used for the ClientID.  This has the benefit of being less fragile, and gives you the ability to clean up your markup by shortening IDs considerably.  The drawback to using ClientIDMode=Static is that IDs will no longer be guaranteed unique by the framework, making this a manual task for the developer.  Not sure about you, but if that’s the biggest challenge of my day, I’ll be smiling.
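In markup, the difference is a single attribute.  A quick sketch (the control is hypothetical):

```aspx
<%-- Renders with id="GridView1" no matter how deeply it's nested
     in content placeholders, user controls or panels --%>
<asp:GridView ID="GridView1" runat="server" ClientIDMode="Static" />
```

Your JavaScript and CSS can then target GridView1 directly, and a change to a parent container’s ID no longer ripples into your client script.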

HTML Markup & Postbacks

Postbacks were the bread and butter of ASP.NET 1.0.  Luckily ASP.NET has come a long way since then, and developers today understand that creating a good user experience means limiting postbacks.  The amount of HTML rendered to the client also has a direct correlation with user experience: the more HTML/markup, the longer a page will take to load.  ASP.NET 4.0 improves this key scenario by adding client-side databinding and templating.  Think of a client-side template as a repeater that gets populated by JavaScript.  Why populate the repeater on the client side rather than the server side?  It’s a matter of multiplication.  Take the same 4 lines of HTML in a repeater template and multiply it by the number of items in your datasource.  Now push all of that HTML down over the wire to the client.  To make matters even more interesting, use the same datasource to populate a grid.  In a typical scenario, you’re pushing the same data down twice, once for the repeater, once for the grid.  If you had that data available as a client-side datasource, you would only send the data down once, and re-use it for multiple controls.  At the price of a little extra processing power to create the DOM elements through JavaScript, you get the benefit of possibly drastically reduced HTML markup, and you free yourself from relying on postbacks in order to populate a list.
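To make the multiplication argument concrete, here’s a tiny hand-rolled sketch in plain JavaScript.  This is not the ASP.NET 4.0 client templating API, just an illustration of expanding one template against a client-side datasource (the template syntax and data are invented for the example):

```javascript
// One copy of the template travels over the wire...
var rowTemplate = '<tr><td>{name}</td><td>{price}</td></tr>';

// ...and one copy of the data, which can feed multiple controls.
var products = [
  { name: 'Widget', price: 9.99 },
  { name: 'Gadget', price: 19.99 },
  { name: 'Gizmo',  price: 4.99 }
];

// Expand the template once per item on the client, instead of
// having the server send N fully expanded rows of markup.
function render(template, items) {
  return items.map(function (item) {
    return template.replace(/{(\w+)}/g, function (match, key) {
      return item[key];
    });
  }).join('');
}

var html = render(rowTemplate, products);
```

The same `products` array could then feed a grid as well, without a second trip over the wire.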

The three features listed above are just a short list taken from the ASP.NET 4.0 features whitepaper.  Even in this short list, it’s easy to see that ASP.NET 4.0 is changing the face of WebForms.  Many of the concerns that developers have expressed, and many of the advantages that MVC touts over WebForms, are being addressed.  The point is, WebForms isn’t dead – quite the opposite.

I’ve never been a big fan of surveys or polls.  Not because I don’t like them, but because I always question their validity and the conclusions that are drawn from the results.  Let’s take a look at a couple of examples.  Readers’ Choice awards were recently handed out.  I received an email from a customer who wanted to let me know that he was contacted by a component vendor to go out and vote for them, and he didn’t even own their product.  To my surprise, he wanted to know where the Infragistics email was.  My answer – we make a very conscious effort to limit the number of emails we send out to our customer base.  At the end of the day, our decision to keep spam out of our customers’ inboxes also meant that we were at a disadvantage in the survey/poll/award.  Does the poll actually show which product is the best?  Or does it show who spent more time and energy campaigning for votes?

Now let’s look at case #2.  There’s a survey out right now asking questions about AJAX tools and frameworks, and which ones developers are using.  I don’t know exactly what the results of this survey look like, but here were my first impressions.  The survey is hosted on a personal blog, where most readers likely share similar interests.  Simone Chiaretta (the blogger) respectfully acknowledges that his blog has a limited audience and is likely biased toward the ALT.NET kind of developer.  I’m sure Simone has the best intentions in publishing the survey, but it will be very difficult not to get a skewed response.  In addition, what does the response actually mean?

A current snapshot of results shows that 76% of developers are using jQuery, compared with 48% using ASP.NET AJAX and 58% using the AJAX Control Toolkit.  Does that really mean that there are more jQuery developers than WebForms/ASP.NET AJAX developers?  To confuse matters, the AJAX Control Toolkit is built ON TOP of ASP.NET AJAX, yet these results show more people using the Toolkit than ASP.NET AJAX (which is actually impossible).  I know from experience that there is a huge portion of web developers out there who never want to hear the word “JavaScript”.  Does that mean they’re not doing AJAX?  No, they’re simply relying on the tools that were handed to them.  Microsoft created the Microsoft ASP.NET AJAX stack and made things like partial rendering a breeze.  It’s all part of ASP.NET 3.5.  I don’t even call it “AJAX Extensions” anymore, it’s just ASP.NET 3.5, or Visual Studio 2008.  It’s actually so easy to do partial updates at this point that a developer can ‘do AJAX’ without even knowing how to write a line of JavaScript.  It’s very unlikely that those developers are or will be represented in this survey.  How many web developers still use the term DHTML?  That usually only shows up on resumes.  But in actuality, most web developers today are doing DHTML and aren’t even aware of it.  I expect the same will be true of AJAX.  It’s so commonplace that there’s no reason to differentiate it from web development in the first place.

So what spawned all of this?  Well, I saw a post from Bertrand Le Roy on Twitter about this survey.  He immediately got a response asking “will the results of this survey be used to finally convince Microsoft to dump ASP.NET AJAX, and go with jQuery instead?”  I can only hope not.  jQuery is an awesome JavaScript framework, but it absolutely abandons the web developer who doesn’t want to spend his days writing custom JavaScript.  Microsoft created the idea of a server-side web control for this very reason.  If I’m given the choice to drop an UpdatePanel on a form and have a label update on a button click via AJAX, or hand-code it myself using a JavaScript framework, I’m going to go with the UpdatePanel just about every time.  Will I get more satisfaction out of building it from scratch?  Certainly.  But personal satisfaction doesn’t pay the bills, and I’d be much more satisfied spending the two weeks’ time I saved on the beach fishing.  The Infragistics ASP.NET controls try to find a comfortable balance between these two worlds.  They abstract away the client-side JavaScript for developers who don’t want to, or don’t have the time to, deal with it, and they provide a client-side object model for developers who want to dive in and play.  We call it our CSOM (Client-Side Object Model) and it’s been around for years.  I think this is the path Microsoft needs to continue to take, supporting both of these key scenarios.

But back to the topic at hand.  Polls and surveys are just that: they represent the sentiment of a select audience.  Was the audience wide enough to capture multiple views?  Usually not.  Was the survey done by an impartial third party?  Usually not.  Did others have a chance to influence the results?  Unfortunately, yes, most of the time.  And even when you get past all of that, it’s most important to look at what the questions were actually asking in the first place.  Each time I see the results of a poll, I argue that the answer being deduced was not actually the question that was being asked.

My question to you – do you participate in these polls?  How much weight/validity do you give to the results?  And better yet, what do you think about “lobbying for votes” in Readers’ Choice type polls/awards?  Is that something you would be happy to hear about, or would you prefer not to hear about at all?

Perhaps I’m too analytical, but I generally tear apart survey results and the conclusions that are drawn from them.

If you’ve been following me on Twitter (igtony), you’ve probably heard me grumbling about Visual Studio crashing, services not working, or various other gremlins I’ve been fighting with for the past few days.  Well, I’ve finally got something to show for all of my pain! 

Image 1

My goal when I started building this application was to build a POC Silverlight application which dynamically loaded sub-applications.  In essence, I wanted to build a Silverlight shell that could load N applications unknown at compile time.  I started out by choosing the new Navigation project template in Silverlight 3.  This gives you a MasterPage-like scenario that works as a good starting point for an application which may have multiple distinct Views.  I tweaked the base appearance of the template just to make it look a little more presentable.  The white background used in the <Frame> element and the big 10px borders reminded me of Windows 3.1 and IE4.  I used a Border element to create the rounded edges which wrapped around my Frame, and I set my Frame background to be transparent.  With some of the UI tweaks out of the way, it was down to business.

The first task I needed to tackle was loading a XAML page (UserControl) dynamically.  It turns out this is a pretty simple task as long as that UserControl is in the same XAP file.  The Frame element has a Navigate method which takes a URI and will fill the Frame’s Content with the UserControl specified via the URI.  Unfortunately, the solution I was looking for places my UserControl in a separate XAP file, which meant I couldn’t use the simple Frame.Navigate(URI) method.  Instead I needed to dynamically load the UserControl in question.  Luckily, this is a pretty straightforward task with the help of the WebClient.OpenReadAsync method and the AssemblyPart class.  Here’s the code I used:

        private void NavButton_Click(object sender, RoutedEventArgs e)
        {
            Button navigationButton = sender as Button;
            // Create a WebClient instance to download our separate XAP file
            WebClient client = new WebClient();
            client.OpenReadCompleted += new OpenReadCompletedEventHandler(client_OpenReadCompleted);
            // Download the XAP file
            client.OpenReadAsync(new Uri("CompositeApp.xap", UriKind.Relative));
        }

        void client_OpenReadCompleted(object sender, OpenReadCompletedEventArgs e)
        {
            // Create a StreamResourceInfo object out of our XAP stream
            StreamResourceInfo sri = new StreamResourceInfo(e.Result, null);
            // Get a StreamResourceInfo for the application's DLL --
            // the one that houses our main XAML UserControl
            StreamResourceInfo mainAppSRI = Application.GetResourceStream(sri, new Uri("CompositeApp.dll", UriKind.Relative));
            // Load the assembly using AssemblyPart.Load
            AssemblyPart loader = new AssemblyPart();
            Assembly assembly = loader.Load(mainAppSRI.Stream);
            // Now that the assembly is loaded, create an instance of our UserControl
            this.Frame.Content = assembly.CreateInstance("CompositeApp.MainPage");
        }

There’s one drawback to using this method to dynamically load a UserControl – dependencies are not automatically loaded for you.  If your UserControl in the separate XAP has a dependency on 4 assemblies, you must manually load those 4 assemblies before loading your UserControl, or CreateInstance will fail.  The hack solution I used was to include the assembly references in the main shell application.  This doesn’t scale well, though, since ideally you want the additional dependencies to be loaded dynamically as necessary.  There’s no way you’ll know today every assembly reference each sub-app will need for the entire life of the shell – you’ll likely need a ‘new’ assembly at some point down the road.  I saw some code fragments while looking for a solution which read the manifest in the XAP to determine what assemblies need to be loaded out of the (external) XAP.  If I were building this as a real-world app, I would go that route.  If you know of a better way to handle this, I encourage you to share in the comments!  In a perfect world, Frame.Navigate would allow you to specify a XAML file in a separate XAP and do all of this work for you.  Are you listening, Silverlight team? ;)
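For reference, the manifest-reading approach might look something like the sketch below.  This is hand-written from memory rather than taken from those code fragments, so treat the method name and structure as assumptions; the general idea is to parse AppManifest.xaml out of the downloaded XAP and load every AssemblyPart it lists before calling CreateInstance:

```csharp
private void LoadXapAssemblies(Stream xapStream)
{
    // The manifest lives at the root of every XAP
    StreamResourceInfo manifestSri = Application.GetResourceStream(
        new StreamResourceInfo(xapStream, null),
        new Uri("AppManifest.xaml", UriKind.Relative));
    string manifestXml = new StreamReader(manifestSri.Stream).ReadToEnd();

    // Each <AssemblyPart Source="Foo.dll" /> element names a DLL inside the XAP
    XDocument manifest = XDocument.Parse(manifestXml);
    foreach (XElement part in manifest.Descendants()
        .Where(el => el.Name.LocalName == "AssemblyPart"))
    {
        string source = part.Attribute("Source").Value;
        StreamResourceInfo assemblySri = Application.GetResourceStream(
            new StreamResourceInfo(xapStream, null),
            new Uri(source, UriKind.Relative));
        // Load the dependency into the current AppDomain
        new AssemblyPart().Load(assemblySri.Stream);
    }
}
```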

Ok, main hurdle down (dynamic loading of a UserControl from a separate XAP), it was time for the next challenge – creating content!  I started out with a grid, because you can’t have a good application without a grid in there somewhere, right?  I used the XamWebGrid in a couple of places, one on the main page showing hierarchical data (Accounts and Transactions).

Image 2

Next I created another ‘page’, or View, which visualized data in a map.  In this scenario the map shapes came from my “Shapefile” and the data came from a SQL database via WCF and LINQ to SQL.  I think this probably mirrors the major use case out there, where the data is kept separate from the Shapefile.

Image 3

On the final screen, I went with ‘live’ updating data (Image 1).  Since this was just a demo, I used a DispatcherTimer to change my underlying datasource every second.  Since my underlying datasource was an ObservableCollection of INotifyPropertyChanged objects, my grid immediately showed any changes made to it – the grid does two-way databinding by default.  Since my View still looked empty, I dropped a chart in there as well, and added a new data point to the chart’s DataPoints collection each second (representing the history of changes made to the grid).  The end result was a chart and grid showing updates every second, with the chart “always in motion”.  It was a pretty impressive experience.  My next step is to see what this would look like with a real data feed.  I’m guessing with proper queuing in place, the experience should mirror my mock-up almost exactly.
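A rough sketch of that timer-driven update loop (the item type, property names and update logic here are invented for illustration; my demo’s actual model differed):

```csharp
// The item type raises PropertyChanged, and ObservableCollection raises
// CollectionChanged, so a two-way-bound grid picks up every mutation.
public class Quote : INotifyPropertyChanged
{
    private double price;
    public double Price
    {
        get { return price; }
        set { price = value; OnPropertyChanged("Price"); }
    }
    public event PropertyChangedEventHandler PropertyChanged;
    private void OnPropertyChanged(string name)
    {
        if (PropertyChanged != null)
            PropertyChanged(this, new PropertyChangedEventArgs(name));
    }
}

// In the page: mutate the collection once a second
ObservableCollection<Quote> quotes = new ObservableCollection<Quote>();
Random rand = new Random();
DispatcherTimer timer = new DispatcherTimer { Interval = TimeSpan.FromSeconds(1) };
timer.Tick += (s, args) =>
{
    foreach (Quote q in quotes)
        q.Price += rand.NextDouble() - 0.5;   // grid and chart both refresh
};
timer.Start();
```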
