Windows Communication Foundation (WCF) - I am not convinced yet

When I hear someone talk about WCF, I think back to the early days of COM+ all of the time.  In '97, '98, and '99, Microsoft told everyone to use MTS, the predecessor to COM+.  Microsoft pushed this so much, I felt like COM+ was the answer to my constipation problems.  Speaking from experience, it was a total disaster.  Most applications did not need the features of COM+, yet the hype machine continued to push it on me.  It was everywhere.  "Enterprise-level apps need COM+" seemed to be all that I heard for about 18 months.  Using the features of COM+, I obediently built some applications.  Each of these applications exposed the same problems in MTS: database access defaults to an isolation level of serializable, and the overhead of a COM+ object is not trivial.

Distributed transactions with an isolation level of serializable are a reasonable fit for an application that genuinely must use distributed transactions, but I never saw the serializable isolation level mentioned in any manual or how-to anywhere.  No one talked about the issue.  We found it only from experience, and only after asking questions was it finally confirmed to us.  This was a major issue.  Applications based on MTS that used only one database were still subjected to the serializable isolation level, and to whatever other overhead a distributed transaction needs, whether they needed it or not.  With the introduction of COM+ 1.5, the isolation level of database operations can finally be changed; the default is still serializable (understandable, but at least there are now options).  Why is this significant?  For inserts, updates, and deletes, serializable is not a bad idea.  Unfortunately, 90+% of database operations are reads (selects).  Why does this concern me?  Let's look at a simple statement running within SQL Server 2000:
"select * from table where col1=47"

With this statement, all records where col1 is set to 47 are now locked under the serializable isolation level.  In fact, SQL Server takes key-range locks, so even rows that merely would match are covered.  Until the currently executing transaction completes, those records are effectively locked against anyone else modifying them, because the select holds its shared locks until the transaction ends.  If you look up the locking rules in SQL Server Books Online, you will find that a select issued under serializable keeps its shared locks for the life of the transaction, so any insert, update, or delete touching that range has to wait.  Blocked commands are supposed to wait, but in our experience they often never got the chance: two serializable transactions working the same range would deadlock, and SQL Server throws a transaction deadlock error and kills one of them instead of letting the command wait out its timeout.
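A minimal sketch of the behavior described above, assuming two separate connections to the same SQL Server database and a hypothetical Orders table with an indexed col1 column (all names here are made up for illustration):

```sql
-- Session 1: read under serializable and leave the transaction open,
-- the way an MTS/COM+ component would while it does other work.
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION;
SELECT * FROM Orders WHERE col1 = 47;
-- Shared key-range locks are now held until COMMIT or ROLLBACK.

-- Session 2 (a second connection): this blocks, because the new row
-- falls inside the key range locked by session 1's select.
INSERT INTO Orders (col1, amount) VALUES (47, 100);

-- Session 1: completing the transaction releases the locks
-- and lets session 2 proceed.
COMMIT TRANSACTION;
```

Under the default read committed level, session 1's select would have released its shared locks as soon as the rows were read, and session 2 would not have waited; serializable is what keeps them held for the life of the transaction.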

To add insult to injury on top of the SQL Server problem, Oracle had its own problems with MTS/COM+.  The Oracle server at the time had a tendency to lose the thread-to-client association on the MTS server.  Oracle has since introduced Oracle Services for MTS, but that came after we were done with our project.  In addition, connections to the Oracle server seemed to be in short supply on the MTS server; the client connection at the time did not flow the way the documentation said it would.  The result was that we needed to take special care in creating our connections and handing them around within our objects.

The next thing we encountered was the overhead of COM+ itself.  Using these services comes at a price: a context has to be set up for each component.  If your application only starts up one component per call, then that overhead is probably not significant relative to the total time spent inside the component.  What happens when some moron creates a very vertical object model that results in a large number of components (think 500-750) being created per call?  The answer: lots of CPU time wasted creating contexts to run within.

One of the more interesting technology decisions I saw was that people wanted to create centralized COM+ servers to handle middle-tier requests.  I can't remember whether MS suggested this or not, so I am not blaming them, just remembering some things from that time.  The thinking was that if these servers were big enough, performance would be greatly improved, because they were dedicated to and optimized for these COM+ applications.  Wrong.  With a web application written in ASP, separating the COM+ application from the web application generally resulted in slower responses and less stable applications.  Why?  Network latency seemed to be the problem.  With the MTS components running on the same server as the web application, my web server was very stable.  With the MTS components running on a separate server, my web server was very unstable and had a tendency to die.

Having shared my negative experiences with COM+, I am seeing the same marketing machine start up with regard to Windows Communication Foundation (WCF).  Everywhere I look, the marketing machine tells me that WCF cures war, famine, upheaval, and my constipation.  What am I planning on doing?  I think I am going to wait a little while and see the reaction from people that I know before I jump headfirst into WCF.  The last time I jumped in headfirst, I found that while COM+ was a mile wide, it was only three feet deep.  Fool me once, shame on you.  Fool me twice, shame on me.

I am interested in this bringing together of messaging technologies; my concern is that no one has told me the price I will have to pay for it.  Tell me that, and a lot of my skepticism might fall away.

1 Comment

  • I see this comment was posted back in 2006... what do you think of WCF these days? Has your first impression proven to be on its mark?

    I stumbled upon this post as I searched the internet for 'WCF Overhead'; my WCF service (hosted in a Windows service) is not performing anything like I expected, and I was trying to figure out whether it was something I was doing wrong or whether I should consider an alternative design for my project.

    -David
