May 2004 - Posts
Several months ago I got into a debate on GotDotNet about the place of procedural languages in our programming world. During the course of the discussion my thinking on the subject evolved. At first I felt, why bother with a procedural language when an OO language such as C# can be used? Since that discussion, however, I firmly believe that procedural languages have their place in our world. For example,
If you look at Service Oriented Architectures such as WebServices and the tenets associated with SOA, Clemens Vasters described the following four tenets as PEAC (gotten from theserverside.net and his weblog):
- Policy-Based Behavior Negotiation
- Explicitness of Boundaries
- Autonomy
- Contract Exchange
The fact that we have explicit boundaries, each method has autonomy, and we are talking about a “contract” makes me ask: do we really need an OO language such as C# to describe our facade? Wouldn't a procedural language be better? Everything about these tenets screams for a procedural implementation. Your underlying implementation could easily call into an OO implementation; however, why take on the overhead of an OO language just to provide a contract to your consumers? WSDL is, after all, a procedural language.
Other examples include static methods. In many scenarios static methods get abused and used incorrectly, but they also have their place in our world. For example, look at the Convert.ToXXX() methods in .Net. It doesn't make sense to spin up an instance of the Convert class just to do a conversion. The designers of the class (IMO) did the right thing and hung a bunch of static methods off the class. But why not just use a procedural language for this? And if we did, how would it affect .Net? Wouldn't it be better to write it in something like a “C#-Light” (maybe we call it procedural C# ;) ) that is procedural yet integrates easily with C# and the other .Net languages?
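As a quick sketch of the static-method point (nothing framework-specific here, just the standard System.Convert calls):

```csharp
using System;

class ConvertDemo
{
    static void Main()
    {
        // No instance required -- the conversion carries no state,
        // so static methods are the natural fit.
        int count = Convert.ToInt32("42");
        bool flag = Convert.ToBoolean("true");
        Console.WriteLine("{0} {1}", count, flag);

        // An instance-based design would be pure ceremony:
        //   new Converter().ToInt32("42");  // thankfully not how it works
        // (Converter is a made-up name; no such class exists.)
    }
}
```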
Anyway, just a bunch of rambling thoughts.
I would like to put together a compilation of some of these specific code examples for others to use. Full credit will be given, of course. The only requirement is that you have to be willing to share your code ;)
***Danger Will Robinson...Blood Pressure Rising***
The table designer in SQL Server Enterprise Manager keeps giving me “Invalid Cursor State” any time I try to modify a table definition. I'm pretty good with SQL syntax, so I can use Query Analyzer...but dam*it, I'm lazy and don't want to. I can't find anything on Google about how to fix it. This never happened before I rebuilt my machine. Argggggg.
Ted Neward has an interesting rant about what he calls the 11th fallacy (http://www.neward.net/ted/weblog/index.jsp). Specifically, that centralizing all business rules is a fallacy. His argument contends that applications perform some business rule validations on the client and some on the server, and because of this it's a fallacy to think you can centralize all business rules. Merriam-Webster defines fallacy as follows:
1 a obsolete : GUILE, TRICKERY b : deceptive appearance : DECEPTION
2 a : a false or mistaken idea <popular fallacies> b : erroneous character : ERRONEOUSNESS
3 : an often plausible argument using false or invalid inference
Webster's marks its first sense of fallacy as obsolete. It then goes on to say a fallacy is “deceptive in appearance”. I'll give you a “deceptive in appearance”, because it is deceptive to think you can centralize business rules 100% of the time; however, I'll raise you an obsolete, because the idea of trying to centralize all business rules is NOT obsolete.
However, this auto-magical reality does not exist and probably never will. Industry veterans understand that it is good to try to centralize business rules; but these same veterans understand from the get-go that they will be making sacrifices and breaking “tier purity” in order to make a particular user experience more palatable. This is what schools can't teach you: the experience to understand what can be done and when you should do it. New or inexperienced developers often think in binary terms. They decide “I will centralize all business rules” and then get frustrated when they actually try to do so. Or they don't bother trying to centralize many rules at all, because they think it is silly to try, so why bother?
So is it a fallacy? Yes. I think it is, but it doesn't mean you shouldn't try. It reminds me of college and looking at a crowd of beautiful women. I knew it was a fallacy for me to think I had a chance. But I knew I was going to try.
Back in March I wrote an article for www.TheServerSide.net about using validators in the middle tier. The article, Validators In The Middle-Tier, shows how to use declarative programming techniques such as the following (Download the Validation Framework Here):
public void SomeMethod (
    [RegExAttribute("[a,e,i,o,u]", RegexOptions.None)] string someParameter)
{
    // Create a validator for the current method so the parameter
    // can be checked against its declared attribute.
    MethodValidator validator = new MethodValidator(
        MethodBase.GetCurrentMethod(), someParameter);
}
The technique heavily leverages the .Net System.Reflection namespace. Since then, a number of people have asked me about the cost of using Reflection. The simple answer is that Reflection does cost you and your software a performance hit. However, the benefit of a common validation scheme does outweigh the cost. Declarative programming using attributes is a form of Design-By-Contract. By nailing down what your system considers acceptable inputs, you protect your system from unanticipated data as well as malicious attacks by people looking for holes in your system. To me the cost of using Reflection is a drop in the bucket. Still, it is a fair question and something we must always consider. For example, I don't advocate using the validation technique on all levels of your software. I would concentrate on the main entry points into the system. Your lower levels should, for the most part, let exceptions bubble up to the top anyway.
However, as architects and developers we are always making this trade-off. Much of the software we use today is a balance between maintenance and performance. You needn't look further than your existing .Net development environment to see proof of this. .Net is slower than many of the languages out there that compile to machine code. WebServices are another prime example: a WebService call is considerably slower than a native .Net call, let alone a native C/C++ call. I get frustrated when people look at WebServices and SOA as the end-all answer to everything. SOA architectures have explicit boundaries, and the cost of traversing those boundaries is not small. However, SOA and WebServices enable you to cross platform boundaries and reuse business logic, so they definitely have a place in our world.
N-Tier development is another prime example of performance versus maintainability. Breaking up our software enables us to reuse software; however, if we wanted truly performant code, how much of this would we really do? Would we load another assembly in our BLL just to make a database call? Well, yes, we would, because we have become so accustomed to doing so; but if performance were our only objective, would we?
It is an age-old story: as machine speed increases and memory becomes cheaper, we move closer to tier purity, making performance sacrifices in our quest for purity. Twelve years ago we were battling the 640K memory barrier in DOS and were willing to make many compromises around our use of UMBs (upper memory blocks). In the Java world we are seeing people battle over using the static model versus the dynamic model for Aspect Oriented Programming (static does its evaluation at compile time; dynamic at run time). My guess is that in 5 years no one will really care. Reflection...I don't think we should use it willy-nilly, but if it helps, use it now.
The other day I was reading through a bunch of different blogs and one blogger had a quote about Arrogance + Ignorance being a deadly combination. I tried later to go back and find where I read it in order to give proper credit but I could not find it.
How true. For example, when I am interviewing someone, or talking to someone and trying to determine how deeply they understand a technology, I am looking for the “I don't know” response. I am also looking for how well they listen. So many people are afraid to say “I don't know”, or are too busy talking to sit back and listen. It's ironic, really. In a world where technology changes so fast, it is important to stay on top of your game; but if you cannot say “I don't know”, or spend the better part of your day listening, how will you ever learn? Now, I am not talking about getting into a passionate discussion about one design versus another. That's healthy. It's also ok to have some intellectual arrogance. But if you cannot remember the last time you said “I don't know”, or the last time you really listened to someone else's design ideas and said “Cool, you've thought this out” or “I like your solution”, then maybe you should lay off the caffeine. Not only does Arrogance + Ignorance stunt your own growth, it stunts a project's chance for success. A manager or lead who does not listen to those around them is a disaster waiting to happen.
Ralf's Sudelbücher has an interesting post about using a DataAccess layer.
<cite>How application development currently is "sold" is wrong. Developers still - after 8 years of Microsoft application server - don´t really understand the concept of application server or the value of business logic components. (And I mean "real" business logic with a service oriented interface.)</cite>
I agree with much of Ralf's end game, specifically the separation of responsibilities, which I like to call tier purity (however, I don't believe raw data access should be done through a webservice...an assembly, yes, but not a webservice...but that's another discussion ;)). In many respects Ralf is talking about the implementation of a model-view-controller or a similar pattern that separates responsibilities so that the rearrangement of one tier does not necessarily affect the other tiers.
However, I disagree about the culprit behind the disconnect. Ralf states that it is the pictures we draw that are the disconnect. I believe the culprit is Microsoft themselves, along with many of us who write about the different Microsoft technologies. Only recently has Microsoft really spent any time on the concept of patterns and practices. Only someone with an understanding of tier purity/separation of responsibilities will understand where to draw the line between the encapsulation of business logic and the encapsulation of data access. For example, if you look at many of the examples out there for ADO.Net, you will see that they are concerned with showing how to use ADO.Net, not with building software tiers. Typically, these examples have direct ties from the GUI to the database. Neophytes look at these examples as a blueprint for building software. The extrapolation to a data-access layer and a business-logic layer is never discussed, and it is lost in the translation. So I agree with much of what Ralf has to say, and I agree the pictures are wrong. I just think the problem starts at the top, and we as a community need to do a better job of presenting a better picture of where it all fits in the jigsaw puzzle of software...imho.
A couple of my posts (here and here) on AOP have sparked a bit of a discussion that I would like to clarify. I had a minor rant about AOP being more than Attributes + Interception (as have others), and about how people in our industry who present or write on the subject should do so in the context of the greater body of work presented by Gregor Kiczales and team, because I believe the current coverage is causing people to think of Aspects as a superset of Attributes, which just isn't the case (I have heard this from a number of people). Whether you prefer a static model or a dynamic model, or your implementation is Attributes + Interception without a pointcut composition model, I have no issue with that. My point was to help evolve how we think of AOP, not the implementation model.
Tom Barnaby does some presenting on Attribute Oriented Programming and tries to touch on the AOP subject. He brought up a good point when he said that it's a bit like trying to describe OO with only an encapsulation example. Tom, I couldn't agree more ;). It's a tough subject to cover in a short amount of time.
My experience discussing AOP with people is interesting. Some find it interesting and useful. Some find it interesting but not too useful, and others couldn't care less because it doesn't affect their day-to-day job.
My take is that cross-cutting concerns are already a big issue in the .Net world, and they hugely affect production software. We have just become so accustomed to writing special code to handle concerns such as logging, security, and transactions (design by contract too) that we get caught up looking at the trees and not the forest. So, to the people out there who don't believe it affects them: it does. It's just that the tools in the .Net world are still in their infancy, and with respect to AOP we as a community are 1-2 years behind the Java community. In the next 12-24 months, though, I bet we close the gap...imho.
Links on AOP
Feel free to send me good AOP links and I will add them to a list for later reference.
When designing or using strongly-typed collection classes that are to be serialized over the wire with webservices and the xml serialization process, you should keep in mind that:
- Based on a couple of rules, your class will be serialized as an array of the type your collection contains. This means that if you add additional properties to your collection, they will NOT be included. For example:
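The code sample seems to have been lost from this entry, so here is a minimal reconstruction of the kind of class being described (CustomerAccounts, Account, Name, and Address come from the text below; the Account members are placeholder guesses of mine):

```csharp
using System.Collections;

public class Account
{
    // Placeholder members for illustration only.
    public int AccountId;
    public decimal Balance;
}

// A strongly-typed collection with extra properties tacked on.
public class CustomerAccounts : CollectionBase
{
    // These extra public members will NOT be serialized, because
    // XmlSerializer treats the class as a plain array of Account.
    public string Name;
    public string Address;

    public void Add(Account account) { List.Add(account); }

    public Account this[int index]
    {
        get { return (Account)List[index]; }
    }
}
```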
Here, CustomerAccounts will be serialized, but it will be serialized as an array of Account objects. Name, Address, et al. will not go across the wire. The obvious solution is to do something like this:
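The sample for the fix also appears to be missing; presumably it looked something like the following wrapper, where every public member serializes (a reconstruction on my part; CustomerAccountsMessage and the member names are my invention):

```csharp
// (This stub stands in for the post's Account type.)
public class Account { }

// A plain class: XmlSerializer writes out all of its public
// members, including the collection data as a simple array.
public class CustomerAccountsMessage
{
    public string Name;
    public string Address;
    public Account[] Accounts;
}
```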
Most people find the latter implementation more intuitive anyway, so few come across this issue, but it's something to keep in mind. Also, the following blurb was taken from MSDN:
XmlSerializer can process classes that implement IEnumerable or ICollection differently if they meet certain requirements. A class that implements IEnumerable must implement a public Add method that takes a single parameter. The Add method's parameter must be consistent (polymorphic) with the type returned from the IEnumerator.Current property returned from the GetEnumerator method. A class that implements ICollection in addition to IEnumerable (such as CollectionBase) must have a public Item indexed property (an indexer in C#) that takes an integer, and it must have a public Count property of type integer. The parameter passed to the Add method must be the same type as that returned from the Item property, or one of that type's bases. For classes implementing ICollection, values to be serialized will be retrieved from the indexed Item property rather than by calling GetEnumerator. Also note that public fields and properties will not be serialized, with the exception of public fields that return another collection class (one that implements ICollection). For an example, see Examples of XML Serialization.
The title of this post was taken from an entry by Larry Osterman (http://blogs.msdn.com/LarryOsterman/archive/2004/05/03/125198.aspx). In it, Larry correctly points out that we should first verify we truly have a performance issue and, if so, exactly where the slow-down is occurring. It's true. When I first started off in the software business, one of my first technical mentors taught me to constantly test for performance as I develop and to make sure I understand where any performance slow-down is before fixing it. True or false: have you ever fixed a performance problem only to find out the problem was actually somewhere else? If you haven't, you probably aren't being honest with yourself. It's a learning experience that every cowboy programmer goes through.
To add a tangent thought: as developers and architects we also often make unnecessary optimizations. How many times have you been in a code review, or seen something where it was suggested to optimize because you could squeeze out a millisecond or two of additional performance, without anyone asking the question “Is it fast enough?” It is a dangerous game we play. On one hand we always want to have screaming code, but on the other the business requirements don't necessarily care, as long as it is “fast enough”. So why do we bother?
For example, if I always wanted the fastest data-access, I would have my DataAccess Tier always return a SqlDataReader/OracleDataReader, etc; however, I know that if I do this I am:
- Forever tying my Business Logic Layer to understanding the different exceptions that might be thrown by these different readers (yes they are layer specific...a pet peeve of mine but another subject altogether).
- I am also requiring my clients to correctly close the connections.
Instead, I like to wrap my DAL in such a way that it always returns disconnected data and throws exceptions defined within the DAL. This lets me control exception types and properly manage connections and connection strings. Does this mean I never return a reader? No; we always need to balance performance with “Tier Purity”. However, my default DAL implementation is still very performant, and fast enough for the vast majority of client tiers. Now, if I were handling millions of credit card transactions in a short time period...well, that's different. Or if my result set were sufficiently large and a client application's needs were such that disconnected data wasn't performing, I would then look at streaming the data. There, performance far outweighs Tier Purity. My point is, most business applications care about being fast enough, and business sponsors don't give a da&$ about squeezing out the last bit of performance. We as architects and developers must understand the difference.
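To make the shape concrete, here's a minimal sketch of such a DAL method using nothing beyond standard ADO.Net (CustomerDal, DalException, and the SQL are hypothetical names for illustration, not from a real framework):

```csharp
using System;
using System.Data;
using System.Data.SqlClient;

// A DAL-defined exception type, so callers never have to
// know about SqlException (or any other reader-specific type).
public class DalException : Exception
{
    public DalException(string message, Exception inner)
        : base(message, inner) { }
}

public class CustomerDal
{
    private readonly string connectionString;

    public CustomerDal(string connectionString)
    {
        this.connectionString = connectionString;
    }

    // Returns disconnected data; the connection is opened and
    // closed inside this method, so callers cannot leak it.
    public DataTable GetCustomers()
    {
        try
        {
            using (SqlConnection conn = new SqlConnection(connectionString))
            using (SqlDataAdapter adapter = new SqlDataAdapter(
                "SELECT CustomerId, Name FROM Customers", conn))
            {
                DataTable table = new DataTable("Customers");
                adapter.Fill(table);
                return table;
            }
        }
        catch (SqlException ex)
        {
            // Translate the provider-specific exception at the boundary.
            throw new DalException("Failed to load customers.", ex);
        }
    }
}
```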