October 2006 - Posts

Jeremy's Programming Manifesto...
31 October 06 10:35 AM | MikeD | 4 comment(s)

Jeremy posts about his Programming Manifesto. It's a great read, and while I am still digesting it, I will just summarize it here. Go read his stuff please.

Unit Testing and Testability over Defensive Coding, Tracing, Debugging, and Paranoid Scoping

Sharp Tools over Coding with the Kid Gloves On

Writing Maintainable Code over Architecting for Reuse and the Future

Explicit Code over Design Time Wizards

Software Engineering over Computer Science

Jeremy prefaces his manifesto saying "while I do believe there is some value in the stuff on the right, I think the stuff on the left [in large font bold] is more valuable and important."

The only thing I am grappling with in his manifesto is how the next paradigm shift in programming will fit in with it. Model-driven, software-factory stuff, for example. We've got to move somehow from building a high-rise office building (i.e. an enterprise app) by mixing the concrete with shovels in a wheelbarrow to assembling pre-stressed concrete sections of floors and walls. The Design Time Wizards that Jeremy is (rightly) eschewing need to become much more standard, much more flexible, much more powerful.

There is a quantum shift in the programming paradigm coming; just when it will arrive, how, and who will drive it is the mystery.


DDD and ORM blog posts of note...
30 October 06 01:45 PM | MikeD

Ted Neward has a great post that compares ORM to the Vietnam war. http://blogs.tedneward.com/2006/06/26/The+Vietnam+Of+Computer+Science.aspx

And Paul Gielens on Domain Driven Design http://weblogs.asp.net/pgielens/archive/2006/10/20/Swear-by-a-Domain-Model.aspx

and on the database's role http://weblogs.asp.net/pgielens/archive/2005/09/11/424836.aspx

and a great comparison of DDD vs Microsoft's Three-Layered Services Application (their standard architecture used for PetShop and others). http://weblogs.asp.net/pgielens/archive/2006/08/08/Organizing-Domain-Logic.aspx


Unit testing Cmdlets for PowerShell
26 October 06 03:59 PM | MikeD

I don't use PowerShell myself, but this blog post caught my eye. Quoting from it:

If you miss VS, intellisense, TD.NET, etc., you might want to try extending PowerShell with custom cmdlets, which are .NET classes deriving from Cmdlet. They allow you to extend PowerShell while still programming in your favorite language.

Read Pablo Galiano's post for a step-by-step introduction to Cmdlets.

I'm hooked to PowerShell. It's been really fun to learn, and I'm loving it. 

I'm also hooked to Test Driven Design (that's what TDD should mean, IMO), so I naturally looked for a way to develop my cmdlets in a TDD way. Turns out that it's fairly easy.

As Joel would say, it makes me feel all warm and fuzzy when people take a "Test First" approach to something (and then he would say "...and the villagers dance.")
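A cmdlet really is just a .NET class, which is what makes it so test-friendly. Here's a minimal sketch of the idea - the class, parameter, and test names are mine (not from Pablo's post), and it assumes NUnit plus a reference to System.Management.Automation:

```csharp
using System.Collections.Generic;
using System.Management.Automation;
using NUnit.Framework;

// A trivial cmdlet: Get-Greeting [-Name <string>] writes "Hello, <name>".
[Cmdlet(VerbsCommon.Get, "Greeting")]
public class GetGreetingCommand : Cmdlet
{
    private string name = "World";

    [Parameter(Position = 0)]
    public string Name
    {
        get { return name; }
        set { name = value; }
    }

    protected override void ProcessRecord()
    {
        WriteObject("Hello, " + name);
    }
}

[TestFixture]
public class GetGreetingCommandTests
{
    [Test]
    public void GreetsTheGivenName()
    {
        GetGreetingCommand cmdlet = new GetGreetingCommand();
        cmdlet.Name = "PowerShell";

        // Cmdlet.Invoke<T>() runs the cmdlet without a shell session,
        // yielding whatever ProcessRecord passed to WriteObject.
        List<string> output = new List<string>();
        foreach (string line in cmdlet.Invoke<string>())
        {
            output.Add(line);
        }

        Assert.AreEqual(1, output.Count);
        Assert.AreEqual("Hello, PowerShell", output[0]);
    }
}
```

Because the test never starts PowerShell itself, it runs happily inside VS, TD.NET and friends - which is exactly the appeal.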


Model-View-Presenter in ASP.NET
26 October 06 03:54 PM | MikeD | 1 comment(s)
Haacked has a great article on the nuts-and-bolts of building ASP.NET pages using the Model-View-Presenter pattern and actually testing them (using real TDD techniques, not just saying they should be used).
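The heart of the approach is that the page implements a dumb view interface while all the decision-making lives in a presenter you can instantiate in a plain test. A hypothetical sketch (not Haacked's actual code):

```csharp
// The view contract that the ASP.NET page (code-behind) implements.
public interface ICustomerView
{
    string CustomerName { set; }
    string ErrorMessage { set; }
}

// The presenter holds the logic and knows the view only by its interface.
public class CustomerPresenter
{
    private readonly ICustomerView view;

    public CustomerPresenter(ICustomerView view)
    {
        this.view = view;
    }

    public void Display(string name)
    {
        if (name == null || name.Length == 0)
            view.ErrorMessage = "No customer selected.";
        else
            view.CustomerName = name;
    }
}

// In a test, a hand-rolled fake stands in for the real page -
// no web server, no HttpContext required.
public class FakeCustomerView : ICustomerView
{
    public string ShownName;
    public string ShownError;
    public string CustomerName { set { ShownName = value; } }
    public string ErrorMessage { set { ShownError = value; } }
}
```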
Spectrum of Knowledge? No...Landscape of Knowledge
26 October 06 03:48 PM | MikeD | 1 comment(s)

Phil writes about the Spectrum of Knowledge...

I've put together something similar for Knowledge of both users and developers.

I think HTML is the easiest for developers to use (think back to the 90's, when every man and his dog was writing HTML) and that the web is the most comfortable for 99% of users out there; they know and understand the web and how to use it, while Windows applications are much less comfortable for most.

Not sure it's 100% accurate but as a Web Developer it's certainly how I see it.

What do you think?

No, I don't agree. I can see how it's fairly easy to line up these technologies and assess users' and developers' relative knowledge of them, but I think it entirely depends on where you are living in this "spectrum".

Me, I'm not really a web developer primarily, so I can't say that my knowledge increases in the same direction - probably exactly the opposite. And the users I have experience with (currently business users of a PowerBuilder enterprise-wide app) have far more knowledge of WinForms-type apps than they do of browsers and websites.

So if I were to draw this, I would probably draw a landscape of overlapping circles, where each circle represents a certain technology and the overlaps are where two technologies share ground. Then the users and developers would be placed somewhere on this horizon or landscape, and their relative knowledge would be a colour-gradient circle surrounding them - the further away from their position, the less they know of it. (I think there are large pieces of the developer landscape missing from Phil's drawing - database (SQL), integration (BizTalk), collaboration (Groove?), the more infrastructure-type areas of developer technology.)

Some people have been able to position themselves over time in lots of different places in the landscape of technologies, and perhaps their circles of knowledge are not as large wherever they have stepped, or the colour quickly fades. On the other hand, devs or users that spend a lot of time in a particular area can extend their circle of knowledge and saturate their colours because the depth and completeness of their knowledge increases.

That's how I would draw it, if I had Office 2007 drawing tools.

Interface first?
25 October 06 03:04 PM | MikeD

I've been re-reading this post from Sijin Joseph for most of the afternoon.

...I got it into my head that I should "Always program to an interface not an implementation". So whenever I needed to create a new class/abstraction I used to start off with an interface IThis, IThat etc. Over a period of time I saw that most of my interfaces were only ever implemented by one class. This started becoming a pain to maintain, as adding a new method meant that the method needed to be added to both the interface and the implementation class. Further, since there was only one implementation of the interface, some developers started substituting one for the other and freely casting between the interface and the implementation. This became a source of constant code review changes and eyesores in code, with some methods expecting interfaces and others expecting the implementation class.

I identified with his problem immediately - recently I have been using the Smart Client Software Factory from patterns & practices (except not in C# but in VB.NET, so the nice GAT recipes aren't there). Their pattern of web service proxies is really nice, and the Model-View-Presenter pattern is also useful. However, I found on the first app I wrote using these two patterns that a change in the web service WSDL (usually to method parameters) caused a large cascade of code changes - in the command class parameters, in the interface code, in the service methods that call the command classes, and so on. I was wishing at the time that I could just say "here is my (new or changed) web service - please adjust my code accordingly" through some GAT recipe.

Often a major change was easier to apply in the code by going to the "Implements IMyChangingInterface" line in the class code (the one with the blue squiggly line under it) and pressing Enter, so that VB's IntelliSense would create all the method stubs with the Implements IMyChangingInterface.YaddaYadda stuff on them, then moving that method header to the "old" one and replacing it. Then mess with the internal method code to deal with the new parameters.

The alternative I am considering is to program the concrete class first, use it everywhere, and then have the refactoring tools extract the interface at the end (or at least when the code churn settles down). However, I think there are advantages to using interfaces early - testing, for example, can be better factored early on by using interfaces. And if you leave the interface extraction until later, how do you decide where you want to use the interface?
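In code, the concrete-first route looks something like this (an illustrative sketch, not anyone's real API):

```csharp
// Step 1: write and use the concrete class everywhere while it churns.
public class OrderService
{
    public decimal GetTotal(int orderId)
    {
        // ...real lookup elided...
        return 0m;
    }
}

// Step 2: when the churn settles, Extract Interface pulls out only
// the members that callers actually use...
public interface IOrderService
{
    decimal GetTotal(int orderId);
}

// ...and OrderService gains ": IOrderService". Callers that need a
// test double switch to the interface; everyone else can keep the
// concrete type - which is exactly the "where do I use it?" question.
```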

Ideally, I think I'd like a code tool that allows me to synch up a concrete class with its interface - but let it be a two-way synch if I want it, and be smart about identifying the method signatures that should get updated and where. Like RedGate's SqlCompare, for example.

Ok, if I had any spare time, I'd dig into the GAT stuff and write those two recipes - synching a concrete class with an Interface, and updating the web service proxy command and service classes with an updated WSDL.

I still think I would use Interfaces, one way or another, but I do understand Sijin's pain around them.

.NET, Java, Patterns, Abstraction, YAGNI
17 October 06 10:03 AM | MikeD | 1 comment(s)

Some interesting tidbits

 From Ted Neward's blog:

At the patterns & practices Summit in Redmond, I was on a webcast panel, "Open Source in the Enterprise", moderated by Scott Hanselman, with myself, Rocky Lhotka, and Chris Sells as panelists. Part of the discussion came around to building abstraction layers, though, and one thing that deeply worried and disappointed me was the reaction of the other panelists when I tried to warn them of the dangers of over-abstracting APIs.

You see, we got onto this subject because Scott had mentioned that Corillian (his company) had built an abstraction layer on top of the open-source logging package, log4net. This reminded me so strongly of Commons Logging that I made a comment to that effect, warning that the Java community got itself into trouble (and continues to do so to this day, IMHO) by building abstraction layers on top of abstraction layers on top of abstraction layers, all in the name of "we might want or need to change something... someday". It was this very tendency that drove many developers to embrace YAGNI (You Ain't Gonna Need It) from the agile/XP space, and remains a fiercely-debated subject. But what concerned me was the reactions of the other panelists, whose reaction, paraphrased, came off to me as, "We won't make that mistake--we're smarter than those Java guys."

And from Sam Gentile (who quotes most of the rest of the above blog entry from Ted Neward):

I really loved Ted's post on the above with the quote, "But Java still has much more it can teach the .NET community: mocking, unit-testing, lightweight containers, dependency-injection, and the perils of O/R-M are just part of the list of things that the Java community has close to a half-decade's experience in, compared to .NET's none." Amen. I have been making this same point for years. Some people in the .NET/Microsoft community think all this stuff is whacked because it's not part of an MSDN article, but these things are part and parcel of great software architecture and development, and the .NET community is way behind here. When I do my SOA talk around the country and talk about Software Architecture, I ask the audience if they have one of the bibles, Evans's Domain-Driven Design, and almost no hands go up! Repositories, DI, O/R-M - gosh, I must do the database-driven stored proc thing all the time because Microsoft tells me to. I am really hoping that key books like Jimmy Nilsson's Applying Domain-Driven Design and Patterns: With Examples in C# and .NET start to solve this issue, about which Martin Fowler says, "Many people in the Microsoft community have not been as good as others in propagating good design for enterprise applications...this book is a valuable step." Here's hoping (again).

I am finishing Jimmy Nilsson's book Real Soon Now and going on to Software Factories next. But I think I would also like to read Evans's book on DDD and Martin Fowler's Patterns of Enterprise Application Architecture. Yeesh. That's a lot of stuff on the shelf.


"Developers" AND "Programmers"
13 October 06 10:38 AM | MikeD | 6 comment(s)

A recent (uncredited) article on Hacknot discusses the terms Programmer and Developer.

A modern programmer loves cutting code - and only cutting code. They delight in code the way a writer delights in text. Programmers see their sole function in an organization as being the production of code, and view any task that doesn't involve having their hands on the keyboard as an unwanted distraction.

Developers like to code as well, but they see it as being only a part of their job function. They focus more on delivering value than delivering program text, and know that they can't create value without having an awareness of the business context into which they will deploy their application, and the organizational factors that impact upon its success once delivered.

Ok, so far I am with this (mystery) author.

I look at these two roles and I can think of many examples of people I have worked with who fit into one of these molds, or somewhere along the spectrum in between.

Is there anything wrong with either of these two roles?

Well, read the article further, and you'll see considerable derision directed at the "modern programmer". There are several examples before these...

...programmers would be the hares, and developers the tortoises. Programmers, prone to an over-confidence resulting from excessive faith in technology's ability to save the day, will find themselves facing impending deadlines with work still to go that was meant to be made "easy" by that technology, but was unexpectedly time-consuming. Not surprisingly, the technology doesn't ameliorate the impact of too little forethought and planning...

Programmers often view their user base with disdain or even outright contempt, as if they are the ignorant hordes to whose low technical literacy they must pander. They refer to them as "lusers", and laugh at their relative inexperience with computing technology. Their attitude is one of "What a shame we have to waste our elite programming skills solving your petty problems" and "You'll take whatever I give you and be thankful for it." Programmers delight in throwing technical jargon at the user base, knowing that it won't be understood, because it enables them to feel superior.

...Programmers tend to rush headlong into tasks, spending little time considering boundary conditions, low-level details, integration issues and so on. They are keen to get typing as soon as possible, and convince themselves that the details can be sorted out later on.

So, according to my interpretation of this article, what differentiates the developer from the programmer is the ability to communicate with the business/client/user, professionalism in coding practices, and some sort of social maturity.

Is there nothing good about being a programmer?

My dad owned a construction business for 35 years. His workforce varied over the years from a dozen or more hired men to just a couple. He built mostly residential homes and some farm and commercial buildings in a small rural Prairie town. There were usually one or two other small construction businesses in town that were his competition. He basically ignored them and focussed on producing high-quality work for a reasonable price. He didn't care about the volume of business he did, so long as he could pay the bills and raise his family. One of many things I learned from him was that the quality of work was directly related to the attitude of the worker towards the work, and not necessarily the attitude of the worker towards the client. Some of his workers could relate well to the customer; others didn't do so well that way. For the ones that didn't communicate so well, it was important that they kept busy and kept their mouths shut when the customer was around. But many of them could produce really excellent results - fine finish carpentry takes skill and care, for example, that not everyone has. Other workers could talk and joke with the customer and yet not be the best person to build the kitchen cabinets. In the end, there wasn't really any particular correspondence between those roles. Some could do both, some could do one better than the other. It didn't really matter. There was room on the crew for anyone who cared about their workmanship.

What's good about being a "modern programmer", to me?

Deep understanding of a technology

I would say a programmer has a deep and broad understanding of one or more particular languages or frameworks. In the .NET world, that would be someone who has a great understanding of lots of the .NET Framework, and probably has gone deep with some areas of it that aren't necessarily mainstream, like System.Security.Cryptography, or System.Reflection. As a result, they understand how best to use the language or framework to get the results they want. In other words, they "embrace the technology" and can code something the "C#-way" (or the TSQL-way or the whatever-way).

A developer doesn't always have the time or interest to go deep with a particular technology, or perhaps what they knew deeply has now moved on (say they were really productive in Visual Basic 5, but haven't kept up and aren't intimately familiar with VB.NET). So hand a coding task off to a "developer" familiar with VB5, ask them to produce code in VB.NET, and that code will probably work - and look remarkably like VB5 code. And it probably won't take much advantage of the VB.NET language features or the .NET Framework.


A developer, according to the article, has "a much more cautious approach to new technology" than a programmer. A healthy dose of salt is required when looking at new technologies, of course, and I think mature programmers understand that as well. However, if programmers are more interested in new technologies than developers, are there benefits to that? I think so - one being the ability to see what is coming on the horizon and be ready for that wave when it arrives. If a programmer sees a particular feature in a new technology that could solve a problem, then they may be able to see how they could be ready to take advantage of that when (and if) the new technology can be adopted.

Elegant Abstractions

I am convinced that one person's abstraction is another person's voodoo. Some people are hugely productive with EMACS and VI, not me. As I have been teaching .NET languages over the past few years, I see that some people can understand abstractions in object oriented languages, others find it more difficult, or just not natural. Some students ask me, "do I have to use inheritance in .NET?" and the answer is "yes and no". Really, you ARE using inheritance whenever you build a Windows Form or an ASP.NET page or an XML web service. But you don't have to create a large (deep and wide) class hierarchy to code your requirements. However, often a well designed and implemented class hierarchy can introduce layers of abstractions that are truly elegant and flexible and simplify the code, but might be daunting to the maintenance programmer.

When I first heard about Generics in the .NET Framework 2.0, I thought they would be interesting to use for strongly-typed collections, but beyond that they wouldn't be of much use to me. Now that I have used them more, I have seen more patterns of classes that can be elegantly implemented using generics, and they're not just collections. In the project I last worked on, we based all our interactions between the .NET world and the BizTalk world on message classes (schemas) that were implemented using generic base classes. It's an elegant and surprisingly simple thing now to create a new message class to converse with a BizTalk orchestration, and by using the generic features, we enforce a corporate standard in the message schemas easily. The point is, until I had a deeper understanding of how generics are implemented in the .NET Framework 2.0, I didn't understand how to use the abstractions they provide.
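I can't post the project's code, but the shape of the idea is roughly this (all names invented for illustration):

```csharp
using System;

// A generic envelope: every message gets the corporate-standard
// plumbing from the base class, and only the body type varies.
public abstract class MessageBase<TBody> where TBody : new()
{
    private Guid correlationId = Guid.NewGuid();
    private TBody body = new TBody();

    public Guid CorrelationId { get { return correlationId; } }
    public TBody Body { get { return body; } }
}

public class CustomerUpdatedBody
{
    public int CustomerId;
    public string Name;
}

// Declaring a new message to converse with an orchestration is now
// one line; the standard envelope comes along for free.
public class CustomerUpdatedMessage : MessageBase<CustomerUpdatedBody> { }
```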

I'm not going to say here that Programmers understand abstractions better than Developers. If Developers are "closer" to the business/customer/user than Programmers, then I would say that Developers would better understand the abstractions of the business domain while Programmers might better understand the abstractions of the technical domain. It's just different viewpoints in my opinion. Both views are important.

I probably differentiate Programmers from Developers differently than the author of this article. But I value both of them on a team. If there's someone that can easily talk to the customer and come to understand their needs and pains, I want them on my team. If there's another person who I would never put in front of the customer, who only eats flat food, and who creates vast amounts of great code in a short period of time, I want them on my team too.

Software Factories and the YAGNI factor
10 October 06 10:46 AM | MikeD | 2 comment(s)

Steve posted his concerns over software factories and the YAGNI factor. ("You Ain't Gonna Need It")

My concern with Software Factories is that the output from these procedures is chock full of code that you're not going to need, but is included in the mix because that's how the factory works. I just went through a project where we used a factory approach where our plumbing was provided for us, but it was overkill for our needs. And instead of helping us out, its complexity slowed us down.

The output of the factory was full of YAGNI code, but we couldn't easily separate it out from the code that we actually needed.  So we keep it all in place and hope that we never have to touch it and live with a project that is full of code that we just don't have the time to understand.  Our simple project that should have consisted of 1000 lines of code ballooned to ~15,000.  Ouch!

I've heard similar comments about Typed DataSets vs Business Objects - if you count the lines of code in a typed DataSet, it far outweighs what you would hand-write for an equivalent business entity object.

If I recall the above project Steve mentions correctly, the original intention was to include much of that code as external assemblies, and not as source code at all, right Steve? So to me it hardly counts. And I believe you ended up using some other components, like the OpenNetCF framework too. I'm betting you only used a fraction of that framework's functionality too, so is your distaste based on having more raw code for the compiler to chew through?

Just because we don't use the generated code, does that mean it is smelly?

To some degree I suppose, but it happens everywhere, not just in source code - I will never claim to build a project that uses all of the .NET Framework, or a database that uses all the functionality of SQL Server 2005, or a website that uses all of ASP.NET. When I have used typed Datasets, I have never used all of their functionality either.

So, my question is, do you judge the value of a software factory (in Steve's case, read Code Generation) based on how much it covers your required functionality? And do you reduce its value based on how much unused/unwanted functionality it provides you?

Here is another question, taking this situation to the extreme: if the tests define the functionality, do you then also test to ensure that the components you are testing DO NOT provide additional functionality? Of course you can never write enough tests to prove that no additional functionality is present. What you must avoid doing, in my opinion, is relying on a component's feature (or indirectly, relying on a component's behaviour that uses another component's feature) without having a corresponding test for that behaviour. Otherwise your tests do not define the complete functionality of your component.
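Concretely, that means writing a pinning test for any generated feature you decide to lean on - a made-up example:

```csharp
using NUnit.Framework;

// Pretend StringFormatter came out of a code generator, and Pad() is
// a feature its own test suite never covered. Since our code relies
// on it, we pin the behaviour down ourselves.
public class StringFormatter
{
    public string Pad(string s, int width)
    {
        return s.PadLeft(width);
    }
}

[TestFixture]
public class StringFormatterPinningTests
{
    // If a regenerated version drops or changes Pad(), this fails
    // loudly instead of our app failing quietly in production.
    [Test]
    public void PadRightAlignsWithinTheGivenWidth()
    {
        StringFormatter f = new StringFormatter();
        Assert.AreEqual("  abc", f.Pad("abc", 5));
    }
}
```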

So, if someone uses our component and relies on functionality we never tested (due to code gen or the software factory's YAGNI features), they should write a test that ensures that behaviour is correct (and be prepared to reproduce that behaviour when the referenced component removes that feature [presumably because Steve thought it was smelly :)]).

Don't get me wrong: I am not in favour of bloatware, nor of adding in features I Ain't Gonna Need. I have stood accused of adding features I think might be useful, I will admit that. And I am much less averse to frameworks than Steve is, having had more positive project experiences with them than he has. Steve, I think you would appreciate Jimmy Nilsson's book "Applying Domain-Driven Design and Patterns" for his approach of ensuring the core domain's knowledge and behaviour is properly represented and testable first, and adding in the infrastructure to support it (like persistence and presentation) later (iteratively, I'm sure he would want me to say). I am just finishing it and I have enjoyed it immensely. Jimmy blogs here.


MSBuild SQL database synchronization tasks (for RedGate SqlCompare and SqlDataCompare)
04 October 06 03:22 PM | MikeD | 2 comment(s)

Phil just posted a couple of MSBuild tasks that he uses to synchronize the schema and data of two databases, using the API's for RedGate's SqlCompare (for schema) and SqlDataCompare (for data).

The next time I set up a project build server, I will try these out for size. It would be nice to be able to synchronize the schema from a dev database to a unit testing database, and the data from a unit database to a test database to a production database. The data synch task he wrote would need to be enhanced a bit to be more selective and flexible about which tables to synchronize and how to synch them.
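In an MSBuild project file, hooking in a task like that would look roughly like this - the task and attribute names here are illustrative only, not Phil's actual signatures:

```xml
<Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <UsingTask TaskName="SqlCompareTask"
             AssemblyFile="Tasks\RedGate.MSBuild.Tasks.dll" />

  <Target Name="SyncUnitTestSchema" DependsOnTargets="Build">
    <!-- push the dev schema onto the unit-test database -->
    <SqlCompareTask SourceServer="devbox"   SourceDatabase="AppDev"
                    TargetServer="buildbox" TargetDatabase="AppUnitTest" />
  </Target>
</Project>
```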

The project I am currently working on has issues with unit tests that fail because the data they depend on gets reloaded on the database, or because a test changes the data and becomes unrepeatable. I can see this task being useful for keeping a "clean" unit test database that never has tests run against it, and using that database as the source to synch the actual unit test database from.

These tasks could help in staging deployments too. By separating the work into a "diff" task and a "synch" task, a build process could "compile" the output (i.e. a SQL script) from the diff of one database against a snapshot of the current production (or test or staging or whatever) database; then another task could run a test deployment of that script against the target environment that needs updating. If that is successful, the final production deployment should be reproducible from the compiled output.

BTW, these RedGate products, SqlCompare and SqlDataCompare (they have others, but these are the ones I use), THEY ROCK!


Filed under: , ,