September 2004 - Posts

House Sold Officially -- New House Almost Done

We have officially sold our house -- closing was this afternoon -- so we are now homeless for 3-4 weeks.  :)  We're actually renting back from our buyers for the next month, so we're not literally homeless, but we're not home-owners at the moment.  Our new house only has a few small items still to be done, along with the final cleanups; we'll close on it on October 20 and have 9-10 days to move our stuff.  It's a great feeling to have it all work out -- we didn't have to move twice and rent an apartment, as we would have if our house had sold too fast, and we didn't end up with two payments, as we would have if it hadn't sold.

I love the internet!  We sold our house ourselves -- without a selling agent.  All we did was stick up a sign and some flyers, put some small pointer signs up at the corners on the weekends, list ourselves on the MLS listings ($360), and put up a website ($8).  I know for a fact that the website made a difference -- the buyers were looking at a house just like ours in our subdivision, saw ours when leaving, pulled the flyer, and looked at our website.  They were ready to make an offer on the other house the next day, but our website's photos and extra information made them delay and look at ours instead.  We were not home when they came by, and without the site, with more photos and details than my flyers, they would not have bothered trying further to get in contact with us.

I love the internet!  I saved $500 on my new refrigerator by price-shopping and model comparisons -- and Sears lost my business since they were unwilling to deal in this new world.  I also saved another $500 on some new living room furniture through price-shopping and model comparisons -- and Rich's/Macy's lost my business since they were unwilling to deal in this new world.  And I could go on and on -- a couple of years ago I got a $600 Treo for $150, for instance -- and yet some people are still shopping in these old stores that just don't get it -- but for how much longer?

Posted by PaulWilson | 5 comment(s)

O/R Mapping or Code Generation

I was asked to contribute an item to foster a debate on O/R Mapping or Code Generation -- check it out here and add your comments.  I was a little hesitant to start this particular "debate", since as I've noted before I think both options are far better than the alternative of doing everything manually -- but I guess I should defend O/R Mapping since I do sell one for $50.  :)
Posted by PaulWilson | 1 comment(s)

More Problems with SP1 for .NET v1.1

Last time I ran into problems with SP1 for .NET v1.1 it was because it was installed on my web server and the client validation files were out of sync.  I had not installed it yet on my own machine, so I did not have any problems with using it myself, but I did go ahead and install it after that episode.  Today I finally got around to making another small set of changes in my web app -- for the first time with SP1 for .NET v1.1 on my own development box.  Well, to make a long story short, somehow SP1 either requires different security settings for debugging, or it changed my settings during the installation process.  Or maybe it was because I couldn't install SP1 on top of my existing .NET v1.1 install (as many others have noted also) and so I had to uninstall and reinstall .NET v1.1 before installing SP1.  Anyhow, I now have to turn on Windows authentication to debug my web applications!  Note that I do NOT mean that I have to turn off anonymous access -- that's still on too -- but I do have to also turn on Windows auth at the same time.  Maybe there's something else I'm missing, but I never saw this before, I don't see any way around it now, and that's all the information the error message provides.  A search on the error message only turned up how to turn on Windows auth (duh) -- it gave no clue as to how to stop needing Windows auth in the first place.  :(
Posted by PaulWilson | 2 comment(s)

More Problems with GotDotNet Workspaces

My Xml ADO.NET Provider on GotDotNet Workspaces has had its release disappear once again.  Last time I re-released my files, and then later the original release re-appeared, leaving me with two confusing releases.  I guess I'll just wait and see this time.  :(  I have also been totally unable to approve a user that requested to join my Workspace.  I've tried many times over several days, but I get a generic error every time.

Update: It came back on its own after a few hours.

Posted by PaulWilson | 3 comment(s)

GotDotNet, SourceForge, and Other Ramblings

I really like SourceForge much better than GotDotNet Workspaces.  Why?  As an end-user, SourceForge always works, and works fast.  On the other hand, GotDotNet Workspaces have a history of not working, or of being too slow to be acceptable.  That said, they do seem to be getting better lately, but they still aren't quite there yet at times.  Anyhow, I already posted recently about my first experiences with SourceForge from the contributor perspective, and while it wasn't "easy" I can totally say that it was reliable.  So now I've put my latest on GotDotNet -- and to be fair I should share my experiences again.  By the way, why did I switch?  Simply because SourceForge rejected my proposal, and for no other reason, since I still think they are the far better choice.  They didn't really specify why they rejected my proposal -- it was just a generic message that said it could have been anything from a lack of sufficient details to not being a needed project.

So, back to GotDotNet.  First, I tried several times over 2 days to create my GotDotNet Workspace, but it kept erring out.  Note that it never once gave a reason for the error, and sometimes it failed quickly, so I know it wasn't just a timeout.  Also, when it finally did create my Workspace it was with input no different from the other times!  Next I added all my files -- and SourceForge is much better here with Tortoise and CVS.  Why?  Well, I've never been able to contribute to a GotDotNet Workspace with anything other than their simple Html file upload -- and that sucks since you have to do one file at a time.  I think the problem has something to do with their integration with Passport -- more on that in a second.  Finally, I made my first release and everything was fine -- at least for part of a day/night.  Then today I checked before making my announcement and my release was gone, with no history of it ever occurring.  So I made another release, only to find that much later today I had 2 releases, since the other came back mysteriously.  I have removed the first one; of course I'm still crossing my fingers that the other doesn't disappear now.  :)  Another annoyance with GotDotNet is that even with the url alias they allow you to set up, you still end up at a cryptic url with a guid every time -- and the level of services in no way matches up with SourceForge's.

Finally, mentioning Passport earlier reminded me how much Passport stinks on some occasions.  I think my main problem is that I have 2 Passports -- one from my older pre-WilsonDotNet days, and then my newer one associated with WilsonDotNet.  So some things, like GotDotNet, have my older passport, and this means that every time I go to these sites they have no idea who I am.  Instead I have to sign out of Passport, and then sign back in with my older passport, but since my new one is the default this only seems to half work.  There should be some way to merge Passports together so that you can avoid these issues (and maybe there is and I haven't seen it).  Anyhow, it's never really caught on as Microsoft hoped, and it's always more annoying than any other system -- so they should really give up on it!

Posted by PaulWilson | 4 comment(s)

News about IIS 7 -- IIS done Modular

I see on Fritz Onion's blog that Scott Guthrie has publicly talked about IIS 7.  I think this is targeted for Longhorn Server, so don't get too excited, but IIS 7 rocks!  I saw an early preview last year at the MVP conference and I think admins will love it.  It's easily configured with the new xml config files, and it's totally modular.  This means that you don't even have to install the modules for things you don't intend to use!  That's security done right, and it should sound a lot like something called Apache.  :)
Posted by PaulWilson | 6 comment(s)

What Makes an Effective Software Manager?

I was recently reading Code Magazine and came across an interesting editorial by Rod Paddock about effective software managers.  I have worked with several decent managers, along with a few not-so-good managers, but I've only ever worked with one exceptional manager -- and the things she did were exactly what Rod described here.  By the way, since I only have good things to say about her, I assume she won't mind me saying her name -- it was Stephanie Barulic (spelling?) when I was a contractor at Clarus Corporation in the good old DotCom days.  We were building yet another procurement package called eMarket, based on Microsoft Commerce Server (yuck), which won Microsoft's Global eCommerce Product of the Year Award while I was there.  Of course, winning awards in those days was more about being properly connected than actually having a great product, so I'm not trying to claim anything extraordinary here.  I'm pretty sure Stephanie's team was the smallest of the three major teams on the project, but we consistently got more done, on time, with fewer bugs, than all the others.

So what did she do that was so exceptional?  First, when I was assigned to her team, she had me work only on bug fixes for a while.  This was rather annoying to me, but I did my time and did it well, and I was rewarded by being put in charge of a major piece after that.  A few others did not do so well and she had them released or re-assigned -- no other team to my knowledge ever released someone -- and the lack of quality showed at times.  She also put me over someone with more seniority, and who was better connected than I was, which was not the way most others (if any) ever picked their team leaders.  The other quality Stephanie had that I absolutely think is critical, and this was mentioned in Rod's article, which made me think of her, was that she was willing to make "the call"!  I actually disagreed with several of her calls, although I can't even remember what they were now, since that's not important -- what was important was that she made "the call" when it was necessary.  This kept us more focused and working harder than any other team, with far fewer meetings than any others too.  Finally, not only did she clearly say who wasn't up to par, she also clearly gave credit to those on her team for the work that they did successfully.

Posted by PaulWilson | 8 comment(s)

Examples of O/R Mapping vs Stored Procedures

Mike Schinkel responded to my previous post about "dynamic sql" by asking for a real-world comparison "example" that would clearly demonstrate the pros and cons of O/R Mapping.  I seriously thought about providing such an "example" for a few minutes, but (1) there are already enough "examples" for those that are truly interested and (2) it would be a lot of work, which I try to avoid.  :)  Seriously, see my ASPAlliance article for an introduction, see my site for more snippets, and finally, and most importantly, download my (or someone else's) ORMapper demo and see it all work for yourself.  And I really do seriously mean that giving you what you want would be a lot of work -- but only on the "traditional" side, since the examples on the O/R mapping "side" are quite trivial, as the article, snippets, and demo should make very clear if you think about it.

Why?  Let's say for a moment that our "example" is going to just be a single simple table with a few fields, along with the basic CRUD, and nothing else.  The first thing I need to do for both "sides" is to create my database table -- simple enough.  Now for my ORMapper, all I need to do is create a very simple entity class and an xml mapping file, both of which can either be done with my automated tool (the ORHelper) or manually without too much work.  An entity class, if you don't know, is typically a class with private member variables for each field and public properties to wrap those members with any additional business or validation logic necessary.  Note that you are probably also going to be creating this very same entity class in the "traditional" approach, unless you are going to work with untyped and bloated datasets (yuck), so the entity class is something you need on both "sides" again.  That really just means that the xml mapping file is the only thing extra required to use the ORMapper -- and again it can be generated or created without too much trouble by hand (see my demo or snippets to see for yourself if you don't believe it).  Seriously, that's all -- now you just create an instance of the ObjectSpace, passing it your connection string and the mapping file, and you can instantly create new objects, update them, and delete them -- let alone retrieve entire collections, filtered and sorted collections, and paged collections -- for any database provider at that.
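To make the entity-class concept concrete, here is a minimal sketch in C# -- the Customer name, fields, and validation rule are all hypothetical, not taken from my demo:

```csharp
// Hypothetical entity class sketch -- names and validation are invented.
using System;

public class Customer
{
    private int id;          // wraps the table's primary key
    private string name;     // wraps a Name column

    public int Id
    {
        get { return this.id; }
        set { this.id = value; }
    }

    public string Name
    {
        get { return this.name; }
        set
        {
            // any extra business or validation logic lives in the property
            if (value == null || value.Length == 0)
                throw new ArgumentException("Name is required");
            this.name = value;
        }
    }
}
```

The same class serves both "sides" -- only the xml mapping file is extra on the O/R mapping side.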

What about the "traditional" approach?  We've already noted that we need the database table and entity class for both "sides", so what else is needed?  We need a minimum of 4 stored procedures, and I really mean a minimum, since you will likely want at least 5 or 6, or even more.  Why a minimum of 4?  One for inserts, one for updates, one for deletes, and one for retrieval -- and it's that last one that may require several variants even in the simplest of cases.  Why?  How about one stored proc to retrieve all objects, one to retrieve a single object by its primary key, and one or more to retrieve other variations.  Note that even this, a simple situation where just 6 stored procedures are involved, is often not enough, since that "other variations" concept can get rather involved.  Again, why?  You probably want to retrieve one object by some other more "natural" key, and you are likely to want to retrieve collections with various filters and sorts -- and each of these either requires a new proc -- or you're going to end up creating a huge ugly proc that accepts parameters for filters and sorts and which then uses dynamic sql to get the resultset!  And that kind of brings us back full circle to what you wanted to avoid, and we haven't even hooked anything up yet -- all we've done is create a bunch of boring stored procs.  Before we move on to see what else is needed, let's note that your options really are to either limit your query flexibility, write lots of stored procs, or write one massive proc that uses dynamic sql anyhow -- and which will likely now require your application to know all of the field names in order to even work.
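To put some code behind that count, here is roughly what the minimum proc set looks like for a hypothetical two-field Customers table (all table, proc, and parameter names here are invented for illustration):

```sql
-- Hypothetical minimum proc set for a two-field Customers table.
CREATE PROCEDURE CustomerInsert @Name varchar(50) AS
  INSERT INTO Customers (Name) VALUES (@Name)
  SELECT @@IDENTITY
GO
CREATE PROCEDURE CustomerUpdate @Id int, @Name varchar(50) AS
  UPDATE Customers SET Name = @Name WHERE Id = @Id
GO
CREATE PROCEDURE CustomerDelete @Id int AS
  DELETE FROM Customers WHERE Id = @Id
GO
CREATE PROCEDURE CustomerRetrieveAll AS
  SELECT Id, Name FROM Customers
GO
CREATE PROCEDURE CustomerRetrieveById @Id int AS
  SELECT Id, Name FROM Customers WHERE Id = @Id
GO
-- ...and that's before any "natural" key lookups, filters, or sorts.
```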

So what else is needed beyond procs?  You need a DAL, a Data Access Layer, that enables you to call all of these stored procs -- and hopefully one that is easy to use (the MS DAAB is not, in my opinion) and which actually closes all of your connections (the CSLA examples didn't at one time, although hopefully they do now after it was pointed out by yours truly).  I personally don't think a DAL is all that difficult, but then again that's why I could write an O/R mapper -- I've worked with lots of developers over the years and most fall seriously short in understanding how to use any DAL given to them, let alone build one themselves.  OK, so let's assume you have a DAL or are going to use the MS DAAB -- now you still have to add all the code in your entity class to "hook up" the class with all of these various stored procs -- and I assume you've done this enough to recognize that even using cut-n-paste this isn't trivial -- it's a lot of boring repetitive work that is prone to mistakes.  And you've either got to have your code know all of the field names in order to associate the correct values with your class members, or you've got to very carefully assume a certain field order (hopefully you'll use an enum to at least make it a little more maintainable -- but most people don't).  As noted earlier, you'll either have limited functionality, or a lot of procs to hard-code into your class, or a very big monster proc that can be used lots of ways and which will be difficult to work with in reality.
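Here's a sketch of just one piece of that "hook up" code in plain ADO.NET -- the entity, proc, and field names are hypothetical, and you get to multiply this by every proc and every entity in the system:

```csharp
// Hypothetical "hook up" code -- one method per stored proc, per entity.
using System.Data;
using System.Data.SqlClient;

public class CustomerData
{
    private string connection = "...";  // your real connection string here

    public void Update(Customer customer)
    {
        SqlConnection conn = new SqlConnection(this.connection);
        SqlCommand command = new SqlCommand("CustomerUpdate", conn);
        command.CommandType = CommandType.StoredProcedure;
        // the application must know every field name (or its exact order)
        command.Parameters.Add("@Id", customer.Id);
        command.Parameters.Add("@Name", customer.Name);
        try
        {
            conn.Open();
            command.ExecuteNonQuery();
        }
        finally
        {
            conn.Close();  // the part that some DAL samples forget
        }
    }

    // ...and again for Insert, Delete, RetrieveAll, RetrieveById, etc.
}
```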

That seems like a heck of a lot of work to me, just for one simple little table -- and no, I'm not going to create an "example" of it to compare with my O/R Mapper, since we've all done it a million times.  Come on -- you should know the amount of work your "side" requires, so why don't you download the demo of my mapper, or one of the other ones, and see for yourself how little work they require!  Yes, there are alternatives -- maybe you like to use untyped and bloated datasets instead of entity classes and commonly accepted object-oriented principles -- but you've still got to write all those procs, get or build a DAL, and then "hook up" all those stored procs for retrieval and persistence with these datasets.  Maybe you prefer typed datasets instead -- it's still going to take that same work, although now you'll at least be able to avoid some compile-time errors and have a little bit of intellisense (not much).  Finally, you could of course generate all of this "traditional" code (using CodeSmith or some other tool) -- and this is one alternative that I won't make fun of -- it really is a valid, workable alternative.  That said, you still have to either give up a lot of flexibility that a mapper will give you for free, or create and maintain some very sophisticated templates for your code generation.  For those that need the utmost control (yes, everyone thinks they are in this category, but realistically very few ever are) this is a valid option, but it's a lot of work and I know that most development teams aren't up to it!

What about maintenance?  What if you need to change a field name (silly example, but lots of people bring it up)?  For my mapper you just change the name of the field in the xml mapping file -- end of story -- no recompile.  Some mappers use attributes and would require a recompile, but what about these other "traditional" techniques?  First, you need to change the name of this field in a lot of stored procedures -- and if you're very lucky that's all -- still more work than using my mapper.  But if you hardcoded your field names in your application (and many really do, as pointed out earlier) then you're also going to have to change source code, maybe in multiple places, and do a recompile -- and you'd better have the test team do some testing.  If you used code generation then maybe you only need to regenerate the application's code and recompile -- still more work than my mapper.  Oops, most teams really end up customizing the generated code, since they didn't spend enough time on the templates, so they'll either have to redo this customization, or just give up on code generation and instead alter their procs and code manually!  What if you need to add a new field (a little more realistic)?  Add a new field to the mappings and a new member to the entity class if you're using a mapper -- "ah ha", the critics roar -- you have to recompile.  Seriously, can you really just add this field to your stored procs and never recompile?  No -- because both of us also need to add this new field to our UIs or business logic in some manner -- otherwise why are we bothering to add this field in the first place?
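As a rough illustration of why the rename is trivial with a mapper, here's the general shape of an xml mapping entry -- note that the element and attribute names below are illustrative only, not the WilsonORMapper's exact schema:

```xml
<!-- Illustrative mapping shape only, not the exact WilsonORMapper schema -->
<entity type="MyApp.Customer" table="Customers" keyMember="id">
  <attribute member="id" field="Id" />
  <attribute member="name" field="Name" />
  <!-- renaming the column is a one-attribute edit, with no recompile:
       field="Name" simply becomes field="CustomerName" -->
</entity>
```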

Hopefully I've done a fair job at convincing you that a good O/R Mapper really is a huge time-savings, both initially and for maintenance -- but what about other concerns?  Everyone knows that performance sucks -- NOT -- people can debate this one forever, so try it for yourself -- that's exactly what I did before I made the jump from "traditional" to mapper!  Most CRUD is pretty straight-forward actually, and most mappers avoid the bloat of datasets, so there's actually little difference, and sometimes they perform even better.  But seriously, there are many other factors that weigh much heavier -- network traffic or really cool UIs come to mind immediately -- so try it out for yourself and get over it.  The other gotcha is security -- again, mostly NOT -- although I'll readily admit this one isn't as clear-cut.  Basically, you are "forced" to give a database "user" direct read and write access to tables (unless you want to map stored procs and lose a lot of the benefits), but no one ever said you had to give out the credentials of this database "user" to your real end users!  On the flip side, can your "traditional" approach support a different database if you have a client that demands it (common if you sell your application), can your updates handle optimistic concurrency or update only changed fields, and is your DAL and your application easy to port to a distributed scenario when it grows?  Do your users demand more flexibility in their queries, with the ability to specify any criteria or sort, possibly along with paging?  Does your team do a good job of syncing up all the changes to stored procs, entity-DAL logic, and UI changes, as opposed to a much smaller set of changes?  (I remember how painful this was in team development using the "traditional" model.)

Finally, do you really think that your "traditional" model is really even all that traditional in the first place?  If so, then talk to some enterprise Java developers -- they are probably either using an O/R Mapper or some serious code generation!  Or what about SAP and other similar apps that are very configurable -- you'll find something very much like an O/R Mapper under the covers, although it probably won't be very generic.  For that matter, if you look closely at very configurable Microsoft applications you're going to see some very "mapper"-like use of meta-data, as opposed to everything being stored procs!  This is NOT something that Frans, Thomas, or I made up -- O/R mappers are a commonly used, and tried and tested, enterprise-scale design pattern!  That's also why half of the Martin Fowler enterprise patterns book that I have is about nothing other than O/R Mapping -- no kidding at all.  No, I didn't invent this, nor did Frans -- in fact both of us were in the "traditional" Microsoft stored proc camp until we listened, tried it for ourselves, and learned something new.  I only created my mapper since I think most of the others out there are too complex for my taste -- simplification is the one thing that I'm actually very good at.

So I stand by my claim that you should seriously think about either using a good O/R mapper or doing some serious code generation -- and if code generation, then please spend the time and do it the right way, or you'll be back where you started when you have to extend or maintain it.  The choices are simple -- spend all your time writing routine and boring CRUD instead of a killer application, use a tool like an O/R Mapper that does it for you and lets you focus on the real issues, or, if you must insist on the utmost control, then spend some real time up-front and write some killer code generation templates so that later you'll be free to focus on the real issues.  I think most projects don't justify the time or need for the utmost control, so I typically prefer the O/R Mapper approach lately, but I can also agree that a well-designed set of templates can be reusable to a large degree if you have multiple projects that you can fit into the same scheme.  Lastly, I also like O/R Mappers because I like simplicity -- I don't want to see the code of the third-party grids that I use, and I really honestly don't typically need to see the code for all the boring CRUD either.  But that's just me -- I just hope that you aren't doing it the old cut-n-paste way that many samples and books (and even tools like Visual Studio) "teach" you to do.  Actually I take that back -- I hope you do continue to use those inefficient techniques so that I'll be able to underbid you on your next project!  :)

Open-Source ADO.NET XML Provider -- WilsonXmlDbClient v1.0

Have you ever wanted to work with your Xml files as if they were databases?  Would you like to use SQL Select statements, instead of XPath, to retrieve, filter, and sort your Xml data?  What about using SQL Insert, Update, and Delete statements, in transactions, against your Xml data?  OK, I haven't either, but I had someone request the ability for my WilsonORMapper, and all that's necessary is an ADO.NET provider that knows how to work with Xml.  I like a good challenge, and couldn't find one already done, so I took a little of my spare time and created just such an ADO.NET provider that anyone can use.

So I'm hereby announcing the open-source release of v1.0 of the WilsonXmlDbClient.  The WilsonXmlDbClient is an ADO.NET provider that enables Xml to be worked with just like any other database in .NET.  It supports the most common Select, Insert, Update, and Delete SQL syntax, as well as transactions and parameters.  It works with your own ADO.NET data access code, as well as the WilsonORMapper, and it most likely will work with other ORMappers with very few changes to those mappers.  Unlike my WilsonWebForm, this one was rejected by SourceForge, for some reason they didn't specify, so it is hosted on GotDotNet.

Your Xml file must be in a format that the .NET DataSet can read, and you must include a schema if you want strongly typed data and/or identity fields.  It's easy enough to create a DataSet and then write it out as Xml, so I'm not releasing a tool to do this for you.  The following SQL syntax is supported:

SELECT *|fieldList FROM tableName [WHERE whereClause] [ORDER BY sortClause] [LIMIT pageSize] [OFFSET skipRows]
SELECT @@Identity
INSERT [INTO] tableName (fieldList) VALUES (valueList)
UPDATE tableName SET updateList [WHERE whereClause]
DELETE [FROM] tableName [WHERE whereClause]

Fields and tables can optionally be delimited by [ and ], parameters must start with @, and you can optionally use ; for the statement terminator.  No group by, sql functions, table joins, or multiple recordsets are currently supported, and you cannot use the DataAdapter for persistence (use the Sql commands instead).  Also, note that my Sql parsing is pretty "raw" (maybe someone can join the workspace and fix this), so it's certainly possible that things can go wrong if your data contains text that looks like the tokens I'm parsing.  It should also go without saying that this probably isn't going to work very well in a multi-user environment.
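Since it's a regular ADO.NET provider, usage should look like any other provider through the standard interfaces -- a sketch, with the caveat that passing the Xml file path as the connection string is my assumption here:

```csharp
// Hypothetical usage sketch -- the connection string format is an assumption.
using System.Data;
using Wilson.XmlDbClient;

IDbConnection connection = new XmlDbConnection(@"C:\Data\Customers.xml");
IDbCommand command = connection.CreateCommand();
command.CommandText = "SELECT * FROM [Customers] WHERE [Name] = @Name;";
IDbDataParameter param = command.CreateParameter();
param.ParameterName = "@Name";
param.Value = "Smith";
command.Parameters.Add(param);
connection.Open();
IDataReader reader = command.ExecuteReader();
while (reader.Read())
{
    // work with each row just like any other provider
}
reader.Close();
connection.Close();
```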

You can certainly use this ADO.NET provider in your own code, just like any other provider, but if you do want to use it with my ORMapper, then here's the CustomProvider syntax:

Wilson.ORMapper.CustomProvider customProvider = new Wilson.ORMapper.CustomProvider(
 "WilsonXmlDbClient", "Wilson.XmlDbClient.XmlDbConnection", "Wilson.XmlDbClient.XmlDbDataAdapter");
customProvider.StartDelimiter = "[";
customProvider.EndDelimiter = "]";
customProvider.IdentityQuery = "SELECT @@Identity;";
customProvider.SelectPageQuery = "SELECT * LIMIT {0} OFFSET {1}";

Service Pack 1 for .NET v1.1 Broke My ASP.NET App

Yes, it's true -- Service Pack 1 for .NET v1.1 really did break my ASP.NET application!  And the weirdest part was that it only broke it for IE users -- it still worked just fine with Mozilla!  Here's the story:

I should have updated my own development PC to SP1 already, but I had delayed it.  Why?  First, I didn't see anything that affected me in the fixes it contained, security or otherwise.  Also, I was updating a WinForms application and didn't want to inadvertently require its users to download and install a massive SP just to use my latest update.  Yes, that probably wasn't a real issue, but you never know with some of these updates, so I wanted to take my time instead.  So today I made a couple of very small changes to an ASP.NET v1.1 web application.  It worked great on my own PC, so I uploaded it to the server and deployed it without giving it much thought.  Next, I of course tried to test the updated application -- and I couldn't get the login button to work!  Note that it wasn't that my login failed, or didn't work in some way -- I literally mean that the login button no longer worked!  I checked my files, and I had made no changes to the login page, and I looked at the rendered html for the page and it looked fine too.  On a whim I tried it with Mozilla -- and it worked!

OK, so time to think about what was different.  Well, there's not much on this page -- just a userid textbox, a password textbox, and a submit button -- oh yeah, and a couple of validators.  Of course, validators aren't checked clientside by Mozilla, and if they were failing in IE then the page wouldn't even submit!  Now why would my validators have started failing all of a sudden -- after all, the validation scripts don't just change for no reason?  Oh yeah, didn't my colleague tell me last night that he was going to install the latest critical updates, and maybe that included SP1 for .NET v1.1?  So I checked the Microsoft.NET directory, and although there wasn't a new build folder, it did appear that the file dates for the validation files were indeed very recent (actually July 2004).  So I copied these files to my domain's aspnet_client folder and everything instantly worked.  Now the weird part is that my web app was working this morning, after the SP update but before my update, so I can only assume that the SP update actually updated my domain's aspnet_client folder, and then I recopied the old files over it.  OK, so maybe technically I broke it, if that's really what happened -- but the Service Pack was still the cause.
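For anyone hitting the same out-of-sync client scripts, the supported way to refresh them (rather than hand-copying files like I did) is aspnet_regiis with the -c switch, which reinstalls the client-side script files -- including the validation scripts -- into each site's aspnet_client folder:

```cmd
REM Run from the framework directory for the version in question
cd C:\WINDOWS\Microsoft.NET\Framework\v1.1.4322
aspnet_regiis.exe -c
```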

Now my question, which has been asked by others here already, is why didn't they call this .NET v1.1.1 or something else?  That would not only have hopefully avoided my problem, but it also would have allowed you to run the new SP version side-by-side with the original version -- and that would have alleviated all my original concerns in the first place.  And note that when you install this Service Pack it also tells you something about not being able to go back, so it's quite legitimate to worry, in my opinion!  I think Microsoft has done us all a big disservice by slipping updates like this into Service Packs and causing us to lose that side-by-side advantage of .NET!  By the way, the end of this story is that I tried to install SP1 onto my development computer and it repeatedly failed.  I searched and found others with similar experiences, and tried several of their suggestions without success.  I finally found one that suggested uninstalling the entire .NET v1.1 framework first and then reinstalling it -- that worked, and I was able to install SP1.
