January 2005 - Posts

Improved CodeSmith Templates for ORMapper

The following is news from Paul Welter, who's been gracious enough to create and update CodeSmith templates for my ORMapper:

I've updated the templates on the CodeSmith site above. This update contains many new features. The biggest new feature is merge support: you can now author new code in the generated entity class file and not lose your changes when the templates are re-run. The merge works by updating marked regions in the file. All your modifications will be preserved as long as you don't edit regions marked 'do not modify'. Another improvement is that the templates now generate a DataManager class containing a singleton instance of the ObjectSpace class. Having this common singleton allows for generation of some default methods for each entity, like Retrieve, RetrieveAll, Save, and Delete. The goal of these updated templates was to abstract the ORMapper from the UI layer by creating methods on the entity objects, which leads to a more truly object-oriented design. A consumer of the entity objects no longer has to know any ORMapper syntax. Working with an entity now looks like ...

Customer cust = new Customer();
cust.FirstName = "Paul";
cust.LastName = "Wilson";
cust.Save();

All the ORMapper tracking and persisting is handled internally by the Customer class. This is a much more intuitive syntax to work with.

There is also a new set of templates to generate NUnit test classes for each entity class. The test classes are very basic and designed only as a starting point. Another advantage of the improved way of working with the entity classes is that they are much easier to author tests for.
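To give a rough feel for what such a test might look like, here is a minimal NUnit-style sketch. The Customer class below is a self-contained in-memory stand-in I wrote for illustration, not the actual template output; the real class would be produced by ClassGenerator.cst and persist through the DataManager's ObjectSpace.

```csharp
using System.Collections;
using NUnit.Framework;

// In-memory stand-in for a generated entity class, included only so
// this sketch is self-contained; the generated version would call
// into the WilsonORMapper instead of a Hashtable.
public class Customer
{
    private static Hashtable store = new Hashtable();
    private static int nextId = 1;

    public int Id;
    public string FirstName;
    public string LastName;

    public void Save()   { if (Id == 0) { Id = nextId++; } store[Id] = this; }
    public void Delete() { store.Remove(Id); }
    public static Customer Retrieve(int id) { return (Customer)store[id]; }
}

[TestFixture]
public class CustomerTests
{
    [Test]
    public void SaveAndRetrieve()
    {
        // Persist an entity through its generated methods
        Customer cust = new Customer();
        cust.FirstName = "Paul";
        cust.LastName = "Wilson";
        cust.Save();

        // Retrieve it back and verify round-trip
        Customer loaded = Customer.Retrieve(cust.Id);
        Assert.AreEqual("Paul", loaded.FirstName);

        loaded.Delete();
    }
}
```
Note how the test never touches any ORMapper syntax -- it exercises only the entity's own methods, which is exactly the abstraction the templates aim for.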

The templates should be used in the following order ...

1) MappingFile.cst - This generates the mapping file for the WilsonORMapper.
2) ClassGenerator.cst - This generates or updates the entity classes defined in the mapping file. It also generates a DataManager class if it doesn't already exist.
3) TestGenerator.cst - This generates NUnit test classes for each entity class. This template is optional.

Thanks
Paul Welter

Posted by PaulWilson | 4 comment(s)

I did not know SQL Server Views are Static

I have a SQL Server view defined to be "SELECT * FROM Table WHERE MyCriteria".  I intentionally coded it with "SELECT *" since I wanted it to include all fields no matter what -- just a subset of records.  A new field was added to my table recently -- no problem -- at least that's what I thought anyhow.  But this new field did not show up in my view after all -- and Enterprise Manager still shows my view as being defined with "SELECT *".  So I dropped the view and recreated it -- that did the trick -- my new field is now in my view where it should be.  What's up with this behavior?
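The explanation, as far as I can tell, is that SQL Server expands the "SELECT *" into an explicit column list and stores that metadata when the view is created, so new table columns don't appear automatically. Besides dropping and recreating the view, the built-in sp_refreshview procedure will refresh that stored metadata in place (the view name below is illustrative):

```sql
-- Refresh the stored column metadata for a view after its
-- underlying table has changed, without dropping the view
EXEC sp_refreshview 'MyView';
```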

The Latest Crap on Code-BePart in ASP.NET v2.0

Fritz Onion has a new post about the latest twist in the ASP.NET v2.0 Code-BePart (code-behind, code-beside, code-bewhat, ...) saga.  This really makes me think they should have just left code-behind as it was in v1.*.  I was initially opposed to these code-behind changes when they were introduced to a few of us back at the first private preview in October 2003.  But most of us relented when the ASP.NET team insisted the code-behind model was too "brittle", as well as too difficult.  While there was certainly truth in this, and beginners in OO certainly have issues understanding code-behind, it just doesn't look like they've succeeded in the end.  Now it appears to be a monstrosity that's not OO, not simple, and not even as functional as the original model, which at least allowed you to pre-compile the code-behind while leaving the design non-compiled so designers could modify small things in it if necessary.  Yes, they've added the ability to pre-compile everything, including the aspx design pages, which is important to many, but just as many of us depend on the aspx design being left as-is while still protecting our code-behind.  Code-behind was at least pretty standard OO, and now that you get full intellisense in a single file there isn't much reason left to simplify the code-behind model -- so why oh why did we muck with this?
Posted by PaulWilson | 1 comment(s)

Bug Fix: WilsonORMapper v3.1.0.1

I inadvertently broke most custom providers in my WilsonORMapper v3.1 -- this is fixed in v3.1.0.1.  Those that were manually adding their own parameter names in their mapping files were not affected, which was also why I didn't notice this in my testing.  Of course the recent change in the MySql provider did not help make this any easier to spot.  Thanks to David Dimmer for helping me track down the bug in my code -- and just when I was bragging about stability.
Posted by PaulWilson | 4 comment(s)

MySql ADO.NET Provider Change

MySql recently acquired ByteFX, maker of the ADO.NET provider for MySql.  They recently released a new version of the provider under the MySql namespace instead of the ByteFX namespace, which also introduces a small but significant change that will affect all .NET MySql users: you now have to prefix your parameters with "?" instead of "@"!  For backwards compatibility you can add "old syntax=yes;" to your connection string to force "@", but I'm not sure whether this will be supported forever.  Note that with the WilsonORMapper this means you either specify yourCustomProvider.ParameterPrefix = "?", or add "old syntax=yes;" to your connection string (but don't do both).
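To make the change concrete, here is a small sketch of the two options; the table name, parameter name, and connection-string values are illustrative, with only ParameterPrefix and "old syntax=yes;" coming from the post itself:

```csharp
// Old ByteFX-style parameter prefix vs. the new MySql.Data-style prefix
string oldStyle = "SELECT * FROM Customers WHERE Id = @Id";
string newStyle = "SELECT * FROM Customers WHERE Id = ?Id";

// Option 1: with the WilsonORMapper, set the prefix on your custom provider:
//   yourCustomProvider.ParameterPrefix = "?";
// Option 2: force the old "@" behavior via the connection string (not both):
string connection = "Server=localhost;Database=test;old syntax=yes;";
```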
Posted by PaulWilson | 5 comment(s)

UI Mappers Making News -- OR Mappers Are Common

Jimmy Nilsson recently hosted a small architecture workshop in Lillehammer that is making the news now.  Oh how I wish I could have gone, but alas it was a little too far away and too costly for me.  Anyhow, so far I've found two things of interest in Mats Helander's posts.  First, they didn't spend much time discussing O/R mapping because it's pretty much a given now!  It's been that way in Java for quite some time, and I believe it's getting there in .NET with the top architects too.  Unfortunately, I don't think any such claim should really be made about .NET just yet, since most typical .NET developers still haven't even heard of O/R mappers, in my opinion.  But it is nice to see that, at least among my peers, you don't have to spend time justifying O/R mappers any more.

The other item of interest to me was the discussion of "UI Mappers" -- apparently both Patrik Löwendahl and Roger Johansson are building their own UI Mappers now (update: Roger is building an O/O Mapper).  Of course, as my readers know, I've had a UI Mapper in beta for some time, and in production at a client, so this really makes me think I need to find the time to finish it up, adding support for other O/R Mappers and 3rd-party controls.  :)  By the way, I'm sorry for tooting my own horn, but I'm just thrilled to see them use the term "UI Mapper", especially with the likes of Martin Fowler present, since (at least to my knowledge) I invented the term on this blog 6 months ago!  What do my readers think about the concept?
Posted by PaulWilson | 7 comment(s)

SQL Server 2005 and Limitations to Assembly Loading

I've recently started reading "A First Look at SQL Server 2005 for Developers" in my spare time (yeah, I'm not getting very far since I don't have much spare time) and I came across something I think is rather limiting.  It says that you must be logged into SQL Server using an integrated security login, as opposed to a sql server account, in order to create a .NET assembly in SQL Server 2005.  The rationale given was that this is necessary in order to check whether the user should have access to the file system location where the .NET assembly is to be loaded from.  That does make sense, but it seems to imply that shared web hosts won't be able to easily allow us to use .NET assemblies on their SQL Servers -- am I missing something here?  Of course I'm not convinced that I would actually want a shared SQL Server on a shared web host allowing .NET use anyhow, since I don't want my data access slowed down by someone else playing with .NET stored procs, but I hadn't realized this limitation would exist either.
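For reference, creating an assembly in the beta looks something like the following (syntax taken from pre-release documentation and may change; the assembly name and file path are illustrative) -- the FROM clause pointing at a file path is exactly where that file-system access check comes in:

```sql
-- Load a .NET assembly into SQL Server 2005 from a file path;
-- this is the step that requires an integrated security login
CREATE ASSEMBLY MyProcs
FROM 'C:\Assemblies\MyProcs.dll'
WITH PERMISSION_SET = SAFE;
```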
Posted by PaulWilson | 2 comment(s)

Unusual Chinese Fortune Cookie

My son Zack (nearly 7) got the following fortune in his cookie:

"You have an unusual equipment for success, use it properly."

Hmmm . . .
Posted by PaulWilson | with no comments

The Best O/R Mappers: NHibernate, LLBLGen Pro, and EntityBroker

Frans takes me to task over my last blog post, where I said:

"By the way, this is also one of the few things that I think still makes my mapper stand out as unique against the likes of NHibernate, LLBLGen Pro, and EntityBroker.  The others may have more features, and NHibernate is open source, but just try to use any of these others for the first time in 30 minutes, or just try to extend any of them to add a new feature you desire.  Of course the other main thing mine has to offer is provider support -- I don't think any other can claim to support so many databases.  And that's not just a claim -- it's also a reality that many have proven -- a reality that is possible primarily due to simplicity and not targeting every possible feature."

Sorry Frans if this offends you, as that was not my intent at all.  I've tried most of the mappers out there, and there's a reason why I mentioned LLBLGen Pro, NHibernate, and EntityBroker -- they are the best out there in my opinion!  I think I've also proven that I do in fact recommend many people to your mapper and the others, so I was not trying to say anything negative at all.  Do I think your mapper is easy to use -- absolutely -- and I also think most other mappers are easy to use.  LLBLGen Pro is probably even unique in that it actually gets easier to use over time, due to the code-gen approach you take, which makes intellisense possible.  But I still think that too many people totally new to O/R mapping get frustrated and quit when it takes longer than 30 minutes to get working the first time.  Is that fair?  No, it's not, but that's the type of developer that the MS community often brings us -- they download our cool products and get frustrated when our products can't read their minds and tell them what they are doing wrong -- then they quit and go on their way, content in their belief that O/R mappers are not the right approach.  And that's been one of my goals -- to give people an entry point that is simple enough that anyone can use it in 30 minutes or less -- then if they need more they will be much more willing to consider the other mappers.  Are there people that get yours working in 30 minutes?  I'm sure there are, but I seriously doubt that the average MS developer can get most O/R mappers, and many other cool tools for that matter, working in 30 minutes or less -- and I don't think that's a negative statement about your mapper.  As I said earlier, LLBLGen Pro is probably unique in its code-gen approach, which probably makes it actually get easier to use over time -- and that's very cool.  Yours, hands down, is the only mapper I would recommend to anyone who prefers code gen -- yours and not mine -- although that's not my personal preference.

Next, Frans, you asked how many databases my mapper really supports.  MS SQL Server, Oracle, Access, MySql, PostgreSql, Sqlite, Firebird, DB2, VistaDB, Sybase, and lastly I think SqlCE.  All of those have people that I know for a fact are using my mapper with them, except for SqlCE, which I know some people were interested in using, but I never heard back whether they succeeded.  Furthermore, unlike other mappers, if you work with another database that I have not listed, you can probably get it to work with my mapper without writing or modifying any driver code -- no recompile necessary.  But yes, you are absolutely correct when you say that I don't "really" support all these databases, if what you mean by that is supporting features that are peculiar to individual databases.  Do I support sequences?  Yes, but it does require that you know how to set them up in your database, which yours probably does automatically.  Do I support joins, aggregates, group by, having clauses?  No, not even on one database -- as I have said on many occasions, LLBLGen Pro has far more features than mine, as do NHibernate and EntityBroker -- mine simply targets the most common 80-90% (or more) of CRUD, with or without stored procs, while giving the user a decent DAL for the other cases.  Many people may read that and immediately choose your mapper, or one of the others, and that's absolutely the right thing to do if you need those features, but many people have also apparently decided that they are quite content with a mapper like mine.  For instance, I actually do work with joins, aggregates, group by, and having clauses -- in my databases -- that's right, I'm quite comfortable writing a view or stored proc and mapping it.  That's "heresy" to many purists -- but I like databases -- my mapper doesn't shield me from the database -- it simply allows me to avoid writing all the boring and repetitive CRUD and start working with objects right away.

I thought about ending here by saying when I would recommend LLBLGen Pro vs. NHibernate vs. EntityBroker -- but that would probably just cause more issues, since I would be making generalizations to some degree.  So instead I'll end by challenging everyone who reads these blogs but still hasn't tried an O/R mapper to just try one and see for yourself for a change.  And I'm very sorry if any of my statements, here or earlier, are generalizations that may be debatable -- that was not my intent and I apologize sincerely.  I consider myself an O/R mapping evangelist more than an O/R mapping vendor (and I'm certainly not a full-time vendor, nor do I make enough money to quit my day job) -- but there is a fine line that sometimes I inadvertently cross in my comments.

How do you decide what features to add or cut?

Ever since my WilsonORMapper hit v3.0 a few months ago, things have been very smooth.  By that I mean there have been very few bugs and very few new feature requests.  In other words, it's reached a mature point and meets most expectations.  Lately I've been readying v3.1, and I had to make some decisions on what to include.

Some feature requests are easy to decide to include -- they are easy to code, affect little else, and are often requested.  Examples of this were the desire to map properties (as opposed to just member fields) and to have a public ExecuteScalar method.  Note that I actually don't like mapping properties, and ExecuteScalar isn't really necessary, but they were still included.  There were also some other requests that were easy to decide to include, even though they were not often requested.  These were still easy to code and affected little else, but they also provided some real value, even if seldom requested.  Some examples were adding support for multiple mapping files (or multiple embedded resources) and output parameters for stored procs.

Next there were a few requests that were easy to decide not to include -- these aren't easy to categorize, so let's look at examples.  One case was Whidbey generics and nullable types -- these were often requested, and they may be easy to code and affect little else.  But let's be realistic -- these are still in beta 1, changes are possible, and few really need them yet.  Note, though, that these will be among my top priorities for v4.0, probably in the beta 2 timeframe when there is a go-live license.  Another case is that a few people don't agree with my assumption about unchanging primary keys, also called surrogate keys.  I try not to force my personal tastes on others, which is why I decided to allow properties to be mapped now, but this one assumption is too integral to my mapper.  I hesitate to say that those who disagree are doing something wrong, but there must be a few basic assumptions, especially with a "simple" product.

But then there are the requests that are hard to decide on -- these are especially difficult when people send you code.  My mapper does support entity inheritance, but only the most minimal database inheritance -- full support would be a huge plus to add to my list of features.  But this is a big change, probably affecting a lot, no one has sent me any code, and this is a minor point update.  So it did not make the cut -- it might make sense in v4.0 though.  Another thing my mapper supports is composite keys, but not composite-key relationships -- at least not until now.  This was also a big change, and it certainly did affect a lot, but someone did send me some code on this one.  Now I should point out that just because someone sends me code does not mean it's done -- someone else's code usually solves their cases, but not all the other generic cases, so it can still be a lot of work.

Finally, there was one case that really required me to make a difficult call -- one-to-one relationships.  My mapper supports one-to-many, many-to-one, and many-to-many relationships, but not one-to-one relationships.  This is not trivial to implement, and it also affects a lot of things, but it would be a big plus on my feature list -- and someone sent me some code!  But this did NOT make the cut -- that's right, one-to-one relationships are still not supported by my mapper, and likely never will be.  Why, you ask?  First, there is an easy work-around -- one side of a one-to-one relation is actually a many-to-one relation, and the other side is a one-to-many relation where the "many" always equals one!  If you don't like to see that, then that's what a property is for -- leave the member field an IList, but make the property the strongly typed object, with the getter and setter hiding the fact that you are actually always working with the 0th-index object in a list.
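That property work-around can be sketched as follows; the Order and OrderDetail names are invented for illustration, and in real use the ORMapper would populate the details field from a one-to-many mapping whose "many" side always holds one row:

```csharp
using System.Collections;

// Hypothetical related entity on the "one" side of the relation
public class OrderDetail
{
    public decimal Amount;
}

public class Order
{
    // Mapped member field: a one-to-many list that always holds one item
    private IList details = new ArrayList();

    // Strongly typed property hides the list, so to consumers this
    // looks like a plain one-to-one relationship
    public OrderDetail Detail
    {
        get { return details.Count > 0 ? (OrderDetail)details[0] : null; }
        set { details.Clear(); details.Add(value); }
    }
}
```
Consumers just read and write Order.Detail and never see the underlying list.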

But isn't this requested enough to justify making it easier?  That's where the hard call came in -- and I decided that it is not worth the additional complexity.  It would be one more thing to have to explain on the end-user side, and it would complicate the codebase greatly.  That's because every relationship type has to be handled for new and existing objects, lazy-loaded and not, dynamic sql and stored procs, and now for single and composite keys.  That's a lot of cases -- a lot to code (the code sent to me handled only the few pertinent to that person), a lot to test (I still haven't tested all the cases of composite-key relationships), and a lot for the next person to worry about!  And that last part is one of the most important things for my mapper -- the simplicity of the codebase itself.  This is why I get so many user contributions -- people find it easy to extend when there is something else they want.

So I have consciously chosen to keep my mapper "simple", although I think I can safely say that it already meets most people's needs -- far beyond the most common 80-90% that I was originally shooting for.  By the way, this is also one of the few things that I think still makes my mapper stand out as unique against the likes of NHibernate, LLBLGen Pro, and EntityBroker.  The others may have more features, and NHibernate is open source, but just try to use any of these others for the first time in 30 minutes, or just try to extend any of them to add a new feature you desire.  Of course the other main thing mine has to offer is provider support -- I don't think any other can claim to support so many databases.  And that's not just a claim -- it's also a reality that many have proven -- a reality that is possible primarily due to simplicity and not targeting every possible feature.

Posted by PaulWilson | 3 comment(s)