Archives

Archives / 2004 / January
  • Mixing Forms and Windows Security in ASP.NET

    My latest article, Mixing Forms and Windows Security in ASP.NET, has finally been published on MSDN's ASP.NET Developer Center -- yeah!  I see questions almost daily in the forums from people asking how to combine Forms and Windows authentication, so here, finally, is the answer.  This article details how to use Windows authentication for Intranet users, including capturing their username, while still providing a custom login screen for Internet users.  Thanks, Kent, for making this happen.

    Read more...

  • WilsonORMapper, EntityBroker, and LLBLGenPro

    I've apparently really annoyed Frans, and Thomas a little too, so I thought I would do something different. First, let me just say that I've never spoken ill of either EntityBroker or LLBLGenPro that I can recall. In fact, I have said publicly many times that both of their products are very good and highly recommended. Why? Because, as Frans noted, my WilsonORMapper is "severely crippled" if you need more than simple CRUD. That's right -- my O/R mapper is a simple tool for simple minds only -- and that's all I want it to be. Yes, I want to add cool features, and yes, I have a blog to announce them, but I try to also share what I learn in the process. Does that make me arrogant? I hope not, but it's not the first time I've been called that, so I'm sorry and I'll try harder -- and I hope my accusers can try a little harder too.

    So, before telling you more about EntityBroker and LLBLGenPro, I'd like to share with Frans what I have done right. First, I have done a great job of getting the O/R mapper message out there, as have many others like Steve. If that weren't the case, then I wouldn't have gotten so much positive email in the last few weeks. In fact, several people have gone out of their way to tell me how much the hostile attitudes of Frans and Thomas had turned them off! Yes, I have found that there are many people out there who just want a simple O/R mapper, which is where mine excels, but I have also referred others to EntityBroker and LLBLGenPro while admitting mine wasn't up to their needs. So I think I'm helping the cause in general, and I'm probably even going to increase business for Frans and Thomas to some degree.

    Finally, if my WilsonORMapper is so simple, what is it that EntityBroker and LLBLGenPro do that I don't? Both include "real" GUI designers, as will MS ObjectSpaces, whereas my ORHelper merely helps. Both support COM+ transactions, multiple concurrency types (supported now), full data-binding, object query APIs (some support now), cascading persistence (supported now), dirty-field-only updates (supported now), and much better support and documentation. EntityBroker supports the major databases already, and I think I just got Thomas to start taking MySQL seriously too. LLBLGenPro has plans in the works to support Access, MySQL, and more, so it's always been only MS ObjectSpaces (and some others) that I've criticized vocally in that regard. EntityBroker does some really advanced caching, and I have personally used it enough to say I really like it, although it is too "complex" for many simple cases.

    I haven't ever used LLBLGenPro personally, since it's not really my style, but it looks very good, so I do recommend it too. Its designer will automatically discover your relationships, which seems extremely helpful. It also supports stored procedures, as does mine, so again I've never intended to criticize it in this regard. LLBLGenPro also supports things like joins, group-bys, validation, and many advanced Oracle features I can only dream about. It also already supports multiple-field primary keys, which is something I still need to get to. Do I think Frans should use only generic ANSI SQL? No, I've never said that -- I've simply said that starting with generic ANSI SQL will automatically get you working with many databases in the simple cases.

    And that's what I don't get -- why are so many O/R vendors totally ignoring the simple users?  Are we really going to convince people they need an O/R mapper by raising the bar so high to just get started?  I still can't personally understand half of what Frans and Thomas list as their features, and they focus so much on these that the simple ones aren't obvious at all.  Many of the other vendors have much better explanations of the benefits of O/R mapping, but most of them do a terrible job at more than code generation.  I want people to really understand how O/R mapping can help them, and if they go buy someone else's then that's fine too.  But I don't really see how all the antagonistic debates in forums and blog comments make any of these points clear to most users.

    Read more...

  • WilsonORMapper works well with MySQL

    I spent a couple of hours yesterday testing my WilsonORMapper with MySQL (note the My, not MS). I designed my O/R mapper to work with any ANSI-compliant database, so I had high hopes. Note that I do of course have a few optimizations for MS SQL, Access, and Oracle, so I don't mean everything is or should be generic. I'm very experienced with Oracle, although it's been a few years, but I had never touched MySQL until now. I started by downloading MySQL v4.0.17, which was the recommended version, MySQL Control Center, and MySQL Odbc Driver v3.51. I installed everything, then figured out how to start the server and create a database and a table pretty easily. The next things were configuring Odbc, figuring out my connection string, and doing a few small regular ADO.NET tests. So far so good -- by the way, MySQL seems surprisingly comparable to MS SQL in ease of use.

    So next I tried my first test with my WilsonORMapper, a simple GetObject, and it failed. My debug version spits out all the SQL, and since it looked good I copied it into the query tool to test it directly. It turns out that MySQL is not ANSI compliant by default -- it uses ` instead of " for its table and field delimiters. By the way, neither is MS SQL, which uses [ and ] by default, so my Commands class allows this to be easily overridden. I tried to get the Option clause in my connection string to turn on ANSI support, but finally gave up and just started MySQL in ANSI mode (mysqld --ansi). That was easy enough, so now all my GetXXX methods work great, just as expected when you support generic ANSI. Note that I will still need to write a custom Commands provider to get paging to work, but that will be a simple method override.
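    To make the delimiter idea concrete, here is a minimal sketch; this Commands class and its member names are stand-ins I made up for illustration, not the mapper's actual provider API:

```csharp
using System;

// Sketch only: this Commands class is a stand-in for the mapper's
// provider model, not its actual API.
public class Commands
{
    // ANSI SQL delimits table and field names with double quotes.
    public virtual string BeginDelimiter { get { return "\""; } }
    public virtual string EndDelimiter   { get { return "\""; } }

    public string Delimit(string identifier)
    {
        return BeginDelimiter + identifier + EndDelimiter;
    }
}

// MySQL (unless started with mysqld --ansi) uses backticks instead,
// much like MS SQL uses [ and ] by default.
public class MySqlCommands : Commands
{
    public override string BeginDelimiter { get { return "`"; } }
    public override string EndDelimiter   { get { return "`"; } }
}

public static class Demo
{
    public static void Main()
    {
        Console.WriteLine(new Commands().Delimit("OrderDate"));      // "OrderDate"
        Console.WriteLine(new MySqlCommands().Delimit("OrderDate")); // `OrderDate`
    }
}
```

    A MySQL-specific paging override would slot into the same subclass.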

    Next, I tried to do an Insert, and it failed at first, but this time the error message in the debugger helped right away. I use parameterized dynamic SQL for all my inserts, updates, and deletes, which stops SQL injection attacks and allows the database to cache the execution plan. The Access OleDb driver supports named parameters, not just ?, although you still have to assign your values in the same order as the parameters, so that's what I have coded. Well, the MySQL Odbc driver requires ? for parameters -- but guess what, I have an optional parameter mapping for all my fields since I support stored procedures. A simple change to the mapping XML file and now all my inserts, updates, and deletes work with MySQL too. Note that auto-generated identities are not supported by ANSI SQL, so I have a hack using MAX in place that isn't scalable, but a custom Commands provider can easily fix this with, again, a simple method override.
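    As an illustration, the per-field parameter override in the mapping file might look something like this; the element and attribute names here are approximations of the idea, not the mapper's exact schema:

```xml
<!-- Illustrative only: these element and attribute names approximate
     the mapping schema rather than reproduce it exactly. -->
<entity type="MyApp.Customer" table="Customers">
  <attribute member="id"   field="CustomerId"   parameter="?" />
  <attribute member="name" field="CustomerName" parameter="?" />
</entity>
```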

    So I would like to say that my WilsonORMapper supports MySQL, but I really should probably write that custom provider to make it perfect. That brings me to a few questions, though, which maybe some of my readers can help me answer. There are multiple MySQL .NET native drivers out there, as well as OleDb and Odbc drivers -- what should I expect of my users? Should I force a specific driver on them, or should I just write a custom Commands provider and expect them to do something more to take advantage of it? And if the latter, should they have to recompile my source with their few driver specifics, or should I somehow make it a configuration option? The same questions apply to PostgreSQL, which someone is already asking me about, so whatever I decide will apply to other databases also.

    By the way, to me this experiment validates a couple of the design decisions that I blogged about previously. First, generic ANSI SQL is not a silly pipe dream -- it really is possible in most simple cases, although certainly you should provide provider-specific optimizations too. Yes, I know I'm just talking about the simple CRUD operations, but apparently I've guessed correctly that this really is all many people care about, judging by the responses I have received. Finally, having my mappings in an external XML file was crucial here, since I needed to change my parameter names to ? for just MySQL, not all the other cases. This means that I did not have to recompile my code -- all I needed was a MySQL-specific mapping file and everything worked perfectly. If O/R mappers are to support multiple database platforms, then how can we honestly expect the mappings to be specified only once in the code as attributes?

    Read more...

  • WilsonORMapper now tested with Oracle

    I just finished updating my WilsonORMapper to v1.1 today.  I added support for many-to-one and many-to-many relationships (it already had the one-to-many case), which proved pretty painless.  I also added support for some common requests, like loading object collections with stored procedures (it already supported stored procs for inserts, updates, and deletes), loading datasets with only a subset of fields (instead of always loading all fields), and directly executing set-based updates and deletes (along with expressions for updates).  Anyhow, the hardest thing was that I finally got around to testing everything with Oracle 9i, so everything now works with SQL Server, Access, and Oracle (including my paged collections).  I'm using a new feature of Oracle 9i to do my inserts that rely on database-generated keys, the RETURNING INTO clause, which assumes that a trigger exists that gets the next value from the appropriate sequence and includes it in the insert -- almost like identities in SQL Server and Access.  I also went ahead and broke out my "Commands" class into a provider model that makes it easy to replace my ANSI SQL for the special cases in SQL Server, Access, and Oracle, while still supporting the generic cases for all other databases!

    Read more...

  • Running .NET WinForm Applications on Citrix

    For a few months now we have been creating a major .NET WinForm application that will be deployed on Citrix.  Basically, think of it as a custom rewrite of MS Access (note that this wasn't a techie's idea) that works against MS SQL Server.  Of course, business as usual doesn't like any limitations, so we have way too much data and far too many features to work in a shared environment like Citrix.  Or do we?  That has been the bane of my existence for the last couple of months, and unfortunately there don't appear to be many others with such experience to share.  My main concern has been whether or not the .NET garbage collector would see the 4GB of RAM and think it could have it all, unaware of the other users on the box.  I have posted that concern to various forums several times, and I've seen a few others with similar concerns, but no one ever answered, other than people telling me how .NET works in general without any Citrix experience.

    Well, I can finally report that things look pretty good after all -- after much painful performance tuning and tracking down a memory leak in a 3rd-party grid control.  There were many small performance gains, but the biggest one is something I still don't understand, although the numbers are proof.  Basically, we have a business class with a method that gets a DataSet and stores it in one of its members, and then the calling class separately sets the grid's datasource to the class property that exposes that member.  It seems like that would be fine, since a DataSet is a reference type, but it turns out that setting the class member and the grid datasource in separate steps takes twice as much time as having the class method set the member and then return it to the caller!  I would love it if someone could explain that one to me -- my team told me it didn't work this way, but the numbers don't lie, and none of us understand it.
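    The two patterns can be sketched as follows; the class and member names are made up for illustration, and the timing claim comes from our tests, not from anything the sketch itself proves:

```csharp
using System;
using System.Data;

// Sketch of the two call patterns described above; the class and
// member names are made up for illustration.
public class CustomerData
{
    private DataSet results;

    public DataSet Results { get { return results; } }

    // Pattern 1: the caller loads, then reads the property separately
    // (e.g. grid.DataSource = data.Results).
    public void Load()
    {
        results = FetchFromDatabase();
    }

    // Pattern 2: the same method stores the member AND returns it
    // (e.g. grid.DataSource = data.LoadAndReturn()) -- this was
    // roughly twice as fast in our Citrix tests, oddly.
    public DataSet LoadAndReturn()
    {
        results = FetchFromDatabase();
        return results;
    }

    private DataSet FetchFromDatabase()
    {
        return new DataSet("Customers"); // stand-in for the real query
    }
}

public static class Demo
{
    public static void Main()
    {
        CustomerData data = new CustomerData();
        DataSet fromReturn = data.LoadAndReturn();

        // Both patterns hand the caller a reference to the very same
        // DataSet, which is why the timing difference is so puzzling.
        Console.WriteLine(Object.ReferenceEquals(fromReturn, data.Results));
    }
}
```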

    So we were hoping for good results with our test this week -- and sure enough, the first 45 minutes of testing the application in Citrix with 15 users looked very good from a user's point of view.  But then things started grinding to a halt, and didn't pick up until 30 minutes later, when most of us started closing and restarting the application.  The naysayers in our company quickly announced their observation that our application could not even support 15 users, which isn't very good in an expensive Citrix environment.  The infrastructure guys pointed to some poor .NET performance counters -- but wait, those numbers didn't get bad until 15 minutes after everything went to crap, and all the .NET numbers before then looked sweet.  The problem is instead obvious when you look at the overall memory numbers -- there was a gradual memory leak, unrelated to .NET, that eventually swamped the system, and eventually that also affected the .NET numbers.  I finally found the control, the 3rd-party grid, that had the memory leak, and it turns out they already knew about this and had a hotfix, not to mention a newer version that we were still investigating.

    We will now start updating that grid control, and I hope to soon be able to report a better test result in Citrix.  There may still be problems, but the numbers once again speak for themselves, and I think it's pretty clear that our .NET app ran better than Access and the older Access-like applications that we are trying to replace.  The naysayers see .NET start with 20MB and freak out, saying that it's not going to scale, but that initial grab doesn't mean it takes 20MB to run the app; it just means .NET is getting memory now so it doesn't have to later.  It's actually quite an incredible feeling to look at the graphs of .NET's memory and garbage collection when you have 15 people on a Citrix box, each looking at about 3 separate instances of our main view, each with an average of 25,000 rows of data (some much more), averaging 25 columns (some with more again).  .NET works great, and so does MS SQL Server, which we use exclusively with some of the world's largest databases -- that's right, we don't have any instances of Oracle at all.  Life is good.

    Update: Life is NOT good since none of the global .NET memory performance counters that I relied on actually work!

    Read more...

  • Moving from Wise to VS.NET Deployment

    I just finished ditching my company's use of Wise Installer for .NET. I replaced it with the built-in Setup and Deployment project of Visual Studio .NET. Why did we choose Wise to begin with? Mostly ignorance -- I'm not a build engineer, and I was told to pick Wise or InstallShield. I was sadly too ignorant to even consider the VS.NET built-in tool. I had read it was better than the one in VB6, but the VB6 one was so bad that I just assumed VS.NET was still not up to par. I assume the person that told me to pick Wise or InstallShield has similar excuses, but he's gone now. Anyhow, my "upgrade" to the free VS.NET installer from the much more expensive Wise installer is now done -- and I'm glad to be rid of Wise.

    What was wrong with Wise? First, I need to state that it's probably just as good, if not better, if you are a build engineer who knows WiseScript and the idiosyncrasies of Wise. I assume something similar can be said about InstallShield, but I don't have any recent experience with it to even talk about. The first problem I had with Wise was that it kept removing valid references or adding unnecessary ones, including duplicates, which became very annoying. I also started to experience crashes when I was doing several things, which forced me to start saving after every change. Then there was the fact that I simply don't know WiseScript, so I had no idea how to do the custom things that were starting to arise. The final blow was that Wise, at least in my simpleton attempts, did not install assemblies in the GAC when told to do so! All the help documentation, and the rave reviews, say it's trivial -- just drag the file into the GAC folder in Wise -- but that did not work for us. My one attempt at a workaround, including GacUtil in the install and then calling it from one of their custom actions to manually register my assemblies, did not work either. Again, maybe it was all a case of my being a stupid user, and I'll admit that I also never called Wise for help.

    So why the VS.NET Installer? First, I started hearing more people ask me why I wasn't using the one built into VS.NET, which got me at least thinking. Then I read an article somewhere that really opened my eyes -- sorry, I can't find it anywhere now. So I finally decided to convert my Wise project to a VS.NET Deployment project, which the wizard in VS.NET made very easy to do. I now have my assemblies getting deployed to the GAC as effortlessly as it should be, and I haven't encountered many idiosyncrasies, at least none that keep happening. One problem I found is that the deployment packages won't automatically uninstall old versions -- even when you tell them to check for previous versions, they just stop and tell you to do the uninstall yourself. The fix for this was to change both the version number and the GUID product code of the deployment project itself before every build. This was easy to automate, since the project file is plain text, unlike Wise's project file: I just have a command-line app, called from my batch file, that changes these things, much like it already changes the application version numbers in all my AssemblyInfo files. I also no longer have a single EXE that includes the .NET framework, but that's not a big deal with the Setup.exe bootstrapper, so I can live with that. The main thing is that now I look forward to working with it, since I can use C# or VB to write my custom actions in the future, and therefore be far more productive. It's also nice that it's free (if you have VS.NET), so all of our developers can open the deployment project now if necessary, and all of our other projects can also use it -- so I guess I can even say I've saved my company thousands of dollars!
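    The version-stamping step could be sketched like this; the regular expressions assume the plain-text .vdproj format with its "ProductCode" and "ProductVersion" properties, and the "Setup.vdproj" path is just an example:

```csharp
using System;
using System.IO;
using System.Text.RegularExpressions;

// Sketch of the command-line stamping step: a .vdproj file is plain
// text, so the ProductCode GUID and ProductVersion can be rewritten
// before each build.
public class StampSetup
{
    public static string Stamp(string projectText, string newVersion)
    {
        // Give the installer a brand-new product code...
        string newGuid = "{" + Guid.NewGuid().ToString().ToUpper() + "}";
        projectText = Regex.Replace(projectText,
            "\"ProductCode\" = \"8:\\{[^}]+\\}\"",
            "\"ProductCode\" = \"8:" + newGuid + "\"");

        // ...and a matching new version number.
        return Regex.Replace(projectText,
            "\"ProductVersion\" = \"8:[^\"]+\"",
            "\"ProductVersion\" = \"8:" + newVersion + "\"");
    }

    public static void Main(string[] args)
    {
        string path = args.Length > 0 ? args[0] : "Setup.vdproj";
        string version = args.Length > 1 ? args[1] : "1.0.1";

        StreamReader reader = new StreamReader(path);
        string text = reader.ReadToEnd();
        reader.Close();

        StreamWriter writer = new StreamWriter(path);
        writer.Write(Stamp(text, version));
        writer.Close();
    }
}
```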

    By the way, the batch file I referred to starts each build by deleting all my old code and binaries; then it gets everything from Visual SourceSafe, either from a label or the latest, depending on settings.  Then it changes all the version numbers before building all the binaries, finishing with the build of the deployment project.  Finally, it copies the installation files to an appropriate location.

    Read more...

  • SOA Is Not New -- It's Just Easier and "In"

    Steve and others have been talking about SOA, and it got me thinking that we basically had an SOA architecture where I previously worked (Roche Diagnostics) -- we just didn't know that was the terminology then.  We had a GUI application that needed to talk to a variable number of medical devices, which typically had a life of their own, so that any given task initiated from the GUI could take anywhere from a second or two to 15 minutes.  They could also start doing things on their own in some cases, and we also needed to be able to easily integrate new devices on the fly as customers bought more devices, or when we came out with new ones.  The first thing we did to solve this was specify that each device could only talk to its own "controller", which knew its own unique features (or problems) and ran in its own process.  Then we defined a set of common interfaces that the GUI and each controller had to do all of their communication through, which we then registered with the COM+ Event subsystem.  This allowed us to add new devices on the fly very easily, or even "test" or monitoring controllers, since neither the GUI nor any of the controllers ever talked directly to each other.  We also specified that all communication with the COM+ Events interface was done through MSMQ Queued Components, which allowed everything to occur in its own time asynchronously, without blocking, while still being guaranteed.  Finally, we also had an "aspect" service for event logging and tracing, which was totally configurable to allow us to turn on different levels of logging, for all or just some devices.  This may not have been anything "sexy" by today's standards (no .NET or web services), but it was very much a service-oriented architecture, built on COM and COM+, taking advantage of COM+ Events and MSMQ.

    The only thing that I never really liked about our architecture was our data tier, which always felt way too convoluted with its stored procedures that were wrapped by COM+ components, which were finally called by the GUI.  By the time I started learning about O/R mappers, which would have greatly simplified things by basically making an entity service, it was too late.  Of course, I would have had a hard time selling the O/R mapping stuff at Roche, but then again maybe not, since the business upside would have been that we could have sold our product with any database rather easily.  So in my opinion SOA systems are not anything new, although there are certainly new ways to build them, which are easier than ever -- the downside being that many people are building SOA systems that don't need them.  Anyhow, the principles have been around for a long time -- loosely coupled contracts, whether that be interfaces or web services, often with asynchronous independence, possibly with guaranteed delivery, running in different processes or machines, or possibly even different networks.

    Read more...

  • O/R Mappers: Maximum Performance

    I now have my WilsonORMapper (v1.0.0.3) performance very comparable to that of DataSets. In some cases I am actually beating DataSets, with or without private field reflection.

    My tests compared DataReaders, DataSets, and my ObjectSets, both using my new optional IObjectHelper interface for direct access and using private field reflection. Each run consisted of a number of repetitions of loading the appropriate structure from a database table and then looping through the resulting structure to access the values. The database table consisted of 10,000 records filled with random data that I generated, with the table fields consisting of 2 ints, 3 strings, 1 float, 1 datetime, and 1 bit. The numbers posted all represent loading 10,000 records, but the cases varied from 1 record repeated 10,000 times, to 100 records 100 times, and finally 10,000 records only 1 time. The tests were run many different times, and the numbers were always consistently similar. I also tested a 100,000-record table, and the numbers were similar, just 10 times bigger.

    Notice first that hitting the database many times for one record is noticeably slower. Next, note that DataSets are pretty much always twice as slow as using a DataReader. If you want to load a single record, then my WilsonORMapper beats a DataSet hands down. This remains true even in the case where I continued to use private field reflection. On the other hand, my O/R mapping was 50% slower than the DataSet loading 100 records, and 75% slower than the DataSet when 10,000 records were loaded, using direct access. The numbers were another 2 times slower when I allowed the private field reflection. So performance varies depending on the number of records, although keep in mind that my WilsonORMapper supports paging to make larger numbers of records easily manageable. I also added a new GetDataSet method that returns DataSets and performs just as well.

    Why does my O/R mapper still perform a little slower than DataSets with many records? No matter what I did, almost every millisecond could be attributed to the fact that my mapping framework stores a second copy of each value in the manager for its state. This state allows you to check whether the values of any entity object have been updated, as well as giving you the ability to actually cancel all the changes made to an object. I may also someday use these extra state values for more specific dynamic SQL updates. On the other hand, large DataSets load faster initially since they don't load twice, but they may also have larger overhead in the long run as they track all later changes. DataSets also have a considerably larger serialization when they are remoted, so you should also consider this additional overhead in distributed environments.

    What did I do, other than implementing the IObjectHelper interface, for performance? The biggest performance change I made, far bigger than reflection, was changing all of my foreach loops over hashtables to regular for loops over typed arrays. The next biggest performance gain was changing a hashtable key from a struct to a string, which could not be just a regular object since each object instance was always different. Next, and still making a slightly better performance impact than private reflection, was accessing each datareader value only one time, even though I had to store it twice. I also now save the reflected FieldInfo, for the cases when reflection is still used, which did make a small but measurable difference, contrary to Steve Eichert's report. And of course, you can now implement the IObjectHelper interface to avoid reflection.
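    The FieldInfo caching can be sketched as follows, with a made-up Customer class; the point is that GetField runs once at mapping-load time while SetValue runs once per record:

```csharp
using System;
using System.Reflection;

// A made-up entity class with a private field, like those the mapper sets.
public class Customer
{
    private string name = "";
    public string Name { get { return name; } }
}

// Sketch of caching the reflected FieldInfo: GetField is called once
// when the mappings load, not once per value that is set.
public class FieldSetter
{
    private static readonly FieldInfo nameField = typeof(Customer)
        .GetField("name", BindingFlags.Instance | BindingFlags.NonPublic);

    public static void SetName(Customer target, string value)
    {
        // Still reflection, but without the repeated metadata lookup.
        nameField.SetValue(target, value);
    }
}

public static class Demo
{
    public static void Main()
    {
        Customer customer = new Customer();
        FieldSetter.SetName(customer, "Acme");
        Console.WriteLine(customer.Name); // Acme
    }
}
```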

    I also made a few other observations in the course of all this performance testing. Most surprising to me was that I could not find any significant difference between accessing datareader fields by index or by name -- it was barely measurable at all. I also confirmed that there was no measurable difference between SQL and stored procs. Next, while Steve Maine noted the huge differences that private reflection can make, it is still a relatively small part of the bigger picture that includes data access. This is in agreement with several comments I received that there is a whole lot more going on in an O/R mapping framework than just worrying about how the values are set. Also note that public and protected field reflection was hardly noticeable in my tests. But overall, the little things like foreach and boxing were the worst offenders.

    So if you were first concerned that my WilsonORMapper didn't have adequate performance, then think again and download the demo for yourself (the demo only runs in the debugger).

    # Records         1       100    10,000
    # Repetitions     10,000  100    1

    DataReader        1.91    0.14   0.11
    DataSet           3.69    0.21   0.21
    OR Direct         2.29    0.31   0.37
    OR Reflect        2.78    0.75   0.81

    Read more...

  • MasterPages Template Properties in .NET v1.*

    I've been asked one question about my MasterPages for .NET v1.* often enough that I suppose I should blog about it for everyone's benefit.  The question is how to access the public properties exposed by the template user control from the pages that implement MasterPages.  This will be very easy in .NET v2.0, since the Page class will contain a Master property that is automatically strongly typed to your master.  But how do you do it today, in .NET v1.1?

    C# Version:

    private MyTemplate Master {
      get { return (MyTemplate) FindControl("MyMaster_Template"); }
    }

    VB.NET Version:

    Private ReadOnly Property Master() As MyTemplate
      Get
        Return CType(FindControl("MyMaster_Template"), MyTemplate)
      End Get
    End Property

    Assumptions:

    The user control that contains the template for MasterPages is of type MyTemplate.
    The MasterPage control itself is assigned the id "MyMaster" in the page using it.

    Extensions:

    The Master property can be added to a base page class if the "MyMaster" id is consistent.
    The Master property can work with several templates if there is a common base user control.
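    Putting the two extensions together, a base page class might look like this sketch; MyTemplate here stands in for your real template user control's code-behind class, and the id assumptions match those above:

```csharp
using System.Web.UI;

// Stand-in for the template user control; in a real site this is the
// .ascx code-behind class that exposes your public properties.
public class MyTemplate : UserControl
{
    public string PageTitle = "";
}

// Sketch: a base page class exposing the Master property, assuming
// every page assigns its MasterPage control the id "MyMaster".
public class BasePage : Page
{
    protected MyTemplate Master
    {
        get { return (MyTemplate) FindControl("MyMaster_Template"); }
    }
}

// Any page inheriting BasePage can then write, for example:
//   this.Master.PageTitle = "Archives";
```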

    Read more...

  • O/R Mappers: Avoiding Reflection

    Steve Eichert posted his findings yesterday about the performance cost of reflection.  I knew reflection was slower, but I had no clue it was potentially that bad.  I haven't done many tests myself yet to see if it really is that bad, but it doesn't really matter, since I can agree that reflection is definitely slow.  So why does this matter?  Well, right now my WilsonORMapper uses a lot of reflection to get and set the field values.  I was planning on doing something to fix that sooner or later, but Steve's post got me thinking about making it my next priority.

    So how should I go about getting rid of reflection?  The first solution I came up with is the easiest for me to implement, although I'll admit it seems kind of kludgy and ugly to the user.  Basically, I would provide an interface with one property whose single parameter would be the member name specified in the mapping file.  The user (or my WilsonORHelper) would simply implement this interface's property with a switch block where they set or get their own member variable to avoid reflection if performance is a consideration.  I would simply need to use reflection one time, on the initial load of the mappings, to see if they implemented this interface.

    OK, that does sound pretty kludgy, and it does mean that I would be requiring the user to do something specific for a change.  But this would not be "required" unless they want or need the performance gain, and it's still not requiring them to inherit from a specific class either.  Implementing an interface, while definitely a requirement, is not as big of a "burden" since you can have multiple interface inheritance in .NET.  And again, it's not really required unless the user really wants or needs the performance, which may be necessary for collections of many objects, but maybe not for other cases.
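    Here is a sketch of what that optional interface might look like -- the signature is my guess at the idea from this post, using an indexer keyed by the mapped member name, with a made-up Customer class:

```csharp
using System;

// Sketch of the proposed interface: one indexed property whose single
// parameter is the member name from the mapping file. This signature
// is my illustration of the idea, not necessarily the shipped API.
public interface IObjectHelper
{
    object this[string memberName] { get; set; }
}

public class Customer : IObjectHelper
{
    private int id;
    private string name;

    // The switch block lets the mapper read and write the private
    // fields without any reflection at all.
    public object this[string memberName]
    {
        get
        {
            switch (memberName)
            {
                case "id": return id;
                case "name": return name;
                default: throw new ArgumentException(memberName);
            }
        }
        set
        {
            switch (memberName)
            {
                case "id": id = (int) value; break;
                case "name": name = (string) value; break;
                default: throw new ArgumentException(memberName);
            }
        }
    }
}

public static class Demo
{
    public static void Main()
    {
        IObjectHelper customer = new Customer();
        customer["name"] = "Acme";           // the mapper would do this
        Console.WriteLine(customer["name"]); // Acme
    }
}
```

    The mapper only needs reflection once, at mapping load, to see whether a class implements the interface; everything after that goes through the switch.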

    So what other options exist to avoid reflection?  One option that Steve mentioned is to use CodeDOM to dynamically create an assembly, with a signature that the O/R mapping framework understands, that knows how to call the public members or properties of the original class.  Those public members or properties might be specified totally in the mappings, or reflection might need to be used one time at startup at most.  The problem I have with this technique is that it requires public members or properties, and it doesn't handle any member that is read-only publicly.  Assuming that there aren't many read-only cases, what's the problem with using public properties, since they almost always exist anyhow?  The problem is that properties are often (and should be) wrappers around the private fields that contain additional validation or business logic.  There's nothing wrong with your public property rejecting or modifying the user's attempt to set an invalid value, but it should not prevent me from loading data that currently exists in the database.

    So what do other O/R mappers do instead?  Some O/R mappers use CodeDOM to dynamically create an assembly with new classes that inherit from the original classes which become the real ones used by the mapping framework.  This can be done by either having the original classes be abstract with all the real logic created in the new dynamic inherited class, or by requiring the original class to expose its member variables as protected.  The problems with this approach are that you can't use new to explicitly create your classes anymore, and the classes that the framework returns are actually a different type technically than was originally expected.  Neither of those are significant problems, but the requirements to do this aren't trivial.  You either have to forego providing your own implementation that includes your validation and business logic with an abstract class, or you have to expose all of your member variables as protected, and neither of those requirements are very friendly.  There may be solutions to this that some O/R mapping frameworks have discovered, so I'm not trying to imply there isn't, but I doubt any such solutions are trivial, and they are probably therefore out of my reach to easily implement.

    What other solutions are used by O/R mappers?  Some require you to generate lots of code in order for them to work, so that they don't have to use reflection and so that they can also gain other "insider" knowledge.  I don't want to imply that this is "bad", or not a valid technique for O/R mappers, both because that's not necessarily the case and because there have already been other discussions on this.  That said, it's not what I want to do with my O/R mapper, so this was never an alternative I seriously considered.  One thing it does do for me, however, is validate that allowing the user to implement an optional interface with a single property, if they want or need the extra performance, is not totally out of line with other tools.  And since I can make my WilsonORHelper generate this code if the user wants to use my helper and turn on this feature, I don't feel it's too much of a "burden".

    So at this point I've just about concluded that my first solution is probably good enough, at least for my simple O/R mapper, and I've also decided that I don't really like the other alternatives, at least those I can think of or find.  Then it occurred to me that I should try to figure out what ObjectSpaces is doing, but I quickly gave up since their code is just too much for me to figure out without lots of time and work.  Then, on a whim, I decided to Google ObjectSpaces and reflection.  The first result was a blog entry by Andres Aguiar that confirms ObjectSpaces does use lots of reflection, but that this is somehow going to be less of a performance hit in .NET v2.0.  The fifth result returns some documentation about ObjectSpaces and an IObjectHelper interface that I had never noticed before -- and remarkably it sounds exactly like what I was proposing to do!  There's also an IObjectNotification interface that can be implemented to enable your objects to receive events when an object is updated or deleted or when an exception occurs, which was something else I wanted to do somehow.
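For what it's worth, the single-property interface idea looks something like this (the exact ObjectSpaces signatures may differ -- this is just the shape of the idea): the entity exposes its private fields by name through an indexer, so the mapper never needs reflection:

```csharp
using System;

// An interface in the spirit of IObjectHelper; the actual
// ObjectSpaces definition may differ from this sketch.
public interface IObjectHelper
{
    object this[string memberName] { get; set; }
}

public class Contact : IObjectHelper
{
    private int id;
    private string email;

    // One indexer gives the mapper direct access to the private fields.
    public object this[string memberName]
    {
        get
        {
            switch (memberName)
            {
                case "id": return this.id;
                case "email": return this.email;
                default: throw new ArgumentException("Unknown member: " + memberName);
            }
        }
        set
        {
            switch (memberName)
            {
                case "id": this.id = (int) value; break;
                case "email": this.email = (string) value; break;
                default: throw new ArgumentException("Unknown member: " + memberName);
            }
        }
    }
}
```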

    That's enough research for me.  Since I liked my solution to begin with, and since I'm mostly using the syntax of ObjectSpaces anyhow, this will now definitely be the thing I implement in the next week or so.  Of course, that still doesn't mean it's the best or "right" solution, so I'm still interested in what others think of my solution and the other options.

    Read more...

  • Finishing my own Basement

    My basement consists of our drive-under two-car garage and what were originally two unfinished rooms.  I finished the first room as our kids' playroom in January 2001, and then the second as my wife's craft/sewing room in January 2002.  Last year we worked outside to install a patio under the deck, and then finished with a pond in the spring.  Anyhow, my point, which I'm getting to, is that I never did anything for a ceiling in the two rooms when I created them.  I don't really know why, other than the fact that I had never messed with drop-down suspended ceilings before.  I didn't have those same reservations about building walls and installing wiring, since I had helped my Dad and others many times with similar tasks over the years.  But drop-down ceilings just seemed weird to me, and all the books and such have barely a page or two about doing it yourself.

    Well, we really needed to do these ceilings, especially since we may try to sell our house and get a bigger one in the next year or two, so I committed to doing it this month.  As it turns out, one of my friends had recently done a drop-down ceiling in his basement, and he gave me a day of his help last weekend as my Christmas present.  It turned out to be really easy -- just careful measuring and other routine tasks.  We probably could have done both rooms, but I didn't want to ask my friend for too much when we could better spend his time drinking and watching movies.  So yesterday I did the second room's drop-down ceiling by myself, which was just a wonderful couple of hours.

    What does this have to do with .NET -- nothing really, but I'll make something up if you insist.  I put off building my own O/R mapper for similar reasons: it was just something I hadn't done before.  But in the end, as long as you have all the basics down, sometimes it is relatively easy to just do things yourself.  You get the pleasure of doing it, learn something in the process, potentially save lots of money, and end up with something customized just for you, instead of something costly that someone else did that may have somewhat better quality but isn't really exactly to your liking.

    Read more...

  • O/R Mappers: Simple Database Features ?

    I'm going to get grilled for saying this, but I don't see why most O/R mappers leave out some very simple database things.  Note that I'm not saying I have everything down, nor do I really have any of the more complex things that some will need.  What I'm saying is that there are some very simple things that everybody wants which are often left out of O/R mappers.

    First and foremost, every O/R mapper should work in some minimalistic sense with every ANSI SQL compliant database!  Everybody that wants to use an O/R mapper expects at least to be able to do the very basic CRUD operations that are mostly just simple SQL.  Sure, there are a few gotchas, like how to handle database-generated primary keys with inserts, and the fact that some data types like DateTime vary in their handling.  But it's easy to handle these basic issues with the big players (MS Sql, Access, and Oracle), and then just expose generic OleDb and Odbc providers and warn the user they are not guaranteed.  .NET even makes such generic coding to all database providers very easy if you always code to the System.Data interfaces instead of using specific providers like SqlClient.  Yes, I know that this won't work for creating the most performant database code, nor for more advanced situations, but that shouldn't prevent the basics from being handled generically.  In particular, I think it's very, very bad that Microsoft is going to release ObjectSpaces working only with MS Sql, at least initially.  Maybe they will add others later (but maybe not), but you can't really design a flexible system if you don't test it against more than one case from the beginning.
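For example, a mapper can stay provider-agnostic by coding everything against the System.Data interfaces; this little factory is only a sketch of the idea, not actual mapper code:

```csharp
using System;
using System.Data;
using System.Data.OleDb;
using System.Data.SqlClient;

public class ProviderFactory
{
    // Pick a concrete provider in exactly one place...
    public static IDbConnection CreateConnection(string provider, string connectionString)
    {
        switch (provider)
        {
            case "sql":   return new SqlConnection(connectionString);
            case "oledb": return new OleDbConnection(connectionString);
            default: throw new ArgumentException("Unknown provider: " + provider);
        }
    }

    // ...and everything else works purely through the interfaces.
    public static int ExecuteNonQuery(IDbConnection connection, string sql)
    {
        IDbCommand command = connection.CreateCommand();
        command.CommandText = sql;
        return command.ExecuteNonQuery(); // connection must already be open
    }
}
```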

    Another relatively simple feature that many people want, out of performance fears (warranted or not), and that some people really need, is stored procedures.  Yes, the bread and butter of an O/R mapper is and should be dynamic SQL created at runtime, but does that mean you can't give the customer another often-requested option when it's easy to implement?  Like it or not, there are people that won't ever believe dynamic SQL is often just as performant, and there are at least some cases where they are right.  There are also other related issues, especially when data is denormalized for performance reasons, that stored procedures simplify, although certainly there are other alternatives, like triggers.  Enabling stored procedures will also allow you to provide a work-around for those silly users that want to use one of those other databases and still have database-generated primary keys.  There are bound to be other reasons for allowing stored procedures too -- my point is that there really is no reason not to make them an option since they are simple to include.

    Finally, I have one other "simple" scenario that I included in my WilsonORMapper that no one has asked for -- but I bet everyone will want it now that I've included it!  Basically, there's no reason that every O/R mapper can't provide a way to ask for a specific page of data, where you get to specify the size of each page.  That's right -- my ORMapper will query the database for only a specific page of data, using standard ANSI SQL, and not force you to jump through hoops or return more records than you really want.  You can actually see me using this feature on my site, where I have added a "grid" (I use repeaters) with paging of 20 records, also with ascending and descending sorting enabled.  Once again, there's no reason why the basic SQL can't be optimized better for the major players while still providing the generic functionality for all databases.  I'm sure I'm going to be ridiculed as naive, and I am when it comes to all the advanced features that others are providing that I will never even attempt, but that's no reason not to deliver the basics.
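One provider-generic way to get just a page (not necessarily how the WilsonORMapper actually does it -- per-database optimized SQL would be faster) is to run the ordered query through a DataReader, skip past the earlier pages, and stop reading once the page is full:

```csharp
using System.Collections;
using System.Data;

public class Pager
{
    // Returns up to pageSize rows for the given zero-based pageIndex,
    // each row as an object[] of field values.
    public static ArrayList GetPage(IDataReader reader, int pageIndex, int pageSize)
    {
        ArrayList page = new ArrayList();
        int skip = pageIndex * pageSize;
        int row = 0;
        while (reader.Read())
        {
            if (row >= skip)
            {
                object[] values = new object[reader.FieldCount];
                reader.GetValues(values);
                page.Add(values);
                if (page.Count == pageSize) break; // stop early: no wasted rows
            }
            row++;
        }
        return page;
    }
}
```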

    With that I'm done with my "lessons learned" topics.  Those are the basic features I expect in all O/R mappers.  What do you think are the simple basics that should be required?

    Read more...

  • O/R Mappers: Base Class or Not ?

    One of the main criticisms I've seen of most O/R mappers is that they typically require you to have your entity objects inherit from a base class provided by the mapping framework.  This can be a headache if you have your own class framework, and it can also contribute to other problems if you aren't careful.  I started out intending to require a base class too, since it does make things easier for the mapping framework, but I noticed that Microsoft does not have this restriction in ObjectSpaces.

    Why does a base class make things easier?  It easily allows you to maintain any state required by the mapping framework in the entity objects themselves.  What type of state do you need to track?  You at least need to track whether the object is new or existing, so you can decide whether to do an insert or an update.  You also need to track whether the object has been marked for deletion if you don't process that immediately.  Why would you not want to process a deletion immediately?  Think transactions and rollbacks, mainly.  Finally, you also need to track the original values of the entity object's members if you are going to do anything fancy.  So far I only use the original values for checking whether an object has changed and to enable cancelling changes (which seems to be missing from ObjectSpaces).  Since I've tracked the original values, I can also easily change my implementation to generate more specific updates or introduce optimistic concurrency in the future.
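A minimal sketch (names are illustrative, not from any actual framework) of the per-object state a mapper needs to keep, whether it lives in a base class or in the manager:

```csharp
using System.Collections;

public enum EntityState { Inserted, Existing, Deleted }

// The bookkeeping the mapping framework needs for each entity instance.
public class TrackedState
{
    public EntityState State = EntityState.Existing;
    public Hashtable OriginalValues = new Hashtable(); // member name -> value as loaded

    // Original values make change detection (and cancelling changes) possible.
    public bool IsDirty(string member, object currentValue)
    {
        return !object.Equals(this.OriginalValues[member], currentValue);
    }
}
```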

    So what can go wrong with a base class?  First, some customers do have their own class framework, like it or not, and is it really the O/R mapping framework's business to dictate its own base class if it's avoidable?  Next, if you agree that the Manager design pattern is the approach to take, and you don't plan on actually having Save and Delete methods in your base class, then you also have to question whether the state of the entity objects required by the mapping framework belongs in the objects or in the manager.  I'm not saying it's wrong to put the state in the objects here, just that it's a fair design question to be asking at this point.  OK, but I asked if anything can actually go "wrong" with the base class approach.  Well, how many copies of a single instance of an object do you think are valid?  A base class approach lets you retrieve multiple copies of the same instance of an object, each with its own state, adding overhead and possibly causing concurrency issues.  On the other hand, relying on the manager to track state, without a base class, means that there can only be one copy of each instance of an object that is actually being tracked.  Another possible problem with the base class approach is that it may (or may not) make your entity objects non-serializable and/or non-remotable.  I suppose that problem goes away if the base class inherits from ContextBoundObject, but that adds its own rather heavy overhead, so one has to ask if that's really the best answer.

    So what can go wrong without a base class?  Now you have a central manager object that is tracking the state of all your entity objects, but how does it know when to release all this state it's tracking?  This seemed to be the biggest problem as I started down my implementation without a base class, and I think I got it right.  First, you need to track whether any object still exists that needs its state tracked, without actually holding a reference to the object itself.  This is exactly what the WeakReference type is for in the .NET framework.  It lets you hold a "reference" to an object while still allowing the garbage collector to ignore your "reference" to it.  I certainly can't begin to understand half the code actually used in ObjectSpaces when I look at it, but I did confirm that this was also the solution Microsoft was using, which makes me feel a little more comfortable.  Next, that doesn't actually remove the state the manager is tracking -- it simply makes it possible to determine that the state is no longer needed.  So we now have to introduce a thread that will periodically go out and actually check on these WeakReferences and remove the state that is no longer necessary.  I created a simple timer for this, and I made the timer's interval configurable -- something that I don't see in ObjectSpaces, although it may be there somewhere.  One issue that remains is what to do about server-based systems (web, web services, remoting) where the manager will never be able to keep WeakReferences alive between requests.  My solution is to also track the time of last access and only remove state that is older than a configurable session time, which defaults to zero for normal in-process desktop applications.  I have no clue whether Microsoft has also thought about this in ObjectSpaces, although I certainly hope they have if ObjectSpaces is going to work in distributed systems.
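Put together, the manager side might look roughly like this sketch (the session handling and interval defaults in my actual mapper differ; names here are illustrative):

```csharp
using System;
using System.Collections;
using System.Threading;

public class StateManager
{
    private Hashtable tracked = new Hashtable(); // key -> WeakReference
    private Timer cleanupTimer;

    public StateManager(int cleanupIntervalMilliseconds)
    {
        // Periodically sweep for state whose entity has been garbage collected.
        this.cleanupTimer = new Timer(new TimerCallback(this.Cleanup),
            null, cleanupIntervalMilliseconds, cleanupIntervalMilliseconds);
    }

    public void Track(object key, object entity)
    {
        // A WeakReference lets the GC collect the entity even while we "hold" it.
        lock (this.tracked.SyncRoot)
        {
            this.tracked[key] = new WeakReference(entity);
        }
    }

    public bool IsTracked(object key)
    {
        lock (this.tracked.SyncRoot)
        {
            return this.tracked.Contains(key);
        }
    }

    private void Cleanup(object unused)
    {
        lock (this.tracked.SyncRoot)
        {
            ArrayList dead = new ArrayList();
            foreach (DictionaryEntry entry in this.tracked)
            {
                if (!((WeakReference) entry.Value).IsAlive) dead.Add(entry.Key);
            }
            foreach (object key in dead) this.tracked.Remove(key);
        }
    }
}
```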

    So, I chose the no-base-class approach for my O/R mapper, since it's what many people want, it's what ObjectSpaces will be delivering, and I think it's the overall more sound approach.  I'm still pretty new at this, so maybe I've missed something, but I think I'm on to something since Microsoft is also taking this approach, although I'm sure everyone can agree they've been known to miss the boat at times.  What do you think?

    Read more...

  • O/R Mappers: To Attribute or Xml ?

    I learned a lot while I was creating my own O/R Mapper, so I thought a few lessons learned would be good to blog. I'll be adding a few of these over the next couple of days, so stay tuned for some other interesting observations.

    First, I think Microsoft has made the right decision to use xml mapping files instead of attributes in code. Note that I'm not commenting on the complexity of theirs, although I suppose it may be necessary for performance. What I'm getting at is that attributes are just NOT the best approach for things that are external to the code! Attributes make perfect sense for things like unit tests, design-time-only features, and aspects like logging. But persistence mappings, like table and field names, are very external to the code, and they do often change.

    Why should I have to recompile my business objects just because someone changed the table or field names again? I can also easily imagine cases where applications must work with existing databases -- then what? And what if I want to support multiple databases but must make some schema changes for performance reasons? Finally, I've even seen people say they want to change their schemas on the fly -- that's weird, but what if? Only external xml mappings allow you to handle these scenarios.
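For illustration, an external mapping might look something like this hypothetical fragment (not the actual WilsonORMapper or ObjectSpaces schema) -- when a DBA renames CompanyName, you edit this file instead of recompiling the business objects:

```xml
<!-- Hypothetical mapping: table and field names stay outside the compiled code. -->
<entity type="MyApp.Customer" table="Customers">
  <attribute member="id" field="CustomerId" />
  <attribute member="name" field="CompanyName" />
</entity>
```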

    So far I've stayed away from some of the other reasons that attributes are not the best choice for persistence, but why not list them anyhow, just to stir the pot. My pet peeve: persistence attributes make my business objects very ugly for no important development reason. Speaking of development, I can imagine that many shops will have different people creating the business objects than the people worried about the mappings. Finally, if you use a GUI helper to create the mappings, then it seems obvious they should go in a separate file.

    I can't think of even one reason to prefer attributes, other than the argument that external files get messed up. I know attributes are cool in .NET, especially since they're one thing that Java does not have, but I do not think they make much sense for object-relational mapping.

    What do you think? I chose xml mappings, not attributes.

    Read more...

  • Announcing the WilsonORMapper

    Take a look at http://www.ORMapper.net for the latest on the WilsonORMapper!

    Announcing the WilsonORMapper -- Object-Relational Mapper for .NET.
    Automatically retrieve and persist objects from MS Sql, Access, etc.

    • Largely compatible with the syntax of MS ObjectSpaces in .NET v2.0.
    • Supports MS Sql, Access, and Oracle -- should work with others too.
    • No need to inherit objects from a base class or embed any attributes.
    • Object-relational mappings are made in an extremely simple xml file.
    • Includes a Windows ORHelper to generate the mapping and class files.
    • Can optionally use stored procedures for any and all data operations.
    • Explicitly create objects with new and StartTracking, or use GetObject.
    • Retrieve individual objects by key (identity, guids, or user-entered).
    • Retrieve collections as a static ObjectSet or a cursor ObjectReader.
    • Query with any where and sort clauses, or setup a default sort order.
    • Supports paged collections -- queries can specify page index and size.
    • Now supports one-to-many, many-to-one, and many-to-many relationships,
      with optional lazy-loading to reduce unnecessary loads from the database.
    • Collections support one-way read-only binding for both Web and Windows.
    • Persist individual objects, or a collection of objects in transaction.
    • Separate instances can be created for multiple databases or providers.
    • Easy to deploy with simple x-copy in shared web hosting environments.
    • Supports server systems with a configurable session and cleanup time.
    • Free to demo in VS.NET debugger, purchase of $50 includes C# source.

    Online Demo: http://www.wilsondotnet.com/Tips (including paged grids)
    Free Demo:  http://Download.WilsonDotNet.com/WilsonORMapperDemo.zip
    Documentation:  http://www.WilsonDotNet.com/Controls/WilsonORMapper.htm

    • Also included in purchase is the entire source code of WilsonDotNet.
    • This includes the WilsonWebForm control for multiple server forms.
    • Also included are dynamic MasterPages, stylesheets, and localization.

    Why?  I'm not trying to compete with Frans' LLBLGen Pro or EntityBroker.
    I'm simply extending my best practices website with its next logical step.
    I can't do that with a third party O/R Mapper and still offer fully working source.

    Note: The Demo's limitation is that it only works inside the VS.NET debugger.
    Update: WilsonORMapper v1.2.0.0 (3/4/2004) includes the following:
    (1) Build simple expressions with an OPath-like syntax using the QueryHelper.
    (2) User can define fields for optimistic concurrency, or read-only fields.
    (3) Updates can optionally be only the changes, with or without concurrency.
    Update: WilsonORMapper v1.1.1.1 (2/26/2004) includes the following:
    (1)  Bug Fix: Some null-value cases were not working -- known problems fixed.
    (2)  Bug Fix: Entity objects with private constructors can now be used in mapper.
    (3)  ORHelper: Fixed another bug that caused some naming schemes to fail.

    Update: WilsonORMapper v1.1.1.0 (2/22/2004) includes the following:
    (1)  Tim Byng has helped me add support for optional field default null-values.
    (2)  Bug Fix: Transactional PersistChanges with Collections now works correctly.
    (3)  ORHelper: Fixed two VB.NET bugs and one bug for some naming schemes.

    Update: WilsonORMapper v1.1.0.0 (1/27/2004) includes the following:
    (1)  Now supports One-To-Many, Many-To-One, and Many-To-Many relationships.
    (2)  All relationships support lazy-loading, use ObjectHolder for Many-To-One.
    (3)  Stored Procedures with parameters can now be used to load collections.
    (4)  GetDataSet allows you to specify what subset of columns to actually load.
    (5)  Execute set based updates and deletes, including expressions with updates.
    (6)  Oracle support has been tested, and a provider model has been introduced.
    (7)  ORHelper: Chad Humphries has made some improvements to the logic and GUI.
    Performance: WilsonORMapper v1.0.0.3 (1/14/2004) includes the following:
    (1)  Performance is now very much comparable to, and sometimes better than, DataSets.
    (2)  Objects should implement IObjectHelper to achieve the best performance.
    (3)  Objects can implement IObjectNotification to handle persistence events.
    (4)  A new method GetDataSet was created for situations where it is desired.
    (5)  The ordering of fields in the xml mapping file is now retained internally.
    (6)  Bug Fix: Table names with special characters (space, underscore) work.
    (7)  ORHelper: Optionally implements IObjectHelper and/or IObjectNotification.
    (8)  Documentation: Now includes a new column to indicate if part of MS syntax.
    Bug Fix: WilsonORMapper v1.0.0.2 (1/9/2004) includes the following bug fixes:
    (1)  The 1-to-many child relationship failing in some cases should now be fixed.
    (2)  The xml mapping file should now allow comments to be inserted inside of it.
    (3)  An extra check and exception was added on loading the xml for better help.
    More Help: See here for more details on the mappings and here for child setup.

    Read more...