Archives

Archives / 2003 / May
  • Entity: why do some people who write IT books re-invent definitions?

    Paul Gielens blogged about a possible misunderstanding of the term 'Entity'. Reading his text, it appears as if the general term 'Entity' has changed meaning recently. The reason: Eric Evans created a different definition. Let me be blunt here: if a definition in a general body of knowledge has been known for years by a given term, do not reuse that term for an extended definition, because it will cause misinterpretations between people who think they are talking about the same thing. The term here is 'Entity', and it has been defined for a long time, first by Peter Chen if I'm correctly informed, in his article 'The Entity-Relationship Model', ACM Transactions on Database Systems vol. 1 nr. 1 (March 1976), and his book 'The Entity-Relationship Approach to Logical Database Design', Wellesley, Mass.: Q.E.D. Information Sciences, 1977.

    Chen's work describes a model for designing databases: the Entity-Relationship model, in short the E/R model. As you can see, this model is rather old, more than 25 years, and was later superseded by the work of prof. G.M. Nijssen and prof. T.A. Halpin ('Conceptual Schema and Relational Database Design', 1989) on the NIAM modelling methodology, later renamed to ORM and extended by prof. T.A. Halpin. (read more about ORM here)

    Read more...

  • A quick update on LLBLGen Pro

    You might have heard about the DAL generator I released last year, LLBLGen, which, as a surprise to me, became a worldwide success (over 25,000 downloads). I'm currently busy developing its big brother, LLBLGen Pro, which should be released later this summer. As a quick update on what this successor is capable of, below are some lines of example code which use generated code (entities, collections) produced with an alpha version of LLBLGen Pro.

    The example loads a collection of entity objects (that's right, normal entity classes) of type OrderDetail from the Northwind database, all of which reference the product with ID '24'; binds it via databinding to a datagrid on a form, which allows full editing of the OrderDetail objects; and after that saves all changed objects to the persistent storage, using an embedded transaction, with a single line of code. I think it's pretty neat :) Of course this is just one of the many ways to retrieve / construct entity objects using the O/R mapper code generated by LLBLGen Pro. More updates later on.
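
    To give an impression, here is a sketch of how such code could look. Mind you, the class and method names below (OrderDetailCollection, GetMultiByProductID, SaveMulti) are illustrative guesses, not the actual alpha API:

    // hypothetical sketch; the names are illustrative, not the final API
    OrderDetailCollection orderDetails = new OrderDetailCollection();
    orderDetails.GetMultiByProductID(24);        // fetch all OrderDetails referencing product 24
    orderDetailsGrid.DataSource = orderDetails;  // databinding: the grid now edits the entities
    // ... the user edits the rows in the grid ...
    orderDetails.SaveMulti(true);                // save all changed entities in one embedded transaction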

    Read more...

  • My wish-list for the next Visual Studio.NET release

    I've now been working for a week or so with VS.NET 2003, and it has some neat features that version 2002 didn't have, which are, well, neat to have, but it also started me thinking about why they are implemented the way they are and why so much ground is left untouched. Below I've formulated some wishes for the next version of Visual Studio.NET (2004?), and as with every wish-list, I hope the wishes come true, or better: that what's wished for turns out to be a bad example of a short-sighted vision of how reality should be, and the next version of VS.NET (2004?) will prove that :)

    Read more...

  • Don't try to re-invent the browser, please.

    I saw several "Is IE dead?" blogs, and most recently DonXML's blog about this subject, and I really think this discussion is not focusing on the real issue.

    What's the problem with current browsers? It's not that they can't render version ABC of a given HTML, XML or XSLT variant. The problem is that they are used as application-GUI hosts while they are intended to be stateless viewers of information. Through an evolutionary process, Andreessen's tool for viewing hyperlinked texts has become an interactive viewer of application GUIs, but it still does so using the same old techniques. Which is a result of the way HTML works, and all mark-up languages in that respect.

    I read all kinds of thoughts on how and why IE should evolve, but it really shouldn't. It should be put to rest, and the focus should be moved to an application which is already on our desktops: the CLR itself. It's a waste of energy to re-invent a wheel that is already available: winforms. The majority of web-applications use cumbersome HTML forms to try to build a workable GUI, while a winforms developer can do the same with ease using the winforms glyphs and controls. If there were a way to run a winforms GUI on the desktop of the website visitor, you could build a rich and powerful GUI with common technology which doesn't suffer from the absence of scripting, from the fact that all HTML form glyphs are text based, and from other nasties related to HTML (or XHTML for that matter), all of which are totally avoidable when you use a decent GUI framework like winforms.

    I've been developing websites and webapplications since 1994 (ah, those good ol' days without images on pages) and I never understood why on earth a browser is used to host a rich GUI. HTML is not meant for that: it lacks serious building blocks available in every GUI toolkit on the planet (even the console library curses has them!). Trying to keep it alive for webapplication GUIs is not the way IE should be evolved, IMHO.

    The remaining problem is platform independence. When Mono sees the light of day, a CLR with decent winforms will be available everywhere. It should then be possible to run any decent winforms GUI frontend for any webapplication out there on every decent system and OS. IE as an application is then not needed anymore; the browser will be obsolete.

    HTML, or its markup successor, will not go away of course. It will be rendered by components embedded in other applications, like helpviewer, blog readers and other tools. Such a component can be embedded in winforms as well, as a control.

    The concept of the 'browser' is a concept of the past. Let it rest, let it die in peace. It's about time users move on to richer environments and technologies which truly connect user with application, no matter what type of connection is used (local system pipe/lan/wan/Internet via modem/ADSL/WiFi, you name it) and regardless of browser flavor and client-side settings.

    Read more...

  • Firebird .NET data provider v1.0 released

    Yesterday, the Firebird team released their .NET provider for the open source database Firebird (based on the Interbase code). The provider is also open source and of course free. I haven't played with the Firebird database recently, but judging by the features Interbase had and by what the Firebird team has added / updated in the source code, it is a serious alternative for developers who need a true RDBMS with ACID compliance throughout the system and full stored procedure and trigger support, but can't afford SqlServer (or alternatives which cost even more money).

    Read more...

  • Quick note from the HN-rehab center

    A quick note on the Hungarian Notation/Coding blog I wrote this morning, which was food for some good replies from Patrick and Chad. I am now a full day clean, that is, no Hungarian-notated member variable or parameter left my fingertips. The arguments I raised this morning, about the naming problems of some parameters and the prefixing of private member variables, were not that hard to overcome.

    As mentioned in the comments and also in Chad's clear posting, I totally forgot the 'this' keyword in my arguments. That is my own fault: I never use 'this' (due to HN), so it didn't occur to me. 'this' however is a good keyword for non-HN style development; with it you can totally abandon the prefixing, if you feel you have to. The naming problems I had also occurred with class name clashes and parameters which suddenly had similar names (the class name PasCal cased, the parameter caMel cased), so I had to come up with new names for these parameters. At first I thought this was a disadvantage, but it turned out quite OK: re-thinking your names for parameters is good, you come up with better ones over time, and in the end the result was much better than I expected in the beginning.
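
    For completeness, a minimal sketch of what 'this' buys you (the class and member names are just illustrative):

    public class Customer
    {
        private string name;    // no Hungarian or '_' prefix needed

        public Customer(string name)
        {
            // 'this' disambiguates the member from the constructor parameter,
            // so both can simply use the natural name.
            this.name = name;
        }
    }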

    Now if I only could find that phone-number for that AA-meeting for ex-HN coders...;)

    Read more...

  • Farewell, beloved Hungarian Coding.

    Today is a memorable day for me. This morning I had to bite the bullet: get rid of my Hungarian Coding style in C#. Not because I hate Hungarian Coding, on the contrary: I love it, but because I have to. The reason is that when you want to sell a library which targets .NET, you simply can't feed your customers a library with interfaces which use input parameters like cmdCommandToExecute or efEntityField. When I started this project I convinced myself that it was my code and that I should decide what the interfaces looked like. But this is just plain stubbornness. A customer doesn't give a you-know-what about the zillion + 1 reasons you can bring to the table for why you had to make the interfaces of your classes inconsistent with the rest of the classes used in the application (f.e. the .NET classes).

    Does it hurt? Well, a little. I'm still convinced Hungarian Coding is useful. F.e. I had an input parameter 'iOperator'. You can't change that to 'operator' because that's a reserved keyword, so you have to come up with another name for a parameter that's perfectly described by 'operator'. Member prefixing is another advantage; however, you can also have that with caMel-cased members, f.e. by using a '_' prefix. Microsoft avoided issuing any guideline for private member variable naming, and if I have to guess, the reason is that with a prefix you can use the same name for a member and for the parameter which supplies its value in a constructor or method, without being forced to come up with 2 names for the same value.
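
    A minimal sketch of that '_' prefix style (names again illustrative):

    public class EntityField
    {
        private int _maxLength;   // the '_' prefix keeps the member distinct...

        public EntityField(int maxLength)
        {
            _maxLength = maxLength;   // ...so the parameter can reuse the natural name
        }
    }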

    I'll miss it though, as I missed my own scheme when I switched to Hungarian Coding back in 1995 (doing C++ work drives you towards Hungarian Coding, like C# drives you towards PasCal / caMel casing). I already fear the cold-turkey syndrome I expect when I re-read my own code without a type prefix in sight. *iShiver*

    Read more...

  • Concurrency Control Methods. Is there a silver bullet?

    Which concurrency control method do you use most of the time? "First Save Wins" (optimistic locking in ADO.NET)? "Last Save Wins" (overwrite the row, no matter what)? Ever wondered what the difference between the two is when it comes to efficiency? Most people haven't, and think "Last Save Wins" is Bad™ and "First Save Wins" is Good™. But both make at least one person lose his work to preserve the work of another person. I read a thread today in the microsoft.public.dotnet.framework.adonet newsgroup where one person asked how he could make the SQL generator in Visual Studio.NET use the Last Save Wins method, and another person stepped in and bashed him for using a not very smart concurrency control method like Last Save Wins. But does it matter which one you pick when both are as inefficient as you can possibly make them (i.e. someone loses work)?
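
    To make the difference concrete, a minimal sketch of both methods, assuming an open SqlConnection 'con' and illustrative variable names:

    // "First Save Wins" (optimistic): the UPDATE only succeeds when the row
    // still holds the value this user originally read.
    SqlCommand cmd = new SqlCommand(
        "UPDATE Orders SET ShipVia = @new WHERE OrderID = @id AND ShipVia = @orig", con);
    cmd.Parameters.Add("@new", SqlDbType.Int).Value = newShipVia;
    cmd.Parameters.Add("@id", SqlDbType.Int).Value = orderID;
    cmd.Parameters.Add("@orig", SqlDbType.Int).Value = originalShipVia;
    if(cmd.ExecuteNonQuery() == 0)
    {
        // 0 rows affected: someone else saved first, so this user's work is rejected.
    }
    // "Last Save Wins" simply drops the "AND ShipVia = @orig" predicate and
    // overwrites the row: then the earlier saver's work is lost instead.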

    Read more...

  • Microsoft gets Database Performance crown back!

    On May 20, HP regained the TPC-C performance crown for non-clustered systems (you know, "Big Iron"), using Microsoft Windows Server 2003 Datacenter Edition and SqlServer 2000 Enterprise Edition 64-bit. It also came close to the clustered record (you know, "Little Iron"), also an HP/Windows/SqlServer result, which was submitted in September 2001. A stunning 707,102 transactions per minute were clocked on this beast with 64 Itanium 2 processors.

    And who was that little fellow who said Microsoft's products do not scale well? :)

    Read more...

  • "Framework-Hell"-solution is here: ISVs can upgrade to VS.NET 2003 without pain.

    Yesterday I blogged about a problem ISVs can run into when they sell .NET class libraries compiled with Visual Studio.NET 2003 to customers using Visual Studio.NET 2002. Today I've found a solution, which other ISVs appear to be using already, and which solves the problem that caused me to post my rant yesterday. Below I'll first describe in a few words the exact definition of the problem, then the solution, which works without any problem for your customers. After that I'll highlight a few things you have to keep an eye on when implementing this technique.

    The Problem
    A short version of the problem: ISV Acme is building a set of .NET class libraries which are sold as a package. Acme's developers use Visual Studio.NET 2002, so the class libraries target .NET 1.0. Acme's class libraries use additional assemblies from the .NET framework, like System.Data. Because Acme's developers are subscribers to the MSDN Universal program, they received Visual Studio.NET 2003, which targets .NET 1.1. Joe and Jack, Acme's main developers, migrate to Visual Studio.NET 2003 the same day they receive the DVDs and are very happy with the new IDE and its new features. Visual Studio.NET 2003 upgrades the projects for the class libraries Acme is planning to sell so that they target .NET 1.1, and Joe and Jack couldn't care less; who wants to go back to Visual Studio.NET 2002 when you have Visual Studio.NET 2003 on your machine? Acme's project leader orders them not to use any .NET 1.1 features, because customers using .NET 1.0 should also be able to use the class library.

    Now, Acme finishes its set of class libraries and releases them to the public. Janice, Early's sister, is one of the first customers to buy Acme's wonderful class libraries. However, because her boss, Peter Jansen, is from The Netherlands and doesn't want to spend any money on new Visual Studio.NET versions, Janice is still using Visual Studio.NET 2002. The class libraries are added to the project Janice is working on and she compiles the code. It compiles without errors. Excited, she starts a trial run of her project. However, her flawlessly compiled code throws an exception:

    Unhandled Exception: System.IO.FileNotFoundException: File or assembly name System.Data, 
    or one of its dependencies, was not found.
    File name: "System.Data"
     at Acme.Libraries.CoolStuff.MagicTricks.DoMagic(int seed)
     ...
    === Pre-bind state information ===
    LOG: DisplayName = System.Data, Version=1.0.5000.0, Culture=neutral, 
    PublicKeyToken = b77a5c561934e089
     (Fully-specified)
     ...
    

    Stunned, she looks at her monitor. Version 1.0.5000? That's weird. Being a geek for ages, she knows .NET 1.0 uses version number 1.0.3300 for its assembly files. 1.0.5000 must be .NET 1.1. She can't use .NET 1.1, because she runs Visual Studio.NET 2002, which is hard-linked to .NET 1.0, and her boss is not willing to pay for Visual Studio.NET 2003. She calls Acme and tells Joe about the problem she's having. Joe understands her problem and tells her he will look into it a.s.a.p.

    Joe fires up the old test machine in the corner, which still has Visual Studio.NET 2002 installed, and creates a test application using the Acme class libraries. Testing it, he runs into the same problems Janice experienced. Jack tries to help, but like Joe he can't seem to fix this easily inside Visual Studio.NET 2002. Apparently their class libraries reference .NET 1.1, and that is causing trouble on a machine with .NET 1.0 installed and not .NET 1.1. They try different workarounds, but none of them seems very nice: either they force the customer to perform significant actions, or they force Joe and Jack to keep track of multiple projects or compile on the command line. (Jack has worked with SunOS and GNU Make for years and refuses to do that; Joe agrees immediately.) However, their boss orders them to come up with a solution a.s.a.p., since Janice is a good customer.

    The Solution
    The solution is quite simple and yet so easy to miss. Jack comes up with an idea to fool Visual Studio.NET 2003. It works: the newly compiled class libraries, when sent to Janice, work flawlessly on her .NET 1.0 machine. What's the trick? Visual Studio.NET 2003 converts everything in your project to .NET 1.1: references, project format, etc. The only thing left untouched is the code itself. If you compile a converted project with Visual Studio.NET 2003, it will target .NET 1.1, and the assemblies created will have references to .NET 1.1 assemblies, with version number 1.0.5000. To avoid this, Jack simply removed the references to .NET 1.1 assemblies in the Visual Studio.NET 2003 projects of the class libraries and replaced them with references to the .NET 1.0 assemblies (which were also installed on his machine). All projects in a complete solution have to reference the same assembly versions (in this case the .NET 1.0 versions). Jack saves the solution and the project files, closes Visual Studio.NET 2003, re-opens it and rebuilds the complete solution. The class library assemblies, now compiled by Visual Studio.NET 2003, contain references to .NET 1.0 assemblies and will run without trouble on a .NET 1.0 installation, when used in a .NET 1.0 program.

    What made them fall into this pitfall? Visual Studio.NET 2003's eagerness to help the developer out: when it creates a new project or converts a 2002 project, it creates references to .NET 1.1 assemblies, and a developer will probably not think about these references until a customer like Janice calls with an error. If you have code which also has to run on .NET 1.0, and you are like Joe and Jack and want to use Visual Studio.NET 2003, keep in mind that you have to change the references to the framework assemblies in your projects by hand, all of them.

    Things you have to keep in mind
    Now, the story about Jack and Joe probably looks a lot like the situation you're in or will be in on short notice. Will it always work? If you write .NET 1.0 compliant code, it will. However, you should keep the following things in mind when using this trick:

    • Be aware that Visual Studio.NET 2003 will offer you the opportunity to write .NET 1.1 specific code, and if you are not careful, you will not notice this (but your .NET 1.0 using customer will).
    • If you have compiled your projects during a session in Visual Studio.NET 2003 and you then change the assembly references in the projects, be aware that the CLR can't unload assemblies from an appDomain. Visual Studio.NET 2003 may still have .NET 1.1 assemblies loaded, and compiles will then not succeed (they will probably show double-declaration errors and other goo). After changing the assembly references, save the solution and restart Visual Studio.NET 2003. Compilation will then succeed.
    • When you have more than one project in a solution, keep in mind that all projects in the solution have to reference the same versions of a .NET assembly (so you can't have one project reference System from .NET 1.1 and another project System from .NET 1.0); otherwise your compiles will most likely fail due to double-declaration errors. If this is not possible, give the projects which have to be used on .NET 1.0 their own solution.
    • You can't mix framework versions. An assembly has to reference one version; it can't use System.Data from .NET 1.0 and System.Xml from .NET 1.1.
    As always, this list is probably not complete. Always think about why you choose the option of supporting the total range of frameworks in your code. If there are big reasons for migrating the code to .NET 1.1 and taking advantage of its new features, and just a small set of reasons to keep the code on .NET 1.0, you will probably move to .NET 1.1 completely and this trick is not required.

    Acknowledgements
    Special thanks to Thomas Tomiczek for giving me the hint that made me stumble into this solution (which he knew already :) ) and Sam Gentile for good blog-advice.

    Read more...

  • Why I think some people shouldn't use VSNET 2003.

    Additional Note:
    I've changed the title of this article; the new title is a hint from Robert. While I think blogging is about ventilating personal opinions, I do not feel it's my duty to get Scott and this site (because these blogs are in the main feed) into trouble.

    I do however find it pretty weird that people complain about this article but do not complain directly to me, by email f.e. I know I sometimes choose titles and article topics on the 'edge' of what can be tolerated. I learned that when I was a news editor. Bad habits never die, they say. The comments on this blog showed there are some solutions, but none of them is that easy (for the customer or for the developer).

    As a final remark, for Microsoft: a developer has to make two choices: which .NET version to target and which IDE version to use. With Visual Studio.NET I have one choice: the .NET target also implies the IDE version. I can't force my customers to shell out money to upgrade to VS.NET 2003, just because I think it's nice to target .NET 1.1. These two things, platform and IDE, should be separated. I can still develop software for Windows 98 in VS.NET 2003, and I can also develop win32 software for Windows 2003 in VS.NET 2002. Let that kind of freedom also be a choice for .NET developers. Thanks.

    This morning it hit me: upgrading to Visual Studio.NET 2003 for .NET development of class libraries and controls is a bad choice when you are going to sell those class libraries and controls. I'll try to describe what drove me to this conclusion.

    Visual Studio.NET 2003 offers great new features; for me the best are the faster compiler and the great new intellisense features. It also has a big caveat: it will use .NET 1.1 when compiling your code. This means that any library or control you develop in Visual Studio.NET 2003 which relies on even a single .NET assembly, f.e. System.Data, will target .NET 1.1. Visual Studio.NET 2002 targets .NET 1.0: if you compile your code with Visual Studio.NET 2002, your code requests the .NET 1.0 versions of the libraries you reference.

    Because of the new features in Visual Studio.NET 2003 (and don't forget the huge amount of bugfixes, still not available for Visual Studio.NET 2002), developers who can, want to upgrade to this new Visual Studio.NET version a.s.a.p. Microsoft offers side-by-side installation of this new version with the 2002 version, so it's a no-brainer, right? No. It's a big mistake, and this article is meant to warn any software vendor who wants to release assemblies for developers of .NET applications: do NOT, I repeat, do NOT upgrade to Visual Studio.NET 2003, because it will seriously hurt your customer, if he can use your code at all. Only consider an upgrade if you are totally sure your customers use .NET 1.1 and / or you have to because you are using .NET 1.1 features.

    Let's investigate a real-life situation:

    The problem
    Within a few months I hope to release a generator/framework: the generator produces code which targets the accompanying framework. This framework is written in C# and is pre-compiled; it comes in the form of a couple of assemblies. You can also picture a situation where a developer writes a control library in C# or VB.NET and sells it to other developers for use in their products: same problem.

    When you, as an ISV, write your code in Visual Studio.NET 2002, it will be compiled with .NET 1.0 in mind. This means your class library DLL will run on any system with .NET 1.0 installed, or .NET 1.1 installed: when both are installed, it will pick .NET 1.0; if only .NET 1.0 is installed, it will pick .NET 1.0; and when only .NET 1.1 is installed, it will pick .NET 1.1. This also means that any developer can buy your library/framework: whether they use Visual Studio.NET 2002 or 2003 doesn't matter, the assembly will run OK.

    The situation changes dramatically when you, as an ISV, switch to Visual Studio.NET 2003. Your class library DLL will from then on target .NET 1.1, even if you do not use .NET 1.1 features. This also means that customers who buy your library and who use Visual Studio.NET 2002 will run into problems, because their code is compiled against .NET 1.0 and will not run, simply because the bought assembly requires .NET 1.1 assemblies which are not available. Errors like this will appear on the customer's machine:

    H:\Temp\dotnet test>FrameworkTester.exe
    Unhandled Exception: System.IO.FileNotFoundException: File or assembly name
    System.Data, or one of its dependencies, was not found.
    File name: "System.Data"
     at SD.NorthwindTest.DaoClasses.CustomerDAO.FetchCustomer(String customerID)
     ...
    Fusion log follows:
    === Pre-bind state information ===
    LOG: DisplayName = System.Data, Version=1.0.5000.0, Culture=neutral,
    PublicKeyToken = b77a5c561934e089
     (Fully-specified)
    LOG: Appbase = H:\Temp\dotnet test\
    LOG: Initial PrivatePath = NULL
    Calling assembly : SD.NorthwindTest, Version=1.0.1235.22288,
    Culture=neutral, PublicKeyToken=null.
    ...

    FrameworkTester.exe is a test application compiled on a box with .NET 1.0. The error-causing method is in a library compiled on a box with .NET 1.1 (using Visual Studio.NET 2003) and referenced during the compile of the FrameworkTester test application. As the error shows, the assembly compiled with Visual Studio.NET 2003 requires System.Data v1.0.5000.0, which is .NET 1.1.

    The customer who runs Visual Studio.NET 2002 now has a big problem with your software: he can't use the assembly unless he adds a long list of cryptic assemblyBinding tags to the config file of his application, which is a cumbersome job to say the least. These assemblyBinding tags are added automatically in Visual Studio.NET 2003 when you specify .NET 1.0 as a supported platform for your application; a user of Visual Studio.NET 2002 doesn't have this functionality.

    There are no figures on how many developers are on .NET 1.1 / Visual Studio.NET 2003 at the moment. Fact is that a very large part of the current .NET developers have .NET 1.0 on their machines and are most likely compiling against this platform. If you want to sell your software to developers, and thus to the developers in this large group, you shouldn't upgrade to Visual Studio.NET 2003, to avoid the platform conflicts and the resulting support hell that follows. ISVs should compile using .NET 1.0 to be sure their customers can use the software they bought with the IDE they currently have, be it Visual Studio.NET 2002 or 2003.

    What's causing this mess anyway?
    The reason people have to jump through these hoops is the hard-wiring of the target platform into the IDE used. You can't target .NET 1.1 assemblies in Visual Studio.NET 2002 and you can't target .NET 1.0 assemblies in Visual Studio.NET 2003. This means that the choice of which editor (!) you want to use for the source text (!) automatically is also the choice of which platform you are going to target.

    Is there a solution?
    Well... it would be very nice if I could set the target platform for the compiler in Visual Studio.NET myself, so I could benefit from the editor enhancements and still target older platforms, and so be sure that all customers of my library will be able to use it in their applications no matter which version of Visual Studio.NET they use. However, this is not possible, nor will it be. Microsoft decided it would be nice if developers get scr*wed over when they buy Visual Studio.NET, so here we are. Then there is the Publisher Policy File phenomenon. Microsoft states in the MSDN that you should be careful with these files and only release them when your application requires them. And how would you use them in this situation? It would probably require a policy file for the .NET framework itself, so all assembly requests are redirected to the .NET 1.0 versions when a .NET 1.1 targeting assembly requests a 1.1 version. Does anyone know? I couldn't find any information about this matter related to the backwards compatibility issue.

    The best workaround is to supply a list of assemblyBinding tags with your assembly, which then have to be included in the app.config file of the exe compiled with .NET 1.0. This redirects any requests from your .NET 1.1 targeting assembly to the .NET 1.0 assemblies. Did you know this? I didn't, for sure. Is Microsoft supplying information about this? Not at all. Only upward compatibility issues and information about side-by-side execution, which will not work here. See this article.
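
    For those who want to try this, a sketch of what such a redirect looks like in the app.config of the .NET 1.0 exe, here redirecting System.Data from the 1.1 version (1.0.5000.0) back to the 1.0 version (1.0.3300.0):

    <configuration>
      <runtime>
        <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
          <dependentAssembly>
            <assemblyIdentity name="System.Data" publicKeyToken="b77a5c561934e089" culture="neutral" />
            <bindingRedirect oldVersion="1.0.5000.0" newVersion="1.0.3300.0" />
          </dependentAssembly>
          <!-- repeat the dependentAssembly tag for every framework assembly the bought assembly references -->
        </assemblyBinding>
      </runtime>
    </configuration>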

    Oh, before you think "but then I'll keep both Visual Studio.NET versions side by side on my machine!"... you can't open your projects in Visual Studio.NET 2002 after you've opened them in Visual Studio.NET 2003. They get converted and you can't convert them back (well, you can, but it's messy). So a simple recompile using the other Visual Studio.NET version is not possible.

    If someone has a solution for this which works, please post it in the comments. I've tested it on a machine with solely .NET 1.0 installed, and the only way I could run a .NET 1.0 compiled .exe which uses .NET 1.1 assemblies (i.e. bought assemblies) on the .NET 1.0 machine was by using the assemblyBinding tags in the .config file. (I copied them over from the Visual Studio.NET 2003 version of the config file, which contained these tags generated by Visual Studio.NET 2003.)

    I'm seriously considering going back to Visual Studio.NET 2002. It sucks, but I have no choice.

    Read more...

  • Beware of the UnUnloadable IL!

    Today an interesting thread was started on the DOT-NET CLR mailing list. The topic: a possible memory leak when you are not careful with XSL transformations or compiled regular expressions. The reason for these memory leaks is that the CLR generates IL behind the scenes, and generated IL cannot be unloaded from an AppDomain unless the complete AppDomain is unloaded. Garbage collection (GC) does not fix this. The XSL transformations which will cause trouble are the ones with JScript embedded in the stylesheet: the JScript is compiled into IL and, as said, this will never be unloaded.

    Visit the start message and all its follow-ups by clicking here.

    I wasn't aware of this (as you can read in the thread) and will now rework my LR(1) parser to use cached, static compiled regular expressions to avoid this memory leak. I thought GC would take care of the compiled regular expressions, but this is not the case. Since the parser parses UBB code and HTML in a webapplication, it is instantiated on every parse request, which then causes the generation of a new set of compiled regular expressions, which will never be unloaded. Who would have thought!
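
    A minimal sketch of that caching approach (the pattern and class name are just illustrative):

    using System.Text.RegularExpressions;

    public class UbbParser
    {
        // Compiled once per AppDomain. The IL which RegexOptions.Compiled
        // generates can never be unloaded, so keep a single static instance
        // instead of creating a new compiled Regex on every parse request.
        private static readonly Regex boldTagRegex =
            new Regex(@"\[b\](.*?)\[/b\]", RegexOptions.Compiled);

        public string Parse(string ubbText)
        {
            return boldTagRegex.Replace(ubbText, "<b>$1</b>");
        }
    }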

    Ryan Heath mentioned an article about this in the MSDN: [here]

    Happy Patching ;)

    Read more...

  • VS.NET 2003 isn't all that bad (sort of)

    A few days ago, I blogged about a serious flaw in the ASP.NET editor in Visual Studio.NET 2003. Now, don't get me wrong, I really find it stunning that this Bad Boy™ is still around, and according to the follow-ups to Mike Moore's posting in the usenet thread about this issue, more people are seriously offended by this lack of customer support, but... Visual Studio.NET 2003 isn't all bad. In fact, some great enhancements are finally available.

    The two features I like most (I'm referring to C# here, so if you hammer out VB.NET all day and get all warm inside reading about these two features: I don't know if they are available to you, but they're rather RADdish, so I think you'll find them in your editor as well) are: automatic stubbing of interface implementations (using the Interface Stubber™ (more on that later)) and the easy way to hook up / create event handlers from inside the code editor.

    The first time I tried out the interface stubbing feature, I immediately wondered if it would work with abstract classes as well. Well... no. Nothing happened. Digging into the "What's new, Bubba!" pages, I discovered that for abstract classes / classes with virtual methods another feature was added: override help. Although this feature is very OK, trust me, why doesn't a derivation of a class with abstract methods (or an abstract class altogether) stub out all the abstract methods/properties with overrides, similar to the Interface Stubber™? It would have been a nice addition, and the code is already there. Ah well, forget it, I'm Dutch, we always find something to nag about :)
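
    To illustrate, roughly what the override help gives you (class names are just illustrative):

    public abstract class EntityBase
    {
        public abstract void Validate();
    }

    public class OrderEntity : EntityBase
    {
        // typing 'override ' in the editor lists the base members you can
        // override and stubs out the one you pick:
        public override void Validate()
        {
            // TODO: implement
        }
    }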

    The Interface Stubber™ is a true life-saver. Ever since Obi-Wan Kenobi told me "Program against Interfaces, Luke, not against Classes", I use them everywhere, and a nice Interface Stubber™ just below your fingertips then truly makes your day, I guarantee it.

    Of course, for the tens of thousands of people who have not yet been able to lay their hands on a VS.NET 2003 copy, f.e. because of a laggy ordering system over at Microsoft, there is a nifty little class template for Visual Studio.NET 2002, called Inherited Class Skeleton Generator, which does what VS.NET 2003's cool Interface Stubber™ does, and much more.

    [Listening to: HDD Symphony - Maxtor Orchestra feat. Screechy Rotations - Allegro (15:23)]

    Read more...

  • Serious ASP.NET Editor flaw lives on in VS.NET 2003

    Several (ok, a lot) of people have been bitten by the "I-format-your-HTML-code-as-I-please" bug in the ASP.NET HTML editor in Visual Studio.NET 2002. To recap what the bug does to you: you are working in HTML view on a page. You have to do this because, f.e., you are working on a repeater template. Because you are a good developer, you create a readable layout for your code, so also for this HTML. When you come to the point where you have to add a control to your ASP.NET page, you have to switch over to design view, drag it from the toolbox onto the page, and switch back to HTML view to add, f.e., templates or additional properties, or even to position it correctly. When you switch back from design view to HTML view, your code sometimes (not always) gets reformatted randomly, even if you switch off every auto-formatting feature in Visual Studio.NET. Hitting control-z undoes this formatting, but it gets annoying over time.

    When Visual Studio.NET 2003 hit the streets, people were expecting a fix for this bug. However... it's still there. Someone started a thread about this in the microsoft.public.vsnet.general newsgroup, demanding an explanation of this behaviour and of why it wasn't fixed (literally hundreds of people have reported this bug in said newsgroup before). The complete thread can be found here.

    Today, Mike Moore of Microsoft posted a follow-up in that thread about his investigation into this matter. It might be of interest to some of you, so I'll post the message directly.

    (c) Mike Moore/Microsoft.
    SUMMARY
    This thread is discussing a problem with the Visual Studio .NET development environment for ASP.NET. If you type directly into the HTML view of the ASPX page, sometimes VS will reformat what you have typed. As is shown by the other posts in this thread, this bug is affecting many people.

    We already discussed two "partial" workarounds. These help, but do not fix the problem. I also wrote that I would try to find out more about why we did not fix this problem with the release of VS 1.1.

    These are the workarounds.

    1) Undo
    Each time you return to HTML view, immediately run Undo. The formatting changes mostly take place when you switch from design view to HTML view and these changes are mostly in a single undo entry. Calling undo then reverses many of the changes.

    2) Save
    Saving the ASPX page just prior to switching to design view reduces the amount of reformatting the next time you switch back to HTML view.

    ---
    Regarding the explanation, I first want to apologize for taking so long to get back to you all.

    I found that the development team did seriously consider this bug. The first thought was to add an option to turn off the reformatting feature. Unfortunately, it turned out to be deeply integrated in the code that makes the editor useful. So, it could not be turned off. Nor could it easily be fixed. Any changes made to this area of the code would definitely impact many aspects of the editor.

    If they could go back in time and rethink that decision, things might be otherwise. However, at the time, the development team looked at all the information they had available and decided that trying to fix this for the 1.1 release would cause more harm than good.

    My opinion about this can be read in the linked thread, but to sum it up: I'm very disappointed. It's a text editor, for crying out loud. If you can't preserve tabs in text, something is seriously wrong. And what's worse: this won't be fixed until the next major release of Visual Studio.NET, which will not see the light of day before Q2 2004.

    Read more...

  • The 'benchmark' code

    I've decided to post the code I used to test what's faster: dynamic queries or stored procedures with optional parameters. The code can be found here. Let me add a disclaimer: I'm not pretending to have done scientific research or scientific benchmarking. All I've done is write a couple of routines which represent, for me, a real-life situation using either one of the techniques. Of course the routines can be sped up and recoded in other forms, and perhaps I've made a mistake in the code which results in the slow speed of either one of the techniques. Feel free to comment :)

    Read more...

  • The SP Benchmark code


    All benchmarks use the Northwind database on SqlServer. Northwind ships with every SqlServer installation, as well as with the MSDE installation.

    The Stored Procedure.
    The stored procedure used is the following. Add this one to the Northwind database.

    CREATE PROCEDURE pr_Orders_SelectMultiWCustomerEmployeeShipper
    	@sCustomerID nchar(5),
    	@iEmployeeID int,
    	@iShipperID int
    AS
    SELECT 	*
    FROM	Orders
    WHERE	CustomerID = COALESCE(@sCustomerID, CustomerID)
    	AND
    	EmployeeID = COALESCE(@iEmployeeID, EmployeeID)
    	AND
    	ShipVia = COALESCE(@iShipperID, ShipVia)
    

    The C# code.
    The following code was used to run the benchmarks. There are two routines, one tests the dynamic query, the other tests the stored procedure. Call either one to see how they are performing. You have to change the server name in the connection string constant.

    using System;
    using System.Data;
    using System.Data.SqlClient;
    using System.Data.SqlTypes;
    using System.Text;
    
    namespace Benchmarker
    {
        public class Benchmarker
        {
            private const int   iAMOUNT_LOOPS=10000;
            private const string sEXAMPLE_CUSTOMERID="CHOPS";
            private const string sCONNECTION_STRING="data source=MyServer;initial catalog=Northwind;integrated security=SSPI;persist security info=False;packet size=4096";
            private const bool bPRINT_ROWCOUNT=false;
    
            public void Start()
            {
                BenchmarkSelfBuildQuery();
                //BenchmarkStoredProcedure();
                Console.ReadLine();
            }
    
            private void BenchmarkSelfBuildQuery()
            {
                Random rdmGenerator = new Random(unchecked((int)DateTime.Now.Ticks)); 
    
                Console.WriteLine("Dynamic query benchmark");
                DateTime daStartTime = DateTime.Now;
                Console.WriteLine("Benchmark started on: {0}.", daStartTime);
                int iMaxAmountRowsRetrieved = 0;
                for(int i=0;i<iAMOUNT_LOOPS;i++)
                {
                    SqlString sCustomerID = SqlString.Null;
                    SqlInt32 iEmployeeID = SqlInt32.Null;
                    SqlInt32 iShipperID = SqlInt32.Null;
    
                    // determine randomly which parameters should be NULL. 0 means no parameter is NULL, 1 means shipper is NULL, 
                    // 2 means shipper and employee are NULL, 3 means all are NULL
                    int iNullValueDeterminer = rdmGenerator.Next(4);
                    string sWhereClause="";
                    SqlConnection scoCon = new SqlConnection(sCONNECTION_STRING);
                    SqlCommand scmCom = new SqlCommand();
                    SqlDataAdapter sdaAdapter = new SqlDataAdapter(scmCom);
    
                    switch(iNullValueDeterminer)
                    {
                        case 0:
                            // All parameters have a value
                            sCustomerID = sEXAMPLE_CUSTOMERID;
                            iEmployeeID = rdmGenerator.Next(1,10);
                            iShipperID = rdmGenerator.Next(1,4);
                            sWhereClause = " WHERE CustomerID=@sCustomerID AND EmployeeID=@iEmployeeID AND ShipVia=@iShipperID";
                            // add parameters
                            scmCom.Parameters.Add(new SqlParameter("@sCustomerID", SqlDbType.NChar, 5, ParameterDirection.Input, false, 0, 0, "", DataRowVersion.Current, sCustomerID));
                            scmCom.Parameters.Add(new SqlParameter("@iEmployeeID", SqlDbType.Int, 0, ParameterDirection.Input, false, 10, 0, "", DataRowVersion.Current, iEmployeeID));
                            scmCom.Parameters.Add(new SqlParameter("@iShipperID", SqlDbType.Int, 0, ParameterDirection.Input, false, 10, 0, "", DataRowVersion.Current, iShipperID));
                            break;
                        case 1:
                            sCustomerID = sEXAMPLE_CUSTOMERID;
                            iEmployeeID = rdmGenerator.Next(1,10);
                            sWhereClause = " WHERE CustomerID=@sCustomerID AND EmployeeID=@iEmployeeID";
                            // add parameters
                            scmCom.Parameters.Add(new SqlParameter("@sCustomerID", SqlDbType.NChar, 5, ParameterDirection.Input, false, 0, 0, "", DataRowVersion.Current, sCustomerID));
                            scmCom.Parameters.Add(new SqlParameter("@iEmployeeID", SqlDbType.Int, 0, ParameterDirection.Input, false, 10, 0, "", DataRowVersion.Current, iEmployeeID));
                            break;
                        case 2:
                            sCustomerID = sEXAMPLE_CUSTOMERID;
                            sWhereClause = " WHERE CustomerID=@sCustomerID";
                            // add parameters
                            scmCom.Parameters.Add(new SqlParameter("@sCustomerID", SqlDbType.NChar, 5, ParameterDirection.Input, false, 0, 0, "", DataRowVersion.Current, sCustomerID));
                            break;
                        case 3:
                            // do nothing, they're already NULL;
                            break;
                    }
    
                    // create the query
                    StringBuilder sbQuery = new StringBuilder("SELECT * FROM Orders");
                    sbQuery.Append(sWhereClause);
    
                    scmCom.CommandText = sbQuery.ToString();
                    scmCom.Connection = scoCon;
    
                    // run the query
                    DataTable dt = new DataTable("Test");
                    sdaAdapter.Fill(dt);
                    if(bPRINT_ROWCOUNT)
                    {
                        Console.WriteLine("Run no.: {0}", i);
                        Console.WriteLine("Amount of rows returned: {0}", dt.Rows.Count);
                    }
    
                    if(dt.Rows.Count > iMaxAmountRowsRetrieved)
                    {
                        iMaxAmountRowsRetrieved=dt.Rows.Count;
                    }
    
                    if((i%100)==0)
                    {
                        Console.WriteLine("Amount of runs done: {0}", i);
                    }
                }
                DateTime daEndTime = DateTime.Now;
    
                Console.WriteLine("Benchmark ended on: {0}.\nTotal time: {1}.", daEndTime, (daEndTime - daStartTime));
                Console.WriteLine("Amount of runs: {0}. Max. amount of rows retrieved: {1}", iAMOUNT_LOOPS, iMaxAmountRowsRetrieved);
            }
    
            private void BenchmarkStoredProcedure()
            {
                Random rdmGenerator = new Random(unchecked((int)DateTime.Now.Ticks)); 
    
                Console.WriteLine("Stored procedure benchmark");
                DateTime daStartTime = DateTime.Now;
                Console.WriteLine("Benchmark started on: {0}.", daStartTime);
                int iMaxAmountRowsRetrieved = 0;
                for(int i=0;i<iAMOUNT_LOOPS;i++)
                {
                    SqlString sCustomerID = SqlString.Null;
                    SqlInt32 iEmployeeID = SqlInt32.Null;
                    SqlInt32 iShipperID = SqlInt32.Null;
    
                    // determine randomly which parameters should be NULL. 0 means no parameter is NULL, 1 means shipper is NULL, 
                    // 2 means shipper and employee are NULL, 3 means all are NULL
                    int iNullValueDeterminer = rdmGenerator.Next(4);
                    SqlConnection scoCon = new SqlConnection(sCONNECTION_STRING);
                    SqlCommand scmCom = new SqlCommand("pr_Orders_SelectMultiWCustomerEmployeeShipper", scoCon);
                    scmCom.CommandType = CommandType.StoredProcedure;
                    SqlDataAdapter sdaAdapter = new SqlDataAdapter(scmCom);
    
                    scmCom.Parameters.Add(new SqlParameter("@sCustomerID", SqlDbType.NChar, 5, ParameterDirection.Input, false, 0, 0, "", DataRowVersion.Current, sCustomerID));
                    scmCom.Parameters.Add(new SqlParameter("@iEmployeeID", SqlDbType.Int, 0, ParameterDirection.Input, false, 10, 0, "", DataRowVersion.Current, iEmployeeID));
                    scmCom.Parameters.Add(new SqlParameter("@iShipperID", SqlDbType.Int, 0, ParameterDirection.Input, false, 10, 0, "", DataRowVersion.Current, iShipperID));
    
                    switch(iNullValueDeterminer)
                    {
                        case 0:
                            // All parameters have a value
                            sCustomerID = sEXAMPLE_CUSTOMERID;
                            iEmployeeID = rdmGenerator.Next(1,10);
                            iShipperID = rdmGenerator.Next(1,4);
                            break;
                        case 1:
                            sCustomerID = sEXAMPLE_CUSTOMERID;
                            iEmployeeID = rdmGenerator.Next(1,10);
                            break;
                        case 2:
                            sCustomerID = sEXAMPLE_CUSTOMERID;
                            break;
                        case 3:
                            // do nothing, they're already NULL;
                            break;
                    }
    
                    scmCom.Parameters["@sCustomerID"].Value = sCustomerID;
                    scmCom.Parameters["@iEmployeeID"].Value = iEmployeeID;
                    scmCom.Parameters["@iShipperID"].Value = iShipperID;
    
                    // run the query
                    DataTable dt = new DataTable("Test");
                    sdaAdapter.Fill(dt);
    
                    if(bPRINT_ROWCOUNT)
                    {
                        Console.WriteLine("Run no.: {0}", i);
                        Console.WriteLine("Amount of rows returned: {0}", dt.Rows.Count);
                    }
    
                    if(dt.Rows.Count > iMaxAmountRowsRetrieved)
                    {
                        iMaxAmountRowsRetrieved=dt.Rows.Count;
                    }
    
                    if((i%100)==0)
                    {
                        Console.WriteLine("Amount of runs done: {0}", i);
                    }
                }
                DateTime daEndTime = DateTime.Now;
    
                Console.WriteLine("Benchmark ended on: {0}.\nTotal time: {1}.", daEndTime, (daEndTime - daStartTime));
                Console.WriteLine("Amount of runs: {0}. Max. amount of rows retrieved: {1}", iAMOUNT_LOOPS, iMaxAmountRowsRetrieved);
            }
        }
    
    
        /// <summary>
        /// Starts up a test
        /// </summary>
        class Startup
        {
            [STAThread]
            static void Main(string[] args)
            {
                Benchmarker bm = new Benchmarker();
                bm.Start();
            }
        }
    }
    

    Read more...

  • 'You want a Dataset with that DAL, sir?' 'No, thank you.'

    Yesterday, I saw several blogs about the datalayer phenomenon and how to construct one. Now, there are several roads which lead to a glorifying solution: some muddy, some paved with the finest asphalt. Today, I'd like to talk about one of the more muddy ones: ye olde 'DataSet Route'.

    When I started with .NET and C# in January 2002, I was eager to try out the new data-oriented objects and the tools built into Visual Studio.NET. Like many of you, I too have experienced that vast, hurting feeling of deep disappointment upon discovering that the Data Component in Visual Interdev was totally unusable in an n-tier solution (which thus resulted in long, boring weeks of typing Data Access Layer (DAL) code calling stored procedures for Your Basic Database Operations™: the Create, Retrieve, Update, and Delete actions). Boy, was I excited when I read that Microsoft had included new tools in Visual Studio.NET which would create all that boring code automatically, plus enable you to bind the results to any control you wanted!

    I started a little test application, nothing big, using ASP.NET, C# and SQL Server. Following one of the many walkthrough tutorials, I started creating classes, dragging and dropping SqlConnection objects, DataAdapters and what have you. After a couple of minutes of dragging and dropping objects onto canvases, I got this wobbly feeling deep inside that I'd been here before, but I thought "Naah... this is new stuff, it can't be".

    However, after completing the tutorial and seeing the goopy code constructing a nice datagrid on my shiny ASP.NET page, filling it with data leeched from the Northwind database, that annoying feeling inside me was still there. To get rid of it, to convince myself my mind was playing hide and seek with me, I took a closer look at the code I had constructed using the tutorial. "See? It looks totally different!"

    It didn't help. It was Visual Interdev and its winner of the Useless Object Of The Year award, the Data Component Design Time Control (DC DTC), all over again. Well, not totally: there was a difference. The VS.NET / .NET approach runs at the server using disconnected sets of data. For the rest, it was all the same: connection string definitions hardcoded (generated!) in the code, no middle tier. No middle tier? Not even a DAL tier! And where was the stateless paradigm Microsoft once preached to us developers?

    To use the tools inside Visual Studio.NET to work with the new data-related objects, you have to sell your soul to the IDE, which then demands that you do as you're told. With 'using' I mean: using the tools so you save precious time by letting them generate code for you which you would otherwise have to type in. For starters: to use the drag/drop features in Visual Studio.NET, you have to derive your class, which will utilize these dropped objects, from System.ComponentModel.Component. This can have serious implications for your object model, especially when you consider that .NET supports single inheritance, not multiple inheritance. If you want to use a different object model, you can't use the nice features: sell your soul or burn your fingers typing it all out yourself.

    Like Visual Interdev, Visual Studio.NET doesn't help you a hell of a lot when you want to develop an n-tier application using the new data-oriented objects. This is a disappointment to some degree, but after a while you get over it, eventually, and decide to start typing the code using these data-oriented objects by hand. This is tedious work: defining all those data-adapters with the queries they have to run, the parameter declarations the command objects need, etcetera, etcetera, but better that than selling your soul, right? So I chose the typing route. I started a new class, which would represent a DAL class and would retrieve the same data as in the tutorial, but now using hand-typed code, because I wanted a slim, lean class to work with.

    The tutorial I used worked with the DataSet. Reading through all the material Microsoft has released about using data in .NET, this object must be the best thing since sliced bread. Looking at its features, it is an impressive piece of work: versioning, XML support, multi-table support, relations, you name it. It is disconnected, so in fact it's a cached version of the data held by the original datasource. This sounds like an advantage over the ADO recordset object. Being an in-memory cache is, however, also its biggest disadvantage. Let's investigate when this is a disadvantage and why.

    In single-user applications, often one-tier or two-tier, the DataSet is not a problem. The problem starts when multiple users are using your application through DataSets. In multi-user applications built with the n-tier paradigm, like a basic webapplication, the different users, each running their own application version in their own thread, share only a few things with each other: the application code and the application state. The application state is the complete state the application is in, thus the state shared by all users. Normally this is the current state stored in the shared persistent storage used by the application; in many situations this is a database. Because this application state is shared among all user threads in the application, when it's altered in thread T1, all other threads should immediately work with the changed application state; otherwise you'll have multiple versions of that application state: each thread will have its own, and the data used by a given thread can be out of sync with the application state.

    To achieve fast propagation of application state changes to other threads, developers learned that you should make the changes directly to the physically shared repository where the application state is stored: the database. If you then also make sure that, whenever you require data from the application state which can be changed by other threads, you read it from that same physically shared repository, you can be sure you have the latest data, read from a correct application state. This is and was the idea behind Windows DNA and stateless development using ASP and MTS/COM+.

    As said, using DataSets to cache data from that application state can make threads using that cached data go out of sync with the application state seen by other threads. The main reason is that DataSets created in different threads use their own connection to the database and a different adapter. This means that when two users, U1 and U2 (no, not that rock band), working with the same webapplication, run their own threads on the server, and independently of each other request the same rows from the database, they use two different connections to do so, holding two different DataSets. Semantically, however, it's the same data: each row in a DataSet represents an entity instance, and now two users each have a copy of the same entity in memory.

    When U1 changes something in a given row in his DataSet and saves it back to the database, U2 will not see these changes. If U2 changes something else in her DataSet, U1 will not see those changes either. When U2, always the smartest, fastest kid on the block, is done with her work first and saves her DataSet's changes, U1 will be left in the cold later on when he wants to save his changes back to the database. And this can be confusing, because the changes made by U1 did succeed, that is: when they were made in the DataSet. They were not yet propagated back to the application state in the database, but that was supposed to be just a matter of time. Code run by U1 now has to deal with a failed update from the DataSet, which was not expected, since the updates to the rows in the DataSet went well.
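
    A minimal sketch of this scenario in code, assuming the Northwind database and an order with ID 10248 (the connection string and row values are illustrative):

    // both users cache the same Orders row in their own DataSet, via their
    // own adapter and connection
    string connString = "data source=MyServer;initial catalog=Northwind;integrated security=SSPI";
    string query = "SELECT OrderID, ShipVia FROM Orders WHERE OrderID = 10248";

    SqlDataAdapter adapterU1 = new SqlDataAdapter(query, connString);
    SqlDataAdapter adapterU2 = new SqlDataAdapter(query, connString);
    SqlCommandBuilder cbU1 = new SqlCommandBuilder(adapterU1);  // generates optimistic ("First Save Wins") UPDATEs
    SqlCommandBuilder cbU2 = new SqlCommandBuilder(adapterU2);

    DataSet dsU1 = new DataSet();
    DataSet dsU2 = new DataSet();
    adapterU1.Fill(dsU1, "Orders");
    adapterU2.Fill(dsU2, "Orders");   // same entity, two in-memory copies

    // U2 saves first: succeeds, the application state changes.
    dsU2.Tables["Orders"].Rows[0]["ShipVia"] = 2;
    adapterU2.Update(dsU2, "Orders");

    // U1 edits his stale copy and saves later: the change succeeded in his
    // DataSet, but the generated WHERE clause no longer matches the row, so
    // Update() throws a DBConcurrencyException and U1 is left in the cold.
    dsU1.Tables["Orders"].Rows[0]["ShipVia"] = 3;
    adapterU1.Update(dsU1, "Orders");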

    The DataSet as a concept promotes this kind of application development, bringing developers into trouble if they don't understand the possible state-related disadvantages of the DataSet. Too many times, 'helped' by the not-so-helpful utilities in Visual Studio.NET, starting developers, and also developers who are new to n-tier, stateless, multi-user applications, fall into this trap. And it's unnecessary; however, Microsoft doesn't help here.

    When you investigate the MSDN library which comes with Visual Studio.NET, you'll notice that Microsoft uses two main approaches for dealing with data in .NET applications: (a) the DataSet way and (b) the DataReader way. The DataSet way is well documented and supported by a big gang of utilities inside the Visual Studio.NET IDE. The DataReader way is not: it is meant for the die-hard developers who want to type in everything themselves. As illustrated above, this means that as a developer you have two choices: use the Visual Studio.NET tools and the DataSet approach, or type it in yourself and pick whichever approach you like.

    Because of the vast amount of documentation, most developers go for the DataSet approach, while the DataReader approach is more suitable in most cases: it uses read-only ways to read data for viewing/consumption by business logic processes, and uses command objects to execute changes directly on the application state. This means there is no in-memory caching, code will not make the same change twice (first in the in-memory object, the DataSet, then in the database itself), and when data is required for some process, it is read directly from the database instead of from an in-memory cached version of a piece of the application state. However, the DataReader approach requires much more typing, and is therefore not the preferred choice for a lot of developers.
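
    To illustrate, a minimal sketch of the DataReader approach, again assuming the Northwind database; the connection string and the order ID in the update are hypothetical example values. Reads go through a forward-only reader, changes go directly to the shared application state via a command object, and nothing is cached in memory:

    using System;
    using System.Data;
    using System.Data.SqlClient;
    
    class DataReaderApproachDemo
    {
    	static void Main()
    	{
    		using(SqlConnection connection = new SqlConnection("..."))	// hypothetical
    		{
    			connection.Open();
    
    			// Read: a forward-only, read-only view of the current application state.
    			SqlCommand select = new SqlCommand(
    				"SELECT OrderID, ShipVia FROM Orders WHERE CustomerID = @CustomerID", connection);
    			select.Parameters.Add("@CustomerID", SqlDbType.NChar, 5).Value = "CHOPS";
    			using(SqlDataReader reader = select.ExecuteReader())
    			{
    				while(reader.Read())
    				{
    					Console.WriteLine("Order {0}, shipped via {1}", reader[0], reader[1]);
    				}
    			}
    
    			// Write: the change is made directly on the shared application state,
    			// so every other thread sees it on its next read.
    			SqlCommand update = new SqlCommand(
    				"UPDATE Orders SET ShipVia = @ShipVia WHERE OrderID = @OrderID", connection);
    			update.Parameters.Add("@ShipVia", SqlDbType.Int).Value = 1;
    			update.Parameters.Add("@OrderID", SqlDbType.Int).Value = 10254;	// example order ID
    			update.ExecuteNonQuery();
    		}
    	}
    }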

    How can this be solved? Frankly, with the current DataSet this is hard to do. Because it caches data in memory, the only way to avoid two or more copies of the same data living in DataSets used in more than one thread is to make the DataSet hold objects containing the entity data, one per row, instead of plain data in rows. These objects could then be shared among the DataSets created in the different threads. This means that when thread T1 reads row R from table T, and T2 does the same, both threads will hold a DataSet that shares a single object which contains the data of row R. If T1 changes the data in that object, T2 will automatically see those changes. This approach is common among O/R mapper frameworks in the Java world, which are now slowly but steadily entering the .NET world. Another solution is to not cache the data extensively at all, but to make changes directly on the application state in the database and to read data from that application state when it is required, thus not from cached, in-memory DataSets which can be out of sync with the application state.
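
    The shared-object idea boils down to what's often called an identity map. A minimal sketch, with hypothetical names and the actual database load omitted:

    using System;
    using System.Collections;
    
    public class OrderEntity
    {
    	public int OrderID;
    	public int ShipVia;
    }
    
    public class OrderIdentityMap
    {
    	private Hashtable _entities = new Hashtable();
    
    	// Hands out the one shared object for a given row. Threads T1 and T2
    	// asking for the same OrderID get a reference to the same instance,
    	// so a change made by T1 is immediately visible to T2.
    	public OrderEntity Get(int orderID)
    	{
    		lock(_entities.SyncRoot)
    		{
    			OrderEntity entity = (OrderEntity)_entities[orderID];
    			if(entity == null)
    			{
    				entity = LoadFromDatabase(orderID);
    				_entities[orderID] = entity;
    			}
    			return entity;
    		}
    	}
    
    	private OrderEntity LoadFromDatabase(int orderID)
    	{
    		// Placeholder: a real mapper would read the row from the database
    		// here; omitted to keep the sketch self-contained.
    		OrderEntity entity = new OrderEntity();
    		entity.OrderID = orderID;
    		return entity;
    	}
    }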

    Microsoft pulled their ObjectSpaces framework off the net and, from what I heard from several people, is now reworking it into an O/R-mapping-like framework, together with the next generation of DataSets. Let's hope MS does it right this time, so novice developers, who do not know the dark ages of Windows DNA, will be driven towards functionality which helps them the way they expect it to, this time without caveats.

    Read more...

  • Relationships

    No, this is not a piece of text about a broken/just started relationship between two people :). 'Relationships' should be read in a geeky context: relationships between attributes and entities in databases / object hierarchies, you know, the kind of relationships a computer-savvy person feels comfy with.

    On April 18, Edgar (Ted) Codd died. He is probably not the man you remember as the 'hero' you adored and who convinced you computers were your future, but he is the man who invented the concept of 'relationships' in our world, the software developer's world, which resulted in the concept of relational databases. Now, as we say in The Netherlands, "Over de doden niets dan goeds", which means something like "Say only good things about the dead", so I shouldn't be saying this, but in the last couple of days I've seriously started to doubt the quality of the form in which Codd defined the concept of the relational database as we know it today. I think it is seriously flawed, and when I say seriously, I mean really seriously: so much so that we should start using other models today, instead of keeping this flawed model around any longer.

    Read more...

  • Borland's prices are too high?

    S.B Chatterjee wrote: "Borland has announced their C#/.NET Builder tools along with the prices. $69 for a personal edition and $999 for a professional version. That's fourteen times as much! jeez... I don't see developers flocking for that one. I'll stick with my VS.NET (under MSDN Universal, of course)."

    [rant]
    Well, I don't know. This week, Microsoft admitted that the ASP.NET editor in VS.NET 2003 is seriously flawed and can't be fixed easily (thus: it will not be fixed until the next VS.NET upgrade, probably in 2004). $999 is a lot of money, but if you consider it might be a tool which offers more functionality (e.g. refactoring tools) than VS.NET does, it isn't that much. If you consider that VS.NET Enterprise Architect is also very expensive but doesn't deliver all it promises (ever tried to refresh an ORM model from Visio? Tried to edit UML within Visio using the .NET types? Also wondered why you have to click through a zillion tabs/windows before you can add an attribute?), I don't think Borland is way off base with their prices: the prices reflect the quality you buy.

    I'm looking forward to seeing what Borland will come up with. If their editor has more features than the VS.NET editors (and in the ASP.NET area: if it simply works without killing your code), it will be a top-seller. It's hard to judge their product just by looking at the price tag, because if I follow that analogy, I can honestly say: for $159, Visual C# delivers exactly what it costs.
    [/rant]

    PS: ever calculated what that MSDN Universal subscription cost you? If this isn't your first year, you paid a lot of money for just VS.NET 2003 and Win2k3 Server. Think about that.

    Read more...

  • Hotmail: 48 spammails per day, per mailbox

    According to News.com, Microsoft blocks 2.4 billion (that's 2,400,000,000) email messages per day targeting Hotmail subscriber in-boxes. If you take into account that Hotmail has roughly 50,000,000 active users, that means that per mailbox, 48 (2,400,000,000 / 50,000,000) spam messages were on their way to enlighten the eagerly awaiting mailbox owner that those days of a wimpy penis and dented tits can be over for good.

    48.

    I think it's time for another type of email system. (Call it a hunch)

    Read more...

  • It's not all bizznizz apps

    Every day I check the image of the day over at flipcode.com, which is the place for Joe Geek and Family to show off their Sunday-afternoon graphics software. It's amazing what people come up with from time to time, and today a programmer from Belgium sent in his 3D-terrain renderer/generator using OpenGL and C#. It's a great way of showing off what you can do with C# and .NET besides the always charming n-tier / remoting / webservices / webapplication-oriented software. It utilizes the great work of the csgl guys who, as you all know, wrote a library for working with OpenGL from C#. As a former OpenGL junkie myself, I always get a "those were the days" feeling while reading about graphics software. ;)

    (Not that databases aren't interesting of course. Nothing beats violating a FK-constraint at 9:00 AM)

    [Listening to: Yngwie Malmsteen - Concerto Suite For El. Guitar and Orc. in E Flat Minor - Icarus Dream Fanfare (5:26)]

    Read more...

  • Speaking of patterns...

    A lot of patterns share a 'trick' that can be hard to grasp for new OO developers. It's the trick made possible by polymorphism: you define a virtual method in a base class B, override it in a derived class D, and suddenly code in class B will call code in class D. To me this sounded confusing at first (hey, I was raised with Kernighan & Ritchie C, the non-ANSI version, bite me :P). However, it's a very neat way to 'plug in' code later (in a derived class) while defining the flow of the code now (in the abstract / base class).

    A simple example for the few confused ones might help:

    
    public class B
    {
    	// Foo() defines the flow of the code 'now'. The call to Bar() is
    	// dispatched polymorphically, so a derived class can plug in its
    	// own code 'later'.
    	public void Foo()
    	{
    		Bar();
    	}
    
    	// The default implementation; a derived class can override this.
    	public virtual void Bar()
    	{
    		Console.WriteLine("B.Bar");
    	}
    }
    
    public class D:B
    {
    	// The plugged-in code: B.Foo() will call this override, not B.Bar().
    	public override void Bar()
    	{
    		Console.WriteLine("D.Bar");
    	}
    }
    

    Now, when we create an instance of 'D' and call 'Foo', we're actually calling the base class B's method Foo. However, B's code will not call its own method 'Bar', but D's version. For die-hard OO programmers this is common knowledge; for new developers arriving in the wonderful world of OO development (like, hopping over from VB6 to C# ;) ), this is new ground and can result in reactions like "whoa, that's clever". This result of polymorphism forms the basis of many known patterns. It's because of this common OO knowledge that patterns themselves are 'common knowledge'; the situation a pattern is used in (its 'semantics') differs from case to case, hence the different names.
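
    For completeness, calling the code above shows the effect:

    D d = new D();
    d.Foo();	// prints "D.Bar": B's Foo() calls D's override
    
    B b = new B();
    b.Foo();	// prints "B.Bar": no override in play here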

    Read more...

  • A pattern in patterned confusion.

    Chad Osgood talked this morning about a semi-rant from Bruce Eckel about patterns and the Gang of Four (GoF) book in particular. Reading the snippet from Bruce, I do not see why he criticizes the GoF book that much. As an example he shows the diagrams of the strategy pattern and the state pattern. They look similar indeed, but they are semantically different, and that's why they are different patterns. When swapping algorithms you use the strategy pattern; when swapping state objects you use the state pattern. A different world altogether. The UML-ish diagrams may look alike, but the patterns are not the same.

    Patterns are not about neat algorithms; they are about giving names to commonly used ways to solve problems. At first I looked at patterns as clever ways to solve programming problems. That's not the right way to look at them. Patterns are common, well-known solutions, like quicksort and the doubly-linked list are common, well-known ways to do things. They have names, and we know what kind of routine(s) are associated with those names. That's also the case with patterns: commonly used ways to solve problems have been given a name, so when we hear 'factory pattern' we know it's about a base class which creates a new object through a virtual method that is overridden in a derived class, as sketched below.
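
    A minimal sketch of that factory idea, with hypothetical names:

    using System;
    
    public abstract class Document
    {
    	public abstract void Print();
    }
    
    public class Invoice : Document
    {
    	public override void Print()
    	{
    		Console.WriteLine("Invoice");
    	}
    }
    
    public abstract class DocumentCreator
    {
    	// The factory method: overridden in a derived class, which decides
    	// which concrete object gets created.
    	public abstract Document CreateDocument();
    
    	// Code written 'now' in the base class works with objects created
    	// 'later' by a derived class.
    	public void PrintNewDocument()
    	{
    		Document doc = CreateDocument();
    		doc.Print();
    	}
    }
    
    public class InvoiceCreator : DocumentCreator
    {
    	public override Document CreateDocument()
    	{
    		return new Invoice();
    	}
    }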

    What really disturbed me was that Martin Fowler came up with a whole new set of patterns which are, most of the time, simply re-implementations of the GoF patterns. Although Fowler did a wonderful job investigating all the patterns, it will add to the confusion. And if that wasn't enough, our buddies over at Sun Microsystems have done their part in adding more confusion to the fragile pattern movement.

    Patterns are a great help to the developer's way of organising knowledge. Now we only need a new pattern: to organise the patterns. :)

    Read more...

  • T-SQL Tip of the day

    Just to test the w.bloggar tool with this blog, and because it's always nice to have something to say, I thought: why not post a nice T-SQL tip. (It works on Oracle too, btw.)

    Optional parameters
    When you have a table, say Orders (as in the Northwind database which comes with SQLServer), which has more than one foreign key (FK), it is typical that developers will query that table based on a combination of these FK fields. However, as with the Orders table, this can be quite cumbersome when there are a number of FK fields. It would be nice if you could pass any combination of these FK fields to a single stored procedure which uses those parameters to query the table in a uniform manner, so there will be no recompiles (most people who try to use optional parameters end up concatenating SQL strings inside a stored procedure, which is not that good).

    The idea is this: for every parameter you do not need, you pass in NULL as the value. For every parameter you do need, you pass in the value you want to filter on. Let's get back to the example table, the Orders table in the Northwind database. This table has three foreign keys: CustomerID, EmployeeID and ShipVia. If we want all Orders of a given CustomerID which are taken by a given Employee, we normally wouldn't be able to use the same stored procedure that queries for all Orders of a given Customer which are shipped via a given ShipVia value. But you can! Here's how:

    CREATE PROCEDURE pr_Orders_SelectMultiWCustomerEmployeeShipper
    	@sCustomerID nchar(5),
    	@iEmployeeID int,
    	@iShipVia int
    AS
    -- A NULL parameter turns its predicate into 'column = column', which
    -- matches every row (in which that column isn't NULL), so any
    -- combination of the three filters can be used.
    SELECT *
    FROM Orders
    WHERE CustomerID = COALESCE(@sCustomerID, CustomerID)
    	AND EmployeeID = COALESCE(@iEmployeeID, EmployeeID)
    	AND ShipVia = COALESCE(@iShipVia, ShipVia)
    


    That's it! This stored procedure will query for Orders on any given combination of CustomerID, EmployeeID and ShipVia. If we, for example, want to select all Orders for Customer 'CHOPS' with ShipVia '1', we pass these two values to the stored procedure and pass NULL for @iEmployeeID. This will return the requested rows.
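
    Calling it from .NET looks like this (a sketch, with a hypothetical connection string); the parameter we don't need is passed as DBNull.Value:

    using System;
    using System.Data;
    using System.Data.SqlClient;
    
    class OptionalParamsDemo
    {
    	static void Main()
    	{
    		using(SqlConnection connection = new SqlConnection("..."))	// hypothetical
    		{
    			SqlCommand command = new SqlCommand(
    				"pr_Orders_SelectMultiWCustomerEmployeeShipper", connection);
    			command.CommandType = CommandType.StoredProcedure;
    			command.Parameters.Add("@sCustomerID", SqlDbType.NChar, 5).Value = "CHOPS";
    			command.Parameters.Add("@iEmployeeID", SqlDbType.Int).Value = DBNull.Value;	// not needed
    			command.Parameters.Add("@iShipVia", SqlDbType.Int).Value = 1;
    
    			connection.Open();
    			using(SqlDataReader reader = command.ExecuteReader())
    			{
    				while(reader.Read())
    				{
    					Console.WriteLine(reader["OrderID"]);
    				}
    			}
    		}
    	}
    }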

    Caveats.
    Of course there are drawbacks. One of them is that this is slower than a query which is tailored to exactly the columns you want to filter on. Another is that rows in which one of these columns is NULL will never match, since 'column = column' doesn't evaluate to true for NULL values. It also needs a clustered index to work well, but every table should have a clustered index anyway, to support fast retrieval of data.

    Read more...

  • I'm alive!

    These days, it's hard to claim you're truly a geek when you don't have your own blog. And, if I might add, the blog has to be hosted on a blogspace which holds some value. I'm very happy that I can stand up to those suits at upcoming birthday parties again, when they start airing their coolness because they 'blog': I now have my blog too, and on one of the major blogspaces!

    Needless to say I'm very happy to finally be able to say: I'm alive, I exist! (Not that this is of any real value to anyone, but that doesn't matter ;) )

    I'll write mostly about developing with .NET, my daily fights with Visual Studio.NET and other semi-interesting stuff. For the people who do not know me: I wrote LLBLGen, a DAL generator for .NET/SQLServer/C#/VB.NET, and I'm currently working on the next version of this tool, which will be an O/R mapper on steroids.

    Read more...