Several developers have asked me about the use of User Controls within ASP.NET MVC, so I decided to write down what I think about it.
A User Control is a piece of the user interface that is sometimes created to be reused across several Web Forms; others are created as a result of refactoring the user interface code. A User Control has a code-behind, and if the boundary between the Web Form and the User Control is well defined, the User Control mostly lives its own life. Some people tend to access the Web Form from a User Control, and I think that is bad practice: it makes the User Control dependent on a specific Web Form. A better solution is to use events which a Web Form can subscribe to, and if we need to pass data from the Web Form to the User Control, we can do it by adding properties to the User Control. Now, how about a User Control in ASP.NET MVC?
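A minimal sketch of that event-and-property pattern (the `SearchControl` name, its members, and `SearchEventArgs` are all illustrative, not from a real control — the Web Forms plumbing is left out so the pattern itself stands alone):

```csharp
using System;

// The "user control" exposes a property for data flowing in, and an event
// for data flowing out, so it never references a specific Web Form.
public class SearchControl
{
    // Web Form -> control: data is passed in through a property.
    public string DefaultQuery { get; set; }

    // Control -> Web Form: the form subscribes instead of being referenced.
    public event EventHandler<SearchEventArgs> SearchRequested;

    // In a real control this would be wired to a button's Click handler.
    public void RaiseSearch(string query)
    {
        var handler = SearchRequested;
        if (handler != null)
            handler(this, new SearchEventArgs(query));
    }
}

public class SearchEventArgs : EventArgs
{
    public SearchEventArgs(string query) { Query = query; }
    public string Query { get; private set; }
}
```

The control stays reusable because the dependency points only one way: any Web Form can subscribe, but the control knows nothing about any of them.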
As you all may know, there is a View and the View has a Controller; the Controller works like a contract between the View and the Model. The Model is where we have our business logic and domain entities.
NOTE! I have seen several developers using Controllers as a replacement for business objects. That is not the role of a Controller; the business logic should be in the Model.
I will assume that a View has only one Controller and should only depend on one, so in ASP.NET MVC the User Controls take on a "new" kind of role. A User Control in ASP.NET MVC is part of the View and should reuse the same Controller as the View. If not, our View will depend on several different Controllers, and we will need to take this into consideration when writing our tests. We should try to avoid as many dependencies as possible; too many dependencies make the code harder to read and maintain. So what role does the User Control take in a View? It will be the result of refactoring the user interface code, and will more or less be a simple "include" file used by the View (like an extracted method in a class). A User Control and a View together create a complete View, and this complete View should only work against one Controller.

Some developers tend to create and use User Controls after the same pattern they are used to from Web Forms: instead of using a code-behind, they bind the User Control to its own Controller. By doing so, a View which uses the User Control gets an indirect dependency on several Controllers. Instead of making sure a View has its own Controller, it now has several, and if we have several User Controls on the same View, each with its own Controller, we will sooner or later create a Death Star. If we use a User Control in ASP.NET MVC, it's the Controller of the View that returns the Model which both the User Control and the View will use, right?! So it's the same Controller that should handle the User Control's actions. If we want to create a User Control and see it as a self-contained unit, I think it would be important to let its own Controller supply the Model to the User Control, but as it's designed today, we can't. It's the Controller used by the View which must also get the Model for the User Control.
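To sketch the idea, assuming ASP.NET MVC conventions (`ProductController`, `ProductViewModel` and the partial name are invented for illustration): the single Controller builds the complete model, and the View plus any User Controls it includes all render from that one model:

```csharp
// One Controller action supplies the model for the whole View,
// including any User Controls (partials) the View renders.
public class ProductController : Controller
{
    public ActionResult Details(int id)
    {
        // The model carries everything the View *and* its User Controls
        // need, so no User Control needs a Controller of its own.
        ProductViewModel model = BuildProductViewModel(id);
        return View(model);
    }

    // BuildProductViewModel is a placeholder for however the model
    // is assembled from the Model layer.
}
```

In the View itself, the User Control is then rendered as a simple include, e.g. `Html.RenderPartial("ProductSummary", Model)`, reusing the same model instead of calling out to a second Controller.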
How about reusing a User Control on different Views, if it should more or less serve as an "include" file and use the same Controller as the View? For me that's all fine, because the User Control will share the same Controller as the View, and the User Control should avoid having its own form element. If we want to reuse a User Control on several Views, where the User Control has its own Controller, the Controller of the View will have an indirect dependency on the User Control, and suddenly the Controller of a View will have several reasons to change (if the User Control needs to be updated, that may lead to an update of the Controller). To solve this, we shouldn't try to reuse a User Control unless it can serve as a simple include of HTML elements, where the User Control doesn't have a form element pointing to a separate Controller served only by the User Control.
I hope some of you will try to convince me if my assumptions are totally wrong, or at least you can try ;)
If there weren't any computers in the world, what would I be doing right now? It's not an easy question to answer. When I was about six years old, I dreamed about being a farmer and having three kids by the age of 20. At the age of 20, I was working on a business-to-business solution, with no kids and almost single. I don't think I would like to be a farmer now; maybe some kind of designer.
I have trained over 1000 developers during the last year and a half, and only about 5% of them raise their hands when I ask if they write tests, like unit tests, or if they use Test Driven Development (TDD). 5%! That is not much. When I ask them why they don't write any tests, the answer is often "we should, but we don't have time to do it"; some answer "we have tests, but no one maintains and updates them".
"Test code is just as important as production code." - Robert C. Martin, "Clean Code".
If you don't have any unit tests, you will spend more time testing your changes (probably manually, through the application's user interface). When you make changes to your production code you must make sure your changes work, and that all other parts of the system still work after the changes. Many of you would probably not even change existing code, being afraid of "destroying" something. In that case you add more code instead, which will mostly make the code smell and sooner or later rot. With tests in place, you don't need to be afraid of changes, and it becomes easier to maintain your application. The interest in writing clean code among developers has increased lately; keep in mind that writing unit tests is writing code, and that code is also important to keep clean. If it's not clean, people will sooner or later stop maintaining it, it will be harder to change, and it will rot. When a test starts to rot, the production code will also start to rot. There aren't any excuses to skip unit tests, or are there?
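As a minimal example of what such a test looks like (`OrderCalculator` is an invented class under test; the test uses NUnit-style attributes and assertions, so it assumes NUnit is referenced):

```csharp
using NUnit.Framework;

// An invented, trivially small class under test.
public class OrderCalculator
{
    public decimal Total(decimal unitPrice, int quantity)
    {
        return unitPrice * quantity;
    }
}

[TestFixture]
public class OrderCalculatorTests
{
    [Test]
    public void Total_Multiplies_UnitPrice_By_Quantity()
    {
        var calculator = new OrderCalculator();

        decimal total = calculator.Total(10m, 3);

        // The test documents the expected behavior; if a change breaks
        // it, the failure points straight at the regression.
        Assert.AreEqual(30m, total);
    }
}
```

A test like this runs in milliseconds, so you get the answer "did my change break something?" immediately instead of clicking through the user interface.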
During the age of Windows DNA, most of us developers used COM/COM+ and a 3-tier architecture, where we separated concerns by placing the views that present data into a layer called the PL (Presentation Layer), all business logic into a single layer called the BLL (Business Logic Layer), and all code that handles data access into its own layer called the DAL (Data Access Layer). During the time when Windows DNA was a hot topic, most applications used this architecture.
By using DCOM it was easy to distribute the different components of the layers onto distinct tiers. DCOM had its advantages and also its disadvantages (DCOM hides the "distribution", so distribution could be done after the application was written without the developers' awareness, and most applications weren't designed with distribution in mind). There were few applications I was involved with that used DCOM, and in this article I will focus on the applications that didn't. The 3-tier architecture was well defined and commonly used, and several applications today still use it. When .NET arrived, many developers were scratching their heads; it was a whole new platform for most of them, and a difficult and confusing time began: "how should this 3-tier architecture be applied on the .NET platform?". Web Services were introduced somewhere around this time, and that was a bright light for some solution architects and developers. Now they understood how to apply the 3-tier architecture on the .NET platform: the answer was to replace the COM/COM+ components with Web Services. So instead of using a binary standard for the communication, XML and HTTP were used.
Even today several applications use the 3-tier architecture with Web Services as a replacement for COM/COM+, and even as a replacement for DCOM, and that is understandable. But replacing COM/COM+ with Web Services in applications that run on a single machine, with no reuse or integration with other applications in mind, is something that makes me sad. Now that we have WCF (Windows Communication Foundation), developers have started to replace Web Services with WCF. There are several applications running on a single machine, with no intention of integrating with other systems, that use different WCF services for the business logic and the data access. I will not say that this is a totally wrong decision by the architects, but it will affect performance and scalability, and it can affect them badly. There is absolutely no reason to use WCF services if the application runs locally or on a single machine where no integration with other systems is of interest. Even with integration in mind, there is no reason to expose the Data Access Layer as a WCF service, because most applications will reuse the business logic, so the layer on top of the business layer should be the WCF service, not the underlying layers. There are some exceptions: if we have moved the Data Access Layer onto a distinct tier, and several applications should reuse or have access to the same data source but can't, or aren't allowed to, connect directly to it, the data access could be implemented as a WCF service. In that case all communication will go via the WCF service.
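The point about the service boundary can be sketched like this, using standard WCF contract attributes (`IOrderService` and `OrderDto` are invented names): the service contract fronts the business layer, while the DAL stays an in-process class library behind it:

```csharp
using System.ServiceModel;

// The WCF boundary sits on top of the Business Logic Layer: this is the
// contract other systems integrate with.
[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    OrderDto GetOrder(int orderId);
}

// A plain data contract crossing the service boundary; the BLL and DAL
// that sit behind the service implementation remain ordinary classes,
// not services of their own.
public class OrderDto
{
    public int Id { get; set; }
    public decimal Total { get; set; }
}
```

Only this top layer pays the serialization and transport cost; the calls from the BLL down into the DAL stay in-process.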
If there is no reason to use Web Services or WCF services as a replacement for COM/COM+ in applications with no interest in integration, what is the solution? A simple class library! Best performance, easy to add, easy to implement. Since .NET arrived I have taken advantage of the platform and used object-oriented programming, and during the past few years I have adopted Domain Driven Design. How about you: do you still use the 3-tier architecture with Web Services or WCF, or something else?
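A sketch of that class-library alternative (all names invented): the layers are plain classes calling each other in-process, with no proxies, serialization, or network hops in between:

```csharp
using System;

// DAL: data access as a plain class (a real one would talk to a database;
// the lookup is stubbed here so the sketch is self-contained).
public class ProductRepository
{
    public string GetProductName(int productId)
    {
        return "Keyboard";  // stand-in for a database query
    }
}

// BLL: business logic calling the DAL directly, in-process.
public class ProductService
{
    private readonly ProductRepository _repository = new ProductRepository();

    public string DescribeProduct(int productId)
    {
        return "Product: " + _repository.GetProductName(productId);
    }
}

// PL: the presentation layer simply instantiates (or resolves) the
// service and calls it - an ordinary method call, nothing more.
```

Crossing from PL to BLL to DAL here costs a method call each, which is exactly why this beats a WCF hop for a single-machine application.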
A friend of mine had a question about IList<T> and List<T>. He had noticed that some developers return IList<T> from methods, for example:
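The original code sample did not survive; it was presumably something along these lines, where the method's declared return type is the IList<T> interface rather than the concrete List<T> (`Customer` and the repository are illustrative):

```csharp
using System.Collections.Generic;

public class Customer
{
    public string Name { get; set; }
}

public class CustomerRepository
{
    // The declared return type is the interface IList<Customer>,
    // even though the method builds a concrete List<Customer>.
    public IList<Customer> GetCustomers()
    {
        var customers = new List<Customer>();
        // ...populate from a data source...
        return customers;
    }
}
```

Callers only see the IList<Customer> contract, so the repository is free to swap the concrete collection type later without breaking them.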
There is a discussion on the net about LINQ to SQL and whether it should be removed in the future. My colleague and friend Patrik Löwendahl wrote a post about what he thinks; you can read it here. I don't care if LINQ to SQL is removed; honestly, I want it to be removed. As Patrik wrote in his blog post, it's not the ADO.NET team that created LINQ to SQL. I think the team that focuses on data access should be the team that builds data access frameworks, and the ADO.NET team is doing a great job. After watching a PDC session about the future of EF, I must say that I will not use NHibernate or another OR mapper, as I have mentioned in some old posts; I will now use EF. If we compare LINQ to SQL today with the current version of EF, LINQ to SQL is best suited for RAD, but in the future EF will be suited for RAD too. What I don't like about LINQ to SQL is that developers use the database-first approach: they generate a model from a database schema. A database is not object oriented.