My wish-list for the next Visual Studio.NET / .NET api release, part II

I've blogged before about wishes for the IDE I use on a daily basis: Visual Studio.NET (currently v2003). Since that post I've compiled a new list of wishes not mentioned in the previous one, this time also including wishes for .NET and the .NET API. Here we go.


  • Generation of a makefile (C#, VB.NET)
    Because the Visual Studio.NET IDE is hard-wired to a single .NET version, you cannot, for example, compile your code with another compiler version from within the IDE. You can do that manually, of course, or use a build tool. The .NET SDK ships with a tool called nmake, which can drive the build of .NET sources, but you still have to write the makefile by hand. It would be nice if Visual Studio.NET could generate a makefile for a solution, a project, or a list of projects, so you could use that makefile to build your source code with another compiler version, e.g. the 1.0 version. This matters because if you sell or distribute an assembly in library form, users still on Visual Studio.NET 2002 can't run their code when they reference your library compiled with Visual Studio.NET 2003.
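    To illustrate, a hand-written nmake makefile of the kind VS.NET could generate might look like this (the source file names are hypothetical and the v1.0 compiler path depends on your installation):

```makefile
# Sketch of an nmake makefile that compiles the same sources with the
# v1.0 C# compiler instead of the one VS.NET 2003 is wired to.
# Adjust CSC to the framework directory on your machine.
CSC=C:\WINDOWS\Microsoft.NET\Framework\v1.0.3705\csc.exe

all: MyLibrary.dll

MyLibrary.dll: Foo.cs Bar.cs
	$(CSC) /target:library /out:MyLibrary.dll Foo.cs Bar.cs
```

    Running `nmake` from a command prompt would then produce a 1.0-compatible assembly next to the 1.1 build the IDE makes.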
  • The ability to define post-build steps which call plug-ins.
    When you take a closer look at build tools like NAnt, you realize that a lot of steps in the build process can be automated: not only the build itself but also the deployment. NAnt does this by calling into extensions, which take care of the actions to perform. Visual Studio.NET offers just one post-build step for C# projects: calling an external program. It would be very handy if you could define a list of post-build steps using the same mechanism NAnt uses, i.e. calling into extensions, in this case Visual Studio.NET add-ins. Why not just use NAnt? NAnt drives the complete build process from an XML file, and that file comes with little documentation; mistakes in a hand-edited text file are easy to make, so completing the process without errors takes a lot of work. That work is unnecessary when you realize a good GUI can guide you through all the steps so you make no mistakes at all, and a GUI with wizard code underneath can do a lot of work under the hood from a few input values, which is why computers were invented in the first place. Visual Studio.NET seriously lacks in this build-process control area, where it should be the obvious choice: it is positioned as the work environment for developing complete applications, not as a compiler launcher with a text editor built in.
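    For comparison, a minimal NAnt build file of the kind you currently have to write by hand could look roughly like this (target names and paths are invented, and the exact task syntax varies per NAnt release, so treat this as a sketch):

```xml
<!-- Hypothetical NAnt build file: one target builds, one deploys. -->
<project name="MyApp" default="build">
  <target name="build">
    <csc target="library" output="bin\MyApp.dll">
      <sources>
        <includes name="src\**\*.cs"/>
      </sources>
    </csc>
  </target>
  <target name="deploy" depends="build">
    <copy file="bin\MyApp.dll" todir="\\server\share\MyApp"/>
  </target>
</project>
```

    Every element here is a chance for a typo; a wizard collecting the same few values (output name, source folder, deploy location) could generate it without errors.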
  • Cache the Help Index list between coding sessions.
    When you have set a filter on the help index, e.g. only '.NET Framework', and you open the help contents browser or the index for the first time in a Visual Studio.NET session, Visual Studio.NET re-filters the complete help. On my box this takes 10 seconds or more. Doing this once would not be a concern, but the filter result isn't cached, so every day when you open the help it has to do the filtering again, even when the filter hasn't changed. This is unnecessary: the filter result could easily be stored, which would speed up opening the help contents and index dramatically. When the filter changes, VS.NET could re-filter the help contents in, for example, a background thread.
  • Visual Studio.NET should contain a lot more wizards and helpers for real applications.
    Visual Studio.NET contains a lot of different designers: you can drag and drop components on a form, set some properties and voilà, you have a working form with some components. However, professional software, e.g. business applications, web applications, web services, multi-user applications with multiple clients, or distributed applications in general, is not developed by dragging components onto a form and setting some properties. Most of the time such an application is written in multiple layers, without a GUI, with database access spread over multiple classes that are re-used by more than one business logic component. Dragging a component onto a form doesn't help there; on the contrary, it produces code you usually can't use in a multi-layer application where you want to re-use classes, have to consider remoting, and have multiple clients on top of your business logic layer. In short: the average professional will rarely use the designers available in Visual Studio.NET and will have to write a lot of plumbing code by hand. It would be great if Microsoft took the bold step of investigating how n-tier developers actually write their software, i.e. developers whose applications are more complex than a single form with a data adapter and a connection dragged on top of it. Perhaps Microsoft would then realize that what professional developers need is more helpers, more small wizards, that turn a few input values into a lot of plumbing code. That way Visual Studio.NET would let developers focus on what they wanted to do in the first place: implementing the functionality of (part of) the application. Productivity for professionals: it would be a great slogan.
  • Expose a better object model for macros.
    If you have ever written a macro for Visual Studio.NET, you too were probably surprised at how weird the object model you have to program against is. Why can't I browse the class tree from inside a macro? Why can't I request a list of code elements when I have a reference to the active document object? Why is only a small subset of the different code elements exposed to macros? At the moment you can't find a region element, while the editor knows it's there; you can't find the namespace using definitions/imports, while the editor knows they're there. It would be very nice if Visual Studio.NET exposed a decent, logical object model so macros could be implemented more easily and be more powerful. The documentation is also not that good; it falls short of, for example, the .NET class reference, which is very good. Macros are also interpreted, which is a bummer since it hurts performance; it would be great if you could write a macro in any .NET language and have it compiled and JIT-ed, resulting in better performance.
  • A callers / callees graph would be nice.
    Anyone who has worked with VC++ 6 knows it had a nice little feature that gave great insight into your code, especially when the code base became huge and complex: the call graph. You could browse through the classes and methods and see where they were used, which method called which method, and so on. This would be another big helper for day-to-day software development in C# or VB.NET, because these languages don't come with a call graph utility.
  • Appdomain instance reference viewer.
    When you're debugging a complex application you only get a limited view of the total object tree in your current appdomain: the 'locals' view, or a watch on an instance reachable from the local scope. Stepping into methods narrows the scope of visible objects further. It would be very handy to see the values of objects outside the current scope, e.g. when you have events or delegates defined and you step over a delegate call or an action which triggers an event, because stepping into those calls is time-consuming. At the moment you have to step into these calls, and in the case of events set breakpoints in the handlers, to check that the results are what you want, because you can't see the changed objects from the narrow scope of the method the event was raised in. It would also be a big help to verify, from any place in your application, that the application state conforms to what you think it should be. With an appdomain instance reference viewer, you could view every living object instance in your application at that moment, including those outside your current scope.

C# Editor

  • Give Tooltip intellisense a better brain.
    Consider typing a call to a method Foo which has three overloads: one with no parameters, one with one parameter and one with three parameters. Thanks to intellisense, when you type the '(' after the method name, a tooltip pops up showing the various overloads and, where possible, help for each parameter; you can even scroll through the overloads with the keyboard. However, when you have typed 'Foo(bar, ' (without the quotes), only one overload can still match: the one with three parameters. Yet Visual Studio.NET keeps showing the (apparently random) overload it initially chose and does not update the tooltip to the overload that matches what you have typed. This is annoying because you have to move your hand to the cursor keys to bring up the tooltip that matches your call.
  • Coding helpers should be smarter.
    Last time I talked about interface stubbing and abstract method stubbing, but I didn't mention the event handler stubber. In v2003, when you type 'Foo.SomeEvent +=', the code editor helps you with the implementation of the event handler. Finally someone was clever enough to understand that less typing means the programming is done sooner. However, the implementation of the idea is rather poor: you can't choose the name of the event handler routine. The editor invents a name for you, and if you deselect that name to change it, you can't re-activate the stubber to use your name when generating the actual handler routine. This is annoying because you then have to rename the routine and copy/paste the name back into the delegate declaration that activated the stubber. It's a good start, but the current helpers are rather limited. There should be a lot of these helpers in the editor: typing is time-consuming and error-prone, so the more generators and coding helpers, the better.
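    To make the pattern concrete, here is a small sketch (the type and handler names are invented) of the wire-up the stubber generates; renaming the generated handler afterwards means touching both the handler and the subscription line by hand:

```csharp
using System;

// A publisher class; "Foo" and "SomeEvent" stand in for whatever
// event you are actually subscribing to in the editor.
public class Foo
{
    public event EventHandler SomeEvent;

    public void Raise()
    {
        if (SomeEvent != null) SomeEvent(this, EventArgs.Empty);
    }
}

public class Subscriber
{
    public int CallCount = 0;

    public void Attach(Foo foo)
    {
        // The stubber emits a wire-up like this, with a handler name
        // it chose for you; change the name and you edit two places.
        foo.SomeEvent += new EventHandler(foo_SomeEvent);
    }

    private void foo_SomeEvent(object sender, EventArgs e)
    {
        CallCount++;
    }
}
```

    A smarter stubber would let you type the handler name once and generate both the subscription and the routine from it.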

.NET Controls

  • Replace the winforms datagrid.
    For creating a grid-like editor for a list of custom objects it is totally useless, unless you invest a lot of time. You have to define special table styles and column styles just to display a simple array list holding objects with simple properties like a boolean. Initially the boolean property is displayed as a checkbox, which is great, but it is nullable by default, so you get a tri-state checkbox, while a System.Boolean property has exactly two states: true and false. The reason for this tri-state behavior is simple: the datagrid is made to work with DataSets and DataTables (or rather, with objects that can supply themselves as DataView objects). However, there are more objects in the world than the DataSet and the DataTable, for example custom lists (typed collections) of custom classes; a simple grid to display and edit them would be great. And while at it: re-implement the BeginEdit/EndEdit cycle code. The datagrid calls these methods many times when your bound objects expose them, but most of these calls are unnecessary and force you to keep track of any edit cycle that's going on. The component used by the developer should deal with that problem, not the bound code.
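    As a sketch of the bookkeeping the current design pushes onto your code: a bound class implementing IEditableObject has to guard against redundant BeginEdit calls itself (class and member names here are invented):

```csharp
using System;
using System.ComponentModel;

// A class meant to be bound to a grid. The inEdit flag exists only
// because the grid may call BeginEdit repeatedly for the same row.
public class Customer : IEditableObject
{
    public string Name = "";

    private string backupName;
    private bool inEdit = false;   // edit-cycle bookkeeping forced on us

    public void BeginEdit()
    {
        if (inEdit) return;        // ignore redundant calls, keep first backup
        backupName = Name;
        inEdit = true;
    }

    public void CancelEdit()
    {
        if (!inEdit) return;
        Name = backupName;         // roll back to the value at BeginEdit
        inEdit = false;
    }

    public void EndEdit()
    {
        inEdit = false;            // commit: simply drop the backup
    }
}
```

    If the grid guaranteed a single balanced BeginEdit/EndEdit pair per row edit, the flag and the guards would disappear from every bound class.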
  • Replace the winforms treeview.
    If you want to be productive, stay away from the treeview control. It lacks serious productivity logic that should have been included since version 1.0. At the moment you can't sort all nodes that share a given parent node. You either have to implement the sorting using ugly unsafe win32 messaging code, or resort to tricks like adding all nodes to a sorted list keyed on their label text, removing them from the tree, and re-adding them in the order of the sorted list. You can set the 'Sorted' flag on the treeview, but it's a mystery to me why this is a property of the TreeView and not of the TreeNode class, and why it is a boolean instead of accepting an enum like the Sorting property of the ListView control. It is also impossible to supply a sorting method, as you can with the ListView control. The TreeView's sorting routine also doesn't pay attention to what's going on: when Sorted is true and you edit a node's label, the tree is not re-sorted according to the new label value. Multi-select is not implemented either; you have to build it yourself, which is not what I call productivity. .NET is positioned as a productive platform, so it should act like one; if every normal feature has to be added manually, it's not productive. And to continue with our friend the TreeView: every tree used in Windows and Windows applications lets you edit a node's label by pressing F2. In .NET you have to implement this yourself. No offence, but why do I have to add that myself, and in code outside of the control (in the form)?
  • ListView: sorting means sort it, not fake it.
    Imagine you have some data in a ListView control set to details view, sorted automatically. This works great. After filling the control you want to put the focus on the first item in the listview, i.e. the top row, which you normally do by setting the Selected property of that ListViewItem to true. However, when you sort a listview, only the view is sorted; the underlying data is not. You can't determine which row is the top row: Items[0] gives you the item you added first, not the item that is visible at the top as a result of the automatic sorting. Please make sorting sort the collection of items too.


  • Make all classes unsealed.
    This has been discussed by a lot of people, and I'll add my 2 cents by saying that I think no class, that is 0 classes, should be sealed. It is sometimes very hard to add functionality simply because you can't inherit from certain classes, which forces you to re-implement the interfaces exposed by the class you wanted to inherit from and aggregate the sealed class. Aggregation is a Visual Basic / COM technique; in an OO world it is not a common pattern, while inheritance and polymorphism are. Sealing API classes hurts productivity and code re-use.
  • Make more classes serializable.
    There are classes in the API which are used a lot in code that gets serialized. One example is the SqlTypes classes (structs, actually). These are not flagged as serializable, which hurts productivity: you have to re-implement these classes/structs yourself when you want their functionality and need to serialize them at the same time. Not making them serializable hurts productivity and is therefore wrong.
  • More classes should have abstract methods and virtual methods.
    Microsoft is moving towards more pattern usage, which is good. However, it's a real shame that the current API offers little extensibility in the form of virtual or abstract methods, so that inheritors could extend the logic of a class by plugging in custom code through the Strategy pattern. Of course, this has to be done in combination with the removal of the sealed modifier.
  • Implement more base classes.
    Microsoft argues in its guidelines for library developers (located in the SDK reference manual) that you should program against classes, not interfaces. The [GoF] book, "Design Patterns", argues that you should program against interfaces, not classes. Now try to implement database-independent code that, for example, handles connection and command objects generically. You simply can't do it cleanly. The reason is that SqlConnection and SqlCommand do not derive from a common ADO.NET base class, but from Component, while SqlDataReader derives from MarshalByRefObject and implements the IDataReader and IDataRecord interfaces. Because there are no common base classes, you have to fall back on the interfaces, but there is no common base class or factory that creates the right implementations for you, so sooner or later you cast to SqlDataReader anyway, and gone are your ambitions to write database-independent code. There are a lot of these kinds of examples in the .NET API, and it hurts productivity when you are forced to write code that is not independent of a given target implementation but locks itself to the one possible target, like a given database provider. The API should be designed with both interfaces and base classes, so people who want to program against an interface or a base class can do so.
  • Pay more attention when designing namespaces.
    I'm of course referring to the System.Data.Sql* namespaces. Why are the SqlServer-specific namespaces SqlTypes and SqlServerCe not placed under the SqlClient namespace? As it stands, the layout clutters the namespace tree.
  • Add an API to work with XML the OO way.
    When you want to work with XML data in .NET, you have to use the objects in the XML namespace. The functionality is there; it's just that the objects force you to work the way the W3C thinks you should. I never thought the W3C should mess around with any standard, certainly not with XML, and definitely not with how a developer should build an XML DOM object. One of the weird things about the current XML namespace is that its classes don't behave like the rest of the .NET API (the dataset-related classes being an exception): you can't simply create an XmlNode object and have it added to an XML DOM document. Creating an XmlNode via an XML DOM document does not add that node to the document (unlike the TreeView's nodes collection, which does add a node you create through it); you have to add it manually. The result is a lot of extra plumbing code just to work with XML efficiently, code the .NET API should have provided in the first place. I heard Microsoft is working on this; I truly hope they realize that working with XML in code is currently a real pain: very complex and unnecessarily cumbersome.
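    A small example of the create-then-append two-step the W3C DOM model forces on you (element names are invented):

```csharp
using System;
using System.Xml;

// In the W3C DOM style, creating a node does NOT add it to the
// document; every node needs a separate AppendChild call.
public class XmlDemo
{
    public static string Build()
    {
        XmlDocument doc = new XmlDocument();

        XmlElement root = doc.CreateElement("order");   // created, but floating
        doc.AppendChild(root);                          // ...now it is in the tree

        XmlElement item = doc.CreateElement("item");    // again: create...
        item.InnerText = "widget";
        root.AppendChild(item);                         // ...then attach

        return doc.OuterXml;
    }
}
```

    Forget one AppendChild and the node silently never appears in the output; a collection-style API (root.Nodes.Add("item", "widget"), hypothetically) would make that mistake impossible.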
  • If serialization to disk succeeds, deserialization should succeed too.
    Ever serialized a class to a file and discovered that deserialization didn't work for some fuzzy reason? I have. Finding out why deserialization fails is hard, especially when you just mark your classes with [Serializable] and serialization succeeds. It should be a goal of the serialization code to guarantee that whatever it serializes can be deserialized. Serializing delegate instances, for example, will almost always make getting the data back fail, because the delegate drags its target object, and that object's type, into the stream.
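    A minimal sketch of the delegate problem (class names are invented): a [Serializable] class whose event still references a non-serializable subscriber cannot even be written out by the BinaryFormatter, because the formatter walks into the delegate:

```csharp
using System;
using System.IO;
using System.Runtime.Serialization.Formatters.Binary;

// The model itself is serializable...
[Serializable]
public class Model
{
    public event EventHandler Changed;

    public void Touch()
    {
        if (Changed != null) Changed(this, EventArgs.Empty);
    }
}

// ...but the subscriber is not.
public class Screen
{
    public void OnChanged(object sender, EventArgs e) { }
}

public class Demo
{
    // Returns true when serializing the wired-up model throws.
    public static bool SerializationFails()
    {
        Model m = new Model();
        m.Changed += new EventHandler(new Screen().OnChanged);
        try
        {
            new BinaryFormatter().Serialize(new MemoryStream(), m);
            return false;   // unexpectedly succeeded
        }
        catch (Exception)
        {
            return true;    // the delegate pulled Screen into the graph
        }
    }
}
```

    Even when all subscribers happen to be serializable, the stream then contains your GUI objects, which is almost never what you wanted.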
  • Make event handlers / delegate definitions markable as [NonSerialized].
    When you define events in a class you want to serialize, you have to implement ISerializable, simply because you can't mark an event definition as [NonSerialized] like you can a member variable. Deserializing a class that contains an event definition and was serialized with that event wired up will fail. You are therefore forced to implement ISerializable by hand to keep the event handler(s) out of the stream, which is cumbersome. It should be possible to simply mark an event definition as [NonSerialized] and supply a method that re-wires the events after deserialization.
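    This is the hand-written ISerializable plumbing the wish would make unnecessary; the sketch below (names are invented) persists only the data member and leaves the event out of the stream:

```csharp
using System;
using System.Runtime.Serialization;

// All of this boilerplate exists only to keep one event field
// out of the serialized stream.
[Serializable]
public class Order : ISerializable
{
    public string Name;
    public event EventHandler Changed;   // must not be serialized

    public Order(string name) { Name = name; }

    // Deserialization constructor: the event is simply left unwired;
    // subscribers have to re-attach after deserialization.
    protected Order(SerializationInfo info, StreamingContext context)
    {
        Name = info.GetString("Name");
    }

    public void Raise()
    {
        if (Changed != null) Changed(this, EventArgs.Empty);
    }

    public void GetObjectData(SerializationInfo info, StreamingContext context)
    {
        info.AddValue("Name", Name);     // persist the data members only
    }
}
```

    With an attribute on the event, the whole class could go back to plain [Serializable] with zero extra code.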


  • An easy way to get a reference to an object in the current appdomain.
    This applies to winforms. When you are developing a winforms application with, for example, several docked windows, where does your application logic live, and where does your general data live? Most people hold the initial references to global data in the main window instance, but this violates a basic rule of application development: the GUI is not the core of the application, it's an interface to that core. There is currently no easy way to store global data in some sort of repository for your appdomain, accessible by all objects in that appdomain. (You can build a singleton repository yourself through static members, but again, that is extra work.) Say you are implementing a handler for a menu item in a subwindow which has to work on an object held by the application core (in most current winforms apps: the main window). How do you get a reference to that object? You can bubble an event up to the main window, which is cumbersome; you can pass references to the main window's objects down to the child window, which can be cumbersome; or you can pass a reference to the main window itself to the child window and expose methods that work on the object (the MDI way), which can also be cumbersome. It would be nice if the child window could fetch the global object from a repository and work on it directly. If GUI code relies on the state of that object, you can define change events on it so subscribed controls automatically update their view of that state. This way you can also build a more MVC-like application: with a general repository holding the objects, viewers and controllers are not packed with synchronization code the way they are when everything is stored in the main window.
    This is an add-on to the OO model, but I think being able to refer to an instance living in the appdomain from any scope is an advantage: the object is already global, and without global access a developer is forced to pass a lot of references along. Admittedly, there should also be a mechanism to exclude objects from the repository, but where you need it, it should be there. (Web applications can use the Application object or the session object as a repository, which is why this applies to winforms.)
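    A minimal sketch of such a repository, implemented today as a hand-rolled singleton through static members (the class and method names are invented):

```csharp
using System;
using System.Collections;

// Appdomain-wide object repository: any window or class in the
// appdomain can publish or fetch shared objects by key, without
// references being threaded through the window hierarchy.
public class AppRepository
{
    private static Hashtable objects = new Hashtable();

    public static void Publish(string key, object value)
    {
        lock (objects.SyncRoot) { objects[key] = value; }
    }

    // Returns null when no object was published under the key.
    public static object Fetch(string key)
    {
        lock (objects.SyncRoot) { return objects[key]; }
    }
}
```

    A child window's menu handler could then call AppRepository.Fetch("currentCustomer") instead of walking up to the main window; the wish is that the framework itself ships something like this, with access control on top.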
  • Where are the monikers?
    With COM you had monikers to instantiate objects, find object locations or object factories and have them produce a given instance. With .NET this is much harder to do. It should be possible to define a class repository within a distributed application, so you can ask the repository to instantiate an object on whichever server participates in the distributed application, with the remoting taken care of as well. This can of course be implemented manually, but .NET should supply this kind of functionality so developers can be more productive. When you can communicate transparently with other appdomains that form the same application (e.g. a web farm, or a distributed application whose business logic tier spans several machines), you can set up reliable object caches, e.g. for entity classes. This eases development and can bring big performance gains. At the moment you can only implement a semi-reliable object cache when all logic runs inside the same appdomain. That isn't the case in a multi-user application with winforms clients: developers of such applications are forced to move all entity-related logic to a central server and make the winforms client as dumb as possible to avoid concurrency issues.
  • Make assembly code unloadable from an appdomain.
    Some people don't know it, but once you load an assembly into your appdomain, you can't unload it. Even when no reference exists to any instance of a class from that assembly, you can't unload it. What's worse, code generated behind the scenes is also not unloadable. I'm referring to the code generated when you compile a regular expression or use script in an XSLT transformation: these actions make the CLR generate and compile code that stays in your appdomain forever. Performing a lot of these operations in one appdomain therefore causes a memory leak, even if you don't know it. It should be possible to unload assemblies, and the GC should clean up behind-the-scenes generated code when the referencing object is cleaned up, or when a statement explicitly forces the CLR to unload an assembly.

It might look like I'm whining about details; that was not my intention. Competing platforms, like Java with EJBs and the concept of application servers that can span more than one machine, already offer some of the techniques discussed above. It is also unclear to me why Microsoft hasn't built more tools to make developers of real software more productive. The idea of creating an application that embeds a lot of tools can be interesting, but the result is a waste of time when a developer can't use it as a single tool and has to use it as a toolbox instead. What does the developer want to do? What kind of functionality does the developer want to implement? Simple questions with complex answers. Nevertheless, the answers to those questions would answer the bigger question: how should Visual Studio.NET be built to behave like a tool instead of a toolbox, helping the developer in ways that go further than starting a compiler or attaching a debugger? A simple example: there is just one property window. Switching to another object or designer clears its current contents. Why? Is it so strange to want to keep the properties of a given object open? Why are some properties displayed in their own window, while others appear in a property window located somewhere else in the IDE, unrelated to the object I'm currently working in?

Perhaps these questions are not common for a developer, but developers are users too: they are also more productive when the programs they work with have a logical GUI, help them where they want or need help, and expose functionality that makes them work and think more productively than they ever could have imagined.

At the moment, I find that Visual Studio's GUI reflects a lot of 'programmer art', if I may borrow that term from the games industry. I hope this will change dramatically in the future. Thanks for your time.

