Andru's WebLog

//Comments about technology and software architecture
Asynchronous Streaming in ASP.NET WebApi

Hi everyone, if you use the cool MVC4 WebApi you might find yourself in a common situation: you need to return a rather large amount of data (most probably from a database) and you want to accomplish two things:

  1. Use streaming, so the client fetches the data as needed, which translates directly into incremental fetching on the server side (from our database, for example) without consuming large amounts of memory.
  2. Leverage the new MVC4 WebApi and the .NET 4.5 async/await asynchronous execution model to free ASP.NET thread-pool threads (if possible).

So, #1 and #2 are not directly related to each other, and we could implement our code fulfilling one, the other, or both. The main point about #1 is that we want our method to immediately return a stream to the caller, with that client-side stream backed by a server-side stream that gets written (and its related database fetches executed) only when needed. In this case we need some form of "state machine" that keeps running in the server and "knows" what to fetch next into the output stream when the client asks for more content.

This technique is generally called a "continuation" and it's nothing new in .NET; in fact, using the IEnumerable<> interface and the "yield return" keyword does exactly that, so our first impulse might be to write our WebApi method more or less like this:


        public IEnumerable<Metadata> Get([FromUri] int accountId)
        {
            // Execute the command and get a reader
            using (var reader = GetMetadataListReader(accountId))
            {
                // Read rows, mapping and yielding each record as it is fetched
                while (reader.Read())
                    yield return MapRecord(reader);
            }
        }

While the above method works, unfortunately it doesn't accomplish our objective of returning immediately to the caller, and that's because the MVC WebApi infrastructure doesn't yet recognize our intentions: when it finds an IEnumerable return value, it enumerates it completely before returning its values to the client. To prove the point, I can code a test method that calls this method, for example:

        public void StreamedDownload()
        {
            var baseUrl = @"http://localhost:57771/api/metadata/1";
            var client = new HttpClient();
            var sw = Stopwatch.StartNew();
            var stream = client.GetStreamAsync(baseUrl).Result;
            Debug.WriteLine("Elapsed time Call: {0}ms", sw.ElapsedMilliseconds);
        }


So, I would expect the line "var stream = client.GetStreamAsync(baseUrl).Result" to return immediately, without server-side fetching of all the data in the database reader, but that's not what happens. To make the behavior more evident, you can insert a wait (like Thread.Sleep(1000);) inside the "while" loop, and you will see that the client call (GetStreamAsync) doesn't return control until after n seconds (n being the number of reader records fetched).

Ok, we know this doesn't work, and the question would be: is there a way to do it?

Fortunately, YES! And it's not very difficult, although a little more convoluted than our simple IEnumerable return value. Maybe in the future this scenario will be automatically detected and supported in MVC/WebApi.

The solution to our needs is a very handy class named PushStreamContent, and our method signature needs to change to accommodate it, returning an HttpResponseMessage instead of the previously used IEnumerable<>. The final code will be something like this:


        public HttpResponseMessage Get([FromUri] int accountId)
        {
            HttpResponseMessage response = Request.CreateResponse();

            // Create push content with a delegate that will get called when it is time
            // to write out the response.
            response.Content = new PushStreamContent(
                async (outputStream, httpContent, transportContext) =>
                {
                    try
                    {
                        // Execute the command and get a reader
                        using (var reader = GetMetadataListReader(accountId))
                        {
                            // Read rows asynchronously, put data into buffer and write asynchronously
                            while (await reader.ReadAsync())
                            {
                                var rec = MapRecord(reader);
                                var str = await JsonConvert.SerializeObjectAsync(rec);
                                var buffer = UTF8Encoding.UTF8.GetBytes(str);

                                // Write out data to output stream
                                await outputStream.WriteAsync(buffer, 0, buffer.Length);
                            }
                        }
                    }
                    catch (HttpException ex)
                    {
                        if (ex.ErrorCode == -2147023667) // The remote host closed the connection.
                            return;
                    }
                    finally
                    {
                        // Close output stream as we are done
                        outputStream.Close();
                    }
                });

            return response;
        }


As an extra bonus, all the classes involved already support the async/await asynchronous execution model, so taking advantage of it was very easy. Please note that the PushStreamContent class receives in its constructor a lambda (specifically an Action) and we decorated our anonymous method with the async keyword (a not very well known but quite handy technique) so we can await the I/O-intensive calls we execute: reading from the database reader, serializing our entity, and finally writing to the output stream.


Well, if we execute the test again we will notice that the client line (var stream = client.GetStreamAsync(baseUrl).Result;) returns immediately, and the rest of the server code executes only as the client reads through the obtained stream; therefore we get low memory usage and far greater scalability for our beloved application serving big chunks of data.
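
For completeness, here is a minimal sketch of what consuming that stream incrementally on the client could look like (the URL matches the test above; the line-by-line framing is an assumption about how the serialized records are separated, and this is untested against a live server):

```csharp
using System.Diagnostics;
using System.IO;
using System.Net.Http;

class StreamingClient
{
    static void Main()
    {
        var client = new HttpClient();
        var sw = Stopwatch.StartNew();

        // GetStreamAsync completes as soon as the response headers arrive,
        // before the server has produced the whole body.
        using (var stream = client.GetStreamAsync("http://localhost:57771/api/metadata/1").Result)
        using (var reader = new StreamReader(stream))
        {
            Debug.WriteLine("Headers received after {0}ms", sw.ElapsedMilliseconds);

            // Each read pulls more data from the server on demand; the
            // server-side database fetch advances only as we consume the stream.
            string line;
            while ((line = reader.ReadLine()) != null)
                Debug.WriteLine("Chunk at {0}ms: {1}", sw.ElapsedMilliseconds, line);
        }
    }
}
```

With the Thread.Sleep experiment from before, the timestamps printed here would now grow as the stream is consumed, instead of all the delay being paid up front in GetStreamAsync.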

MongoDB usage best practices

The project I'm working on uses MongoDB for some stuff, so I'm creating some documents to help developers speed up the learning curve, avoid common mistakes, and write clean & reliable code.

This is my first version of it, so I'm pretty sure I will be adding more stuff to it, so stay tuned!

C# Official driver notes

The 10gen official MongoDB driver should always be referenced in projects by using NuGet. Do not manually download and reference assemblies in any project.

C# driver quickstart guide:

C# Language Center:

MongoDB Server Documentation:

MongoDB Server Downloads:

MongoDB client drivers download:

MongoDB Community content:


Safe Mode Connection

The C# driver supports two connection modes: safe and unsafe. Safe connection mode only applies to methods that modify data in a database, like inserts, deletes and updates.

While the current driver defaults to unsafe mode (safeMode == false), it's recommended to always enable safe mode, and force unsafe mode only for specific things we know aren't critical.

When safe mode is enabled, the driver internally calls the MongoDB "getLastError" function to ensure the last operation completed before returning control to the caller. For more information on using safe mode and its implications for performance and data reliability see:

If safe mode is not enabled, all data-modification calls to the database are executed asynchronously (fire & forget) without waiting for the result of the operation. This mode can be useful for creating or updating non-critical data like performance counters, usage logging and so on. It's important to know that not using safe mode implies that data loss can occur without any notification to the caller.

As with any wait operation, enabling safe mode also implies dealing with timeouts. For more information about C# driver safe mode configuration see:

The safe mode configuration can be specified at different levels:

  • Connection string: mongodb://hostname/?safe=true
  • Database: when obtaining a database instance using the server.GetDatabase(name, safeMode) method
  • Collection: when obtaining a collection instance using the database.GetCollection(name, safeMode) method
  • Operation: for example, when executing the collection.Insert(document, safeMode) method
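
With the 1.x C# driver, those four levels look roughly like this (a sketch only: the host, database and collection names are placeholders, and running it requires a reachable MongoDB server):

```csharp
using MongoDB.Bson;
using MongoDB.Driver;

class SafeModeLevels
{
    static void Main()
    {
        // Connection string level: every operation through this server
        // defaults to safe mode.
        var server = MongoServer.Create("mongodb://hostname/?safe=true");

        // Database level: override the server default for one database.
        var database = server.GetDatabase("mydb", SafeMode.True);

        // Collection level: override again for one collection.
        var orders = database.GetCollection("orders", SafeMode.True);

        // Operation level: a non-critical write, explicitly fire & forget.
        var counters = database.GetCollection("counters");
        counters.Insert(new BsonDocument("hits", 1), SafeMode.False);
    }
}
```

The narrower the scope, the higher the precedence, so the operation-level SafeMode.False wins over the safe=true in the connection string for that one insert.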

Some useful SafeMode articles:

Exception Handling

When using safe mode, the driver ensures that an exception will be thrown if something goes wrong (as said above, when not using safe mode no exception will be thrown no matter the outcome of the operation).

As explained in this mongodb-user group thread (!topic/mongodb-user/mS6jIq5FUiM), there is no need to check any returned value from a driver method that inserts data. With updates the situation is similar to any other relational database: if an update command doesn't affect any records, the call will succeed anyway (no exception thrown) and you have to manually check for something like "records affected".

For MongoDB, an update operation returns an instance of the "SafeModeResult" class, and you can check its "DocumentsAffected" property to ensure the intended document was indeed updated.

Note: Please remember that an Update method will return a null instance instead of a "SafeModeResult" instance when safe mode is not enabled.
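
Putting that together, a guarded update could look like the sketch below (the query and field names are illustrative, and this is untested against a live server):

```csharp
using MongoDB.Driver;
using MongoDB.Driver.Builders;

class GuardedUpdate
{
    static void Main()
    {
        var collection = MongoServer.Create("mongodb://hostname/?safe=true")
                                    .GetDatabase("mydb")
                                    .GetCollection("accounts");

        var query = Query.EQ("_id", 42);
        var update = Update.Set("status", "active");

        // With safe mode on, Update returns a SafeModeResult we can inspect;
        // with safe mode off it would return null.
        SafeModeResult result = collection.Update(query, update, SafeMode.True);
        if (result != null && result.DocumentsAffected == 0)
        {
            // No exception was thrown, but nothing matched the query:
            // handle this as a "document not found" condition.
        }
    }
}
```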

Useful Community Articles

Comments about how MongoDB works and how that might affect your application:

FourSquare using MongoDB had serious scalability problems:

Is MongoDB a replacement for Memcached?

MongoDB Introduction, shell, when not to use, maintenance, upgrade, backups, memory, sharding, etc:

MongoDB Collection level locking support:

MongoDB performance tips:

Lessons learned migrating from SQL Server to MongoDB:

MongoDB replication performance:

Posted: Oct 24 2012, 12:14 PM by andresv | with no comments
Argentina Microsoft Users Group Software Architecture Day

Hi Guys, last Friday I was invited to be a speaker in a very nice Software Architecture Event organized by the Argentina Microsoft Users Group.

The event (held in Spanish) was named "Jornada de Arquitectura" (something like "Software Architecture Day") and included very notable local speakers in the software architecture field: Martín Salías, Diego Gonzalez, Hernán Wilkinson, Diego Fontdevila, Roberto Schatz and, of course, me. If you want more information about the event click here.

The slides and videos of the other presenters should be uploaded to the MUG site shortly, and to the Code & Beyond site as well, so check for them in a couple of days if interested.

Just for the impatient, I'm uploading my slides here, on SkyDrive, for your convenience.

Best regards,

Andrés G Vettori, CTO, VMBC

Dynamic (runtime) Generation of a WCF Service contract

In my last post I talked about how to register a WCF instance dynamically (at runtime) without needing an existing .svc file or an entry in the configuration file (<serviceHostingEnvironment><serviceActivations>).

In this post we go deeper into the "dynamic" domain: we would like not only to host our service dynamically, but also to create the actual service contracts and service operations (and the metadata to support WSDL generation) dynamically, at runtime.

You might be asking yourself: why on earth would anyone want to create a WCF service (and contracts with methods) at runtime?

Well, there is a big chance you will never need to do something like this (I have lived without it so far, and our current projects are running in production very well), but we are building the next version of our development platform (the elusive project codename "E2" I talked about in my previous post) and in that context we need a way to generate the API that our "Business Models, Modules and Processes" define. Those artifacts are created entirely by our Business Analyst users and don't require ANY CODING at all, so why would we settle for less when talking about the exposed APIs of those things?

In the sample code you can download below there are not one but two different approaches to this:

  • Using Reflection.Emit to create at runtime a service class that implements the desired operations (methods).
  • Using the "ContractDescription" class to create and inject at runtime the service metadata that describes the desired operations (methods).

Both approaches solve the service metadata creation: the first by creating a class and letting the WCF runtime generate the service description metadata in the normal way, the other by building and injecting that service description metadata from scratch. Both methods work equally well, but I tend to prefer the second, because generating and loading a dynamic type in the running process (or AppDomain) has the drawback that it's difficult to unload those generated types and replace them with new versions when the business metadata changes (remember that Business Analysts are doing that, and they are free to change anything they need).

Having said that, I know it would be possible to unload a generated type by means of custom AppDomains, but that's more work and there are security and performance issues associated with it. By using a metadata-only approach we eliminate these problems altogether.

So far we have resolved the metadata generation part, but the actual execution of those pesky "Business Processes" hasn't been mentioned anywhere, and that's because it is the easy part! :)

WCF already provides all the extensibility you need to intercept the execution of service operations and do whatever you need, and that's exactly what I do in both examples. I created an "OperationInvoker" class implementing some WCF interfaces (IOperationInvoker, and IOperationBehavior for the injection part), so feel free to explore them; they are pretty simple. When constructing the metadata we inject it into the "OperationDescription" behaviors collection; check the "CreateOperationDescription" method in the "Service5.cs" project file.
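
As a rough illustration of the interception idea (this is a simplified sketch, not the sample's actual class, and the echo-style dispatch body is hypothetical), an IOperationInvoker implementation has this shape:

```csharp
using System;
using System.ServiceModel.Dispatcher;

// Intercepts every call to a dynamically generated operation and routes it
// to our own execution engine instead of a compiled method body.
class DynamicOperationInvoker : IOperationInvoker
{
    private readonly string operationName;

    public DynamicOperationInvoker(string operationName)
    {
        this.operationName = operationName;
    }

    public bool IsSynchronous { get { return true; } }

    // WCF asks for an array into which the incoming message parameters
    // are deserialized; size it to the operation's input parameter count.
    public object[] AllocateInputs() { return new object[1]; }

    public object Invoke(object instance, object[] inputs, out object[] outputs)
    {
        outputs = new object[0];
        // A real implementation would dispatch to the business process
        // engine here; this sketch just echoes the operation name.
        return string.Format("Executed {0}", operationName);
    }

    // Not used when IsSynchronous is true.
    public IAsyncResult InvokeBegin(object instance, object[] inputs, AsyncCallback callback, object state)
    { throw new NotSupportedException(); }

    public object InvokeEnd(object instance, out object[] outputs, IAsyncResult result)
    { throw new NotSupportedException(); }
}
```

An IOperationBehavior then assigns an instance of this class to DispatchOperation.Invoker during ApplyDispatchBehavior, which is the injection step the post refers to.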

Well, enjoy the code and let me know if you have any comments.

Source code download from here:

Best regards,

Andrés G Vettori, CTO, VMBC

Registering a file-less WCF Service Dynamically

I know... I know... it has been a while since my last post, but I never forgot about it; I've just been having a blast doing some fun things. Well, some of that stuff wasn't really THAT fun, but that's part of being a CTO: there is always room for improvement in some processes and structures in the company, and so... after a lot of hard work, we are now in a much better position to actually start doing some fun stuff.

And the fun part has begun, in the form of a couple of new and REALLY interesting projects, where I participate not only as CTO but also fulfill the role of a Chief Architect, overseeing general architecture from development and infrastructure standpoints and, more importantly, doing a LOT of research and proofs of concept to hand over to our Architect Team.

So, for this new project (let's say, Codename "E2") I'm in charge of researching some stuff, and here I will present the results of one of those topics: dynamic (runtime) registering of WCF services.

Before jumping into today's topic, let me talk a little about the other topics I'm researching, just to paint a broader picture and set the stage for future posts. Our E2 project is a little ambitious in some aspects, and its main motto is "Configurable Dynamic", so we are exploring ways to make this happen; the list of things to explore first is:

  • Dynamic registration of WCF services (today's topic)
  • Dynamic Data Access (ORM without Entities)
  • Dynamic Business Rules (or logic, if you want)
  • Dynamic Business Processes (workflows are a possibility here, but not the only one)
  • Performance of all of the above (mainly IIS, ASP.NET, WCF) and how to optimize the platform.
  • Performance techniques to use: caching, profiling, monitoring, etc.

Of course there are other topics (like security, scalability, fault tolerance, etc.) but we will get to those in time. For today's topic, let's explain a little what I'm talking about. For WCF services there are two ways to let the runtime environment know that we have a service class we want to expose to the world:

  1. The plain old .svc file approach: we need a file with .svc extension and this file will contain the Type information needed to activate the service.
  2. The new CBA (configuration-based activation) approach: this is new in .NET 4 and makes it possible to create a WCF service WITHOUT the .svc file, using only a section in the web.config file (<serviceHostingEnvironment><serviceActivations>).

While this second option is very interesting, we cannot use it at runtime, and so the idea for this post was born. We can register HttpModules at runtime (the MVC3 project does that to register the HttpModule that handles Razor views), so we can try to do something similar. That functionality (dynamically registering an HttpModule) can be found in the "Microsoft.Web.Infrastructure" assembly, in the "RegisterModule" method of the "DynamicModuleUtility" class.

We are going to take a similar approach to this helper method and use reflection to inject our service configuration somewhere so the runtime thinks it has a CBA (file-less) service and can activate it like any normal service. This is achieved in the "DynamicServiceHelper" class of the attached sample project, using the "RegisterService" method.

This is the source code for the "DynamicServiceHelper" class (some lines were removed for brevity):

namespace System.ServiceModel
{
    public static class DynamicServiceHelper
    {
        static object _syncRoot = new object();
        static FieldInfo hostingManagerField;
        static MethodInfo ensureInitialized;
        static FieldInfo serviceActivationsField;

        static DynamicServiceHelper()
        {
            ensureInitialized = typeof(ServiceHostingEnvironment).GetMethod("EnsureInitialized",
                BindingFlags.Static | BindingFlags.NonPublic | BindingFlags.InvokeMethod);
            hostingManagerField = typeof(ServiceHostingEnvironment).GetField("hostingManager",
                BindingFlags.Static | BindingFlags.NonPublic | BindingFlags.GetField);
        }

        public static void EnsureInitialized()
        {
            ensureInitialized.Invoke(null, new object[] { });
        }

        public static void RegisterService(string addr, Type factory, Type service)
        {
            lock (_syncRoot)
            {
                object hostingManager = hostingManagerField.GetValue(null);
                if (serviceActivationsField == null)
                    serviceActivationsField = hostingManager.GetType().GetField("serviceActivations",
                        BindingFlags.Instance | BindingFlags.NonPublic | BindingFlags.GetField);

                Hashtable serviceActivations = (Hashtable)serviceActivationsField.GetValue(hostingManager);
                string value = string.Format("{0}|{1}|{2}", addr, factory.AssemblyQualifiedName, service.AssemblyQualifiedName);
                if (!serviceActivations.ContainsKey(addr))
                    serviceActivations.Add(addr, value);
            }
        }
    }
}

This class has two methods: "RegisterService" and "EnsureInitialized". The first is pretty self-explanatory: it receives the service activation information (endpoint, service type and factory type). The "EnsureInitialized" method deserves a little explanation. As you can see, we use the "ServiceHostingEnvironment" class, which is used internally to manage all WCF services running in our process. It relies on an internal class named "HostingManager" that is in charge of discovering existing WCF services, by looking for ".svc" files in the application folder and reading the <serviceActivations> configuration section. With a little reflection we can see that this information is processed in the private method "LoadConfigParameters", and that the list of services to activate ends up in a private Hashtable field named "serviceActivations".

If you look at the example code, you will also see another class named "Startup" that carries an assembly attribute named "PreApplicationStartMethod":

[assembly: PreApplicationStartMethod(typeof(WcfService1.Startup), "Init")]

This attribute tells the runtime environment (WAS) that it should call the "Init" method of the "Startup" class. This method gives you a chance to initialize your code, and it runs almost before anything else, so the WCF/WAS infrastructure is not yet initialized at that point and we cannot use anything on the "ServiceHostingEnvironment" class. Fortunately, this class also has an "EnsureInitialized" method (private, of course), so we can try to invoke it and hope it doesn't have any other dependency and will initialize what we need to register our service. Luckily for me, it worked without any evident side effects, so let's use it!
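
Putting the pieces together, the Startup class could look roughly like this (a sketch, not the sample verbatim: the relative address, service type and use of the stock ServiceHostFactory are placeholder choices):

```csharp
using System.ServiceModel;
using System.ServiceModel.Activation;
using System.Web;

[assembly: PreApplicationStartMethod(typeof(WcfService1.Startup), "Init")]

namespace WcfService1
{
    public static class Startup
    {
        public static void Init()
        {
            // WAS calls this before the WCF hosting environment is up,
            // so we force its initialization first...
            DynamicServiceHelper.EnsureInitialized();

            // ...and then inject our file-less service as if it had been
            // declared in <serviceActivations>.
            DynamicServiceHelper.RegisterService(
                "~/DynamicService.svc",          // hypothetical relative address
                typeof(ServiceHostFactory),      // any ServiceHostFactory-derived type
                typeof(Service1));               // the service implementation type
        }
    }
}
```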

Another way to register our services would be to not call the "EnsureInitialized" method and wait for something else to do it. In that case we can create (and dynamically inject) an HttpModule that registers our service in its "Init" method. That works, and I'm including that code in my sample as well (in fact it was my first approach), but it has an important drawback: it needs something else to initialize the HTTP stack first, and therefore our WCF service (using TCP to make things more interesting) will not be activated if the hosting application doesn't receive an HTTP request first.

Well, that's it. I hope you enjoyed the reading and find the code useful (by the way, you can use it in any way you want, so I have to include here the usual "no warranties" disclaimer), and if you make some improvement or have any comments about my code I would like to know.

You can download the source code from here:

Best regards,

Andrés G Vettori, CTO, VMBC

Posted: Aug 29 2011, 08:50 AM by andresv | with 1 comment(s)
TFS 2010 and the missing Area & Iterations (stale data) Issue

The symptom is this: you change an area or iteration in a TFS project, but the change is not reflected (or updated) in VS or any other TFS client.

Well, it happens that TFS now has some clever caching mechanisms that need to be updated when you make a change like this, and those changes are propagated by some scheduled jobs TFS continuously runs in the application tier.

So, if you get this behavior, please check (and possibly restart) the "Visual Studio Team Foundation Background Job Agent" service. In my case, this service was logging a very odd "Object reference not set" error into the Windows event log, and a simple restart fixed it.

Hope this is fixed by RTM...   (we are using the RC version).

And by the way, if the job agent is broken there are some other things that stop working, like email notifications.

Best regards,

Andrés G Vettori, CTO, VMBC


Posted: Mar 30 2010, 01:32 PM by andresv | with no comments
Cool VS2010 free extensions

Here I'm posting a short list of cool extensions I found for VS2010, all published in the Visual Studio Gallery:

  • Reactive Extensions for VS2010
  • Tangible T4 Editor (template editor for VS2010)
  • Power Commands for VS2010
  • Atomineer Comment Generation for VS2010
  • Goto Definition Extension for VS2010
  • Resource Refactoring for VS2010
  • TFS PowerTools for VS2010



 Andres G Vettori, VMBC, CTO

Posted: Dec 14 2009, 07:38 PM by andresv | with no comments
Very good post about solving Sharepoint problems with TFS 2010

The Visual Studio WIT tools team has published a very nice post pointing out the most common issues with SharePoint and TFS 2010.

Check it out at


Andres G Vettori, VMBC, CTO 



IDFX -> Zermatt -> Geneva -> WIF RTM

At the PDC 09 Microsoft announced the Release to Manufacturing (RTM) of the Windows Identity Foundation, previously known as "Geneva", "Zermatt" before that, and "IDFX" before that.

Grab the latest bytes from

Best regards,

Andres G Vettori, VMBC, CTO

Posted: Dec 01 2009, 09:01 AM by andresv | with 1 comment(s)
VM Prep Tool for Visual Studio Team Lab Management 2010

Microsoft has released the first version of the Virtual Machine Preparation Tool for Visual Studio Team Lab Management 2010. What a mouthful! Try saying that three times in a row...

Well, the tool's function is to prepare existing VMs to be compatible with VS 2010 Lab Management requirements, and believe me, there are a few. Configuring an existing VM by hand is a tedious and VERY error-prone task, and so this tool was born.

Download it from: this version is prepared to work with VSTS 2010 Beta 2 and Windows Server 2008 x86 SP2 VMs. They will add more options as soon as they finish testing different versions (and flavors) of Windows. Perhaps R2 is in the pipeline?

Best regards,

Andres G Vettori, VMBC, CTO

