WCF Client Performance

One of the easiest performance traps to fall into when consuming WCF services is the constant creation of new client proxies. In WSE land, this was commonplace:

MyProxy prox = new MyProxy();

For example, you might have a class with several instance methods, and in each method you instantiate a proxy, call the service method, and move on.

However, doing this in WCF can incur a noticeable performance penalty. Ideally, you should create a proxy instance once and open it, then reuse that same proxy in all your methods (note: this is a contrived example) so that you don't have to re-create and open the proxy on every call.

It's not the creation of the proxy itself that is the issue; it's the call to the proxy's "Open" method, which opens the corresponding channel and typically initiates a secure conversation (when that feature is enabled), and this costs quite a bit.
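A minimal sketch of the create-once, reuse-everywhere approach (using the contrived MyProxy/ISomeInterface names from this post; the OrderProcessor class is just an illustrative host):

```csharp
// Create and open the proxy once, then reuse it for every call,
// so the channel/security handshake cost is paid only one time.
public class OrderProcessor
{
    private readonly MyProxy _proxy;

    public OrderProcessor()
    {
        _proxy = new MyProxy();
        _proxy.Open();   // the expensive part: channel open + secure conversation
    }

    public void ProcessOrder(string orderId)
    {
        // Subsequent calls reuse the already-open channel.
        _proxy.MyInterfaceMethod(orderId);
    }
}
```

Compare this with newing up and opening a MyProxy inside ProcessOrder on every call, which is the trap described above.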

Now, in ASP.NET, the constant creation of proxy objects, opening them, and calling methods is pretty common, especially given that at the end of each request most resources are discarded, ready for the next request (generally speaking).

I found this issue somewhat odd as it's an easy thing to miss, again especially in the ASP.NET world, where the creation, use, and destruction of objects is commonplace.

So, I have been tinkering with the concept of a traditional object pool of proxies, ready to use and managed for you by an underlying pooling mechanism, much like the thread pool. I tried maintaining a pool of proxy objects but found it not very beneficial and harder than I would have liked. What I have settled on instead is a pool of the actual channels that underlie the proxy itself. Initial tests (using only wsHttpBinding at this stage) show an approximate 25%-50% speed improvement over the traditional approach, with hardly any extra work.
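The general shape of a channel pool (a hypothetical sketch of the idea, not the actual ClientBasePool implementation; names like ChannelPool and initialSize are mine) is a ChannelFactory<TChannel> that pre-creates and opens channels, handing them out on demand:

```csharp
using System.Collections.Generic;
using System.ServiceModel;

// Hypothetical sketch of a channel pool; illustrative only.
public class ChannelPool<TChannel>
{
    private readonly ChannelFactory<TChannel> _factory;
    private readonly Queue<TChannel> _channels = new Queue<TChannel>();
    private readonly object _sync = new object();

    public ChannelPool(string endpointConfigurationName, int initialSize)
    {
        _factory = new ChannelFactory<TChannel>(endpointConfigurationName);
        for (int i = 0; i < initialSize; i++)
            _channels.Enqueue(CreateOpenChannel());
    }

    private TChannel CreateOpenChannel()
    {
        TChannel channel = _factory.CreateChannel();
        // Pay the open/secure-conversation cost up front, not per request.
        ((ICommunicationObject)channel).Open();
        return channel;
    }

    public TChannel GetChannel()
    {
        lock (_sync)
        {
            return _channels.Count > 0 ? _channels.Dequeue() : CreateOpenChannel();
        }
    }

    public void ReturnChannel(TChannel channel)
    {
        lock (_sync)
        {
            _channels.Enqueue(channel);
        }
    }
}
```

A proxy base class built on this idea would draw an already-open channel from the pool instead of opening a fresh one per proxy instance.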

In WCF you might create a proxy like so:

public class MyProxy : ClientBase<ISomeInterface>, ISomeInterface
{
   public void MyInterfaceMethod(string s)
   {
      Channel.MyInterfaceMethod(s);
   }
}

Using my Channel pool implementation, you simply do:

public class MyProxy : ClientBasePool<ISomeInterface>, ISomeInterface
{
   public void MyInterfaceMethod(string s)
   {
      Channel.MyInterfaceMethod(s);
   }
}

And you use it as you normally would:

MyProxy prox = new MyProxy();

Behind the scenes, a pool of channels is kept open and ready to use. The pool is refilled asynchronously in the background once it drops below a predefined trigger point.
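The background refill described here could be sketched along these lines (a hypothetical illustration, not the actual implementation; the fields _channels, _sync, _refillThreshold, _refillBatchSize, _refillInProgress and the CreateOpenChannel helper are assumed members of the pool class):

```csharp
// Hypothetical refill trigger for a channel pool. Assumes the pool holds a
// Queue<TChannel> (_channels) guarded by a lock object (_sync), plus a
// CreateOpenChannel() helper that creates and opens a new channel.
public TChannel GetChannel()
{
    TChannel channel;
    bool startRefill = false;
    lock (_sync)
    {
        channel = _channels.Count > 0 ? _channels.Dequeue() : CreateOpenChannel();
        // Predefined trigger point: refill when the pool runs low.
        if (_channels.Count < _refillThreshold && !_refillInProgress)
        {
            _refillInProgress = true;
            startRefill = true;
        }
    }
    if (startRefill)
        System.Threading.ThreadPool.QueueUserWorkItem(_ => Refill());
    return channel;
}

private void Refill()
{
    // Do the expensive create/open work off the request path...
    var fresh = new System.Collections.Generic.List<TChannel>();
    for (int i = 0; i < _refillBatchSize; i++)
        fresh.Add(CreateOpenChannel());

    // ...then add the new channels back under the lock.
    lock (_sync)
    {
        foreach (TChannel c in fresh)
            _channels.Enqueue(c);
        _refillInProgress = false;
    }
}
```

The key point is that the costly channel opening happens on a background thread, so callers keep getting pre-opened channels without waiting.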

Here are some very initial and totally unscientific performance tests:

The tests themselves are really basic, and the services are those generated by the default "create WCF service" project template (with a few modifications). The first screenshot shows the tests being run for 50 iterations; the whole run is executed twice, so the second set of results is probably the more realistic one (everything has been JITted, warmed up, etc.).

You'll see the ClientBasePool tests executed in half the time.

You'll also notice I include the same series of tests using only one proxy object for comparison (and, as expected, it's much faster).

Here are the same tests run for 2000 iterations.

Again, approximately a 50% improvement.

I am pretty happy with the results so far but need to do a lot more testing under various conditions. Initially it may be restricted to only the wsHttp binding, as I haven't tested with netTcp at all; of course, that's a faster transport anyway, so pooling may not be necessary there. I also need to ensure the pool remains reliable when unused for long periods (taking timeouts into account) and so forth. We shall see.

I will post the code for this soon, if anybody is interested, once I get a chance to put it through some more tests.


  • Innovative idea, pooling of channels much like ADO pools connections to the database. Assuming that your services are stateless (best practice), then this makes a lot of sense. Similar to database connections, the overhead of establishing a connection via WCF is worth mitigating.

  • Hi Glavs,
    How did the proxy pooling work out in the end?
    Did you deploy this strategy in a real system, and how is it performing? I'm really interested in this subject.
    Thank you for your answer.

  • Hi John,

    No, I haven't really run any serious perf tests against this, but I did find that the benefits are not as good as expected. It will provide a benefit in certain scenarios where latency is low and the security negotiation is excessive, but that's not all scenarios, so it's a little hard to put real figures against it. Additionally, there is a bit of background work going on, so you have to weigh that against the actual scenario (i.e. is it actually creating more work just to make simple requests faster?).

  • Hi,
    very cool article!!!
    I have a question: what about CPU work?
    I tried the code but I see my CPU at 100% when I run it. Is it because of the number of threads created?

  • Simone,

    You're absolutely right, too many threads will affect CPU activity. There are configuration items you can tune to match your needs though. I would err on the side of fewer threads rather than more. As with all performance matters though, measure against your own system.
