WCF Client Channel Pool - Improved Client Performance

Not long ago, I posted about WCF client performance and some work I have been doing around improving that with a "Channel Pool" type implementation.

Well, it's finally ready for some public consumption. You can grab the code here. (http://www.theglavs.com/DownloadItem.aspx?FileID=55)

It's very "alpha" at this point and my testing has been limited, but it has been showing consistently better results than using a standard ClientBase proxy class in WCF.

So first a quick usage example:

public class MyProxy : ClientBasePool<ISomeInterface>, ISomeInterface
{
   public void MyInterfaceMethod(string s)
   {
      Channel.MyInterfaceMethod(s);
   }
}

And you use it as you normally would:

MyProxy prox = new MyProxy();
prox.MyInterfaceMethod("Hello");
prox.Close();

And that's it. You use it the same way you would a normal ClientBase proxy class.

Using this proxy class typically yields a performance improvement of approximately 20%-50% by optimising the client side of the communication process.

What it Does

Using the ClientBasePool proxy class utilises a pool of pre-opened channels behind the scenes. Negotiating the WS-SecureConversation session is expensive, so this class manages a pool of channels that have already done this negotiation beforehand, in the background on a separate, low-priority thread.

The pool is automatically refilled in the background (on a separate thread) as channels are removed from it. In its default configuration, the pool has a size of 50, and a maximum of 126. The pool is refilled when a "threshold" value is hit; by default this is half the pool size, i.e. 25. So when there are 25 or fewer channels in the pool, the refill process tops up the pool with pre-opened channels.

Additionally, the channels are periodically checked to see when they were opened and whether they have exceeded a pre-defined time period. If so, they are closed and removed from the pool. This is also done in the background on a separate thread; think of it as the garbage collector process. It prevents clients from picking up a channel from the pool that may have been opened half an hour ago, whose security conversation is no longer valid (token expired), and which would therefore fault as soon as it was used. This process proactively closes and removes end-of-life channels, and the refill process kicks in if required. By default, channels have a "life" of 90 seconds and are considered end-of-life after that. The clean-up process runs (by default) every 45 seconds to check the pool, i.e. half of the channel life period.
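
To make the refill and clean-up mechanics a little more concrete, here is a rough sketch of how such a pool could be structured. This is illustrative only, not the actual ChannelPool source: the class and member names (SimpleChannelPool, Take, Refill, RemoveExpiredChannels) are made up, and the hard-coded 50/25/90/45 values simply mirror the defaults described above.

using System;
using System.Collections.Generic;
using System.ServiceModel;
using System.Threading;

// Illustrative sketch only - not the actual ChannelPool implementation.
internal class SimpleChannelPool<TChannel>
{
   private readonly ChannelFactory<TChannel> _factory;
   private readonly Queue<KeyValuePair<TChannel, DateTime>> _channels =
      new Queue<KeyValuePair<TChannel, DateTime>>();
   private readonly object _syncLock = new object();
   private readonly Timer _cleanupTimer;

   public SimpleChannelPool(ChannelFactory<TChannel> factory)
   {
      _factory = factory;
      // Periodic "garbage collector" pass, roughly half the channel lifetime.
      _cleanupTimer = new Timer(state => RemoveExpiredChannels(), null,
         TimeSpan.FromSeconds(45), TimeSpan.FromSeconds(45));
      // Initial fill happens in the background so the constructor returns quickly.
      ThreadPool.QueueUserWorkItem(state => Refill());
   }

   public TChannel Take()
   {
      lock (_syncLock)
      {
         if (_channels.Count > 0)
         {
            KeyValuePair<TChannel, DateTime> pooled = _channels.Dequeue();
            // Trigger a background refill once the pool drops to the threshold
            // (default: half the pool size).
            if (_channels.Count <= 25)
               ThreadPool.QueueUserWorkItem(state => Refill());
            return pooled.Key;
         }
      }

      // Pool exhausted - fall back to creating and opening a channel on demand.
      return OpenNewChannel();
   }

   private TChannel OpenNewChannel()
   {
      // Opening the channel up front is what performs the expensive
      // WS-SecureConversation negotiation before a caller ever needs it.
      TChannel channel = _factory.CreateChannel();
      ((ICommunicationObject)channel).Open();
      return channel;
   }

   private void Refill()
   {
      // A real implementation would do this on a low-priority background thread
      // and avoid holding the lock while channels are being opened.
      lock (_syncLock)
      {
         while (_channels.Count < 50)
            _channels.Enqueue(new KeyValuePair<TChannel, DateTime>(OpenNewChannel(), DateTime.UtcNow));
      }
   }

   private void RemoveExpiredChannels()
   {
      lock (_syncLock)
      {
         // Close and discard channels older than the configured lifetime (90 seconds here)
         // so callers never receive a channel whose security token has expired.
         int count = _channels.Count;
         for (int i = 0; i < count; i++)
         {
            KeyValuePair<TChannel, DateTime> pooled = _channels.Dequeue();
            if (DateTime.UtcNow - pooled.Value > TimeSpan.FromSeconds(90))
               ((ICommunicationObject)pooled.Key).Close();
            else
               _channels.Enqueue(pooled);
         }
      }
      Refill();
   }
}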

All these settings are configurable.

My previous post shows some performance tests I did initially. I also did a more recent one: a 20,000-call test for a standard ClientBase proxy, my ClientBasePool proxy, and a comparative test using a single proxy for all calls. The results are:

Normal Proxy (ClientBase): 6:31:647
My ClientBasePool Channel Pool Proxy: 4:48:618
Single Proxy: 1:05:522

(Times are minutes:seconds:milliseconds.)

You can see that my ClientBasePool took 4 minutes and 48 seconds, compared to a standard proxy time of 6 minutes and 31 seconds. Not huge, but still a significant difference. Obviously, both times are far longer than using a single proxy only (without creating a new proxy for each set of service calls), which came in at 1 minute and 5 seconds.

Caveats

1. I have only tested this using the wsHttpBinding. I think the netTcp binding won't really need it, or won't benefit as much, because of the increased performance of that protocol. However, it may benefit somewhat; I just haven't tried it.

2. It's early days and my time is limited, so extensive testing has not been possible. If you use it and have an issue, let me know. I'd love to hear about it.

3. During idle times, this channel pool will periodically be closing channels that have expired and re-opening new ones in preparation for use. Obviously this is extra traffic where there might otherwise be none at all.

4. This is NOT faster than using one single proxy class that is held statically for all your service calls. If you can do that, and manage the timeout issues and so on, then that will easily be the fastest option (a sketch of that approach follows below).
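
To make it clear what that alternative looks like, here is a minimal sketch of a statically held proxy. MyStandardProxy and SharedProxy are hypothetical names; MyStandardProxy is an ordinary ClientBase proxy for the same ISomeInterface contract used earlier. The faulted-state check is exactly the kind of "timeout issue" you end up managing yourself with this approach.

// Hypothetical standard ClientBase proxy for the same contract.
public class MyStandardProxy : ClientBase<ISomeInterface>, ISomeInterface
{
   public void MyInterfaceMethod(string s)
   {
      Channel.MyInterfaceMethod(s);
   }
}

// One proxy shared by the whole application - fastest, but you must deal with
// faulted channels and idle timeouts yourself.
public static class SharedProxy
{
   private static readonly object SyncLock = new object();
   private static MyStandardProxy _instance;

   public static MyStandardProxy Instance
   {
      get
      {
         lock (SyncLock)
         {
            // Recreate the proxy if its channel has faulted (e.g. the session or
            // security token timed out); otherwise every subsequent call fails.
            if (_instance == null || _instance.State == CommunicationState.Faulted)
            {
               if (_instance != null)
                  _instance.Abort(); // tear down the faulted instance before replacing it
               _instance = new MyStandardProxy();
            }
            return _instance;
         }
      }
   }
}

// Usage - the security negotiation happens once, on the first call:
SharedProxy.Instance.MyInterfaceMethod("Hello");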

Configuration and Usage

You don't need to set anything explicitly to make this work, but you can change the settings to better suit your needs.

Namespace: System.ServiceModel.ChannelPool

Main Classes to use (among others):

ClientBasePool : The proxy class to use which interacts with the ChannelPoolFactory and the ChannelPool to get the job done.

ChannelPool : The channel pool implementation for each channel. Normally you should not have to interact directly with this class.

ChannelPoolFactory : Takes care of instantiating and destroying ChannelPool instances for a channel/service interface. To initialise a channel pool and start the processing threads, use:

ChannelPoolFactory<ISomeInterface>.Initialise();

To destroy a channel pool and terminate all background threads, use:

ChannelPoolFactory<ISomeInterface>.Destory();
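
As an example of where those calls might live, here is a rough sketch for an ASP.NET application using a Global.asax code-behind; ISomeInterface is the contract from the earlier example. Remember that none of this is strictly required, since the defaults work without any explicit setup.

using System;
using System.ServiceModel.ChannelPool;

// Sketch only: initialise the pool once at application startup and tear it down at shutdown.
public class Global : System.Web.HttpApplication
{
   protected void Application_Start(object sender, EventArgs e)
   {
      // Starts the background threads and pre-opens the initial set of channels.
      ChannelPoolFactory<ISomeInterface>.Initialise();
   }

   protected void Application_End(object sender, EventArgs e)
   {
      // Closes any pooled channels and terminates the background threads.
      ChannelPoolFactory<ISomeInterface>.Destory();
   }
}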

Configuration Options:

All configuration options exist in a class called (not surprisingly) Config. I haven't set up reading from a configuration file or anything like that; I'll let the specific implementation take care of that.

The properties in this class of relevance are:

PoolSize : Defaults to 50. The capacity of the channel pool. Max 127. The refill threshold is automatically set to half of the pool capacity, so by default, when 25 channels or fewer exist in the pool, the refill process is triggered to refill it.

PoolRefillTrigger: Defaults to 25. This value determines how low the pool can get before a refill is triggered. When a new PoolSize is set, this value is automatically adjusted to half of the new PoolSize.

ChannelLifetime: Defaults to 90 seconds. Channels are closed and removed from the pool after this time.

CleanupInterval: Defaults to half of the ChannelLifetime, i.e. 45 seconds. When a new ChannelLifetime period is set, this value is automatically set to approximately half the ChannelLifetime period.

Examples:

System.ServiceModel.ChannelPool.Config.PoolSize = 100;        // PoolRefillTrigger is automatically adjusted to 50
System.ServiceModel.ChannelPool.Config.ChannelLifetime = 300; // CleanupInterval is automatically adjusted to ~150 seconds

Final Notes:

There is a console application included in the source that I used for some testing. I chopped and changed it many times to try different tests, so it's a bit of a mess, but not too complex.

I have cleaned up the code somewhat but there are still bits and pieces lying around. I'll get to them. Happy for people to point them out to me though.

Love to hear any feedback.

Remember, grab it from here.

12 Comments

  • Cool stuff!!! Done any performance tests using this from web clients, where a service agent encapsulating the proxy would live for only the request/response lifetime?

  • No, not anything I can quote, but this is the exact scenario it was designed for. I have been meaning to post some figures but I have just been too busy lately.

  • Download link down?

  • Sorry Jason, one of the links I had in that post was bad. All fixed now.

    goodman: Thanks for the encouraging words. I shall endeavour to get some perf figures up very soon.

  • Hi Galv
    Nice work.
    One question... why did you feel the need to use a pool of channels rather than use one client proxy object made accessible to each ASP page request via a static class method, i.e. effectively the proxy is a singleton object? Each page that accesses any of the methods on the client proxy object would get its own thread & channel anyway... wouldn't it?

  • Hi Robert,

    The static object method you mention is actually the preferred, and by far the best performing, approach, so I am actually advocating that where possible. However, if you have a sessionful object and not a PerInstance mode server object, you have to worry about timeouts, which then affect the whole application (as it's all using the same proxy). Once the proxy channel has been faulted, the entire application comes to a halt until you re-initialise that proxy object. It's possible, but tricky. This approach simply takes care of all that for you.
    Bear in mind, the usage scenarios for this are limited. As I have mentioned, static re-use of a proxy is easily the best performing, and if you are not using security, then no negotiation overhead is really required, so you may be better off using a standard proxy and not incurring the overhead of managing the pool.

    To answer your final question, all threads will access the same channel for a static proxy. That's why it ends up being so fast: the negotiation gets done once, and from then on it's lightning fast.

  • Nice work! Did you get any results from load testing your code?

    Cheers from London

  • I didn't check the code... however, I assume this is pooling where one proxy will be used by one thread at a time.

    Does the application wait if all the proxies in the pool are busy and the maximum limit for creating proxies in the pool has been reached?

  • Hi Glavs,
    I'm a newbie in WCF, so can we use this channel pool for NetTcpBinding?

    thanks

  • Perhaps. I haven't really tested it with netTcp, only wsHttp. It's mostly designed for the web scenario, so you'd have to test it.

  • Hi Glavs,

    In a previous post you mentioned that "As I have mentioned static re-use of a proxy is easily the best performing". I didn't quite understand from this discussion how the proxy behaves when used in a multi-threaded environment, i.e. same instance of the proxy being used by multiple threads at once.

    Does it block for each request until it writes the load over the channel?

    Thanks,
    Florin Neamtu

  • Florin, when a static instance is used, yes, it (and its channel) is shared, but a static instance has its dangers. It's just that the session and/or security negotiation has already been performed, so you gain from not having to do it on each call (by way of proxy re-creation). The default calls are synchronous, but you can use async calls to mitigate this.
