On a current project I was interested in using VSTS for Database Professionals to see if it could add any value to database management. Traditionally it's been a very manual process, as Visual Studio has not had any real support for it.
I am a fan of the Web Client Software Factory. It's a pretty good implementation of the MVP pattern within ASP.NET. Pretty lean and clean, and it doesn't involve lots of effort to get the simple stuff done, unlike a lot of other MVC-based implementations/frameworks.
Thought I would install some optional updates to try and resolve some issues I was having with Vista and Visual Studio 2005. I continually get the "VSTestHost.exe has stopped unexpectedly" error when trying to run any tests within Visual Studio. There don't seem to be too many instances of this around, as I can't find much beyond simple fix suggestions that haven't worked.
Found via Mike Harsh’s blog.
It's easy to use and helps me a lot in finding out how things are done.
Like a lot of people lately, I have been playing with Silverlight (Alpha 1.1 version), trying to understand the nuts and bolts of it, and when that fails, just diving in and seeing how I go. I can’t say I am much of a WPF/XAML guru so things have been slow.
What is hard is just diving in and trying to implement stuff, then realising that I am trying to use an unsupported feature. A few examples:
I wanted to add some MouseOver effects to a particular Canvas (while lamenting the absence of any grid controls in Silverlight), and eventually realised that the "MouseEnter" routed event is not supported as an event trigger; only the "Loaded" event is. Kinda weird, I thought; a little painful but not too bad. It means you need to resort to code, and in my case I wanted to do some scale animations, so I needed to put those ScaleAnimations in the Resources section of my canvas (which exists within my Page1.xaml file) like this:
<Canvas.Resources>
    <Storyboard x:Name="mouseEnterAnimation">
        <DoubleAnimationUsingKeyFrames BeginTime="00:00:00" Storyboard.TargetName="txtFullScreenIcon" Storyboard.TargetProperty="(UIElement.RenderTransform).(TransformGroup.Children)[0].(ScaleTransform.ScaleX)">
            <SplineDoubleKeyFrame KeyTime="00:00:00.3000000" Value="2.646"/>
        </DoubleAnimationUsingKeyFrames>
        <DoubleAnimationUsingKeyFrames BeginTime="00:00:00" Storyboard.TargetName="txtFullScreenIcon" Storyboard.TargetProperty="(UIElement.RenderTransform).(TransformGroup.Children)[0].(ScaleTransform.ScaleY)">
            <SplineDoubleKeyFrame KeyTime="00:00:00.3000000" Value="3.75"/>
        </DoubleAnimationUsingKeyFrames>
        <DoubleAnimationUsingKeyFrames BeginTime="00:00:00" Storyboard.TargetName="txtFullScreenIcon" Storyboard.TargetProperty="(UIElement.RenderTransform).(TransformGroup.Children)[3].(TranslateTransform.X)">
            <SplineDoubleKeyFrame KeyTime="00:00:00.3000000" Value="-65"/>
        </DoubleAnimationUsingKeyFrames>
        <DoubleAnimationUsingKeyFrames BeginTime="00:00:00" Storyboard.TargetName="txtFullScreenIcon" Storyboard.TargetProperty="(UIElement.RenderTransform).(TransformGroup.Children)[3].(TranslateTransform.Y)">
            <SplineDoubleKeyFrame KeyTime="00:00:00.3000000" Value="27.5"/>
        </DoubleAnimationUsingKeyFrames>
    </Storyboard>
</Canvas.Resources>
My Canvas object definition looked something like:
<Canvas x:Name="fullScreenCanvas" Width="157" Height="72" Canvas.Left="515"
        MouseLeftButtonUp="FullScreenClick" MouseEnter="FullScreenMouseEnter"
        RenderTransformOrigin="0.5,0.5" Canvas.Top="-7"
        ... rest of definition continues ...
You’ll notice the MouseEnter="FullScreenMouseEnter" event definition. Now in my code behind (Page1.xaml.cs) I have this:
public void FullScreenMouseEnter(object sender, EventArgs e)
{
    // Begin the storyboard defined in the canvas Resources section
    // ("mouseEnterAnimation" here is the storyboard's assumed x:Name).
    mouseEnterAnimation.Begin();
}
Obviously it would be nicer to hook all this up in XAML (well, at least that's how I would like to do it), but that requires support for RoutedEvents other than Loaded, as I mentioned earlier.
I should also note that I am using a combination of Blend and Visual Studio (Orcas) to get the XAML how I want it. Timelines are easier within Blend (IMHO), but in a lot of other cases I feel more comfortable going into the XAML itself to edit. If I had XAML intellisense in Blend, I'd be much happier.
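For completeness, the event hookup can also be done entirely in code rather than via the XAML attribute. A minimal sketch, assuming a handler wired to the Loaded event (the one trigger that IS supported); "Page_Loaded" is my name, not anything from the original project:

```csharp
// Page1.xaml.cs - attaching the MouseEnter handler in code instead of
// via the MouseEnter attribute in XAML. "Page_Loaded" is an assumed
// name for a handler hooked to the supported Loaded event.
public void Page_Loaded(object sender, EventArgs e)
{
    fullScreenCanvas.MouseEnter += new MouseEventHandler(FullScreenMouseEnter);
}
```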
Apart from the long list of great, free controls in the AJAX Control toolkit, there are numerous other reasons to use the toolkit. One of which is the great support functions that come with the toolkit.
My USB 250GB external drive is about ready to pack it in. When I plug it into a USB port, I get the popup asking what I want to do (browse, transfer files etc.). Then, not long into using it, the drive "disconnects" from the system, very quickly reconnects again, and I get the popup asking what I want to do as if I had just plugged it in. This is now happening ever more frequently (sometimes every 10 seconds, sometimes after a few minutes).
I can minimise it by plugging it into a USB 1.1 port, but it still happens (just a lot less). Now, I have a lot of stuff on there that I want to keep, as I use this as my "first step backup" (from here, it goes to DVD if the information is valuable enough to me).
Problem is, I can't copy anything of any size because it keeps dropping out. It's just like the USB plug being pulled out of the system and reconnected again. I have tried other USB devices with no issue (both 1.1 and 2.0).
So what to do.....
Well, here is what I am currently doing.
I want to get this data off the drive, so I have set up an FTP server that points to it. I am using SmartFTP to queue the downloads from this removable drive to a brand new USB external drive which is functioning beautifully. SmartFTP works well because it auto-resumes the downloads for me.
Is it slow? Oh yes.
Is it painful? Indeed.
Is it working? Remarkably well (albeit quite slow).
If I can bear the long duration it's going to take to get my important data off, then I am done and I can turf this dying drive. It's transferred 1GB of data so far, and I can hear the drive whirring as it disconnects and reconnects, which it has done about 10 times during that transfer.
Update: It appears this post and the comments associated with it disappeared, so here it is, reposted.
Update 2: Fellow colleague Grant Holliday suggested I use Robocopy. I had completely forgotten about that one.
Update 3: Whoever sent me the comment on enclosures was spot on (the comment has also gone; see the first update). It is the enclosure; however, the drive is still part of the problem. It's r-e-a-l slow and making weird noises. At least I can copy my data over a bit easier.
Not long ago, I posted about WCF client performance and some work I have been doing around improving that with a "Channel Pool" type implementation.
Well, it's finally ready for some public consumption. You can grab the code here (http://www.theglavs.com/DownloadItem.aspx?FileID=55). It's very "alpha" at this point and my testing has been limited, but it has been showing consistently better results than using a standard ClientBase proxy class in WCF.
So first a quick usage example:
public class MyProxy : ClientBasePool<ISomeInterface>, ISomeInterface
{
    public void MyInterfaceMethod(string s)
    {
        // Delegate to the underlying channel, just as with a
        // standard ClientBase<T> proxy.
        Channel.MyInterfaceMethod(s);
    }
}
And you use it as you normally would:
MyProxy prox = new MyProxy();
prox.MyInterfaceMethod("some data");
And that's it. It's used the same way as a normal ClientBase proxy class.
Using this proxy class will typically yield a performance improvement of approximately 20%-50% by optimising the client side of the communication process.
What it Does
The ClientBasePool proxy class utilises a pool of pre-opened channels behind the scenes. Negotiating the WS-SecureConversation is expensive, so this class manages a pool of channels that have already done this negotiation beforehand, in the background on a separate, low-priority thread.
The pool is automatically refilled in the background (on a separate thread) as channels are removed from it. In its default configuration, the pool has a size of 50 and a maximum of 126. The pool is refilled when a "threshold" value is hit; by default this is half the pool size, i.e. 25. So when there are 25 or fewer channels in the pool, the process refills the pool with pre-opened channels.
Additionally, the channels are periodically checked to see when they were opened and whether they have exceeded a pre-defined time period. If so, they are closed and removed from the pool. This is also done in the background on a separate thread; think of it as a garbage collector process. It prevents clients using a channel from the pool that may have been opened half an hour ago, after which the security conversation is no longer valid (token expired) and the channel faults when used. This process pro-actively closes and removes end-of-life channels, and the refill process kicks in if required. By default, channels have a "life" of 90 seconds and are considered end-of-life after that. The clean-up process runs (by default) every 45 seconds to check the pool; this is half of the channels' life period.
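The refill and expiry rules above can be sketched as a simplified model. This is illustrative only; the names and structure here are mine, not the actual ClientBasePool internals:

```csharp
using System;
using System.Collections.Generic;

// Illustrative model of the pool rules only - not the real implementation.
class PooledChannel
{
    public DateTime Opened;
    public PooledChannel() { Opened = DateTime.Now; }
}

class PoolModel
{
    const int PoolSize = 50;
    const int PoolRefillTrigger = 25;                 // half the pool size
    static readonly TimeSpan ChannelLifetime = TimeSpan.FromSeconds(90);

    public List<PooledChannel> Channels = new List<PooledChannel>();

    // Runs on the background refill thread: top the pool back up once
    // the count drops to the trigger threshold.
    public void RefillIfNeeded()
    {
        if (Channels.Count <= PoolRefillTrigger)
        {
            while (Channels.Count < PoolSize)
                Channels.Add(new PooledChannel());    // real code pre-opens a channel here
        }
    }

    // Runs on the cleanup thread every CleanupInterval: drop channels
    // older than their lifetime so callers never get an expired
    // security session.
    public void RemoveExpired()
    {
        Channels.RemoveAll(delegate(PooledChannel c)
        {
            return DateTime.Now - c.Opened >= ChannelLifetime;
        });
    }
}
```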
All these settings are configurable.
My previous post shows some performance tests I initially did. I have also done a more recent one using a 20,000-call test for a standard ClientBase proxy, my ClientBasePool proxy, and a comparative test using a single proxy for all calls. The results are:
Normal Proxy (ClientBase): 6:31:647
My Channel Pool ClientPoolBase Proxy: 4:48:618
Single Proxy: 1:5:522
You can see that my ClientBasePool took 4 minutes and 48 seconds, compared to a standard proxy time of 6 minutes and 31 seconds. Not huge, but still a significant difference. Obviously, both times are far longer than using a single proxy for all calls (without creating a new proxy for each set of service calls), which came in at 1 minute and 5 seconds.
1. I have only tested this using the wsHttpBinding. I suspect the netTcp binding won't really need it, or won't benefit as much, because of the increased performance of that protocol. However, it may benefit somewhat; I just haven't tried it.
2. It's early days and my time is limited, so extensive testing is not possible. If you use it and have an issue, let me know. I'd love to hear about it.
3. During idle times, this channel pool will be periodically closing channels that have expired and re-opening new ones in preparation for use. Obviously this is extra traffic where there may otherwise be none at all.
4. This is NOT faster than using one single proxy class that is held statically for all your service calls. If you can do that, and manage the timeout issues and whatever, then that will easily be the fastest option.
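As a rough sketch of what point 4 means (the names here are mine, and you would still need to deal with faulted channels and token expiry yourself):

```csharp
using System.ServiceModel;

// Illustrative only: one statically-held proxy reused for all calls,
// recreated if it has faulted. "MyProxy" is a ClientBase-derived proxy.
public static class SharedProxy
{
    private static MyProxy instance;
    private static readonly object sync = new object();

    public static MyProxy Instance
    {
        get
        {
            lock (sync)
            {
                // ClientBase exposes State via ICommunicationObject;
                // recreate the proxy if the channel has faulted.
                if (instance == null || instance.State == CommunicationState.Faulted)
                    instance = new MyProxy();
                return instance;
            }
        }
    }
}
```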
Configuration and Usage
You don't need to set anything explicitly to make this work, but you can change the settings to better suit your needs.
Main Classes to use (among others):
ClientBasePool: The proxy class to use; it interacts with the ChannelPoolFactory and the ChannelPool to get the job done.
ChannelPool: The channel pool implementation for each channel. Normally you should not have to interact directly with this class.
ChannelPoolFactory: Takes care of instantiating and destroying ChannelPool instances for a channel/service interface. To initialise a channel pool and start off the processing threads, use:
To destroy a channel pool and terminate all background threads, use:
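The shape of those two calls is roughly as follows; the method names here are hypothetical, so check the ChannelPoolFactory class in the download for the actual API:

```csharp
// Hypothetical method names - see the ChannelPoolFactory source for
// the real signatures.

// Initialise a channel pool and start the background processing threads:
ChannelPoolFactory.InitialiseChannelPool<ISomeInterface>();

// Destroy the channel pool and terminate all background threads:
ChannelPoolFactory.DestroyChannelPool<ISomeInterface>();
```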
All configuration options exist in a class called (not surprisingly) Config. I haven't set up reading from a configuration file or anything like that; I'll let the specific implementation take care of that.
The properties in this class of relevance are:
PoolSize: Defaults to 50. The capacity of the channel pool (max 127). The refill threshold is automatically set to half of the pool capacity, so by default, when 25 or fewer channels exist in the pool, the refill process is triggered.
PoolRefillTrigger: Defaults to 25. Determines how low the pool can get before a refill is triggered. When a new PoolSize is set, this value is automatically adjusted to half of the PoolSize.
ChannelLifetime: Defaults to 90 seconds. Channels are closed and removed from the pool after this time.
CleanupInterval: Defaults to half of the ChannelLifetime, i.e. 45 seconds. When a new ChannelLifetime period is set, this value is automatically set to approximately half of the ChannelLifetime period.
For example:
System.ServiceModel.ChannelPool.Config.PoolSize = 100;
System.ServiceModel.ChannelPool.Config.ChannelLifetime = 300;
There is a console application included in the source that I used for some testing. I chopped and changed it many times to try different tests, so it's a bit of a mess, but not too complex.
I have cleaned up the code somewhat but there are still bits and pieces lying around. I'll get to them. Happy for people to point them out to me though.
Love to hear any feedback.
Remember, grab it from here.
I blogged just recently about my Xbox 360 getting the red ring of death after only a year and one month of operation. I purchased an extended warranty agreement with the retailer for on-the-spot replacement of the unit for 2 years from the date of purchase. So luckily, I now have a brand new Xbox 360 to replace the dead one. The same hard drive was moved to the new unit, so I don't lose anything, which is good.
Just awesome. I have owned 3 Xbox units in my time, and every single one has died a little after a year of operation. My first original Xbox console died around 1 year and 2 months after I bought it. My second original Xbox console died another year and a bit after purchasing it.