Ready, Compiler, Switch...

I just re-learned a valuable lesson.  I was preparing for an upcoming class that I’m teaching on .NET for my team, and I was reviewing the .NET garbage collector.  I thought I’d throw together a few quick apps using WeakReference to show when objects are garbage collected, but I wasn’t getting the behavior I expected.  Unless an object was explicitly set to null, it was not being collected.  I was amazed, as this really isn’t how the GC is documented, and both of my Heroes, Software Legend Jeffrey Richter and Father of COM (is Love) Don Box, clearly state in their books that objects are eligible for GC after their last reference.

Well, I threw together the following code in the Main() method of a C# console app:

 

Console.WriteLine("Starting GC Example App...");

object gcTestObjectWithNull = new object();

object gcTestObject = new object();

WeakReference wr = new WeakReference(gcTestObject);

WeakReference wrWithNull = new WeakReference(gcTestObjectWithNull);

Console.WriteLine( "wr.IsAlive= " + wr.IsAlive );

Console.WriteLine( "wrWithNull.IsAlive= " + wrWithNull.IsAlive );

gcTestObjectWithNull = null;

GC.Collect();

Console.WriteLine("\nWe've just garbage collected, and \n");

Console.WriteLine( "wr.IsAlive= " + wr.IsAlive );

Console.WriteLine( "wrWithNull.IsAlive= " + wrWithNull.IsAlive );

 

My (unexpected) output was consistently:

 

Starting GC Example App...
wr.IsAlive= True
wrWithNull.IsAlive= True

We've just garbage collected, and

wr.IsAlive= True
wrWithNull.IsAlive= False

Well, this would certainly seem to contradict the GC documentation, wouldn’t it?  Note that the only object getting collected is the one I explicitly set to null, while the object I left alone is sticking around.

As I’m writing up my blog findings, ready to jump on the Paul Wilson “null is necessary” bandwagon, it suddenly strikes me: I’ve been doing all this testing in DEBUG mode.  A quick build of the same project in RELEASE results in:

 

 

Starting GC Example App...
wr.IsAlive= True
wrWithNull.IsAlive= True

We've just garbage collected, and

wr.IsAlive= False
wrWithNull.IsAlive= False

Ahh, now there’s the predictable behavior that I’ve been looking for!  What a difference a compiler switch makes!  I'll step back off Paul's null bandwagon for the moment ;-)
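
The difference comes down to how the JIT reports variable lifetimes. In an unoptimized DEBUG build, every local is reported as live until the end of its method (so the debugger can always inspect it), which means the GC still sees gcTestObject as rooted even though it is never used again. In an optimized RELEASE build, a local's lifetime ends at its last use, so the object is eligible for collection without any help from me. If you want a WeakReference demo that collects reliably under both configurations, one common approach is to keep the only strong reference inside a separate, non-inlined method, so nothing in Main's frame ever roots the object. Here is a minimal sketch of that idea (the GcDemo class and CreateWeakReference helper are my own names, not part of the example above):

using System;
using System.Runtime.CompilerServices;

class GcDemo
{
    // The only strong reference to the new object lives in this method's frame,
    // so once it returns nothing roots the object -- even in a DEBUG build, where
    // locals are only kept alive to the end of their own method.
    [MethodImpl(MethodImplOptions.NoInlining)]
    static WeakReference CreateWeakReference()
    {
        object target = new object();
        return new WeakReference(target);
    }

    static void Main()
    {
        WeakReference wr = CreateWeakReference();
        Console.WriteLine("Before collect: wr.IsAlive = " + wr.IsAlive);   // True

        GC.Collect();
        GC.WaitForPendingFinalizers();

        // Should print False in both DEBUG and RELEASE, since the target was
        // only ever rooted by the helper's stack frame.
        Console.WriteLine("After collect:  wr.IsAlive = " + wr.IsAlive);
    }
}

The same trick, plus GC.KeepAlive for the opposite case where you need an object to survive past its last use, is handy whenever you write tests that assert on collection behavior.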

 

--

 

[Editorial Note: I'm not arguing with Paul's case that null may be necessary in some circumstances, but I'd claim that it is the exception rather than the rule.]
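
The sort of circumstance where it can matter is when the reference is a field on a long-lived object (or a static), because the optimizer's liveness tracking only shortens the lives of locals; as long as the field stays assigned, the old object stays reachable. Here is a rough sketch with invented names (ReportScreen and its byte array stand in for something like a form whose grid holds a large DataSet):

using System;

// Invented names: "ReportScreen" stands in for any long-lived object (a form
// with a grid, say) that holds a reference to a large object between refreshes.
class ReportScreen
{
    private byte[] cachedData = new byte[50 * 1024 * 1024];   // the "large object"

    public void Refresh()
    {
        // Because this object is long-lived, the old buffer stays reachable
        // through the cachedData field for as long as it remains assigned.
        // Clearing the field first lets the GC reclaim the old data while the
        // replacement is being built, instead of holding both in memory at once.
        cachedData = null;
        cachedData = BuildNewData();
    }

    static byte[] BuildNewData()
    {
        // Stand-in for an expensive rebuild (re-querying a database, reloading a DataSet, ...).
        return new byte[50 * 1024 * 1024];
    }

    static void Main()
    {
        ReportScreen screen = new ReportScreen();
        screen.Refresh();   // the original buffer becomes collectible before the new one is allocated
    }
}

For an ordinary local in an optimized build, the same null assignment generally buys you nothing, which is why I'd still call it the exception rather than the rule.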

3 Comments

  • I totally agree that setting objects to null is the exception, so I'm actually not very fond of my explanation being called "null is necessary". I actually even tried to go out of my way to say setting objects to null is not "necessary". What did I say then? I said that setting objects to null can sometimes make a difference, in particular when there are large objects involved. By the way, all my tests were done in release mode, in case anyone was wondering.



    For all those reading this, Jerry and I are actually great friends, and we had this same discussion once before, but without any real tests of our own. I was reading Simon Robinson's "Advanced .NET Programming", and he kind of implied that setting objects to null could make a difference in some cases. Jerry of course countered that Richter stated the opposite -- and everyone knows that Richter's book is the .NET Bible. Well, that's my problem -- too many experts make statements that are generally true, but not always, and they take on a life of their own.



    I've had several people look at my example and say it's flawed. Why? They don't argue that my numbers are wrong or that they can't be reproduced -- no, they clearly are correct and reproducible. So how can it be flawed? They claim it's a poor design! But that doesn't in any way change the conclusion that setting my objects to null "can" in some cases make a huge difference. I agree that they are very correct that you should try to get a better design, but I coded my example the way I did for two reasons. First, I intentionally made the worst case possible to prove the point that it can make a difference. Second, it actually simulates a very real-world practice, although that may not be obvious since I have abstracted that out. I'm holding on to my dataset until I'm ready to replace it in my example, which is where the arguments that I shouldn't have done this come in. But in the real world you do have grids that hold on to your datasets for quite some time, and then they are sometimes immediately refreshed and never "released" from a user's perspective at all. Thus, this is not just a weird case that will never happen -- although I fully agree that designing a system with such large objects as to make this big of a difference is a poor design.



    My apologies to Jerry for usurping his post.

  • I hate to say it, but...



    Where we work, we tend to leave everything compiled in Debug mode. Why? We like to know that when the application crashes (not if, as many people dream about), we can at least get a line number of where the problem happened, not just the method name plus some offset. We really like to know where the error happened to try to fix things for the next time.



    So, compiling in Release mode isn't an option for us. And setting things to null is the next best thing.

  • About compiling in Debug mode: try setting the "Generate Debugging Information" option in your project properties to "true". This should give you the symbol (.pdb) files that provide those line numbers. It works for us with our release builds as well.
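
For what it's worth, the compiler switches behind those two configurations are csc's /optimize and /debug flags: a Debug build is essentially /optimize- with /debug:full, while a Release build can still be compiled with /optimize+ and /debug:pdbonly to get the .pdb files mentioned above (the exact project-property names vary by Visual Studio version, so treat the details here as assumptions about a typical setup). A quick sketch of what the .pdb buys you when the app does crash:

using System;

class CrashDemo
{
    static void Main()
    {
        try
        {
            Explode();
        }
        catch (Exception ex)
        {
            // With a matching .pdb next to the exe, this stack trace includes file
            // names and line numbers; without one you get just method names (and,
            // in optimized builds, the reported line can be approximate because of
            // inlining and other optimizations).
            Console.WriteLine(ex.ToString());
        }
    }

    static void Explode()
    {
        throw new InvalidOperationException("Simulated crash for the stack-trace demo.");
    }
}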
