Miscellaneous Debris

Avner Kashtan's Frustrations and Exultations
Attaching on Startup
Here’s a neat trick I stumbled onto that can make life a lot easier – on our development workstations, certainly, and even in a pinch on staging and testing machines: how many times have you wished you could attach to a process immediately when it launches? If we’re running a simple EXE, we can launch it from the debugger directly, but if we’re trying to debug a Windows Service, a scheduled task or a sub-process that is launched automatically, it isn’t that simple. We can try to press CTRL-ALT-P as quickly as possible, but that will almost always miss the very beginning. We can add a call to System.Diagnostics.Debugger.Launch() to our Application_Startup() function (or equivalent), but that pollutes production code with debugging statements, and it isn’t really something we can send over to a staging or QA machine.

I was all set to write a tool to monitor new processes being launched and latch onto them when I discovered that there’s no need – Windows provides this facility automatically! All we need to do is create a new registry key under HKEY_LOCAL_MACHINE\Software\Microsoft\Windows NT\CurrentVersion\Image File Execution Options and name it after the EXE we want to attach to. Under this key, we create two values: a DWORD named PauseOnStartup with a value of 1, and a string named Debugger, with a value of vsjitdebugger.exe. Here’s a sample .REG file:
Code Snippet
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\MyApplication.exe]
"PauseOnStartup"=dword:00000001
"Debugger"="vsjitdebugger.exe"
  Now, when MyApplication.exe launches, it will automatically launch the familiar “Choose the debugger to use” dialog, and we can point it to our solution of choice.
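One caveat: as long as this key exists, every launch of MyApplication.exe will pop up the debugger prompt, so remember to remove it when you’re done. A .REG file can handle the cleanup too – a minus sign before the key path deletes the key (this sketch assumes the same MyApplication.exe name as above):

```
Windows Registry Editor Version 5.00

[-HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\MyApplication.exe]
```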
Not-So-Lazy Static Constructors in C# 4.0

A coworker pointed me towards an interesting blog post by Jon Skeet about the changed behavior of static constructors in C# 4.0 (yes, I know, it’s been a few years now, but I never ran into it).

It seems that C# 4.0 now tries to be lazier about initializing static fields. If I have a side-effect-free static method that doesn’t touch any static members, calling it will not bother initializing the static fields. From Skeet’s post:

class Lazy
{
    private static int x = Log();

    private static int Log()
    {
        Console.WriteLine("Type initialized");
        return 0;
    }

    public static void StaticMethod()
    {
        Console.WriteLine("In static method");
    }
}

Calling StaticMethod() will print “In static method”, but will not print “Type initialized”!

This was a bit worrying. Jon Skeet said it shouldn’t impact existing code, but we were not so sure. Imagine a class whose static constructor does some unmanaged initialization work (creates a performance counter, for instance) while the static method writes to the counter. This could potentially cause a hard-to-find exception, since our expectation was that static constructors can be relied upon to always be called first.

So we ran a quick test (and by “we” I mostly mean Amir), and it seems that this behavior isn’t as problematic as we thought. Look at this code, adding a static constructor to the class above:

class Lazy
{
    private static int x = Log();

    private static int Log()
    {
        Console.WriteLine("Type initialized");
        return 0;
    }

    static Lazy()
    {
        Console.WriteLine("In static constructor");
    }

    public static void StaticMethod()
    {
        Console.WriteLine("In static method");
    }
}

The only difference is that we have an explicit static constructor, rather than the implicit one that initializes the field. Unlike the first test case, in this case calling StaticMethod() did call the static constructor, and we could see “In static constructor” printed before “In static method”. The compiler is smart enough to see that we have an explicit constructor defined, so that means we want it to be called, and it will be called. This was reassuring.

But wait, there’s more! It seems that once type initialization was triggered by the presence of the explicit static constructor, it went all the way. Even the x field was initialized – the Log() method was called, and “Type initialized” was printed to the console, even before the static constructor. This was the behavior I was used to, where static field initializations are added to the beginning of the .cctor.

To summarize, the new lazy type initialization behavior for C# 4.0 is interesting, since it allows static classes that contain only side-effect free methods (for instance, classes containing popular Extension Methods) to avoid expensive and unnecessary initializations. But it was designed smartly enough to recognize when initialization is explicitly desired, and be a bit less lazy in that case.
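The two cases can be seen side by side in a small console program (a sketch – the class and message names are mine, but the structure mirrors the tests above). On .NET 4.0, only the type with the explicit static constructor is guaranteed to run its field initializers before the method executes:

```csharp
using System;

// No explicit static constructor: the compiler marks the type
// beforefieldinit, so the runtime may defer field initialization
// past calls that don't touch static state.
class NoCctor
{
    public static int X = Log("NoCctor fields initialized");

    private static int Log(string message)
    {
        Console.WriteLine(message);
        return 0;
    }

    public static void StaticMethod()
    {
        Console.WriteLine("In NoCctor.StaticMethod");
    }
}

// An explicit static constructor removes beforefieldinit: the field
// initializers and then the .cctor run before any member is used.
class WithCctor
{
    public static int X = Log("WithCctor fields initialized");

    static WithCctor()
    {
        Console.WriteLine("In WithCctor's static constructor");
    }

    private static int Log(string message)
    {
        Console.WriteLine(message);
        return 0;
    }

    public static void StaticMethod()
    {
        Console.WriteLine("In WithCctor.StaticMethod");
    }
}

class Program
{
    static void Main()
    {
        NoCctor.StaticMethod();   // may print only the method's own line
        WithCctor.StaticMethod(); // initialization and .cctor print first
    }
}
```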

(And thanks again to Igal Tabachnik and Amir Zuker)

Do We Still Need VB.NET for Office Programming?

It’s been a few years since I last worked with Visual Basic .NET, and I was sure that by now, in 2013, the “which language is better” arguments were over. Both are languages with similar capabilities, and the choice between them is mostly a matter of stylistic preference. But every now and then I still see a question on forums or on Stack Overflow about the advantages of one language over the other, and specifically about VB’s COM Interop capabilities. Is it really the more convenient language?

The Situation Before .NET 4.0

Before .NET 4.0 (which was released, let’s remember, almost 3 years ago), the answer was “yes”, though a somewhat grudging “yes”. VB came from the world of late binding, and it had no problem passing a component only some of the parameters it needed, taking care of the rest with default values behind the scenes. This advantage was especially noticeable when working with the Office objects, which contain methods that take five, ten, sometimes sixteen optional parameters, when in most cases we only want to pass one or two of them, if any. The difference between C# and VB in such a case is the difference between this code in C#:

Code Snippet
myWordDocument.SaveAs(ref fileName, ref missingValue,
                 ref missingValue, ref missingValue,
                 ref missingValue, ref missingValue,
                 ref missingValue, ref missingValue,
                 ref missingValue, ref missingValue,
                 ref missingValue, ref missingValue,
                 ref missingValue, ref missingValue,
                 ref missingValue, ref missingValue);

and this in VB:

Code Snippet
MyWordDocument.SaveAs(FileName: fileName)

which is, you have to admit, not a small difference.

On the other hand, not every COM interface works this way. To be honest, in all my years I think I have only run into this style in the Office libraries, and not in any other component, whether from Microsoft (like the DirectShow components I’ve worked with recently) or from other companies. In most cases, COM Interop in C# is just as simple as in VB.

.NET 4.0 – C# Gets Some Love Too

This small advantage of VB also became less relevant starting with C# 4.0 and Visual Studio 2010, for two reasons:

The first is the magic word that was added to the language: dynamic. When we declare an object as dynamic, we give up much of the static type checking the compiler does for us, and let the late-binding mechanism (grafted onto C# via the DLR, the Dynamic Language Runtime built to support dynamic languages like Python and Ruby) do the work for us at runtime. Since there’s late binding, we can ignore the redundant parameters and let the DLR figure out on its own what goes where. You can find a general explanation of using dynamic in this article, and a more specific explanation of using it for COM Interop in this article.

But if we were considering moving to VB just to work with the Office components, C# 4.0 lets us make life easier without diving into the murky world of dynamic languages. We still can’t ignore ref parameters in methods on classes, but we can ignore those parameters in calls to methods on interfaces. I can’t say I fully understand why this is so (I haven’t dug into it yet), but it means that if what we have in hand is not an Office DocumentClass object but a Document interface, we can call its methods just like in VB.

Conveniently, the Office Interop libraries expose these interfaces to us, through a property named InnerObject that exists on most of the interesting objects. (Yes, it’s a property named InnerObject that exposes the interface. I know. It’s confusing. It is what it is.) So the code we wrote above can be replaced, in C# 4.0, with this:

Code Snippet
myWordDocument.InnerObject.SaveAs(FileName: fileName);

And that’s it! Not every Office object gets this InnerObject, but the big central ones (which are also the ones with the monstrous methods) do. And if there’s no other choice, we can always work with dynamic – or with Missing.Value.
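For completeness, the dynamic route for the same call would look something like this (a sketch – myWordDocument and fileName are the same hypothetical variables used in the snippets above):

```csharp
// Assigning the COM object to a dynamic variable routes the call
// through the DLR's late binder at runtime, so the optional
// parameters we omit are filled in behind the scenes, VB-style.
dynamic doc = myWordDocument;
doc.SaveAs(FileName: fileName);
```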

So When Is VB the Right Choice After All?

None of the above is meant to convince anyone to choose C#, or to avoid VB. Each of the two languages can do more or less everything the other can, and the choice between them should come down to a personal preference in coding style, not to critical capabilities one way or the other. That’s what I tried to convey here – just as VB has caught up with C# over the years on several features it was missing, so C# has closed the gap when it comes to working with Office. The choice is, again, purely stylistic.

Remote Debugging through fire, snow or fog

Remote Debugging in Visual Studio is a wonderful feature, especially during the later stages of testing and deployment, and even (if all else fails) in production, but getting it to work is rarely smooth. Everything is fine if both the computer running VS and the one running the application are in the same domain, but when they aren’t, things start to break, fast.

So for those stuck debugging remote applications in different domains, here’s a quick guide to ease the pain.

  1. On the remote machine, install the Visual Studio Remote Debugging package, which comes with the version of Visual Studio you’re using.
  2. On the remote machine, create a new local user account. Give it the exact same name and password that you use on your development machine.
    If you’re developing while logged into MYCORPDOMAIN\MyUserName, with password MYPASS123, you will have to create a local user REMOTEMACHINE\MyUserName with the same password. It doesn’t matter that the domains are different and there’s no trust relationship and so forth. Just have them be the same username and password.
  3. Give REMOTEMACHINE\MyUserName permissions. Administrator permissions are the safest; you should be able to get it to work with a more restricted group, but I haven’t checked that yet.
  4. Run the application you want to debug using the MyUserName credentials. In Windows 7, this means Shift-right-clicking it and choosing Run As Different User. For Vista, there’s a shell extension to enable it.
  5. Run the Remote Debugging program, again under the MyUserName credentials, same as above. The Remote Debugger will start with a new session named MyUserName@REMOTEMACHINE.
  6. Copy the remote debugging session name from the remote machine (you can copy it via the Tools –> Options menu).
  7. On your development machine, open Visual Studio and go to Debug –> Attach to Process.
  8. In the Attach to Process screen, paste the remote debugging session name into the Qualifier textbox (the second one from the top).
  9. Voila! You are now debugging remotely!
ArcGIS – Getting the Legend Labels out

Working with ESRI’s ArcGIS package, especially the WPF API, can be confusing. There’s the REST API, the SOAP APIs, and the WPF classes themselves, which expose some web service calls and information, but not everything. With all that, it can be hard to find specific features between the different options. Some functionality is handed to you on a silver platter, while some is maddeningly hard to implement.

Today, for instance, I was working on adding a Legend control to my map-based WPF application, to explain the different symbols that can appear on the map.

This is how the legend looks on ESRI’s own map-editing tools:

 

but this is how it looks when I used the Legend control, supplied out of the box by ESRI:

 

Very pretty, but unfortunately missing the option to display the name of the fields that make up the symbology.

Luckily, the WPF controls have a lot of templating/extensibility points, to allow you to specify the layout of each field:

<esri:Legend>
    <esri:Legend.MapLayerTemplate>
        <DataTemplate>
            <TextBlock Text="{Binding Layer.ID}"/>
        </DataTemplate>
    </esri:Legend.MapLayerTemplate>
</esri:Legend>

but that only replicates the same built in behavior. I could now add any additional fields I liked, but unfortunately, I couldn’t find them as part of the Layer, GraphicsLayer or FeatureLayer definitions. This is the part where ESRI’s lack of organization is noticeable, since I can see this data easily when accessing the ArcGis Server’s web-interface, but I had no idea how to find it as part of the built-in class. Is it a part of Layer? Of LayerInfo? Of the LayerDefinition class that exists only in the SOAP service?

As it turns out, it’s none of these. Since these fields are used by the symbol renderer to determine which symbol to draw, they’re actually a part of the layer’s Renderer. Since I already had a MyFeatureLayer class derived from FeatureLayer that added extra functionality, I could just add this property to it:

public string LegendFields
{
    get
    {
        if (this.Renderer is UniqueValueRenderer)
        {
            return (this.Renderer as UniqueValueRenderer).Field;
        }
        else if (this.Renderer is UniqueValueMultipleFieldsRenderer)
        {
            var renderer = this.Renderer as UniqueValueMultipleFieldsRenderer;
            return string.Join(renderer.FieldDelimiter, renderer.Fields);
        }
        else return null;
    }
}

For my scenario, all of my layers used symbology derived from a single field or, as in the examples above, from several of them. The renderer even kindly supplied me with the comma to separate the fields with. Now it was a simple matter to get the Legend control in line – assuming that it was bound to a collection of MyFeatureLayer:

<esri:Legend>
    <esri:Legend.MapLayerTemplate>
        <DataTemplate>
            <StackPanel>
                <TextBlock Text="{Binding Layer.ID}"/>
                <TextBlock Text="{Binding Layer.LegendFields}" Margin="10,0,0,0" FontStyle="Italic"/>
            </StackPanel>
        </DataTemplate>
    </esri:Legend.MapLayerTemplate>
</esri:Legend>

and get the look I wanted – the list of fields below the layer name, indented.

Posted: Mar 27 2012, 11:18 AM by Avner Kashtan
The Case of the Unexpected Expected Exception

“NUnit is being problematic again”, they told me when I came to visit the project. “When running unattended it’s not catching assertions properly and the test is coming up green, but when stepping through in the debugger, it works fine.” It’s nice when a passing test is acknowledged as a bad thing, at least when you don’t expect it to pass. In this case, though, the fault wasn’t really with NUnit.

[Test]
[ExpectedException]
public void DoTheTest()
{
    _myComponent.RunMethod();
    Assert.IsFalse(_myComponent.EverythingIsFine);
}

“It’s simple,” they explained. “Either the method throws an exception, or at the very least the EverythingIsFine property won’t be set to True, so the assert will catch the problem.” But in their case, no exception was thrown and Everything wasn’t Fine, yet the Assert call wasn’t raising a red flag – unless they stepped through, in which case it did. What was going on?

The basic problem is that to many developers, NUnit is a kind of magic. You write a self-contained little bit of code, the [Test] method, but you don’t call it yourself, you don’t get a feel for the whole execution flow. The result – developers don’t exercise the same sort of judgement they do on their own application code.

The root of the problem here is that the [ExpectedException] attribute told NUnit to pass the test if any exception is thrown. NUnit’s assertion utilities, however, use exceptions as their mechanism for failing tests – when an assertion fails, it throws an exception. It can be an AssertionException; for various mock frameworks, it can be an ExpectationException. It doesn’t matter – it’s these exceptions that make the test fail, not some behind-the-scenes magic. Because the test had an open-ended [ExpectedException] attribute, the assertion’s exception was caught, fulfilled the expectation, and NUnit was happy.

What can we do to avoid this?

  • Be explicit. Don’t try to catch ALL exceptions with [ExpectedException]. If you’re expecting an exception, you’re probably expecting a specific exception. Specify it.
  • Be aware of how your tools work. If NUnit works by throwing an exception, don’t wrap it with a try/catch. Your tests are C# code too, as is the plumbing to enable it. It plays by the same rules.
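For example, if RunMethod is supposed to fail by throwing – say, an InvalidOperationException (a hypothetical choice for illustration) – the test should state exactly that:

```csharp
[Test]
[ExpectedException(typeof(InvalidOperationException))]
public void DoTheTest()
{
    _myComponent.RunMethod();

    // If RunMethod did NOT throw, this assertion's AssertionException
    // no longer satisfies the attribute, and the test fails as it should.
    Assert.IsFalse(_myComponent.EverythingIsFine);
}
```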
Moq, Callbacks and Out parameters: a particularly tricky edge case

In my current project we’re using Moq as our mocking framework for unit testing. It’s a very nice package – very simple, very intuitive for someone who has his head wrapped around basic mocking concepts – but tonight I ran into an annoying limitation that, while rare, impeded my tests considerably.

Consider the following code:

public interface IOut
{
    void DoOut(out string outval);
}

[Test]
public void TestMethod()
{
    var mock = new Mock<IOut>();
    string outVal = "";
    mock.Setup(m => m.DoOut(out outVal))
        .Callback<string>(theValue => DoSomethingWithTheValue(theValue));
}

What this code SHOULD do is create a mock object for the IOut interface that, when called with an out parameter, both assigns a value to it, and does something with the value in the Callback.

Unfortunately, this does not work.

The reason it doesn’t work is that the Callback method has many overloads, but they are all variants of the Action<> delegate – Action<T1>, which receives one parameter, Action<T1, T2, T3>, which receives three, and so on. None of these delegates has a signature with an out parameter. The result, at runtime, would be an error to the effect of

“Invalid callback parameters <string> on object ISetup<string&>”

Note the types – the Setup method referred to a string& (a ref/out parameter), while the Callback inferred an Action<string> delegate, which expects a regular string parameter.

So what CAN we do?

The first option is to submit a patch to the Moq project. It’s open-source, and a solution might be appreciated, especially since the project’s lead, Daniel Cazzulino, pretty much acknowledged that this is not supported right now. I might do that later, but right now it’s the middle of the night and I just want my tests to pass. So I managed to hack together this workaround, which does the trick:

public static class MoqExtension
{
    public delegate void OutAction<TOut>(out TOut outVal);

    public static IReturnsThrows<TMock, TReturn> OutCallback<TOut, TMock, TReturn>(
        this ICallback<TMock, TReturn> mock,
        OutAction<TOut> action)
        where TMock : class
    {
        mock.GetType()
            .Assembly.GetType("Moq.MethodCall")
            .InvokeMember("SetCallbackWithArguments",
                          BindingFlags.InvokeMethod
                              | BindingFlags.NonPublic
                              | BindingFlags.Instance,
                          null, mock, new object[] { action });
        return mock as IReturnsThrows<TMock, TReturn>;
    }
}

 

We’ve created a new delegate, OutAction<TOut>, that takes an out TOut as its parameter, and we’ve added a new extension method to Moq, OutCallback, which we will use instead of Callback. What this extension method does is use brute-force reflection to hack into Moq’s internals and call the SetCallbackWithArguments method, which registers the callback with Moq. Yes, it even works. I’m as surprised as you are.
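With the extension in place, the setup from the beginning of the post can be written along these lines (a sketch – as the caveats below note, the extension’s overloads have to be tailored to the exact signature being mocked):

```csharp
var mock = new Mock<IOut>();
string outVal = "";
mock.Setup(m => m.DoOut(out outVal))
    .OutCallback((out string theValue) =>
    {
        // The OutAction<string> delegate matches DoOut's signature,
        // so we can both observe the parameter and assign to it.
        theValue = "value set by the mock";
        DoSomethingWithTheValue(theValue);
    });
```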

Note and Caveats:

1) This code works for the very specific void Function (out T) signature. You will have to tailor it to the specific methods you want to mock. To make it generic, we’ll have to create a bucket-load of OutAction overloads, with different combinations of out and regular parameters.

2) This is very hacky and very fragile. If Moq’s inner implementation changes in a future version (the one I’m using is 4.0.10827.0), it will break.

3) I don’t like using out params, usually, but this is a 3rd party library, so I have to work with what I got.

Comments? Questions? Scathing criticisms on how ugly my code is? A different, simpler method of accomplishing this that I totally missed? Bring’em on.

WPF: Snooping Attached Properties
When developing WPF applications, Snoop is a wonderful tool that can let us see our visual tree at runtime, and find errors in data binding that are otherwise hard to track down in debugging. However, Snoop has an annoying limitation – it can’t show us data-bound Attached Properties. Let’s say we have the following XAML:
<Canvas>
   <Button Canvas.Left="{Binding Location.X}"
           Canvas.Top="{Binding Location.Y}"/>
</Canvas>

Snooping this application won’t show Canvas.Left and Canvas.Top among the Button’s properties, since they’re not a part of the Button object. If we have a bug in our binding, we won’t be able to find it.

So what do we do? We can set various TraceLevel attributes to save the binding errors to a log, but a quick workaround during debugging is to bind our data, in addition to the attached properties, to the element’s Tag property, a generic object that can bind to anything, and allow us to Snoop the binding – and find out any problems:

<Canvas>
   <Button Canvas.Left="{Binding Location.X}"
           Canvas.Top="{Binding Location.Y}"
           Tag="{Binding Location}" />
</Canvas>

This allowed me to find out, quickly, that my bug was simple – I had forgotten to make my Location property public, causing the binding to fail silently.

Saving Your Settings

Tomorrow I am bidding my laptop farewell, and will soon buy me a new one. This results in what I like to call Settings Anxiety. I spend months tuning various aspects of my computer to my liking, and starting afresh with a new machine is wearisome. There are solutions to this, of course. I can make a full settings backup using Windows Easy Transfer or some such tool to store my settings and registry information and reload them on my new computer. I am, however, leery of it. First of all, I don’t trust WET to backup everything, especially 3rd party settings and application data. Secondly, I don’t want to copy all the cruft that’s accumulated in my user settings. One advantage to leaving the old computer is to start fresh.

So my compromise is to do a set of manual exports for various pieces of software, to carry it with me to the new computer, whenever it arrives. I’m going to list the various programs I use here that I bothered to customize, and detail the steps needed to do a manual export. This way I can come back to it the next time I reformat.

1. Outlook 2007

I use Outlook to connect to an Exchange server and to Gmail via IMAP. Both of these use OST files for local offline caching, but the primary storage is on the server. No real need for me to back this up.

Unfortunately, there’s no way to export my Outlook profile’s entire Accounts list to a file, and reimport it later. It means I always have to go to Gmail’s help page to remember the port numbers they use for encrypted IMAP and SMTP, but it’s not too much of a hassle.

2. Visual Studio 2008

Two important customizations here – color scheme (I like grey background) and keyboard shortcuts (single-click build-current-project ftw!). Both are easily exported using Tools –> Import and Export Settings –> Export selected environment settings.

3. Google Chrome

Bookmarks, cookies and stored passwords – things I really don’t feel like retyping again and again after I got my Chrome to memorize it for me. Basically, with all major browsers importing favorites and settings from one another, you only really need to backup one browser’s settings.

Unfortunately, Google haven’t quite gotten around to implementing Export Settings in Chrome – like a lot of other things. To backup my settings, I am copying the C:\Users\username\AppData\Local\Google\Chrome\User Data folder. There’s a lot of useless junk there – the cache, Google Gears data – but it should contain all the interesting bits too.

4. RSSBandit

RSSBandit’s “Export Feeds” option will probably export the feed definitions, but not the actual text of the actual feeds that I already have, offline, in my reader. It does support exporting the list of items in a feed, including read/unread state, though. But a better solution, assuming I am unbothered by space or time, is to use RSS Bandit’s “Remote Storage” option to save the entire feed database to a ZIP file that can be “downloaded” into a new installation. Under Tools –> Options –> Remote Storage set the destination (I use a file share), then do Tools –> Upload Feeds.

When recovering, I’ve found that RSS Bandit doesn’t always succeed in reading the ZIP file – I got a “file was closed” error. What I did was simply open the ZIP file and copy all the XML files into the %AppData%\RssBandit folder, et voila.

5. Windows Live Writer

Another useful tool that wasn’t given an Export Settings option. There are two things to backup here:

1. My blog settings – I have 5 different blogs I manage through WLW. Here I had to go back to the Registry. Funny how it seems so old-fashioned. Export all settings from HKEY_CURRENT_USER\Software\Microsoft\Windows Live\Writer.
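The export itself is a one-liner from a command prompt, and the resulting file can be reimported on the new machine with reg import (the output file name here is my own choice):

```
reg export "HKCU\Software\Microsoft\Windows Live\Writer" WriterSettings.reg
reg import WriterSettings.reg
```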

2. Blog drafts and recently-written posts are stored in My Documents\My Weblog Posts. Would be a good idea to back those up as well.

3. When restoring settings, it initially appears that while the blog settings are properly recovered, the drafts aren’t. All you need to do, though, is double-click on one of the .wpost files in the My Weblog Posts folder, and the Drafts and Recently Posted lists will get populated.

6. Digsby

Digsby, the multi-protocol IM and Social Networking tool, uses a centralized login to store the settings of my different protocols – so I only need to log in with my Digsby ID to have all my IM networks working. There are some tweaks to the client itself which I would like to not lose, but they don’t provide an Export Settings feature either, and it’s nothing I can’t live without.

 

For most other things, I’d rather not bother until I actually need it.
