Jeroen's weblog

New year, new blog

Happy New Year everyone! 2006 is already upon us and to keep myself busy in this new year, I've decided to move my weblog to a personal domain at jvdb.org. The first post over there is already up, dealing with Software Transactional Memory and titled: Why developers think spiffy dual-core processors suck.

Posted: Jan 02 2006, 02:16 PM by jvdbos | with 1 comment(s)
Yahoo customer care frustrations

Why can't Yahoo customer care simply do what their name suggests? A couple of days ago, I sent an e-mail to Yahoo customer care asking whether it's possible for me, as a resident of The Netherlands, to subscribe to a small business web hosting package. All I really wanted to know was whether it's a U.S.-only service or not, because I had read in some affiliate FAQ on their site that the web hosting packages are U.S.-only, while their general terms of service don't mention this at all. I was expecting to get a short reply stating that either the FAQ was incorrect or that they have some reason to only offer these plans to people in the U.S. The answer is actually pretty simple, but I just wanted to avoid confusion in the future.

So I sent an e-mail and, just to prevent them from asking me for additional information before answering the question, I specifically used the designated question form on their website. I entered what they wanted: my Yahoo ID and my name, and then the question itself. A day later, the reply I got stated (along with some politeness):

In order to review your account, please reply with the following information:

* Yahoo ID
* URL
* Date of Birth
* Any one of the following: [Zip Code, Alt Email Address, Street Address, or Phone Number]

Once we have this information, we'll be happy to investigate the matter that you have described.

Why do they need my date of birth in order to figure out if they sell web hosting packages to Europeans? I can understand that they might be afraid of telling an eight-year-old kid to buy their stuff, but isn't that covered by their general terms of service? Also, it can't be to check my Yahoo ID, because I never provided my date of birth when signing up for it. The other questions are just as weird. They already have my Yahoo ID; in fact, the e-mail was sent to my Yahoo account.

After I asked about this in a reply, they ended up sending me a list of their current hosting packages, which was in fact outdated compared to the offering on their website. If Yahoo believes in social platforms so much (at least, that's how I interpret buying companies like Flickr and Del.icio.us), maybe they should start right at the actual point of contact with (potential) customers and have customer care drop the auto-generated replies.

Posted: Dec 20 2005, 12:36 PM by jvdbos | with 124 comment(s)
Anyone up for annual software tax?

I've been fairly critical of the Virtual Server 2005 R2 plans, mostly because I felt it was unreasonable for current customers to have to invest in a new license just to be able to use their $999 Virtual Server 2005 Enterprise license effectively with Microsoft's own then-current server platform. There have been some positive developments however, such as a substantial lowering of the price of Virtual Server (both editions have dropped 80%; the Standard edition went from $499 to $99, for instance) and the availability of updated additions to use with Server 2003 SP1 on the original edition of VS2005. Furthermore, participants in the Virtual Server 2005 R2 program have received a free Enterprise license of the final R2 release.

So all in all, it's not as bad as it looked, but I still wonder if this new pricing strategy is the way to go. I mean, perhaps the original plan was to sell VS for $499 (Standard) and then just update it for a couple of years with free service packs. Somewhere along the way, SP1 turned into R2 and the strategy was modified from releasing lots of free service packs, to releasing lots of relatively cheap new versions of the product. In the end it doesn't make such a big difference, because if you buy three $200 products or a single $600 product in the course of a couple of years, you end up spending about the same.

The good thing here is that in the new scheme you don't have to commit to the entire lifespan of the product: if you don't like R2, you can skip the 2006 release and cut your losses. I just wonder if, from Microsoft's perspective, it's a smart idea to periodically remind customers that their stuff needs to be paid for, instead of just collecting the whole lot up front. Especially in this era of free software. Let's enjoy it while it lasts.

Minimalist or Humane interfaces? Both, of course

For some reason everybody has way more time on their hands to read blogs than I do, because whenever I feel like responding to a post, it turns out it was written at least half a week ago and already has dozens of responses. The same thing happened here, with a recent post by Martin Fowler discussing humane interfaces. There are lots of responses, most of them listed at the bottom of his original posting. Reading through them, I noticed that what I consider the most important argument in the discussion is somehow present but not really discussed, so I decided to bring it up myself.

This whole discussion centers on whether an object should expose only the bare minimum of functionality (called the minimalist approach) or also additional functionality that may or may not be difficult to implement, but that is typically required by programmers and can therefore be a great help to have included (called the humane approach). The example is Ruby's Array class, which has 78 methods defined, including methods such as first and flatten that can be implemented reasonably quickly in client code, but are still useful to have anyway.

A major argument for the humane approach is that if a class only defines the absolute basic operations, users will end up duplicating a lot of code for common tasks. The major argument for the minimalist approach is that classes that define more than they have to will require more maintenance than they should.

Personally, I like it when objects can help me out with more than just returning a handle that I can use to query someone else for the next handle, as part of over ten different steps needed to complete a single functional step (anyone remember setting up something in DirectX? Is it still that bad?). That doesn't mean maintenance is not an issue: I don't think it's reasonable to require that implementing an optimized version of a class means rewriting 78 methods.

Instead, I think it's good to separate the two: always create a class that exposes a minimalist API and then encapsulate it to add the humane interface on top of that. This way you can cleanly separate the derived (humane) behaviour from the implementation-specific (minimalist) behaviour, and implementing a new minimalist version of a class gives all clients access to the humane interface with the new back-end. Is this really such a fundamental issue? I feel like everybody's always been doing it like that.
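A minimal sketch of what that separation could look like in C# (the interface and class names here are hypothetical, purely for illustration):

using System.Collections.Generic;

// Hypothetical example: a minimalist sequence interface plus a humane wrapper on top.
public interface IMinimalSequence<T>
{
    int Count { get; }
    T Get(int index);
    void Add(T item);
}

// The humane layer is written once against the minimalist API,
// so every new (say, optimized) implementation gets it for free.
public class HumaneSequence<T>
{
    private readonly IMinimalSequence<T> inner;

    public HumaneSequence(IMinimalSequence<T> inner)
    {
        this.inner = inner;
    }

    public T First() { return inner.Get(0); }
    public T Last() { return inner.Get(inner.Count - 1); }
    public bool IsEmpty() { return inner.Count == 0; }

    public void AddAll(IEnumerable<T> items)
    {
        foreach (T item in items) { inner.Add(item); }
    }
}

Swap in a different IMinimalSequence implementation and First, Last and friends keep working unchanged.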

Saving time with refactoring

When I wrote about refactoring being free, I only mentioned the typical situation where you have a (large) body of code and you want to modify it. That happens a lot, but refactoring is especially useful when you're enhancing small pieces of code that you wrote recently, say earlier the same day or sometime last week. The point is that the time you gain doesn't come from repeatedly modifying your own code (which will always cost some time), but from the much better odds that you're investing your precious development time in the right features.

I remember that a couple of years ago, I was asked to write a small parsing application to visualize a messaging protocol for debugging purposes. The protocol defined several messages, but because the specs were not entirely clear, I decided to just jump in and parse what I could. The end users did remark that the protocol consisted of incoming and outgoing messages and that it was probably a good idea to display those in separate columns next to each other.

I just started parsing the messages and defined a simple Message class. Every time I found a chunk of data in the stream that made up a complete message, I created an instance of this class, which contained mostly some basic get-operations. Before attempting to parse anything else, I decided to create a basic GUI application to go along with it, so I could deliver a first spike to my users and get some additional feedback on where the GUI should be headed in terms of design, since I'm fairly crappy at coming up with good user interfaces on my own.
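Just to give an idea, the class was roughly something like this (a simplified, hypothetical reconstruction; the real fields and parsing rules aren't in this post):

using System;

// Simplified, hypothetical reconstruction of the Message class mentioned above.
public class Message
{
    private readonly byte[] payload;
    private readonly DateTime received;

    public Message(byte[] payload, DateTime received)
    {
        this.payload = payload;
        this.received = received;
    }

    // Basic get-operations, just enough for a first GUI spike.
    public DateTime Received { get { return received; } }
    public int Length { get { return payload.Length; } }

    public string ToHexString()
    {
        // Hyphen-separated hex dump of the raw message bytes, for display.
        return BitConverter.ToString(payload);
    }
}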

I released the first version of this tool the same day I started working on it. The users were very happy with it, as they had been looking at the data streams with a hex editor before that, and figuring out where messages started and ended was their biggest problem - and it was solved already! Furthermore, they told me I should forget about the multiple-column display they had wanted earlier that day, because they now noticed that the communication was not always consistent in the order in which messages were sent and received. In other words, the single-column view revealed some interesting facts that a multi-column view might have obscured, unless it had been much more sophisticated than anything I could have delivered in a short period of programming anyway.

And that's the most important point about refactoring: sure, modifying your code always takes more time than writing it once and never changing it. But that's only possible if your specs are 100% clear up front and never change. And how often does that happen? It has never happened to me.

Refactoring is free

I'm a little late to the party, but I wanted to give my perspective on some of the discussion surrounding the cost of refactoring I've been reading about in the past week, most notably some posts by Paul Gielens, starting with Refactoring is Not Free, where he writes:

A couple of months ago we where up for the challenge to refactor a large portion of a badly performing application. The application had no supporting unit tests, cruddy code and Swiss cheese specifications (its developers only had a sense of what the application had to do). It is in this example where refactoring became costly. At start we tried to cover as much existing code possible [...]. Although the confident level increased we weren’t able to preserve the behavior of the code while refactoring small pieces at a time. Eventually we had clear signs that the progress we made just wasn’t enough. We then decided to rewrite it from scratch and than refactor it.

What I don't understand in the described situation is what the goal of the refactoring was. Refactoring itself doesn't increase performance; in fact, refactorings often reduce speed in order to make the design clearer. What I do understand from the above is that there was some kind of decision to just "refactor the whole thing" and then see what could be done about performance, which is a strange approach in my opinion.

When a system performs poorly, there's a basic approach to figuring things out: first find out what the bottlenecks are, find solutions for them, test those independently to verify them in the situation at hand and, finally, modify the code to incorporate the changes. Refactoring comes in at that final step, because in order to modify the existing code, you'll probably need to do some refactoring. Not all of the existing code, just the pieces you want to change to incorporate the higher-performing approach. Even then, writing from scratch can often be quicker, so the conclusion may end up the same. But either way, the refactoring is never "costly"; it's always the sanest way to change code. That's why I'm stating: refactoring is free.

Suppose you have a class that works with a type code that modifies its behaviour. Every method may consist of a switch/case-construct that decides which behaviour to expose. Modifying this into a class hierarchy where polymorphism takes care of the type-specific behaviour does take time, but it's always better than just adding another case-label to all the existing switch/case-constructs. It only takes a couple of 2:00 AM debugging sessions to figure that out.
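A tiny sketch of that kind of refactoring (the Shipment example below is hypothetical, just to show the before and after):

using System;

// Before: a type code plus a switch/case in every method.
public class Shipment
{
    public int TypeCode; // 0 = standard, 1 = express

    public decimal Cost(decimal weight)
    {
        switch (TypeCode)
        {
            case 0: return weight * 1.0m;
            case 1: return weight * 2.5m;
            default: throw new ArgumentException("Unknown type code");
        }
    }
}

// After: polymorphism takes care of the type-specific behaviour;
// a new kind of shipment is a new class instead of another case-label everywhere.
public abstract class ShipmentBase
{
    public abstract decimal Cost(decimal weight);
}

public class StandardShipment : ShipmentBase
{
    public override decimal Cost(decimal weight) { return weight * 1.0m; }
}

public class ExpressShipment : ShipmentBase
{
    public override decimal Cost(decimal weight) { return weight * 2.5m; }
}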

Switching to e-books, sort of

I've never been a fan of e-books. I don't have trouble reading long articles on the web, though, or even entire chapters of books (usually example chapters provided by publishers) in Adobe Reader. It's not so much the reading on a monitor that I don't like, but more the idea that you buy a book, you read it, and then you don't have a used slot on your bookshelf to show for it. But that's not all: I sometimes like to take a really good book with me when I have some travelling to do, and I usually pick technical books for that. It's probably doable to read e-books on a PDA, but I don't use one and don't know if I'd enjoy reading from such a device. It's probably about perspective: if you don't associate books with your computer, then you simply don't want them to mix too much.

Anyway, I did make the switch a couple of weeks ago, but instead of switching to e-books completely, I've switched completely within a single category of books: technical books on subjects that will lose relevance relatively quickly. The first e-book I bought is Foundations of AJAX by Ryan Asleson and Nathaniel Schutta, published by Apress. Not that I feel this book will lose relevance quickly, but I don't expect I'll dig it up a couple of years from now, simply because either AJAX will be a thing of the past by then, or, if it's not, everything around it will have evolved so much that the book will be fairly useless anyway. By that time I expect I'll still be having a look through Design Patterns, however, as well as Refactoring. So those will remain on my physical bookshelf.

The e-book experience is actually quite good: Apress basically lets you download the full PDF as well as separate PDFs for each chapter. When you open it, you have to enter your e-mail address as a password (to keep you from spreading the file), and every other page there's a subtle serial number visible through the text (in case you start spreading lots of printed copies, I suppose). The good thing is that you can easily print a couple of chapters, and the text is fully searchable when you need the book as a reference. Anyone care to share their experiences with other publishers' e-books?

Posted: Dec 04 2005, 05:48 PM by jvdbos | with 1 comment(s)
Now all I need is a title snippet

I have to admit, when I first saw the VS2005-integrated code snippets during the TechEd Europe 2004 (yes 2004) keynote, it sounded like the typical feature that I would never use. Well, to make a long story short, I've been using VC#2005 for a couple of days and it's already one of my favorite features!

Sometimes I would omit handling an exception in some code during development, just because it's such a drag to keep writing all that try/catch/finally stuff. Sure, the handling would make it into the code in the end - if only because some of the unit tests triggering the exceptions simply wouldn't pass - but still, it's a bit of a risk not to include it right away, while all the context is still clear to you. So what I do now is this:

try[TAB][TAB]

And there it is: a try block with a catch clause. And the best part is that you get this in-code, wizard-type functionality to step through all the variable parts of these snippets. After that, the cursor ends up in the logical next place to start typing. Great stuff!

Some more cool snippet shortcuts (just type them in and follow with a double TAB): cw (Console.WriteLine), mbox (MessageBox), for/forr (regular and reverse for loop) and prop/propg (a property with get/set or just a get, along with the backing variable).
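From memory, the expansions look roughly like this; the exact generated code may differ a bit between snippet versions, and the names (myVar, MyProperty) are the highlighted parts you tab through:

using System;

class SnippetExamples
{
    // Roughly what prop[TAB][TAB] expands to: a property plus its backing variable.
    private int myVar;

    public int MyProperty
    {
        get { return myVar; }
        set { myVar = value; }
    }

    static void Main()
    {
        // Roughly what cw[TAB][TAB] expands to:
        Console.WriteLine();
    }
}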

Posted: Oct 31 2005, 06:54 PM by jvdbos | with 4 comment(s)
Doing it right means putting it left

The first thing that hit me when firing up Visual Studio 2005 for the first time was that the solution explorer was docked on the right side of the screen. Funny thing is, it seems the same thing happened to Frans! His motivation for why the solution explorer should be docked on the left side of the screen is correct of course, but while reading the comments, one by Bruce Johnson made me laugh out loud, so I'll just paste it here:

If you plan on keeping your Solution Explorer open, your suggestions make sense. On the other hand, I always keep mine collapsed, so I want it on the right side. Otherwise, when it expands, it covers the code that, as you wisely noted, sits mostly on the left.

I'm guessing the decision as to the 'default' positioning was one made by committee. "If we collapse it by default, people won't know where it went to. But if we put it on the left, it makes the code jump around. Why don't we just put it on the right". Hence getting the worst of both worlds. ;)

I remember noticing the default right-side docking when I first installed VS.NET 2003, so it's not a new decision. I remember seeing it on the left by default as well, though. It appears to have something to do with the language selection you make when you first start it up: if you select VC++, you'll get the old left-side look from VC++ 6 (and before), but any .NET language, including C#, will give you the VB6-style right docking position. Just goes to show that Microsoft is more concerned with keeping all those migrating VB users happy than a bunch of those (always-whining anyway) C++ types :)

Posted: Oct 29 2005, 12:00 PM by jvdbos | with 3 comment(s)
Microsoft joins the web search API world

Microsoft has released an API for MSN Search at last. Instead of changing the license on their XML-formatted results, they've published a WSDL file for use with SOAP. I don't mind that at all though; I'm just happy I can finally use MSN Search output in my tools. The most important part of the license (for me as a developer anyway) is that every IP address is allowed to retrieve 10k results per day for free, which is a much more elegant solution than Google's API key mechanism, where you have to get your application's users to register with Google and paste some key into a configuration screen or file. The 10k results are the same amount Google allows; both are dwarfed by Yahoo's 250k though.

What's interesting is that MSN chose to go the SOAP route, after Yahoo picked REST and Google said that, in retrospect, they'd rather have gone with REST as well. MSN could have released a REST API just by changing the license text on the XML results, but instead they went through the trouble of creating a WSDL, which, although it's not an incredible amount of work, does seem to signal that Microsoft still believes SOAP is the way to go. I'm not sure I agree; I tend to think that REST is great for utility APIs where you want easy access from high-level tools like scripting languages, and that the extra weight of SOAP is only worth it when composing applications based on multiple services. Personally, I see searching the web more as a utility than as a base service.
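To illustrate the difference in calling style (the endpoint, parameters and proxy class names below are made up for the example, not the real MSN or Yahoo APIs):

using System;
using System.Net;

class SearchStyles
{
    static void Main()
    {
        // REST style: a plain HTTP GET with query-string parameters,
        // trivial to fire off from a scripting language or even a browser.
        WebClient client = new WebClient();
        string restXml = client.DownloadString(
            "http://api.example.com/websearch?appid=YOUR_ID&query=software+transactional+memory");
        Console.WriteLine(restXml);

        // SOAP style: point your tool at the WSDL, let it generate a typed proxy,
        // and the XML envelope plumbing disappears behind ordinary method calls.
        // The proxy types below are placeholders for whatever the WSDL generates:
        // ExampleSearchService service = new ExampleSearchService();
        // ExampleSearchResponse response = service.Search(new ExampleSearchRequest("software transactional memory"));
    }
}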

Posted: Sep 14 2005, 08:48 AM by jvdbos | with 1 comment(s)