Tales from the Evil Empire

Bertrand Le Roy's blog




December 2009 - Posts

My video setup

As I’m on vacation, I thought I’d write about something different but still quite geeky. I really like to see how people set up their video systems: there isn’t just one right way to do it, and I can’t think of two friends of mine whose setups are even remotely similar. So I’ll describe mine and invite you to drop me a comment describing yours. I’ll also tag a few friends and ask them to describe theirs. I’ll post links here.

TV reception

I almost stopped watching TV over the last few years. I certainly never, ever watch it live. But there is still a handful of shows that I record in order to get them early and in HD. For that, I installed a simple HD antenna on my roof, which I pointed at the best Seattle relays using AntennaWeb. To receive the signal from the antenna, I use an HD Homerun, a great external dual HDTV tuner that connects directly to your wired network. This makes it possible for all the computers on the network to show and record HDTV. It is of course perfectly compatible with Media Center.

Media hub

The machine where I store all my media (music, TV shows and movies) is a Windows 7 Media Center installed in a fancy Zalman HTPC case with a built-in touch screen. The touch screen is very useful for doing things on the Media Center without turning on the TV. It would be even more useful if Media Center supported multiple monitors properly and could show the menus on one screen and the video on another. Maybe in a future version…

The Media Center has two hard drives for a total capacity of 1.5TB. It is a little too noisy for my taste and the Zalman software that regulates fan speed is a little clunky, but it is a very well-built box that I like a lot nonetheless.

Attached to the Media Center is a 200-DVD VAIO jukebox that I hope to retire once I’m done transferring my DVD collection to MP4 using Handbrake (the one and only DVD transfer application that has ever worked for me without adding a one-second delay between image and sound). I hope to retire it because although its integration with Media Center is perfect, it is a very big object and it’s quite slow at loading DVDs.

The software that I installed on the Media Center is kept to a minimum.

On top of Media Center (which comes with Windows 7), I’ve installed enough codecs to read pretty much any video file ever created (except for QuickTime and Real, which I just cannot tolerate).

I also have the Hulu desktop software, which unfortunately doesn’t integrate into the Media Center UI. Lame. Speaking of Hulu, I also have a license for PlayOn to stream Hulu and YouTube to the Xbox and PS3. Totally worth the $39.99.

Netflix streaming is built into Media Center, so there is nothing to install there.

Finally, I have the Zune software installed to manage all the family’s Zunes and our Zune Pass subscription.

The only things the Media Center can’t play are HD-DVD (not a big problem anymore) and Blu-ray.

Gaming and secondary media player

I’m a gamer, and I play primarily on the Xbox. I love the system and I’m addicted to achievements. But it’s also a more than capable media player that I prefer to use over the Media Center whenever I can, because its interface is built for ten-foot operation from beginning to end, whereas Media Center is a ten-foot interface built on the Windows shell, which is a three-foot interface. That abstraction doesn’t leak most of the time, but when it occasionally does, it kinda sucks and you have to reach for the mouse or keyboard. No such thing ever happens with the Xbox.

The Xbox 360 has:

Blu-ray (and more gaming)

There are some movies that do deserve the HD treatment, and my Netflix subscription has the Blu-ray option. There are now very affordable Blu-ray players on the market, but the PS3 is pretty much guaranteed to always be updated to the latest spec, and there’s no doubt that it’ll have the processing power to keep up. Because of this, and because I enjoy PS3 exclusives just as much as I enjoy Xbox exclusives, I opted to buy a PS3 rather than a dedicated Blu-ray player.

But one major design flaw of the PS3 for video playback is that it only supports Bluetooth remotes. I just can’t understand why they made that choice, beyond trying to look cute. The number of remotes I want in my living room is one. Not two. One. Sony’s failure to include a cheap IR receiver in the PS3 means you either need an extra remote (or a controller) that you’re only going to use with the PS3, or you need to work around the design flaw. I opted for the latter: Logitech makes a hardware patch that receives conventional IR signals from my Harmony remote and translates them into Bluetooth. It’s not cheap, but it keeps the number of remotes at the right number (one, in case you weren’t paying attention).

Bringing it to the screen and speakers

All three devices I use for video (the Media Center, Xbox and PS3) output their signal in beautiful 1080p over HDMI. Unfortunately, my TV only has two HDMI inputs. This is why I opted for an Onkyo receiver that has HDMI inputs and outputs. Its HDMI inputs only carry the video signal, not the sound. Choosing a receiver whose HDMI inputs also carry sound would have been more expensive, and all I had to do to work around the limitation was pull an additional optical audio cable for each device.


As I’ve mentioned, only one remote is tolerated in the room. As far as I know, the only good choice today is one of the Logitech Harmony remotes. What makes these remotes different is that they work by activity rather than by device. This means that when you’re watching a DVD, you don’t have to know whether the sound volume is managed by the receiver or the Media Center: you just press the volume keys, and the remote knows. It also knows how to turn all the devices a given activity requires on and off with the click of a single button. Finally, it’s highly configurable. The one I own is the Harmony 880. It’s good enough and has a cradle to recharge the battery when not in use.

That’s it

And that’s it. So what does your video setup look like? Comments are open.

Brad Wilson's setup:

Ludovic Chabant's setup (Ludovic is the author of the excellent NLDD):

Posted: Dec 31 2009, 11:36 PM by Bertrand Le Roy | with 12 comment(s)
Filed under:
Resizing images from the server using WPF/WIC instead of GDI+

I and many others have written about resizing images using GDI+, or rather its .NET façade, System.Drawing. It works. But there is just this one sentence at the bottom of the documentation:


Classes within the System.Drawing namespace are not supported for use within a Windows or ASP.NET service. Attempting to use these classes from within one of these application types may produce unexpected problems, such as diminished service performance and run-time exceptions.

Kind of scary, isn’t it? Nobody likes diminished performance and run-time exceptions. But when you need to generate thumbnails from managed code, what other choices do you have?

There used to be two: using interop with native APIs (which won’t work in medium trust) or writing your own image manipulation library from scratch. There might already be some purely managed image manipulation components out there that could replace System.Drawing, but I don’t know of one. If you do, by all means drop me a comment and I’ll update the post. I’m also not sure how fast managed code could do this sort of heavy pixel lifting.

Since WPF was introduced into the .NET framework, there has been a third possibility that will be the topic of this post.

Before we look at that, let’s put things in perspective: there are LOTS of applications and components out there that use GDI+ (or more accurately its System.Drawing managed expression) and they work just fine. My own photo album uses it and I’ve never had a problem with it. Most of the problems I’ve seen were due to improper resource management (not freeing handles and similar bugs) or to abuse of the API (resizing gigapixel images, for example). But used reasonably and correctly, it’s an API that really doesn’t pose any serious problem and is fairly safe. If you use it today and are satisfied with it, there probably isn’t any reason to change your code. And as we’ll see, System.Drawing works in medium trust whereas System.Windows.Media.Imaging does not.

So why would you want to use that fancy WPF stuff then? Well, first, it uses Windows Imaging Components, which means that you benefit from the same extensible imaging infrastructure that displays media in the Windows Explorer. This means that if you have your camera manufacturer’s raw codec or an Adobe DNG codec installed on the server, you’ll be able to resize photos using those formats without changing a single line of code. Pretty sweet. You also get a much more complete API. For example, there is support for reading and writing meta-data, which is extremely useful for gallery types of applications.

So let’s go and resize images.

As you know, images on the web are usually served by a separate request from the page that shows them, through img tags on the page that point back to the server. For that reason, a dynamic image is not computed by the page itself, but by a separate handler to which the img tag points, and to which the page must communicate, through the querystring, enough information to construct the image.

In our case, that information is just the name of the image to resize. At this point I’d like to make a recommendation. I’ve seen many such handlers take the size of the thumbnail as a parameter. I think this is a security flaw, as an attacker can easily generate requests for many different sizes, resulting in a flooded cache and/or wasted processing power. There are of course other, more brutal ways to launch a DoS attack, but why make it easy? You will usually need at most a couple of image sizes, so it’s better to keep that information off the handler’s querystring and to code it as a setting of your application that never leaves the server. In our case, the thumbnail size is a constant that I arbitrarily set to 150 pixels.
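To make that concrete, here is a minimal sketch of such a handler. The endpoint name, the `~/images` folder and the `GetThumbnail` helper are all illustrative, not from the actual code download:

```csharp
using System;
using System.IO;
using System.Web;

// Minimal sketch: the client supplies only the image name;
// the thumbnail size is a server-side constant.
public class ThumbnailHandler : IHttpHandler {
    private const int ThumbnailSize = 150; // never comes from the client

    public void ProcessRequest(HttpContext context) {
        // Path.GetFileName strips any directory components an
        // attacker might try to sneak into the querystring.
        string imageName = Path.GetFileName(
            context.Request.QueryString["img"] ?? "");
        string photoPath = context.Server.MapPath(
            "~/images/" + imageName);
        byte[] thumbnail = GetThumbnail(photoPath, ThumbnailSize);
        context.Response.ContentType = "image/png";
        context.Response.OutputStream.Write(
            thumbnail, 0, thumbnail.Length);
    }

    public bool IsReusable { get { return true; } }

    // Hypothetical helper wrapping the WPF resizing and PNG
    // encoding described in the rest of this post.
    private static byte[] GetThumbnail(string photoPath, int size) {
        throw new NotImplementedException();
    }
}
```

The page then just points an img tag at the handler, for example `<img src="Thumbnail.ashx?img=photo1.jpg" />`.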

The first thing you’ll need to do in order to resize images is to import the references to WPF into your web site. You’ll need the following assemblies:

  • PresentationCore
  • WindowsBase

These go under configuration/system.web/compilation/assemblies in web.config if you’re in a web site, or in project references in a WAP or library project.
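For a web site, the corresponding web.config fragment looks something like this (the version and public key token shown are the ones used by the .NET 3.x WPF assemblies; check against the assemblies installed on your server):

```xml
<configuration>
  <system.web>
    <compilation>
      <assemblies>
        <add assembly="PresentationCore, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
        <add assembly="WindowsBase, Version=3.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35"/>
      </assemblies>
    </compilation>
  </system.web>
</configuration>
```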

There are actually several different ways that you can resize an image using the WPF API.

The fastest way is to create a BitmapImage from the file on disk and to specify the target width and height as part of image decoding:

BitmapImage bi = new BitmapImage();
bi.BeginInit();
bi.UriSource = new Uri(photoPath);
// Setting only DecodePixelWidth (or only DecodePixelHeight)
// would preserve the original aspect ratio.
bi.DecodePixelWidth = width;
bi.DecodePixelHeight = height;
bi.EndInit();

This is very efficient because the codec can scale while decoding and render only those pixels that will be in the final image.

Unfortunately, this won’t do here because in order to compute the size of the thumbnail, we need to know the dimensions of the original image: we want the thumbnail to have the same aspect ratio as the image it represents. So we need to read the image (or at least part of it) before we can determine the size we want for the target.
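The computation itself is simple. A sketch of a helper (not from the original code; it constrains the longest edge to the thumbnail size) might look like this:

```csharp
using System;

static class ThumbnailMath {
    // Compute thumbnail dimensions that preserve the original
    // aspect ratio, constraining the longest edge to maxSize pixels.
    public static void GetThumbnailSize(
        double originalWidth, double originalHeight, int maxSize,
        out int width, out int height) {

        double scale = maxSize / Math.Max(originalWidth, originalHeight);
        width = (int)Math.Round(originalWidth * scale);
        height = (int)Math.Round(originalHeight * scale);
    }
}
```

For a 600×400 original and a 150-pixel constraint, this yields a 150×100 thumbnail.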

The second way to resize is to apply a ScaleTransform on the image. In order to do that, we need to first read the image and grab its first (and usually only) frame:

var photoDecoder = BitmapDecoder.Create(
    new Uri(photoPath),
    BitmapCreateOptions.PreservePixelFormat,
    BitmapCacheOption.None);
var photo = photoDecoder.Frames[0];

Once we have that frame, we can apply the ScaleTransform using a TransformedBitmap. ScaleTransform is a representation of –you guessed it– a scale transformation: it has horizontal and vertical scales, and an optional center offset. To resize an image, all we have to do is this:

var target = new TransformedBitmap(
    photo,
    new ScaleTransform(
        width / photo.Width * 96 / photo.DpiX,
        height / photo.Height * 96 / photo.DpiY,
        0, 0));
var thumbnail = BitmapFrame.Create(target);

We compute the scale by dividing the desired width (resp. height) by the original width (resp. height) and then multiplying the result by a DPI factor. That DPI factor, the division of the DPI of most screens (96) by the original image’s DPI, is quite an unfortunate hack. Ideally, you’d be able to specify what DPI you want for the target image. Unfortunately, using this method you can’t, and the default is that the original photo’s DPI is applied to the target. In other terms, if you were asking for a target 150 pixels wide, and the original image was 600 pixels wide, you’d assume that a scale of 0.25 would get you the desired result: a thumbnail 150 pixels wide. But if the image was 240DPI, what you’ll actually get is a thumbnail 2.5 times bigger in pixels, at 375 pixels wide. In a context where only the pixel dimensions count, this works out, but examining the file in Photoshop or Paint.NET will reveal that it is 240DPI and not 96DPI.

Using this method, you also don’t get a chance to affect the algorithm used to resize the image. Fortunately, the defaults give a pretty good quality with good performance (see below for a comparison in quality and performance).

The last resize method that I want to talk about is using a fuller drawing pipeline, giving us lots of control and additional options, at the price of performance. It is also the only one you can use if you want to do more to the image than just resize it (such as add vector graphics or watermark text).

public static BitmapFrame Resize(
    BitmapFrame photo, int width, int height,
    BitmapScalingMode scalingMode) {

    var group = new DrawingGroup();
    RenderOptions.SetBitmapScalingMode(group, scalingMode);
    group.Children.Add(
        new ImageDrawing(photo, new Rect(0, 0, width, height)));
    var targetVisual = new DrawingVisual();
    var targetContext = targetVisual.RenderOpen();
    targetContext.DrawDrawing(group);
    var target = new RenderTargetBitmap(
        width, height, 96, 96, PixelFormats.Default);
    targetContext.Close();
    target.Render(targetVisual);
    return BitmapFrame.Create(target);
}

Notice that this time, we were able to specify the DPI (96), and also the algorithm to use to resize the image.

One thing I noticed when testing those different algorithms is that the enumeration that WPF uses does not have as many values as it seems:

public enum BitmapScalingMode {
    Unspecified = 0,
    LowQuality = 1,
    Linear = 1,
    HighQuality = 2,
    Fant = 2,
    NearestNeighbor = 3
}

The explanation I got from the team is that the names started out vaguer (“unspecified, low and high”) even though the algorithms behind them were well-known ones. People kept asking for specific algorithms that were in fact already there, so aliases were added for the same values, to make it explicit which algorithm is being used.

In all cases, I’m saving the resized bitmap as PNG, as it’s simply the best format to get quality results for small thumbnails (JPEG never looks very good at these sizes, and GIF doesn’t have enough colors for photos):

byte[] targetBytes;
using (var memoryStream = new MemoryStream()) {
    var targetEncoder = new PngBitmapEncoder();
    // thumbnail is the BitmapFrame produced above.
    targetEncoder.Frames.Add(thumbnail);
    targetEncoder.Save(memoryStream);
    targetBytes = memoryStream.ToArray();
}

I haven’t covered writing the image to the output stream or caching it, as this is virtually identical to what you’d do with GDI. You can also check out the code.
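For reference, that last step can be sketched like this, assuming targetBytes holds the encoded PNG from above and we’re inside the handler’s ProcessRequest (the cache duration is arbitrary):

```csharp
// Send the PNG to the client and allow downstream caching so the
// thumbnail isn't regenerated and re-sent on every request.
context.Response.ContentType = "image/png";
context.Response.Cache.SetCacheability(HttpCacheability.Public);
context.Response.Cache.SetExpires(DateTime.UtcNow.AddDays(30));
context.Response.OutputStream.Write(targetBytes, 0, targetBytes.Length);
```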

At this point, we have a number of options, so how do we choose between them?

I wrote a little benchmark application (which you can find attached to the bottom of this post). The application compares the rendering times and sizes for 30 jpg images, each one being 12 megapixels. It also generates different resized versions of two images that are particularly difficult to resize because of their high tendency for moiré using all the different quality settings that we have at our disposal, for GDI and WPF.

First, let’s look at the quality results:

[Thumbnails of the two moiré-prone test images, resized with each quality setting: GDI nearest neighbor, low, bicubic, bilinear, default, high, high quality bilinear and high quality bicubic; and WPF fast, nearest neighbor, linear and Fant.]

I’ve shown before that for GDI, there is no big difference in performance between the modes that look acceptable (the ugly ones run three times as fast, but ew). So for the perf benchmark, I’ll use HQ bicubic for GDI.

For WPF, amazingly, the one that looks best is the fast one. Of the slower ones, Fant/High looks best, so that’s what I’ll use to test perf.

Using these best settings, here are the results:

           Read    Resize   Encode   Total   Size
WPF        0.05s   7.80s    0.17s    8.0s    964kB
Fast WPF   0.05s   ~0s      3.2s     3.3s    864kB
GDI        6.02s   5.65s    0.12s    11.8s   1,250kB

The time spent reading, resizing and encoding might seem weird until you realize that WPF doesn’t perform the actual operations until it has to, à la LINQ. This explains why reading the image or resizing it looks instantaneous, whereas encoding seems to take longer than with GDI.

If we look at what exactly happens, in the fast WPF case reading does almost nothing except extract basic meta-data such as image dimensions. Then we resize, and still nothing really happens. It’s only when we ask for encoding that image data is read, resized and then encoded, resulting in encoding times that look longer than they should, but really that’s the entire operation and overall it’s wickedly fast: more than 3.5 times faster than high quality GDI. On average, it spent about a tenth of a second to resize each twelve megapixel image. That's more than a hundred million pixels processed per second.

In the regular WPF case, the bulk of the work is being done during the resize operation, which does both the decoding and the resizing, but still does so in two thirds of the time it takes GDI to do the same thing. One can see that the encoding time, which this time is only encoding (no catchup from previous operations) is in the same ballpark as GDI. Overall, this case is still 30% faster than GDI.

For completeness, and so that you can do your own comparisons based on the quality setting you choose, here are the same numbers for all quality settings with each technology (the numbers differ a little from those above; I haven’t computed the statistical uncertainty of these results, but it seems to be roughly ±0.5s; sizes are exact):

Setting                 Total   Size
GDI Nearest neighbor    6.9s    1,213kB
GDI Low                 8.1s    1,213kB
GDI HQ bilinear         10.5s   1,207kB
GDI HQ bicubic          10.0s   1,250kB
GDI High                10.1s   1,250kB
GDI Bilinear            7.9s    1,213kB
GDI Bicubic             8.1s    1,230kB
GDI Default             8.4s    1,213kB
WPF Nearest neighbor    6.6s    1,121kB
WPF Low / Linear        6.9s    1,118kB
WPF High / Fant         7.7s    964kB
WPF Unspecified         6.9s    1,118kB
Fast WPF                3.2s    864kB

On the size front, things look good as well. The quality of the output in all three cases is roughly equivalent, but the fastest method is also the one that gives the most compact results: the fast WPF method gives files that are on average 30% smaller than the same images resized by GDI and that are also about 10% smaller than the ones produced in the full WPF case.

Here are a few resized images that I used for the benchmark so that you can judge the quality for yourself:


So except for the DPI problem, fast WPF is full of win and the one I’d choose for simple resizing.

But of course, in some cases you won’t even have the option to use WPF because you just don’t have full trust. In those cases, it’s OK to use GDI. But in all other cases, WPF is just faster, more efficient, and free of the known problems that led Microsoft to put a scary warning on the API documentation.

UPDATE: added perf. numbers for all quality settings.

UPDATE 2: I contacted the WPF team to have the final word on whether this is supported. Unfortunately, it's not, and the documentation is being updated accordingly. I apologize about any confusion this may have caused. We're looking at ways to make that story more acceptable in the future.

Follow-up: Resizing to JPEG.

More on medium trust: what permission are you missing?

Yesterday, I asked some questions about your usage of medium trust. Thank you all for the great answers and comments (but don’t read too much into that, I’m just playing with stuff). If you haven’t answered yet, feel free to do so.

Now I have an additional question:

What missing permission is preventing you from running in medium trust?

Please answer in comments. And thanks again for the great feedback.

How important is medium trust to you?

I would be very grateful if you could drop me a note in comments answering the following questions:

  1. Do you run all, some or none of your web sites in medium trust?
  2. Why do you choose to run in that trust level?
  3. Are your sites externally hosted and if so does your hoster constrain the trust level?

Don’t read anything into this, I’d just like to see some different opinions on medium trust.

More Posts