Tales from the Evil Empire

Bertrand Le Roy's blog


January 2010 - Posts

How to build 2D glasses

My two pairs of 2D glasses

It’s the weekend, which is the perfect time for a slightly off-topic post. It’s still engineering of sorts, though, in that it provides what I think is an original and cheap solution to a real problem.

3D movies are all the rage these days. But they are not comfortable for everyone. A friend of mine recently went with her family to see Avatar in 3D and, instead of enjoying this rather good movie experience, she had to leave the theater after 20 minutes with a terrible headache. Of course, removing the glasses is not a solution, because you then see both images (the ones destined for your left and right eyes) at the same time, which blurs most of the screen.

Before I explain my solution to this problem, let me explain how modern 3D movies work.

Most 3D viewing technologies rely on providing a different image to each eye. They do not reproduce an actual 3D structure the way holography does; rather, they feed into a later stage of the chain of events that ends with perceived volume. That works fine for movies, but you will never be able to walk into or around a scene with any of these techniques. Holodeck they are not.

The problem is to project both images on the same screen while still allowing goggles to separate them. Several approaches exist, the simplest being the infamous red and blue glasses, which rely on the plasticity of the brain’s color perception. Another interesting approach is to reuse retinal persistence not just to make a series of static images look like motion, but to multiplex two versions of the same movie into one. In other words, slice time, alternate left and right images, and synchronize that with shutters on each eye. This has the advantage that it can work without special screens or projectors: it only needs special glasses and a feed into the sync signal of the screen or of its video source.
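To make the time-slicing concrete, here’s a toy sketch of the multiplexing scheme (my own illustration, not from the post; the names are made up):

```javascript
// Toy model of time-multiplexed stereo: even frames carry the left image
// and open the left shutter, odd frames do the same for the right eye.
// Each eye therefore only ever sees "its" half of the frame stream.
function shutterState(frameIndex) {
  const isLeft = frameIndex % 2 === 0;
  return {
    openShutter: isLeft ? "left" : "right",
    image: (isLeft ? "L" : "R") + Math.floor(frameIndex / 2),
  };
}

console.log(shutterState(0)); // { openShutter: 'left', image: 'L0' }
console.log(shutterState(1)); // { openShutter: 'right', image: 'R0' }
```

At a typical 120 Hz refresh rate, each eye still gets a comfortable 60 images per second.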

What is used in movie theaters is quite different though and relies on a subtle quality of light that our eyes are completely blind to. Our visual sensors (a.k.a. eyes) are fantastic devices but are quite limited in a number of ways: they only perceive three color ranges out of the infinitely fine light spectrum, they frequently go out of tune and require correction and surgical intervention, and they do not see polarization at all. That last quality opens a uniquely neat way of creating stereoscopic vision.

Polarization is a quality of all transverse waves (of which light waves are one example and sound waves are not). Transverse waves are the propagation of a displacement that is orthogonal to the direction of propagation. Because we live in a three-dimensional space, that leaves two directions for the wave to wiggle in addition to the propagation direction. A light ray aimed directly at you may vibrate horizontally or vertically, or in a combination of both, but you won’t see the difference because the eye only sees the amplitude and some frequency information, not that directionality. This means that you can have two completely independent signals at any given frequency (which for light means color) simultaneously propagating in the same direction. You can see where this is leading: you can have the left and right images coexist in the same beam. All you need is to separate those images with glasses that see only one direction to create the illusion of volume.
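To put a number on that separation: the intensity a linear polarizing filter passes follows Malus’s law, I = I₀·cos²θ, where θ is the angle between the wave’s polarization and the filter’s axis. A quick sketch (my own illustration, not from the post):

```javascript
// Malus's law: a wave polarized at `waveAngleDeg` hitting a linear filter
// whose axis is at `filterAngleDeg` is transmitted with intensity
// I0 * cos^2(theta), theta being the angle between the two directions.
function transmittedIntensity(i0, waveAngleDeg, filterAngleDeg) {
  const theta = ((waveAngleDeg - filterAngleDeg) * Math.PI) / 180;
  return i0 * Math.cos(theta) ** 2;
}

// A vertically polarized "left image" passes a vertical filter untouched...
console.log(transmittedIntensity(1, 90, 90)); // 1: aligned, fully transmitted
// ...while the horizontally polarized "right image" is blocked.
console.log(transmittedIntensity(1, 0, 90)); // ~0: orthogonal, blocked
```

This is exactly why each eye of the glasses sees only one of the two superimposed images.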

Early polarized movies used the linear polarization I just described, in a technique that is older than you may realize. For example, when I was a kid, I saw Hitchcock’s Dial M for Murder in 3D through linearly polarized glasses.

Modern 3D movies use a variant called circular polarization, where light is split not into horizontal and vertical components but into clockwise and counter-clockwise rotating components. This has the advantage of better maintaining the illusion when you rotate your head. It’s not perfect, because the images are still shot with a horizontal offset, but it definitely helps.

In both cases, the left and right images travel on the two polarized components of light and they are split by the glasses before they reach the eye. As you can see, it’s a good thing for all this to work that we only have two eyes…

So what do we do for my poor friend, who can’t watch those movies for any prolonged period of time? Well, that’s fairly easy: we suppress the 3D effect by feeding her only one of the two images. We do that by building her a pair of 2D glasses that filters out one image and sends the same image to both eyes.

To build that pair of glasses, I bought two pairs of circularly polarized glasses from eBay (they are quite easy to find and go for a dollar or two) and broke them open to extract the left filter from the first pair and the right filter from the second. I then swapped these filters and glued the frames back together. The result is two pairs of glasses, one of which sees only the right image, the other only the left. In effect, my friend can now enjoy the same movie as the rest of her family in the same theater, except that to her, and to her only, it looks like a plain old 2D movie. It’s just as comfortable as a regular movie theater, except for the weight of the glasses, and no movement of the head affects the experience.

A small note on polarized sunglasses and why they wouldn’t work here. First, wearing sunglasses in a movie theater would further obscure even David Lynch’s Inland Empire. More importantly, polarized sunglasses use linear polarization because they are designed to eliminate specular reflection from the sun off water or ice and to eliminate part of the sunlight scattered by the atmosphere, both of which are linearly polarized.

Understanding polarization:
http://en.wikipedia.org/wiki/Polarization_(waves)

The RealD Cinema technology:
http://en.wikipedia.org/wiki/RealD

Note: feel free to build your own 2D glasses for your own personal use, but please contact me for any bigger-scale use of the idea.

Posted: Jan 31 2010, 12:53 AM by Bertrand Le Roy | with 3 comment(s)
“Badgifying” an ASP.NET page

I apologize for the neologism. What I’m going to demonstrate in this post is a technique I prototyped a few months ago to make it very easy to embed an ASP.NET page’s content in another page, even one using another server technology. This works cross-domain, of course.

The reason you would do that is to let people embed badges with your content on their own sites. Examples of such badges can be found in the margin of this blog: there’s the ad badge, a Twitter badge, a Facebook badge, an Xbox Live badge, a Zune badge, and there used to be a Flickr badge. There are even full commenting systems that you can include on your blog this way. All of those are Flash or JavaScript, and in both cases a short JavaScript stub includes them.

What’s really nice about this approach is that people can include your content from the client side, no matter what server technology they use, with a single script tag. Here’s what it looks like:

<script type="text/javascript" src="AspNetBootstrap.js"
        xmlns:foo="Remote.aspx"
        foo:bar="42" foo:baz="glop & co"></script>

The script tag has two parts. The first is a regular script tag that includes a bootstrapping script. The second is a set of additional attributes that specify the page to include and, optionally, its parameters. Those parameters are transmitted to the page as querystring parameters. For example, the script tag above includes the following ASP.NET page:

<%@ Page Language="C#" %>
bar = <%= Request.QueryString["bar"] %><br />
baz = <%= Request.QueryString["baz"] %>

The system relies on two things. First, we have the bootstrap script. It’s a very small piece of code (664 bytes minified, 453 minified and gzipped) that parses the attributes of its own script tag and creates a new script tag pointing at a special handler, the second part of the system.
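The post doesn’t show the bootstrap script’s source, but the URL-building part of it might be sketched like this (my own reconstruction; the handler path is an assumption, and only __page is confirmed by the handler code below):

```javascript
// Sketch of how a bootstrap script could turn the foo:* attributes of its
// own <script> tag into a handler URL. The real AspNetBootstrap.js may
// differ; this only illustrates the attribute-to-querystring idea.
function buildHandlerUrl(scriptTag, handlerPath) {
  const page = scriptTag.getAttribute("xmlns:foo");
  const params = ["__page=" + encodeURIComponent(page)];
  // Copy every foo:* attribute into the querystring.
  for (const attr of scriptTag.attributes) {
    if (attr.name.indexOf("foo:") === 0) {
      params.push(
        encodeURIComponent(attr.name.substring(4)) +
        "=" + encodeURIComponent(attr.value));
    }
  }
  return handlerPath + "?" + params.join("&");
}

// The real script would then document.write a new <script> tag pointing
// at that URL, which is what injects the handler's output into the page.
```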

The handler extracts the page name from the querystring and processes the request. The result of that processing is then embedded into simple JavaScript that writes the rendered page into the document:

public void ProcessRequest(HttpContext context) {
    context.Response.ContentType = "text/javascript";
    var request = context.Request;
    var pagePath = request.QueryString["__page"];
    var sb = new StringBuilder();
    using (var writer = new StringWriter(sb)) {
        var worker = new SimpleWorkerRequest(
            pagePath,
            request.QueryString.ToString(),
            writer);
        HttpRuntime.ProcessRequest(worker);
    }
    // Escape backslashes first, then quotes and newlines, so the rendered
    // page survives being wrapped in a JavaScript string literal.
    context.Response.Write("document.write('" +
        sb.ToString()
          .Replace(@"\", @"\\")
          .Replace(@"'", @"\'")
          .Replace("\n", @"\n")
          .Replace("\r", @"\r") +
        "');");
}
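For reference, here’s the same wrap-and-escape step expressed in JavaScript (my own illustration; I’ve also escaped backslashes, which the escaping needs to be fully safe):

```javascript
// Illustration of the escaping the handler applies before wrapping the
// rendered HTML in document.write: single quotes and raw newlines would
// otherwise terminate the generated string literal early.
function wrapInDocumentWrite(html) {
  const escaped = html
    .replace(/\\/g, "\\\\") // escape backslashes first
    .replace(/'/g, "\\'")
    .replace(/\n/g, "\\n")
    .replace(/\r/g, "\\r");
  return "document.write('" + escaped + "');";
}

console.log(wrapInDocumentWrite("bar = 42<br />\n"));
// document.write('bar = 42<br />\n');
```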

Of course, this is only a simple proof of concept and it could be improved in a number of ways:

  • Relative URLs (of images and scripts) are not re-rooted into the host page. This means you’ll need to use absolute URLs.
  • There is no support for any interactions back with the remote server, including with forms or Ajax. It’s not impossible but you’re on your own to implement it.

Still, for including simple content, this constitutes an extremely simple way of making your ASP.NET content accessible to remote pages, no matter what server technology they use (if any).

I’d love to know what you think and if you think this is worth pursuing.

Download the code from here:
http://weblogs.asp.net/blogs/bleroy/Samples/EmbeddedASPNET.zip

How I got attacked by Windows Update

I was writing a wiki page when it happened. The system restart dialog from Windows Update had been blinking helplessly in the task bar for a few hours, because I hadn’t had time to reboot yet.

And then, right in the middle of a sentence, the effing dialog decides that I’ve been ignoring it for too long, puts itself in front and gives itself focus.

You can guess what happened next. My fingers kept typing, not realizing that the wiki page had gone to the back. Now, the thing is, space is a fairly common key to hit when you’re writing English. But in dialogs, it’s also the key that triggers the default button. Which, in the case of that particular Windows Update dialog, is “Restart”.

So before I realized what was going on, I was seeing all my windows close, including of course the wiki page I was working on.

No application should ever be allowed to steal the focus. EVER!

The way I feel about this is exactly as if I had fallen victim to a malicious program doing a clickthrough attack on me. Clickthrough attacks are attacks where a program moves a button in front of the one you really wanted to click. In the most innocuous cases it’s to force you to click on an ad, and in the most severe ones it’s to trick you into making an unwanted security decision that could compromise your privacy or your machine.

And now of course, I have to rewrite my wiki page.

Server-side resizing with WPF: now with JPG

I’ve shown before how to generate thumbnails from ASP.NET server code using the WPF media APIs instead of GDI+ (which is unsupported in server code).

In the previous article, I generated the thumbnails as PNG files. The reason is that PNG is a lossless format, and I wanted as few variables as possible affecting output quality and performance. Adding JPEG artifacts and a quality setting would just have muddied the waters.

One commenter (Victor) pointed out that the PNG encoder in WPF does not compress the output, though. That is clearly a problem for a web application.

This is why I decided to write this follow-up post. The previous one is still useful for establishing that the decision to use WPF rather than GDI+ makes sense but I want to complement that with some JPEG comparison.

The algorithm I’m going to use here is what I consider the best compromise between quality and speed for each technology: HQ bicubic for GDI+ and “Fast WPF” for WPF. PNG generation will be our baseline, and I’ll generate JPG files for quality settings in increments of 5 starting at 50% (lower than 50% is just horribly bad). The set of images I will resize is the same as in the previous post.

The code almost doesn’t change, except that we use a JpegBitmapEncoder, which needs to be configured with the quality setting, instead of a PngBitmapEncoder:

var targetEncoder = new JpegBitmapEncoder {
    QualityLevel = quality
};

Let’s look at the qualities for my two favorite, moiré-prone images:

(Image grid: the Copenhagen picture at JPEG quality settings 50% through 100%, plus the PNG version.)

(Image grid: IMG_2565 at JPEG quality settings 50% through 100%, plus the PNG version.)

Even at 100%, JPEG doesn’t look as sharp, deep, and saturated as PNG, but none of these look tragically bad: the thumbnails are too small for the defects to jump out at the eye. If you use magnification ([Windows] + [+] on Windows), you’ll see them.

The compression defects we are looking for are waves near high-contrast edges:

(Image: border artifacts.)

And zones, usually square, where the picture lacks definition and is too flat to show the details of the original:

(Image: compression squares.)

The defects stop being noticeable around 75% compression on these pictures. Of course, with this kind of lossy compression, it depends a lot on the pictures, so do your own tests and choose the setting that is the best compromise between quality and size for your images. Speaking of size, here’s a graph of size and time against compression level, from resizing the same thirty 12-megapixel photos I used for the previous post:

(Graph: resize JPG benchmark results, size and time against compression levels.)

The first thing to notice is that the time to compress almost does not depend on the compression level and is a little faster than PNG. Second, we also see that once more, WPF is more than twice as fast as GDI+. Third, the size of the resulting files grows reasonably with compression level up to around 85%, after which it rises faster.

There is a weird peak at 75% that I’m not sure how to explain. I did make some additional measurements around that value and it’s really just for 75%, not 76, not 74, and none of the other values. There seems to be something magical about 75% for some images so it might be a good idea to stay clear of that particular value.

Of course, even at 100%, the JPEG thumbnails are still less than half the size of the uncompressed PNG ones, but the sweet spot here seems to be between 76% and 90%, where most compression artifacts stop being visible on most images and yet the size doesn’t grow too fast. I’ll be using 85% myself.

Here are some of the sample images I used, at 85% so that you can compare with the previous PNG thumbnails:

(Sample thumbnails at 85% quality: IMG_2734, IMG_2744, IMG_2228, IMG_2235, IMG_2300, IMG_2305, IMG_2311, IMG_2317, IMG_2318, IMG_2325, IMG_2330, IMG_2332, IMG_2346, IMG_2351, IMG_2363, IMG_2398, IMG_2443, IMG_2445, IMG_2446, IMG_2452, IMG_2462, IMG_2504, IMG_2505, IMG_2525.)

UPDATE: I contacted the WPF team to have the final word on whether this is supported. Unfortunately, it's not, and the documentation is being updated accordingly. I apologize about any confusion this may have caused. We're looking at ways to make that story more acceptable in the future.

UPDATE 2: for an approach that scales better, try this.

My benchmark code:
http://weblogs.asp.net/blogs/bleroy/Samples/ImageResizeBenchmarkJpg.zip

The code for the JPG thumbnail handlers:
http://weblogs.asp.net/blogs/bleroy/Samples/SupportedResizeJpg.zip

The previous article:
http://weblogs.asp.net/bleroy/archive/2009/12/10/resizing-images-from-the-server-using-wpf-wic-instead-of-gdi.aspx

Search Engine Optimization got a lot easier

Dear readers, if you haven’t checked out the SEO Toolkit yet, you owe it to yourself to go there now, download it and start using it. Point it to your sites and it will explore them and give you a full report of all the little problems that are getting in the way of search engines.

I’m 99.99% sure you’ll discover problems in your site. Lots of them. You’ll be surprised.

But it doesn’t stop there. In many cases, it will show you how to fix the problems it finds. It’s pure, distilled awesomeness: a priceless debugging tool that works at the site level.

And the best thing is, it works on any web site, you don’t have to be running Windows Server, IIS or ASP.NET. LAMP users everywhere rejoice!

http://www.microsoft.com/web/spotlight/seo/
