July 2009 - Posts
I started a little science project for CoasterBuzz
a week or so ago. I wanted to build a little Silverlight app that
sucked down updates of all kinds, and make it live outside the browser.
There are constantly new posts, topics, news items and photos hitting
the site, and anything that encourages people to keep coming back is a
good thing. It's not a giant community, but big enough that people like
to be involved as much as possible, even if they're "readonly" types.
As it turns out, the back end of the project was more interesting to
me, as I built a mini-framework that publishes events via any number of
publishing services. In this case there's only one, a database writer,
but I could just as easily pop in a Twitter publisher or Facebook
publisher or what have you. Yay for extensibility and loose coupling!
As far as the Silverlight app itself goes, there are some things that bring me great joy:
- The effects and projection stuff are a nice addition. You can add just a little extra polish to the app with these.
- Support for Binary XML in a WCF call is easy. See previous post.
- I'm thrilled that the app, at 500-ish lines of code, is relatively
small, around 55k. That's smaller than many of the photos on the site.
- Building feels faster. Admittedly, I've not measured this.
- Isolated storage is so ridiculously easy, especially if you use the built-in ApplicationSettings collection for small stuff.
- Auto-updating for out-of-browser is pretty cool (though I wish there was a method to restart the app).
- Out-of-browser experience is very streamlined for the user.
- Resource use for OOB doesn't seem too bad for this little thing. CPU never goes over
2%, and it's consuming about 35 MB of memory on the Mac. That's not bad
considering it has to run in a sandboxed host on top of a runtime and
framework. A native app like Adium (which admittedly does a heck of a
lot more) uses about 45 MB in OS X.
- OOB window resizing shows
just how well Silverlight/WPF layout works, without all of the work
from the bad old days of Windows Forms.
I've got all kinds of frustrations too. Here are some of them:
- Blend 3 seems to take hours to load, and then it frequently doesn't
work. For example, I have a UserControl that inherits from a base
class, and Blend can't load the base type. That's ten times as annoying
since there's no designer in VS2008.
- That you need Blend at all is annoying. "Wait for VS2010" is not a
solution when you want to build stuff today. Making a tool for
designers was a really good idea. Artificially forcing the separation
on developers was not. Worse, it appears that MSDN subscribers at
mid-levels don't get Blend anymore, and that sucks with the
aforementioned lack of a designer.
- No GIF support. OK, perhaps I should have RTFM'd, but I couldn't
understand why my image was coming down the wire and not appearing.
- Documentation in some places leaves a lot to be desired. Sure,
there's a MouseWheel event now, but what do you do with the delta value
returned? It took until about the 50th result of a Google search to figure
that one out. Oh, and it's frustrating that the scrolling doesn't work in OOB on
a Mac. I assume the bootstrapper is platform specific, so what's the
deal? It does work in-browser, I think.
- No compile-time checking of XAML for rogue attribute values.
Accidentally slip an extra character in that Width attribute? You won't
find it until runtime. That concerns me in all kinds of ways.
- How is this for weird: With virtually any control, you can use
HorizontalAlignment="Stretch" to make the thing fill the space it has.
So I drop a Grid inside of a HyperlinkButton, and it doesn't fill the
space, meaning right-aligned text isn't at the right. I pulled my hair
out on this for way too long. Finally, by accident, I noticed that
HyperlinkButton has a property called HorizontalContentAlignment. Sure
enough, set it to Stretch, and you're good. Why the strange one-off?
- Debugging is still... weird. While I can hit breakpoints, why
don't exceptions break into the debugger? Piping them out to the browser
blows because you can't inspect the state of anything.
- Out-of-browser debugging is a pain. You can attach to the
bootstrapping app that hosts it, but you have to do it manually. It
feels like everything for OOB is a little half-baked in terms of
tooling, and I'm hoping that VS2010 is better.
- It'd be nice if the OOB app could remember window position and size.
- Using a base class for UserControls requires awkward XAML.
All of that said, I do dig Silverlight. Most of my frustrations come
from the somewhat uncoordinated release schedules. I love that they're
iterating quickly, but the tooling has to catch up. I know we don't
have much longer to wait. I'll post the general link to check out this
app after I've done some refining. Right now it's available to paid
subscribers to try out.
David Betz has a really solid (and really, really long) post on calling a WCF service from Silverlight, without using a Service Reference. I'm certainly not going to try and top that or duplicate it, but I wanted to share my experience using the same methodology only with Binary XML as the medium. I'm not interested in the politics over whether or not it should be used, as I'm using WCF and Silverlight. Interoperability beyond that is not important to me.
The WCF side of things has been pretty well covered, but your web.config should look something like this, the important parts being the custom binding on the endpoint:
<serviceHostingEnvironment aspNetCompatibilityEnabled="true" />
<serviceMetadata httpGetEnabled="true" />
<serviceDebug includeExceptionDetailInFaults="false" />
<service behaviorConfiguration="FeedBehavior" name="Namespace.ServiceClass">
  <endpoint address="" binding="customBinding" bindingConfiguration="binaryBinding" contract="Namespace.IWhateverContract" />
  <endpoint address="mex" binding="mexHttpBinding" contract="IMetadataExchange" />
</service>
It's important that the binaryMessageEncoding comes before httpTransport. I don't know why, it's just how it is.
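For completeness, the binding that the endpoint's bindingConfiguration attribute points at isn't shown above; based on the "binaryBinding" name it references, it would look something like this:

```xml
<bindings>
  <customBinding>
    <binding name="binaryBinding">
      <binaryMessageEncoding />
      <httpTransport />
    </binding>
  </customBinding>
</bindings>
```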
The Silverlight side of things wasn't as obvious to me at first. Looking at Betz's code, find the part with the basic code for calling the service, and replace the binding part so that it looks something like this:
var elements = new List<BindingElement>();
elements.Add(new BinaryMessageEncodingBindingElement());
elements.Add(new HttpTransportBindingElement());
var binding = new CustomBinding(elements);
var endpointAddress = new EndpointAddress("http://localhost:69/Feed.svc");
var channel = new ChannelFactory<IFeedService>(binding, endpointAddress).CreateChannel();
The first four lines replace the BasicHttpBinding. The rest of Betz's code is about the same.
Is this something that really matters, doing this instead of generating a proxy via "Add Service Reference..." in Visual Studio? Personally, I think it's worth it, for delivery size alone. The .xap was 3k larger for a single service reference with one simple method. It just seems to me that for a big application, you'd want to keep it as small as possible, or at least use that 3k on something pretty, you know?
One other thing that I should mention is that I had some difficulty getting the client to talk to the service, but couldn't figure out why. No errors, the messages on the wire were correct, and I was a little lost. It ended up being that I wasn't setting the name and namespace in the DataContract attribute on both ends. It'd be nice if there were something to clue me in that that was the problem.
In any case, I'll buy into the speed and size benefits of using the binary mode for now. It's easy enough to create other end points for other data formats.
I decided to take a break yesterday from my efforts toward a new site to "enjoy" a little science project. The short description is that it's a little Silverlight app that I'd like to run out-of-browser, talking to the server via a WCF service. Before I knew it, I felt like there was XML configuration everywhere.
It could have been worse, because I figured I'd use the Entity Framework. I came to my senses realizing that was overkill, and used LINQ to SQL instead (which, by the way, has its own config quirks when it comes to connection strings). I've been a big fan of what WCF can do, in particular the ability to create various endpoints in different formats, but my use of it has only been in the context of suggesting it to others who have already built stuff with it. Yeah, so I got to learn all about the vast configuration options there.
There are days like this where I feel that XML has been christened the great configurator, able to solve every problem. There are some risks that go along with this, I think, that include a certain brittleness to build processes and more difficult debugging. I mean, exceptions around these things rarely say, "Hey, your config is wrong, stupid!"
And even going beyond my science project, jump in with an IoC framework and find yourself in even more XML configuration hell. (Insert Ninject plug here... awesome stuff.) It gets to be a little out of control.
I wonder how we got to this point. I wonder if ultimate control via XML is actually more trouble than it's worth. My gut is that for 90% of development work done worldwide, it's totally not necessary. That makes me wonder why I'm chasing it around today.
A very long time ago I set up Subversion on one of my servers, and did it the old fashioned way... mucking about with config files and all of that with an instance of Apache. Yuck. I remember it taking a few hours because I hadn't seen Apache since, well, since long before I would've called myself a professional code monkey.
In any case, VisualSVN has since appeared, and I used that with much joy at my last gig because it took virtually no effort to install. The biggest win, however, was the right-clickability to set permissions, which is awesome. So I figured, what the heck, back it up and see what happens. I stopped the Apache service, ran the installer, pointed the repo directory to the one I already had, and like magic, it worked. I did have to remove all users from the root permissions first, and change the IP setting to use just one IP instead of listening on the same port on all IPs, but other than that, it worked. I uninstalled the stand-alone instance of Apache, and now all is well.
+1 to everyone who has worked on that product. I love it when stuff just works.
The Twitterworld (or Twittersphere or whatever silly shit someone made up today) was all abuzz about the release of Silverlight 3 today, and I was shocked at how quickly it made the trends and how overwhelmingly positive the reaction was.
I kind of knew it was coming already for various reasons (spelled NDA), but it makes me happy to see how excited most people are. I think that individually, the new features have been of average importance, but together, this is a really big deal release. My own pet excitement is attached to the out-of-browser feature, and to a lesser degree the H.264 support, but again, it's an overwhelmingly positive reaction.
But of course the haters are, well, haters. There's a lot of noise and hate on Twitter in general, which is why I fire up Tweetdeck once a week and then let it go for a week after that. The militant Adobe and anti-Microsoft camps started their douchebaggery before anyone ever installed the plugin (let alone the dev tools).
What bothers me the most is that it's a sad reflection on our profession. Platform/language zealotry is like a toxic substance that trivializes what we do. I've had a number of gigs where I happily let go of other projects based on Oracle, Java or whatever because they made sense for the situation. My job was to integrate and connect and ultimately make someone more money, and that's what mattered. If the platform met the requirements, I had to roll with it regardless of what I thought of the vendor or people doing the work. Why is that so hard for some people to deal with?
For the record, I'm pretty excited about Silverlight 3, and its new capabilities immediately bring some uses to mind. I recently deployed a small app to one of my sites for limited use, and install stats went from 25% to 35% in a few days. That's encouraging.
I was reading somewhere about some anecdotal evidence that Google doesn't like to index images that don't have some kind of modification time on them. When I relaunched CoasterBuzz last year, I moved all of my coaster pr0n to the database, and I've since noticed that none of the images are in fact indexed. Bummer.
This also pointed out to me that I was doing something annoying: I was reading the data out every time, for every image request. Not exactly the most efficient use of resources. Static files come down with information in the headers indicating when they were last modified (IIS does this, and presumably any other Web server), so the next time the browser makes the request, the server compares the time in the request header with that of the file, and returns a 304 "not modified" response with no file.
That seemed like an obvious thing to do, even if it has no impact on Google indexing. Fortunately, it just required some refactoring of the IHttpHandler I had doing the work.
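The handshake itself is independent of the stack. Here's a quick Python sketch of the logic, purely for illustration (the function and names are mine, not from the actual handler):

```python
from datetime import datetime, timezone
from email.utils import format_datetime, parsedate_to_datetime

def respond(request_headers, last_modified, image_bytes):
    """Serve an image with conditional GET support.

    last_modified must be an aware UTC datetime with sub-second
    precision already stripped, since HTTP dates only carry whole
    seconds. Returns a (status, headers, body) tuple.
    """
    ims = request_headers.get("If-Modified-Since")
    if ims is not None and parsedate_to_datetime(ims) == last_modified:
        # The browser's cached copy is current; send no body.
        return 304, {}, b""
    # First request (or stale copy): send the bytes and stamp the time.
    return 200, {"Last-Modified": format_datetime(last_modified, usegmt=True)}, image_bytes
```

On the first request you get a 200 with a Last-Modified header; echo that header back as If-Modified-Since and you get a 304 with an empty body.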
Sidebar: This is probably the point at which some people will make a big stink about serving images out of a database, and how it's bad for performance or scalability. That's a fine argument to make, but outside of doing obviously stupid things, this is not an issue here. I'd prefer to address performance and scalability problems if I have them, not when I might have them, or never have them. Seriously, this is a site that does somewhere between a half-million and a million page views a month depending on the season. There are no performance issues here.
So anyway, assuming for a moment that "photo" is a business object in this code, and it was determined by a query value to the handler, this is the meaty part of the ProcessRequest() method of the handler:
CultureInfo provider = CultureInfo.InvariantCulture;
if (context.Request.Headers["If-Modified-Since"] != null)
{
    var lastMod = DateTime.ParseExact(context.Request.Headers["If-Modified-Since"], "r", provider);
    if (lastMod == photo.SubmitDate.AddMilliseconds(-photo.SubmitDate.Millisecond))
    {
        context.Response.StatusCode = 304;
        context.Response.StatusDescription = "Not Modified";
        return;
    }
}
byte[] imageData = GetImageData(photo);
context.Response.OutputStream.Write(imageData, 0, imageData.Length);
var adjustedTime = DateTime.SpecifyKind(photo.SubmitDate, DateTimeKind.Utc);
context.Response.Cache.SetCacheability(HttpCacheability.Public);
context.Response.Cache.SetLastModified(adjustedTime);
Yes, it probably needs to be refactored, and yes, it should probably be used in an IHttpAsyncHandler. But let's go through what's happening, starting at the bottom.
The last few lines write out the actual bytes of the image (MIME type was set in previous code), then set the cacheability and the modification time of the image, which in my case is stored with the bits. The goofy part is where we create a new DateTime to make its kind known. If you don't explicitly state that it's a UTC time, the SetLastModified() method apparently adjusts it. I happen to store most times as UTC, so that was one less thing to worry about. This adds a header in the response called Last-Modified, and gives it a value that looks something like "Sun, 22 Jun 2003 16:27:19 GMT" (note that it truncates milliseconds and ticks, as you may expect).
Now, on subsequent requests for the same image, the browser adds an If-Modified-Since header to the request, with the same date and time as the value. Here we're checking to see if the value is present on the request, and if so, let's see if we should do a 304. If it's there, we parse it into a DateTime and compare the time with the one stored in the business object. We're stripping off the milliseconds because the database will fill them in on our DateTime, and the incoming request doesn't have the same high resolution. If we have a match, we send out the 304 and return, not sending any more data or reading the bytes from the database.
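The millisecond business is easy to see outside of .NET, too. A little Python illustration (the timestamp is made up) of why a raw database timestamp will never equal the parsed header value:

```python
from datetime import datetime, timezone
from email.utils import format_datetime, parsedate_to_datetime

# A database timestamp usually carries sub-second precision.
stored = datetime(2003, 6, 22, 16, 27, 19, 123000, tzinfo=timezone.utc)

header = format_datetime(stored, usegmt=True)  # HTTP dates keep whole seconds only
echoed = parsedate_to_datetime(header)         # what If-Modified-Since hands back

assert echoed != stored                         # naive comparison always fails
assert echoed == stored.replace(microsecond=0)  # compare with the fraction stripped
```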
You can do this pretty easily in ASP.NET MVC as well.
public ActionResult Image(int id)
{
    var image = _imageRepository.Get(id);
    if (image == null)
        throw new HttpException(404, "Image not found");
    CultureInfo provider = CultureInfo.InvariantCulture;
    if (Request.Headers["If-Modified-Since"] != null)
    {
        var lastMod = DateTime.ParseExact(Request.Headers["If-Modified-Since"], "r", provider).ToLocalTime();
        if (lastMod == image.TimeStamp.AddMilliseconds(-image.TimeStamp.Millisecond))
        {
            Response.StatusCode = 304;
            Response.StatusDescription = "Not Modified";
            return Content("");
        }
    }
    var stream = new MemoryStream(image.GetImage());
    return File(stream, image.MimeType);
}
Let me start by saying that this was something I just prototyped. It's in dire need of refactoring, as much of the logic isn't stuff you'd normally put in a controller action. I think there's a method for returning nothing on the Controller base, but I don't remember off the top of my head. If there is, you'd use that instead of Content() in the 304 case.
I hope this helps someone out!