Jeff and .NET

The .NET musings of Jeff Putz





Reading from a queue in an Azure WebJob

A few months ago, Microsoft introduced something called a WebJob in Azure. It's essentially a "thing" that can run as a background task to do "stuff." The reason this is cool has a lot to do with the way you would do this sort of thing in the pre-cloud days.

Handling some background task in the old days usually meant writing a Windows Service. It was this thing that you had to install, and it was kind of a pain. The scope of background tasks is pretty broad, ranging from image or queue processing to regularly doing something on a schedule to whatever. For those of us who have focused on the Web and services, they're definitely a weird thing to think about.

Azure made this more interesting with worker roles (or cloud services, which also include web roles), which are essentially virtual machines dedicated to a single task. Those are pretty cool, but of course the cost involves spinning up an entire VM, starting at $14 per month, per instance, right now. Meanwhile, it's not like your Azure Websites are running at full utilization, so it makes sense to use that resource since you're already paying for it.

That's where WebJobs are awesome, because they run on the VM that's already running your sites. If you have something to do that isn't going to overwork that VM, a WebJob is perfect. They run pretty much any flavor of code you can think of, but for the purpose of this post, I'm thinking C#. For added flavor, you can bind these jobs to the various forms of Azure storage, and do it without having to wire stuff up. See Scott Hanselman's intro for more info.

I just happen to have a use case where this totally makes sense. I have a project where I'm using Lucene.Net, a port of the Java text search engine, to search tags and titles for various pieces of content. I'm also using the AzureDirectory Library with it, which allows me to use blob storage for the index. Updating the index happens when a user creates or edits content. Infrequent as that might be, it is time consuming, and it's a crappy user experience to make them wait. The solution, then, is to queue a message that says, "Hey, this content is updated, so update the index, please." Firing off a message to the queue is super fast, and the user is happy.

This is a pretty common pattern when you have to break stuff up into components, and a little latency is OK. In this case, it's not a big deal if the search index isn't updated instantly. If it doesn't happen even for a few minutes, that's probably good enough (even though it likely happens within a second or two).
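To make the "fire and forget" step concrete, here's a hedged sketch of what goes on the wire. The message type name comes from the WebJob code later in this post, its shape is my assumption, and I'm using System.Text.Json purely for illustration; the actual send is then a couple of lines with the storage client (queue.AddMessage(new CloudQueueMessage(json))).

```csharp
using System.Text.Json;

// Shape assumed from the WebJob handler's usage of ProjectID and
// ProjectSearchFunction; the real class may carry more than this.
public class ProjectSearchQueueMessage
{
	public int ProjectID { get; set; }
	public string ProjectSearchFunction { get; set; }
}

public static class SearchQueue
{
	// Serialize the tiny message that gets dropped on the queue. This is the
	// fast part the user waits for; the slow indexing happens later, in the job.
	public static string ToQueuePayload(ProjectSearchQueueMessage message)
	{
		return JsonSerializer.Serialize(message);
	}
}
```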

As with the other examples out there, the code to set up the WebJob as a C# console app is really straightforward. In my case, I have some extra stuff in there to take care of the StructureMap plumbing, resolving dependencies between different assemblies and such.

internal class Program
{
	private static void Main(string[] args)
	{
		ObjectFactory.Initialize(x =>
			x.Scan(scan =>
			{
				// scan assemblies and apply default conventions
				scan.TheCallingAssembly();
				scan.WithDefaultConventions();
			}));
		var host = new JobHost();
		host.RunAndBlock();
	}

	public static void ProcessProjectSearchQueue([QueueInput("searchindexqueue")] ProjectSearchQueueMessage message)
	{
		var indexer = ObjectFactory.GetInstance<IProjectSearchIndexer>();
		indexer.Processor(message.ProjectID, message.ProjectSearchFunction);
	}
}

The ObjectFactory stuff is the StructureMap container setup, and right after that is the WebJob magic from the SDK. I’m pretty sure what those two lines are doing is saying, “Hey Azure, you’ve gotta run this stuff, so just hang out and don’t let the app close.”

The ProcessProjectSearchQueue method is where the magic wireup to Azure storage takes place. The QueueInput attribute tells the SDK which queue to monitor, in this case "searchindexqueue." As the other articles you can Google on Bing explain, the storage account connection strings go into the Azure administration portal. When a message hits that queue, this function reads it and acts on it. It's like magic.

As of the time of this writing, WebJobs are in preview, so the documentation is a little thin. On the other hand, the product itself is really robust at this point. The monitoring stuff and ability to get a stack trace when something is broken is really awesome.

Here are the bumps I hit in implementing this:

  • My calling code has to talk to SQL via Entity Framework. The app.config for my WebJob did not have the EF configuration section that specifies to use System.Data.SqlClient, so it choked until I had that in place.
  • At first I had my StructureMap initialization after the RunAndBlock call, which was pretty silly because that method is pretty descriptive about what it’s doing.
  • I went down an ugly dependency hole of despair at first, where the WebJob required a ton of assemblies from a core library. In this case, I just needed to pull out the SQL data access to its own project in my solution. DI containers like StructureMap help with this (duh).
  • The deployment is a little ugly because there’s no tooling for it, but it’s still just a matter of zipping up the build and uploading via the Azure portal.
  • You can’t run it locally. I hope they’re going to figure out a way to simulate this, because having to test with real Azure can be a little awkward when you need to share your code (and connection strings) with other developers. To compensate, I took the two lines in the above method and put them in an MVC action to call at will by viewing the action in a browser.
  • If your code fails, the queue message is gone forever. I haven't used Azure queues in a while, but I do recall the mechanism that restores a message in the event you can't process it. Normally you would have some retry logic, so I'm not sure what to do here.
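On that last point, the behavior I remember from raw storage queues is the visibility timeout: a message you read but don't delete reappears, and its DequeueCount grows, which lets you park repeat offenders in a poison queue. Here's an in-memory simulation of that pattern (my own sketch of the idea, not the WebJobs SDK, which didn't seem to surface any of this in the preview):

```csharp
using System;
using System.Collections.Generic;

// In-memory stand-in for the visibility-timeout behavior of Azure queues:
// a message picked up but not deleted comes back, with a growing dequeue count.
public class FakeQueue
{
	private readonly Queue<(string Body, int DequeueCount)> _messages = new Queue<(string, int)>();
	public List<string> Poison { get; } = new List<string>();

	public void Enqueue(string body) => _messages.Enqueue((body, 0));

	// Process one message; on failure, make it visible again, or after five
	// failed attempts, park it in the poison list for manual inspection.
	public void ProcessOne(Func<string, bool> handler)
	{
		if (_messages.Count == 0) return;
		var (body, count) = _messages.Dequeue();
		count++;
		if (handler(body)) return;              // success: message is deleted
		if (count >= 5) Poison.Add(body);       // give up: poison it
		else _messages.Enqueue((body, count));  // failure: it reappears
	}
}
```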

This is a really exciting piece of technology, and I’m planning to use it next to pull out the background stuff in POP Forums, which currently runs on Timers out of an HttpModule. Ditching that ugly hack after more than a decade means finally getting the app to a multi-instance place. That makes me very happy.

5 Ways to surround yourself with awesome

[Repost from my personal blog.]

I'm surprised at how often it comes up, the issue of creating a "culture of awesome" where you work. And honestly, as much as I think of it in terms of the software profession, the path to getting there likely applies to most any business. I've been fortunate to be a part of some awesome teams, and they don't happen by accident. Some would argue you can't create it, but I disagree.

1. Lead by listening

Yes, you probably reach a certain career band because you're good at managing people, process or product. That's awesome, and that's why you get paid the big bucks. Your wisdom is what makes you stand out. Still, my hope is that the wisdom comes with the acknowledgment that you do not, in fact, have all of the answers. It took me a while to realize this.

A lot of things motivate us to not listen to people (which, when you think about it, doesn't inspire a lot of confidence in our own staffing decisions). We often want to demonstrate control, we fear failure, or we just don't trust other people. But if we insist on having a hierarchy, then the one thing we can be sure of is that we don't have all of the answers, because there aren't enough hours in the day to take in all of the input. A lot of the time, others know better than we do. Strong leadership involves listening to, and acting on, the information and opinions of others.

2. Challenge, and be challenged

There is some feeling in our line of work that toxic conflict helps us arrive at a better place. I don't agree with this at all. That doesn't mean that there isn't a whole lot of room for people to challenge each other. This has the nice side effect that egos are kept in check.

Leaders create a framework that fosters these challenges. You have to make sure that people are free to express their concerns and different views without the threat of negative repercussions. It doesn't matter if someone is new or bordering on retirement. Good ideas come from all over. The worst thing that can happen is that someone takes a ridiculous position, and learns why it's ridiculous. The best thing that can happen is you save time and money, and deliver something better.

3. Remember that you get what you pay for

While some organizations will scrutinize the strangest little expenses, many never think much about the money they spend on humans. This is a catastrophic error. In software circles, many believe that people are interchangeable, ignoring the very wide range of capability and domain knowledge that travels with the person. As it turns out, people like to be recognized for their ability to do good work, but they also want to be paid for that work.

Know your market. In most places in the US right now, software is a seller's market. The good people will chase the money, not because they're greedy, but because the basics of supply and demand drive them there. Hiring "C" players won't get you the results that "A" players will.

4. Never be afraid to experiment and make radical changes

"We've always done it this way" is the innovation equivalent of Ebola. It attacks quickly and turns you into a pile of goo in short order. It's not uncommon to be in a situation where everyone knows something isn't working, but no one does anything to change the outcome. What's that cliché about doing the same thing over and over and expecting a different result?

Sometimes things aren't working, and you need to make a serious change to right the ship. It might mean tossing aside a process, reorganizing people, a totally new approach, and sometimes even letting go of people. Aside from the letting go of people part, there's a good chance that the big changes you need to make aren't that risky if you're already in the midst of a tragedy. The worst thing that can happen is you continue to be tragic, but more than likely, you'll end up a little more awesome.

5. Find the awesomesauce that's already there

I've been fortunate (or not, if you consider the layoffs and flameouts I've been in) to have worked in a really diverse set of companies, large and small. There isn't one that I can name that didn't have little hints and glimmers of greatness just begging to get out.

We often focus a great deal on roles and responsibilities, and try to put everyone in a neat little box with a clear and well-defined label. This breeds a lot of "not my job" precedent, and it hides the fact that some people are really good at a lot of different things. Enable those people. Not everyone is a born leader, but almost everyone does have something meaningful to contribute. Find it! Figure out how to make it fit into what you need.

The problem and opportunity with accountants' view on software people

[Note: This is a repost from my personal blog.]

I was having lunch with a colleague the other day, talking about the strange state of affairs that is software developer hiring. There's a great desire at the executive and accounting level to hire people on a contract basis, with the perceived benefit that you can scale the work pool up and down, and that doing so saves money because you don't have to pay out benefits. In practice, those of us in the profession know that you never really scale down, and because demand exceeds supply, the prices you pay for contractors are exorbitantly high.

But my colleague made a more important point about their perception. They believe that developers are simply interchangeable. This is probably the most harmful misconception. As anyone who does the work knows, not all developers are created equal. If the vast differences in experience and skill weren't enough, there's also the issue that it takes a considerable amount of time to ramp up in any situation. Depending on the complexity of the systems, it can often take a good two months before someone is really proficient working in the code. If you're talking about a six-month contract, you're looking at paying higher rates and getting perhaps as little as two-thirds the productive time. Oh, and all that domain knowledge gained? It leaves at the end of the contract. Rinse, lather, repeat. It's so inefficient.

Still, as strange as this is, it does represent a different kind of opportunity. There is already a trend where companies are shifting their dollars to services, instead of building and hosting their own stuff. Sure, there are still a lot of irrational fears and kingdom building that happens among people who don't want to give up some sense of control, but it's happening anyway. The big use cases are already going that way, especially e-mail. Heck, Salesforce has been around for almost a decade and a half, back when everyone was like, "You can't always be connected to the Internet, it will never work."

What's interesting is that there's a long tail of niche markets that can be filled as well, and I'm interested to see where that goes. There's a fairly large opportunity there for folks already working on a contractual basis. Maybe you can "productize" the niche, or maybe you can simply build a reputation as an authority in the niche.

Granted, there is some risk associated with this. I'm not a fan of outside investment, and think that bootstrapping your business is a better way to go. It's easy for me to take this position after falling into some relatively minor income 15 years ago with my hobby sites, not having to put in a ton of hours to maintain those things. But it's quite another thing to really build something up, perhaps employ others, etc. It's also hard to work a day job and build a business, and as I've famously come to understand, devoting your free time to something like a business at the expense of your family can be a non-choice.

This is partly me trying to look at the bright side of a profession that has gone totally away from the century-old model of "climbing the ranks" in other industries. I explain to my family that the average longevity even for full-time software people is likely 18 months, and they can't believe it. Companies do not invest in us, and see us as interchangeable. What's crazy is that it's a commodity view with premium pricing. It's probably a contributing reason to the lack of quality people to do the work too, since you can't stick around in any one place to be mentored (glad I've had a series of really solid opportunities for that).

POP Forums v12 posted, for MVC 5

POP Forums v12 is now available on CodePlex for download.

This is a significant upgrade that includes updates to various packages and use of .NET 4.5.1, as well as MVC 5.


This release has data changes. Run the PopForums9.2andLaterto12.0.sql script, found in the PopForums.Data.SqlSingleWebServer project, against your database.

What's new?

Known Issues

None at this time.

Performant: Stop making up words

People need to stop saying "performant." If it's a word at all, it's a noun synonymous with "performer," like an actor. Even if the construct were a real word, it wouldn't indicate whether the performance is a negative or positive thing. Something can perform well or poorly... this word would only describe that it performs.

Technologists are silly. :)

Decoupling OWIN external authentication from ASP.NET Identity

One of the nicest features of the forthcoming release for the ASP.NET Web stack is the inclusion of bits around external authentication. It makes it stupid easy to add login capability through Google, Facebook and such. Coupled to this in the default project templates is a tie to the new ASP.NET Identity, which is a replacement for the old (and frankly crappy) Membership API. This new thing uses Entity Framework and is extensible and neat-o.

However, if you're like me, you probably have plenty of projects that already have their own login and user management schemes, so retrofitting Identity into your app is not something you really want to do. This is also true for POP Forums, the open source forum app that I use at the core of my own communities. The real magic happens through OWIN middleware, and with help from a great StackOverflow answer, I figured out how to avoid creating any dependencies on Microsoft.AspNet.Identity.

(Note: I'm able to get the stuff I checked in on CodePlex to work on one machine, but not on another machine. The AuthenticationResult coming back from the AuthenticationManager is null on the non-working box. Haven't figured that out yet. If anything below is suspect, please let me know.)

First, you might need some context. What is OWIN? It stands for "Open Web Interface for .NET," and is essentially an abstraction of stuff that happens between your app and an HTTP server. There's a great read if you want to get into the weeds, but let me distill it down to something smaller. OWIN is a spec that allows you to run .NET-based Web apps without depending specifically on IIS. It lets you get deep into the raw request/response lifecycle. Middleware components are added to a collection, and they act on the requests to the Web server (IIS, self-hosted, whatever) and give something back.
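To make that distillation concrete, here's a toy model of the idea (my own sketch, not OWIN's actual signatures, which are built on Func<IDictionary<string, object>, Task>): each middleware is just a function that wraps the next one, and the pipeline is composition.

```csharp
using System;

// Toy model of an OWIN-style pipeline: a middleware takes "the next app"
// and returns a new app that can act before and after calling it.
// Strings stand in for the request/response environment.
public static class Pipeline
{
	public static Func<string, string> Build(params Func<Func<string, string>, Func<string, string>>[] middleware)
	{
		Func<string, string> app = request => "404"; // terminal app: nothing handled it
		// Wrap from the end of the list backward, so the first registered
		// middleware ends up outermost, seeing the request first.
		for (var i = middleware.Length - 1; i >= 0; i--)
			app = middleware[i](app);
		return app;
	}
}
```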

The external auth stuff lives as a bunch of these OWIN components. If you look at the magic created for you in a new project in Visual Studio 2013, you'll see a class file under /App_Start called Startup.Auth.cs. It's actually a partial class, tied to one in Startup.cs in the root. That class fires off the registration of components. You'll see a series of commented out extension methods that register the various types of external auth.

The general workflow goes like this: Display external auth buttons on the login page, submit those to an MVC method that handles the forwarding to the appropriate auth provider, then take the result that comes back and either log in the user or save the auth provider data (the issuer and provider key) into the ASP.NET Identity data store.

My goal was to get the result of the provider and handle the persistence and association with forum accounts myself. My implementation might have something that isn't correct, so feel free to let me know if something ain't right. First I registered my own OWIN startup class. As best I can tell, you can have one of these per assembly. In this case, I'm using the settings already found in the forums to make decisions about what to enable and what values to use:

using System;
using Microsoft.Owin;
using Microsoft.Owin.Security;
using Microsoft.Owin.Security.Cookies;
using Ninject;
using Owin;
using PopForums.Configuration;
using PopForums.ExternalLogin;
using PopForums.Web;

[assembly: OwinStartup(typeof (PopForumsOwinStartup))]

namespace PopForums.Configuration
{
	public class PopForumsOwinStartup
	{
		public void Configuration(IAppBuilder app)
		{
			var settings = PopForumsActivation.Kernel.Get<ISettingsManager>().Current;

			app.UseCookieAuthentication(new CookieAuthenticationOptions
			{
				AuthenticationType = ExternalAuthentication.ExternalCookieName,
				AuthenticationMode = AuthenticationMode.Passive,
				CookieName = CookieAuthenticationDefaults.CookiePrefix + ExternalAuthentication.ExternalCookieName,
				ExpireTimeSpan = TimeSpan.FromMinutes(5)
			});

			if (settings.UseTwitterLogin)
				app.UseTwitterAuthentication(
					consumerKey: settings.TwitterConsumerKey,
					consumerSecret: settings.TwitterConsumerSecret);

			if (settings.UseMicrosoftLogin)
				app.UseMicrosoftAccountAuthentication(
					clientId: settings.MicrosoftClientID,
					clientSecret: settings.MicrosoftClientSecret);

			if (settings.UseFacebookLogin)
				app.UseFacebookAuthentication(
					appId: settings.FacebookAppID,
					appSecret: settings.FacebookAppSecret);

			if (settings.UseGoogleLogin)
				app.UseGoogleAuthentication();
		}
	}
}

The most important thing here is that the order of the cookie setup matters. After that, I register the various providers.

Moving on to the login page, my revised controller action looks like this:

public ViewResult Login()
{
	// not relevant stuff

	var externalLoginList = new List<AuthenticationDescription>(HttpContext.GetOwinContext().Authentication.GetAuthenticationTypes(d =>
		{
			if (d.Properties != null)
				return d.Properties.ContainsKey("Caption");
			return false;
		}));

	return View(externalLoginList);
}

I'm not positive, but I think I actually took this code from an extension method in Microsoft.AspNet.Identity, which seemed like a weird place for it. Since I don't want dependencies on that, I did a copy-paste. It gets the collection of registered auth providers so you can enumerate through them and make buttons. Make buttons I did, again borrowing from the default templates in the tooling.

@using (Html.BeginForm("ExternalLogin", "Account", new { ReturnUrl = ViewBag.Referrer }))
{
	<fieldset id="socialLoginList">
		<legend>Use another service to log in.</legend>
		@foreach (AuthenticationDescription p in Model)
		{
			<button type="submit" class="btn" id="@p.AuthenticationType" name="provider" value="@p.AuthenticationType" title="Log in using your @p.Caption account">@p.AuthenticationType</button>
		}
	</fieldset>
}

The controller actions are where I started to diverge. Again, this involves some copy-paste from the templates, starting with the ChallengeResult used in the action methods. I won't bore you with those details, though it doesn't hurt to learn more about them, even if you're using the templates in VS2013 with the Identity stuff.

public class ChallengeResult : HttpUnauthorizedResult
{
	public ChallengeResult(string provider, string redirectUrl)
	{
		LoginProvider = provider;
		RedirectUrl = redirectUrl;
	}

	public string LoginProvider { get; set; }
	public string RedirectUrl { get; set; }

	public override void ExecuteResult(ControllerContext context)
	{
		context.HttpContext.GetOwinContext().Authentication.Challenge(new AuthenticationProperties { RedirectUrl = RedirectUrl }, LoginProvider);
	}
}

// from the AccountController:

public ActionResult ExternalLogin(string provider, string returnUrl)
{
	return new ChallengeResult(provider, Url.Action("ExternalLoginCallback", "Account", new { loginProvider = provider, ReturnUrl = returnUrl }));
}

public async Task<ActionResult> ExternalLoginCallback(string loginProvider, string returnUrl)
{
	var authentication = OwinContext.Authentication;
	var authResult = await ExternalAuthentication.GetAuthenticationResult(authentication);
	var matchResult = UserAssociationManager.ExternalUserAssociationCheck(authResult);
	if (matchResult.Successful)
	{
		UserService.Login(matchResult.User, HttpContext);
		return Redirect(returnUrl);
	}

	// TODO: offer standard login to associate, or go to create

	return RedirectToAction("Create");
}

There's a point of magic to point out here. The OwinContext mentioned above is an IOwinContext property of the controller, and it's injected in via the constructor. I'm using Ninject (for now... I'm really considering a switch to StructureMap), so my mapping looks like this:

Bind<IOwinContext>().ToMethod(x => HttpContext.Current.GetOwinContext());

Extension methods are cool, but they make unit testing kind of a pain, so this helps reduce some of that pain.

The callback sends the Authentication property of the IOwinContext to fetch the auth result, and that's passed to an association manager, which essentially checks to see if the Issuer and ProviderKey coming back match any records in the association table that stores those values with user IDs. If yes, it does a login, and if not, it sends you off to the page to create an account. At some point I'll get something in there to offer the user a chance to associate the external login with the forum's native login.
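Since the association manager is the piece you'd build in place of Identity's store, here's a hypothetical sketch of its core check. The class and method names mirror the ones in my controller code, but the simplified signatures and the in-memory dictionary (standing in for the association table) are my own invention for illustration.

```csharp
using System.Collections.Generic;

// Simplified result type; the real one carries a full User object.
public class ExternalUserAssociationMatchResult
{
	public bool Successful { get; set; }
	public string User { get; set; }
}

public class UserAssociationManager
{
	// (Issuer, ProviderKey) -> user; a dictionary stands in for the SQL table.
	private readonly Dictionary<(string Issuer, string ProviderKey), string> _associations =
		new Dictionary<(string, string), string>();

	public void Associate(string issuer, string providerKey, string user)
		=> _associations[(issuer, providerKey)] = user;

	// The check the login callback relies on: do we already know this
	// external identity? If so, the caller can log that user in directly.
	public ExternalUserAssociationMatchResult ExternalUserAssociationCheck(string issuer, string providerKey)
	{
		if (_associations.TryGetValue((issuer, providerKey), out var user))
			return new ExternalUserAssociationMatchResult { Successful = true, User = user };
		return new ExternalUserAssociationMatchResult { Successful = false };
	}
}
```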

If the rest of this was TL;DR, here's the important part. This is where you can see what the external provider has, so you can associate that stuff with whatever your user management system uses.

using System;
using System.Security.Claims;
using System.Threading.Tasks;
using Microsoft.Owin.Security;
using PopForums.Extensions;

namespace PopForums.ExternalLogin
{
	public class ExternalAuthentication : IExternalAuthentication
	{
		public async Task<ExternalAuthenticationResult> GetAuthenticationResult(IAuthenticationManager authenticationManager)
		{
			var authResult = await authenticationManager.AuthenticateAsync(ExternalCookieName);
			if (authResult == null || !authResult.Identity.IsAuthenticated)
				return null;
			var externalIdentity = authResult.Identity;
			var providerKeyClaim = externalIdentity.FindFirst(ClaimTypes.NameIdentifier);
			var issuer = providerKeyClaim.Issuer;
			var providerKey = providerKeyClaim.Value;
			var name = externalIdentity.FindFirstValue(ClaimTypes.Name);
			var email = externalIdentity.FindFirstValue(ClaimTypes.Email);
			if (String.IsNullOrEmpty(issuer))
				throw new NullReferenceException("The identity claims contain no issuer.");
			if (String.IsNullOrEmpty(providerKey))
				throw new NullReferenceException("The identity claims contain no provider key.");
			var result = new ExternalAuthenticationResult
			{
				Issuer = issuer,
				ProviderKey = providerKey,
				Name = name,
				Email = email
			};
			return result;
		}

		public const string ExternalCookieName = "External";
	}
}

The first line of the method is really where we get the data we're looking for. The magic in the OWIN middleware uses its context to figure out what the heck you just scored in terms of claims and credentials from the external provider. From there you can see how I'm grabbing the Issuer and ProviderKey, the two things you'll want to compare against when doing your own, non-Identity user management.
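You can see that claims inspection in isolation with a hand-built ClaimsIdentity standing in for what the middleware hands back (the "Facebook" issuer and the key are made-up sample values):

```csharp
using System.Security.Claims;

public static class ClaimsDemo
{
	// A hand-built identity in place of what the external provider returns.
	// Note the issuer is carried on the NameIdentifier claim itself.
	public static ClaimsIdentity SampleIdentity() =>
		new ClaimsIdentity(new[]
		{
			new Claim(ClaimTypes.NameIdentifier, "1234567890", ClaimValueTypes.String, "Facebook"),
			new Claim(ClaimTypes.Name, "Jeff"),
			new Claim(ClaimTypes.Email, "jeff@example.com")
		}, "External");

	// The same inspection GetAuthenticationResult does: the NameIdentifier
	// claim yields both the provider key (Value) and the Issuer.
	public static (string Issuer, string ProviderKey) GetProviderInfo(ClaimsIdentity identity)
	{
		var claim = identity.FindFirst(ClaimTypes.NameIdentifier);
		return (claim.Issuer, claim.Value);
	}
}
```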

Five things to remember when trying to change a company

One of the recurring things I've seen at companies large and small is that they often have really great people who don't necessarily have the breadth of experience to push processes in the "right" direction. It's happening a lot less in software circles than it used to, in part because people move around so much, building up a big box of best (or better) practices along the way. Still, some people will only have experience moving between suboptimal environments, some will have long-term engagements that simply don't expose them to new things, and others will be the kind of stakeholders who by default aren't exposed to alternatives (specifically, small company owners).

Let me make it clear that I'm not dogging these people at all. As I've said before, questioning everything in order to innovate is exhausting. Furthermore, we're all a product of our experience. Frankly we're often too busy executing what we do to stop and take a look around and see if we can do things in a better way.

This means that you might come into a situation where you can clearly see how changing some things around would benefit the company and everyone involved. The hard part isn't a technical problem, it's a people problem. You have to deal with a mix of personalities, and people are naturally averse to change to varying degrees. Here are some tips you can follow to make those changes start to roll.

1. It's not about you.

I think your gut reaction is to throw your hands in the air and say, "You're all wrong, do it my way!" I'm guilty of that. It's a career limiting move, certainly, but I don't have to explain why it isn't effective. Your motivation for changing things can make for a great resume bullet point down the road, but ultimately, to keep the cause focused, you need to make your desired outcome all about the team or company. Remove your own ego from the equation.

2. Campaign, don't order.

If you were running for office, you wouldn't put all of the voters in a room and tell them to vote for you because you've got it right, and your opponent has it wrong. That wouldn't work. When you're trying to change processes, the more radical the change, the less likely you can get people in a room and get them to agree with you. I've made that mistake, too. It doesn't matter how logical you are, or the data you have to back up your position. The crowd will beat you down.

Instead, once you have some understanding about the people involved, you can go to them individually, and explain to them why you want to make the change. Get them to understand why it's good for them, and it's good for the team. Repeat that process, and you'll start to get momentum and consensus.

3. Don't try to be the life of the party.

When you enter a new social situation, or start a new job, you don't show up and try to be a big hit with everyone. Even if you have decision making authority (or pour the drinks), you quietly hang out in a corner and observe what's going on. The same thing is true when you're trying to change how things operate. You need to build rapport, understand who the players are, see the vision of the business.

4. Consider the context.

Early in my volleyball coaching career, I came up with a system derived from a number of things I had seen at different levels of the game, packaged it up, and decided to go all-in with a team to try it. The results were astonishing, and I felt like I cracked the code.

The next year, I tried to do the same thing, and it fell flat. Once I stopped and looked at what was failing, I realized that everything I knew was still mostly correct, but I had to make some changes to apply it to this different set of girls. You have to make the same changes when you're trying to apply a process you know to be better in a new environment. Tailor the process for the people and context.

5. Incremental change is easier than the big bang

This one is the hardest. We're anxious to institute massive changes as fast as possible because we want the better outcome as fast as possible. Make no mistake, people are going to get in your way, and fight you at every turn if you try to change everything all at once.

It's easier to break things down into smaller chunks and attack them separately. If you're really slick, you might even be able to do a number of these chunks concurrently, with different people involved. That's when you're a process changing ninja. They don't even know you're there.

Today's ideal developer

Scott Hanselman wrote a short blog post about developers vs. Googlers, but I really love where that inspired Rick Strahl to go. Having been in a lot of positions to hire people over the years, I think Rick really goes in depth with regard to skills, career development and the market reality of what we need out of people.

I have a lot of respect for people who really endeavor to go deep into computer science. If I were to pick heroes, few people could blow my mind like people I met in Microsoft Research. Scary smart people thinking about things I would never think about. That said, you don't need people of that caliber to do the bulk of production coding. The requirements are completely different.

The biggest things that have changed since I shifted to this line of work are the availability of open source projects and information online. Back in the day, I got better by way of books and experimentation, and hopefully worked with people better than me. Today, you still need the better people, but the Internet quickly shows you that most problems have already been solved.

So in this environment, the thing I value the most is the ability to skillfully assemble solutions. Being an algorithmic genius is not essential, but the ability to loosely couple different components, whether from open source or their own code, is where it's at. It's kind of like being a plumber. You don't need to know all of the specifics about how the hot water heater works, but you do have to understand it well enough to make it work in the system.

Let's be honest here. If you've ever switched jobs, probably the first thing you noticed about the code base was that things were tightly coupled, and therefore hard to change and maintain, and hard to test. I've seen it a hundred times. It's really a drag. There are a lot of things that go into measuring code quality, but this keeps coming back to me as one of the most important.

What do I look for when hiring a developer? I want them to demonstrate reasonable understanding of the frameworks and environment they're working in. Beyond that, I want to see how they structure an application. Do they mix different concerns in the same code? Can I easily unit test what they write?

The soft skills are important too. I want to see that they fundamentally understand that they might be coding for fun, but that there is a business there with a lot of stakeholders. I want them to experiment and try new things, but not go off and do something that hasn't been socialized, prioritized and added to the current iteration of work.

Things have really changed, but I think they've changed for the better. I think about how my forum app, which has been around in some form since 1999, was a lot harder to build back then. I had to write my own rich text editor (which only worked in IE), and scripts were hard to encapsulate and improve. Now the same app has open source components for rich text, dependency injection and testing. It's even available in five languages. There's no question that we live in a better world.

The profession is evolving, and I think it's for the better. We'll always need people who can write device drivers and 3D game engines, but for most of us, there's a great opportunity to learn to quickly build quality applications that are easy to maintain.

When the Google beats on your SignalR

Around the end of April, I put v11 of POP Forums into production on CoasterBuzz. Probably the biggest feature of that release was all of the new real-time stuff in the forum, with new posts appearing before your eyes and in the topic lists and such. This was all enabled in part by SignalR, the framework that allows for bidirectional communication between the browser and the server over an open connection (or simulated open connection, depending on the browser).

It didn't take long before I noticed some odd exceptions being thrown in the error logs around the SignalR endpoints. They were all coming from Googlebot, which apparently crawls JavaScript and looks for URLs to content that's ordinarily loaded dynamically into your site's pages. Yay for Google trying to find content on your site, but in this case, there are two big problems.

The first problem is that Googlebot appears to be somewhat stupid. While it identifies the endpoint, the actual URL that SignalR uses, it seems to have no regard for what data has to be posted to it. That's where the exceptions come from, because SignalR doesn't understand the request.

The second problem is that Googlebot understandably expects to get a response and move along. But SignalR likes to keep an open connection so that the client and server bits can talk to each other. That's kind of the whole point. I didn't catch this issue until I used Google Webmaster Tools to see what my load times were looking like. You can very plainly see where I started using SignalR, and when I fixed the problem. Google was hanging on for as long as two seconds.


The reason I looked is because Google was being relentless at one point, banging on the thing hard enough to generate hundreds of exceptions every hour. The fix was easy enough: just put a few lines in your robots.txt file that tell the Google to back off:

User-agent: *
Disallow: /signalr/
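Disallow rules are simple prefix matches against the request path, so that one line covers every SignalR endpoint under /signalr/, including the negotiate and connect URLs. A quick C# sketch (an invented helper, purely to illustrate the matching behavior) makes that clear:

```csharp
using System;
using System.Linq;

public static class RobotsCheck
{
    // A robots.txt Disallow rule matches any path that starts with it.
    public static bool IsDisallowed(string path, string[] disallowRules) =>
        disallowRules.Any(rule => path.StartsWith(rule, StringComparison.Ordinal));

    public static void Main()
    {
        var rules = new[] { "/signalr/" };
        Console.WriteLine(IsDisallowed("/signalr/negotiate", rules)); // True
        Console.WriteLine(IsDisallowed("/forums/topic/123", rules));  // False
    }
}
```

Keep in mind robots.txt is only honored by well-behaved crawlers; it's a request, not an enforcement mechanism.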

The more you know, the smarter you grow.

Everyone else is doing it (incorrectly)!

[This is actually a repost from my personal blog, but I think the technical audience might “dig” it as well.]

Innovation is hard. You can definitely foster it, but you can't really force it. It's completely fascinating when people innovate in a massively disruptive way. While you can't make innovation happen, it's something I strive for. There are certain ways that I've had a great deal of success innovating, and others where I haven't. Professionally, it's easy to get into the rut of doing things a certain way, because everyone else does it that way. The first step to doing it in a better way often requires questioning the establishment. While my inner rebel is all about that, it's also an exhausting practice.

Coaching volleyball is one of those scenarios where the questioning comes easy. For example, before a match, you're given several minutes of court time to warm up (the actual time depends on the governing organization). Ever since I was in high school, that time has been used by coaches to send perfectly tossed balls into the air for hitting, while your one or two short defensive specialists try to dig those hit balls. This results in a lot of "whoo-hoo's" and pleasure on the part of your athletes, but I wasn't sure if it was constructive.

Attacking the ball is always step three in volleyball. Someone has to expertly pass the ball first, then someone has to set it for the hitter. Without those two things, there is no hitting. So after a season or two, I thought, why am I wasting time on this, especially when my kids can't pass to save their lives? So despite the protests of the kids (and parents, who always have the answers), I ditched the hitting lines. I put six kids on the other side of the net, and tossed balls in for them to pass, set and hit. I rotated them around. This exercised all of the skills necessary to score, including the ever-important transition on and off the net. It was real, core to the game, and made a huge difference. It also happened to be noisy and menacing in appearance, which freaked out the other team, so that was a plus.

I tossed out what everyone else was doing, and tried something that seemed to better serve the scenario. I try to do this with all things in life. And yes, it can be exhausting questioning everything, especially if you end up where you started, and "everyone" had it right.

It's a lot harder to innovate your way out of the norm in my line of work. In terms of the actual computer science, sure, there are a lot of things that have been thought to death, and they're good ideas. It tends to be the process and the associated people issues that are harder to change. There is an important parallel, though, to the volleyball warm-up. It turns out that process is almost always fraught with wasted time for things that don't matter, that don't get to something real and valuable. Even in celebrated (capital "A") Agile practices, teams have a hard time identifying the things they do that aren't adding value, let alone innovating.

Innovation isn't easy, but you can get practice at it. It starts when you stop accepting sheep behavior and ask if there's a better way.
