Jeff and .NET

The .NET musings of Jeff Putz

Lessons from live blogging with Azure (nothing bad happened)

I wrote previously about how I built a "live blog" app in Azure, so we could use it for PointBuzz during last week's festivities at Cedar Point. Not surprisingly, it worked just fine. As I expected, the whole thing was kind of overkill. Sweet, over-provisioned overkill.

The traffic loads we encountered were not a big deal. At one point, we were close to 300 simultaneous connections. We didn't really need Azure to handle that, but the truth is that I wasn't entirely sure what to expect in terms of SignalR and its effect on the servers. What better reason to spin up virtual machines in Azure? I still think that's the biggest magic about cloud services, that you can turn on stuff just when you need it, and pay just for that. It sure beats having to buy or rent a rack of equipment.

The performance was stellar. Average CPU usage never went over 1.5%. I ran two instances of the MVC app as a Web role. I chose this over straight Web sites because of the distributed cache that comes free with it. Of course I didn't really need it, but why not? I didn't do any page rendering timing at the server, because grabbing a few objects out of the cache and making them into a simple page was probably stupid fast, but testing from the park's WiFi, the time from request to last byte (not counting images) was generally under 200 ms. The AJAX calls on scroll for more content were just slightly faster.

The CDN performance was similarly pretty solid. I did a few unscientific tests on that prior to our big day, comparing the latency of the CDN to direct calls to blob storage. Predictably, the first hit to the CDN was always slower as it grabbed the data from storage, but after that, they were about the same. Again, this was not scientific, and I also can't control which point of presence I was hitting on the CDN. This was another feature I certainly didn't need, but figured I would try it since it was there.

We moved a total of 6 gigs of photos that day, which was a lot more than I expected. This isn't a big deal for a few days of activity, but if I were using this (or any of the cloud services) long-term, bandwidth costs would be a concern. They're still a lot higher than the "free" terabytes of transfer you typically get when you rent a box in some giant data center.

At the end of the day, the app proved two things. The first was that SignalR imposes very little overhead, even with hundreds of open connections. The service bus functionality, still in beta, works great for shuttling messages between running instances of the app. The second was that I'd bet you could throw this simple thing up for a big live blog event like an Apple product announcement and it would work just fine. I need to find someone willing to take that chance now. :)

So what does all of this overkill cost?

  • 166 compute hours (2 small instances): $13.28
  • 6 gigs out of the CDN: $0.60
  • 18,000 CDN transactions: $0 (it's a dime for a million)
  • 11,000 service bus messages: $0 (it's a buck for a million)
  • 1.2 gigs out from app: $0.10
  • 200 MB storage: < $1
  • SQL database: < $5 for a month
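
For a quick sanity check on that first line: $13.28 across 166 instance-hours works out to $0.08 per small-instance hour, and the whole tally ($13.28 + $0.60 + $0.10, plus a buck or so of storage and roughly $5 of SQL) lands right around twenty bucks for the entire event.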

Building for Web scale is a different skill

There are a lot of things that one can find satisfying about building stuff for the Web. For a lot of people, it's probably just the act of building something cool, pretty and useful. These are certainly things to strive for, but for me, the interesting thing has always been to build something that can scale.

Like so many things in life, this particular desire grew out of experience. Very early on, before I was technically getting paid to be a software developer, I learned about scale problems. In the wild west of 2000, I launched CoasterBuzz and did some advertising for it. I was on a shared hosting plan, and the site started to get slow in a hurry. There were a number of things I did poorly, including some recursive database queries, and worse, fetching more data than I needed. You live and learn, as they say, and I got better at it over time.

Many years later, I would have the chance to work on the MSDN/TechNet forums, which served well over 45 million pages per month. It's not lost on me how rare it is for anyone to get to work on a Web app that has to scale to that size. My team was actually there to try and rope it in a little, because it required a huge number of servers to run. There was a lot of low-hanging fruit, and some really hard things to do as well. I didn't directly do a lot of the performance enhancing stuff (though I did pair for it), but I still took a lot away from that experience.

My own sites collectively do 12 to 15 million pages per year, depending on what's going on that year. Respectable, but under any normal circumstances, not a lot. At peak times, that works out to be between 6 and 10 pages per second, and less than 1 in off-peak times. It's very rare that my server ever gets pushed beyond 25% CPU usage (it doesn't hurt that it's total overkill, with four fast cores).

Still, I've noticed that people who work on Web applications don't always think in Web terms. By that, I mean it's not uncommon for them to think in "offline" terms, where time is not nearly as critical. For example, someone who works a typical job doing line-of-business applications doesn't care if they build a SQL query that has a ten-way join over two views. It might take a few seconds (or minutes) to get results, but it doesn't matter for the report it's going to generate. For the Web, that timing matters.

So here are a few of the things that I think people building apps for the Web need to think about. If there are others you can think of, I'd love to hear them! This is not an exhaustive list...

  • Denormalize! Disk space is cheap, and disks are huge. Really, it's OK to duplicate data if it means you don't have to do a bunch of expensive database joins. This is even more important now, in an age where we might use different kinds of data storage, like the various flavors of table and blob storage.
  • Calculate once. This is perhaps the biggest sin I've seen. You might have a large set of rules on whether or not you should display some piece of data. You have two choices: You can make those calculations every time the data is requested, or you can do it once and store the outcome of that decision. Which is going to be faster? Calculating once, probably in an infrequent data writing situation, or calculating every time, in a frequent read-only situation? I think the answer is pretty obvious. (There's a small sketch of this after the list.)
  • Use caching, but only when it makes sense. Slapping an extra box with a bunch of memory on your network to store data is a pretty quick way to boost performance. There are some pitfalls to avoid, however. If the data changes frequently, make sure your code to invalidate the cache is well tested. Beware giant object graphs that serialize into gigantic objects that are many times larger than their binary counterparts. If you're caching because of expensive data querying or composition, fix that problem first.
  • Don't wait until the end to understand performance. I'll be honest, premature optimization annoys the crap out of me. Developers who waste time on what-ifs and try to code for them drive me nuts. That said, you can't pretend that performance is a last mile consideration. Fortunately, most shops these days are working with continuous integration environments at least as far as staging or testing, so problems should become apparent early on.
  • Use appropriate instrumentation. I worked with one company that had a hard time finding the weak spots in its system, because it wasn't obvious where the problems were. Big distributed systems can have a lot of moving parts, and you need insight into how each part talks to the other parts. For that company, I insisted that we have a dashboard to show the average times and failure rates for calls to an external system. (I also wanted complete logging of every interaction, but didn't get it.) Sure enough, one system was choking on requests at the same point every day, and we could address it.
  • Remember your HTTP basics. I'm being intentionally broad here. Think about the size and shape of scripts and CSS (minification and compression), the limits in the number of connections a browser has to any one host, cookies and headers, the very statelessness of what you're doing. The Web is not VB6, regardless of the layers of abstraction you pile on top of it.
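
To make the "calculate once" item concrete (the sketch promised above), here's a tiny C# example. The class and property names are made up for illustration; the point is simply that the visibility decision is made and stored at write time instead of being recomputed on every read.

using System;

// Hypothetical example: decide whether a post is visible when it's written
// (infrequent), not on every page view (frequent), and store the answer.
public class Post
{
	public int PostID { get; set; }
	public bool IsApproved { get; set; }
	public bool AuthorIsSuspended { get; set; }
	public DateTime? PublishDate { get; set; }

	// Persisted alongside the post; reads never re-run the rules.
	public bool IsVisible { get; set; }
}

public static class PostRules
{
	// Called once when the post is created or edited.
	public static void ApplyVisibility(Post post)
	{
		post.IsVisible = post.IsApproved
			&& !post.AuthorIsSuspended
			&& post.PublishDate.HasValue
			&& post.PublishDate.Value <= DateTime.UtcNow;
	}
}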

These are mostly off the top of my head, but I'd love to hear more suggestions.

Building a live blog app in Windows Azure

If you're a technology nerd, then you've probably seen one technology news site or another do a "live blog" at some product announcement. This is basically a page on the Web where text and photo updates stream into the page as you sit there and soak it in. I don't remember which year these started to appear, but you may recall how frequently they failed. The traffic would overwhelm the site, and down it would go.

So I got to thinking, how would I build something like this? We've got a pretty big media day at Cedar Point coming up for GateKeeper, and it would be fun to post updates in real time. Ars Technica posted an article about how they tackled the problem a couple of months ago, and while elegant, it wasn't how I would do it.

My traffic expectations are lower. I don't expect to get tens of thousands of simultaneous visitors, but a couple thousand is possible. The last time we even had the chance to publish real-time from an event was 2006, for Skyhawk. Maverick got delayed the next year, and that event was scaled back to a few hours in the morning. Still, the server got stressed enough back in 2006 with a lot of open connections, in this case because I was serving video myself, and I didn't write the code that I write today. Regardless, I still wanted to build this using cloud services, as if I was expecting insane traffic. The resulting story, from a development standpoint, is wholly unremarkable, but I'll get to why that's important.

So the design criteria went something like this:

  • Be able to add instances on the fly to address rising traffic conditions.
  • Update in real-time with open connections, not a browser polling mechanism.
  • Keep as much stuff in memory as possible.
  • Serve media from a CDN or something not going through the Web site itself.
  • Not spend a ton of time on it.

The first thing I did was wire up the bits on the server and client to fire up a SignalR connection, and have an admin push content to the browsers. I won't go deeper into that, because there are plenty of examples around the Internets showing how to do it with a few lines of code. Later in the process, I added the extra line of code and downloaded the package to make SignalR work through Azure Service Bus. This means that if I run three instances of the app and the admin pushes content out from one of them, the content goes via the service bus to the other instances, where other users are connected via SignalR. It's stupid easy. Adding instances on the fly and making it real-timey, check.
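
For reference, the wiring really is about that small. Here's a minimal sketch of the era's setup, assuming the Microsoft.AspNet.SignalR.ServiceBus scaleout package; the exact extension method name bounced around a bit between the beta and the released bits, and the connection string key below is made up:

using System;
using System.Configuration;
using System.Web;
using System.Web.Routing;
using Microsoft.AspNet.SignalR;

public class Global : HttpApplication
{
	protected void Application_Start(object sender, EventArgs e)
	{
		// Point SignalR at Service Bus as its backplane so a message pushed
		// from one instance fans out to clients connected to the others.
		var connectionString = ConfigurationManager.AppSettings["ServiceBusConnectionString"];
		GlobalHost.DependencyResolver.UseServiceBus(connectionString, "liveblog");

		// Standard SignalR 1.x hub route registration.
		RouteTable.Routes.MapHubs();
	}
}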

Next, I needed a way to persist the content. Originally I toyed with using table storage for this, because it's cheaper than SQL. However, ordering in a range becomes a problem, because while you can take a number of entities in a time stamp range, and then order them in code, there's no guarantee you'll get that number of entities. After thinking about it, SQL is $5/month for 100 MB, and I was only going to be using it for a few days. Performance differences would likely be negligible, and since I was going to cache the heck out of everything, that was even less important. I used SQL Azure instead.

Instead of using Azure Web Sites, I used Web roles, or "cloud services" as they're labeled in the Azure portal. These are the original PaaS offerings that drew me to Azure in the first place. Sure, they're technically full-blown VMs (you can even RDP into them), but I like the way they're intended to exist for a single purpose, maintained and updated for you. More to the point, they have Azure Cache available, which is essentially AppFabric spun up across every instance you have. So if you have two instances up, and the cache uses 30% of the memory of those small instances, that adds up to around a gigabyte of distributed cache, for free. Yes, please! I had my data repositories use this cache liberally. The infinite scroll functionality takes n content items after a certain date, which means people starting up the page at different times will have different "pages" of data, but I cache those pages despite the overlap. Why not? It's cheap! Keep stuff in memory, check.
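
To show what "cache those pages despite the overlap" looks like in practice, here's a rough cache-aside sketch. None of these type or member names come from the actual app; they're placeholders for the pattern.

using System;
using System.Collections.Generic;

// Placeholder types standing in for the real data and cache layers.
public class ContentItem
{
	public int ContentID { get; set; }
	public DateTime TimeStamp { get; set; }
	public string Html { get; set; }
}

public interface ICache
{
	T Get<T>(string key) where T : class;
	void Set(string key, object value, TimeSpan duration);
}

public interface IContentData
{
	List<ContentItem> GetNewestBefore(DateTime cutoff, int count);
}

public class ContentRepository
{
	private readonly ICache _cache;
	private readonly IContentData _data;

	public ContentRepository(ICache cache, IContentData data)
	{
		_cache = cache;
		_data = data;
	}

	// Cache each "page" keyed by its date cutoff and count, even though pages
	// served to different visitors overlap. It's cheap, and it saves a SQL trip.
	public List<ContentItem> GetItemsBefore(DateTime cutoff, int count)
	{
		var key = string.Format("LiveBlog.Page.{0:yyyyMMddHHmmssfff}.{1}", cutoff, count);
		var items = _cache.Get<List<ContentItem>>(key);
		if (items == null)
		{
			items = _data.GetNewestBefore(cutoff, count);
			_cache.Set(key, items, TimeSpan.FromMinutes(2));
		}
		return items;
	}
}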

The CDN functionality is pretty easy too. I probably didn't need this at all, but why not? Again, for a day or two, given the amount of content, it's not an expensive endeavor. The Azure CDN is simply an extension of your blob storage, so there's little more to do beyond turning it on, adding a CNAME to my DNS, and off we go. CDN, check.
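
The DNS piece amounts to one record: a friendly host name pointing at the CDN endpoint the portal hands you. Something like this, with all the host names invented for illustration:

; CNAME the media host name to the Azure CDN endpoint, which fronts blob storage.
media.example.com.    IN    CNAME    az99999.vo.msecnd.net.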

I stole a bunch of stuff from POP Forums, too. Image resizing was already there, the infinite scrolling, the date formatting, the time updating... all copy and paste with a few tweaks. I didn't do the page design either; my PointBuzz partner Walt handled that (granted, most of it wasn't used). Total time into this endeavor was around 10 hours. Not spending a ton of time, check.

Here's the Visio diagram:

[Live Blog architecture diagram]

As I said, if this sounds unremarkable from a development standpoint, it is, and that's really the point. I'm whipping up and provisioning a long list of technologies without having to buy a rack full of equipment. That's awesome. Think about what this app is using:

  • Two Web servers
  • Distributed cache (on the Web servers, in this case)
  • Database server
  • A service bus
  • External storage
  • A CDN

For the four days or so that I'll use this stuff, it's going to cost me less than twenty bucks. This, my friends, is why cloud infrastructure and platform services get me so excited. We can build these enterprisey things with practically no money at all. Compare this to 2000, when the most cost effective way to run a couple of quasi-popular Web sites was to get a T-1 to my house, where I ran my own box, and paid $1,200 a month for 1.5 mbits. Things are so awesome now.

I'll let you know how it goes after the live event on May 9!

One interface to rule them all

I'm not shy about telling people that I'm not much of a computer science kind of guy. It's not that I don't respect computer science or understand it, I'm just not one to get academic over it to the point of not building anything. And while I can't always remember what the hell SOLID stands for, I do remember that the "I" stands for the "interface segregation principle." It says, "Thou shalt not force everything to use one interface, because specific interfaces are better."

I've seen this principle violated many times, but twice in the last six months I've seen projects that just abuse it to death. The big problem for me is that trying to force everything into a particular interface leads to pain and suffering whenever you want to change something. While developers generally understand what dependency injection is, I find they're often doing it in a way that violates the interface segregation principle.

For example, I've seen several instances where people are passing in the dependency resolver itself (for MVC, this is one of the IDependencyResolver interfaces) to various classes, and then those classes new up instances of whatever they need. This is bad for two reasons. For one, you're then coding against One Interface to rule them all, and for another, the overhead of mocking and verifying in testing gets bigger. That's no fun.
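
To put the two approaches side by side, here's a small sketch. IDependencyResolver is the real ASP.NET MVC interface; the shipping service and the consuming classes are invented for illustration.

using System.Web.Mvc;

public interface IShippingService
{
	void Ship(int orderID);
}

// Anti-pattern: take the resolver and pull out whatever you want. Every real
// dependency is hidden, and tests end up mocking the resolver itself.
public class OrderProcessorUsingResolver
{
	private readonly IDependencyResolver _resolver;

	public OrderProcessorUsingResolver(IDependencyResolver resolver)
	{
		_resolver = resolver;
	}

	public void Ship(int orderID)
	{
		var shipping = (IShippingService)_resolver.GetService(typeof(IShippingService));
		shipping.Ship(orderID);
	}
}

// Better: ask for exactly the narrow interface you need, and nothing more.
public class OrderProcessor
{
	private readonly IShippingService _shipping;

	public OrderProcessor(IShippingService shipping)
	{
		_shipping = shipping;
	}

	public void Ship(int orderID)
	{
		_shipping.Ship(orderID);
	}
}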

Another anti-pattern, related to the ISP, is the master do-it-all abstract class. These drive me nuts, too. While an abstract class isn't exactly an interface per se, it ends up being used as one. In an effort to keep the interface concise (as if it's easy to change when it's used in a thousand places), it ends up having one or two methods in order to conform to the base type. This is suboptimal because it keeps you from grouping similar functionality together, it abuses generics (which are fun when you have to bounce between value and reference types), and more to the point, that single interface isn't single for any really good reason. I would rather see you inject whatever functionality you need by way of a specific interface than force One Interface.

Think about it in terms of the mature frameworks that you've used. There aren't a lot of interfaces to be had that are widely used, because they would be hard to change, and force-feed members that don't need to be widely used. When there are widely used interfaces, they're pretty sparse (think IDisposable).

Specific is better. Don't try to cure cancer with an interface.

POP Forums v11 for ASP.NET MVC posted, with SignalR goodness

Get the juicy bits on CodePlex!

This is a significant upgrade that adds lots of real-time features (using SignalR), as well as a number of style improvements. I’m really excited about this version. I have to give a big shout out to the members of the forum on CoasterBuzz who have been so excellent in providing feedback.

Upgrading?

This release has no data changes, and is consistent with v10. Views, scripts, content and of course the core library have all been updated.

What's new?

  • Updated to use v4.5 of .NET.
  • External references now use NuGet.
  • Adding an award definition in admin now bounces you to its edit page.
  • Fixed: Show more posts updates topic context with updated page counts.
  • Activities and awards restyled.
  • User profiles are tabbed.
  • Activity feed shows real-time view of activity sent via the scoring game API.
  • Times are updated every minute, formatted to current culture.
  • More posts are loaded on scroll (a la Facebook), but pager links are maintained for search engine discoverability.
  • New posts appear inline at end of post stream as they're made.
  • Forum home and individual topic lists updated in real time.
  • Breadcrumb/navigation floats at top of browser.
  • .forumGrid CSS removes outline, so it's more Metro-y.

Known Issues

None to report yet.

POP Forums v11 Beta for MVC posted on CodePlex

I won’t repost all of the changes here, but this is the version of the app that gets all real-time and stuff (thank you, SignalR!). I also spent some time refining the UI. You can get these naughty bits, and the overall change log, here:

http://popforums.codeplex.com/releases/view/103956

I’ve got the app running in production on CoasterBuzz, and I have to give that community a shout out for being my guinea pigs and an excellent source of feedback.

A few weeks to track down any final issues, and I’ll release the final version.

The burden of hiring software developers

(I wrote this for my personal blog, but it’s obviously an important topic here.)

I've had the unusual opportunity to hire and manage people on and off starting with my first real job after college. I think it's one of the hardest things to do because it's time consuming and expensive to make mistakes. When I first had to assemble a team in a consulting gig (I think it was 2005, for context), I found out it's even harder to hire software developers.

First off, check out my former boss, Jonathan, and the talk he did with another guy about how not to do technical interviewing. The irony to people who have had bad experiences interviewing at Microsoft is not lost on me, but Jonathan gets it. Obviously, since he hired me. :) Go rate up his video!

The problem in hiring starts with the fact that resumes don't mean much. You look for keyword matches on the skills you need, and from there look at the depth and breadth of the experience. If it doesn't smell like bullshit, you move on to a phone screen. From there you further dismiss the fakers. By the time you bring someone in, I would guess that 90% of the time you can already be pretty sure they would be a good fit, and you can have your choice of candidates provided they like you and your offer.

But it's the screening part that is such a huge burden. The resume part isn't that big of a deal. I can get through a stack of resumes pretty fast and figure out who I want to follow up with. It's the next stage of the screening that takes too damn long and sucks the life out of you.

My typical phone screen is more about gauging the person's knowledge. I don't ask them to identify acronyms like SOLID or DRY (I can never remember what the former stands for), but I can walk them through questions about language and object-oriented patterns and get a pretty good feel for where they are. But even if you're trying to get a faker off of the phone, these still take a half-hour at least, and that's not counting the overhead in agreeing to a time to talk.

If I bring someone in for real interviews, that's going to take at least three hours, including some time working on a real software problem with, you know, a computer. I don't complain about that part; it's the screening process that is a huge burden.

Hiring people, even for something as technical as a software development position, is still largely a problem with human beings. Expectations are set, social contracts have to be followed and of course people have to get along. It just doesn't feel like it should be so inefficient.

First off, job boards are nearly useless. They're just keyword matching devices. The quality of candidates varies by board, but it's still not a great value prop. Staffing agencies add even less value, especially now that it's common for a person sitting in India or China to just roll through a database, making keyword matches, spamming people.

I've been talking with people a lot lately about how to make the discovery and vetting process more efficient. The use cases for smaller to medium sized businesses in particular interest me, since they might not even have someone who is technically proficient enough to make that first critical hire.

I'm open to suggestions. How do you make the discovery and vetting easier? Is there a technical process that could help?

SignalR really changes everything

I think I started to mess with HTML in 1995, and the Internets became the focus of my profession in 1999. The fun thing about this is that I’ve watched the tools and development technology evolve most of the way, and it has been an awesome ride.

That said, the Web has only had what I would consider a small number of “wow” moments in terms of development technology. AJAX as a concept was a game changer, but it wasn’t really until jQuery came along that it became stupid easy to perform AJAX tasks. The ASP.NET MVC framework was another great moment, but as it was clearly inspired by other platforms, I don’t know that it’s a big deal outside of my little world. Beyond that, dev tech has been slow and evolutionary.

Until now. SignalR, as far as I’m concerned, is a huge deal. It really does change everything. It lifts the constraint inherent to standard HTTP exchanges, in that we call the server, get something back, and we’re done. Now the client and server can talk to each other, and do so on a potentially massive scale.

At first it sounds like we should credit WebSockets for this, but by itself, that technology is only slightly useful. I say that because browser implementations are not always consistent, and SignalR compensates by gracefully falling back to long "open" connections, or polling, if necessary. It's also not entirely useful without the backend portion, which handles the registration of clients and the ability to remotely call code at the client end.

There are a great many things that I’m thinking about using it for, not the least of which is POP Forums, my open source project. I’m wrapping up real-time updating right now, in fact, for various forum and topic lists. The amount of code to do it has been trivial. It’s a big deal.

Go try it. You won’t regret it.

A year of remote working

[Note: I originally posted this on my personal blog, but it occurs to me that it’s likely something of general interest to software developers. -J]

Today is my last day working for Humana, after about a year. One of the initial considerations in taking the job was that it was remote, as the home office is in Louisville, Kentucky, and I'm not. I had worked from home from time to time while at Microsoft, but it wasn't on a schedule or anything, just when it made sense. I didn't see any reason why it wasn't possible to do it on a full-time basis.

My conclusions after a year are that it totally makes sense to work remotely in this line of work. The only negative consideration that I can think of is that you don't build the same kind of social relationships with your coworkers. You obviously can't go out for a beer with them after work. Beyond that, the benefits are huge.

The most obvious benefit to me is increased productivity. I didn't expect this at all, but it makes sense. There are fewer distractions when you're not in the same physical environment as everyone else. When you don't have to worry about the commute, I would also argue that you're a lot more willing to spend more time on actual work. In on-site jobs, I've always been quick to make sure I'm out by 4:30 to get a jump on traffic. That concern goes away when your commute involves going downstairs to your kitchen.

The benefit applies to companies as well, because remote workers don't require real estate. If it costs $10 per square foot per month (double that in a lot of prime markets), and you plop someone in a 5x5 cube, you're spending $3,000 a year on office space for that one person.

Technology is in a state where it's not hard to collaborate with people in different places. Ideally it means you have more real-time, ad hoc collaboration and fewer meetings, but certainly big old companies have a hard time with this. IP telephony, Web cams, screen sharing, wikis, SharePoint... these all reduce barriers to remote work.

There are cultural problems that I'm sure can get in the way. This is especially true with what I call the "Midwest factory culture" in the workplace. These are businesses that have a strict working hour policy, where face time is associated with productivity. They're the same kinds of businesses that promote people for working later instead of working smarter, oblivious to the actual results of work. Those are places not equipped to handle remote workers because they don't know how to evaluate their effectiveness.

The personal benefits are many, not the least of which is not having to get into a car. I estimate that I saved almost two weeks of my life this year by not having to drive anywhere. The best part of that is the fact that most of that time was directly translated into more time with my 2-year-old, and that's priceless. Not seeing him for pre-nap roughhousing is something I really struggled with when I considered changing jobs.

Overall, I see a future world where many businesses, especially those related to software development or other primarily electronic endeavors, will tend to work in a hybrid mode of sorts. Formal and structured work spaces, with cubes and offices, will go away. It's rarely necessary to meet in person, but there are certainly times when it adds certain intangibles to the work experience and the output of the business.

Real-world SignalR example, ditching ghetto long polling

One of the highlights of BUILD last week was the announcement that SignalR, a framework for real-time client to server (or cloud, if you will) communication, would be a real supported thing now with the weight of Microsoft behind it. Love the open source flava!

If you aren’t familiar with SignalR, watch this BUILD session with PM Damian Edwards and dev David Fowler. Go ahead, I’ll wait. You’ll be in a happy place within the first ten minutes. If you skip to the end, you’ll see that they plan to ship this as a real first version by the end of the year. Insert slow clap here.

Writing a few lines of code to move around a box from one browser to the next is a way cool demo, but how about something real-world? When learning new things, I find it difficult to be abstract, and I like real stuff. So I thought about what was in my tool box and decided to port my crappy long-polling "there are new posts" feature of POP Forums to use SignalR.

A few versions back, I added a feature where a button would light up while you were pecking out a reply if someone else made a post in the interim. It kind of saves you from that awkward moment where someone else posts some snark before you.

[Screenshot: the new-post notification button in the reply window]

While I was proud of the feature, I hated the implementation. When you clicked the reply button, it started polling an MVC URL asking if the last post you had matched the last one on the server, and it did that every second and a half until you either replied or the server told you there was a new post, at which point it would display that button. The code was not glam:

// in the reply setup
PopForums.replyInterval = setInterval("PopForums.pollForNewPosts(" + topicID + ")", 1500);

// called from the reply setup and the handler that fetches more posts
PopForums.pollForNewPosts = function (topicID) {
	$.ajax({
		url: PopForums.areaPath + "/Forum/IsLastPostInTopic/" + topicID,
		type: "GET",
		dataType: "text",
		data: "lastPostID=" + PopForums.currentTopicState.lastVisiblePost,
		success: function (result) {
			var lastPostLoaded = result.toLowerCase() == "true";
			if (lastPostLoaded) {
				$("#MorePostsBeforeReplyButton").css("visibility", "hidden");
			} else {
				$("#MorePostsBeforeReplyButton").css("visibility", "visible");
				clearInterval(PopForums.replyInterval);
			}
		},
		error: function () {
		}
	});
};

What’s going on here is the creation of an interval timer to keep calling the server and bugging it about new posts, and setting the visibility of a button appropriately. It looks like this if you’re monitoring requests in FireBug:

[Screenshot: FireBug request monitor showing the repeated polling calls]

Gross.

The SignalR approach was to call a message broker when a reply was made, and have that broker call back to the listening clients, via a SignalR hub, to let them know about the new post. It seemed weird at first, but the server-side hub's only method is to add the caller to a group, so new post notifications only go to callers viewing the topic where a new post was made. Beyond that, it's important to remember that the hub is also the means of calling methods at the client end.

Starting at the server side, here’s the hub:

using Microsoft.AspNet.SignalR.Hubs;

namespace PopForums.Messaging
{
	public class Topics : Hub
	{
		public void ListenTo(int topicID)
		{
			Groups.Add(Context.ConnectionId, topicID.ToString());
		}
	}
}

Have I mentioned how awesomely not complicated this is? The hub acts as the channel between the server and the client, and you’ll see how JavaScript calls the above method in a moment. Next, the broker class and its associated interface:

using Microsoft.AspNet.SignalR;
using Topic = PopForums.Models.Topic;

namespace PopForums.Messaging
{
	public interface IBroker
	{
		void NotifyNewPosts(Topic topic, int lastPostID);
	}

	public class Broker : IBroker
	{
		public void NotifyNewPosts(Topic topic, int lastPostID)
		{
			var context = GlobalHost.ConnectionManager.GetHubContext<Topics>();
			context.Clients.Group(topic.TopicID.ToString()).notifyNewPosts(lastPostID);
		}
	}
}

The NotifyNewPosts method uses the static GlobalHost.ConnectionManager.GetHubContext<Topics>() method to get a reference to the hub, and then makes a call to clients in the group matched by the topic ID. It’s calling the notifyNewPosts method on the client. The TopicService class, which handles the reply data from the MVC controller, has an instance of the broker new’d up by dependency injection, so it took literally one line of code in the reply action method to get things moving.

_broker.NotifyNewPosts(topic, post.PostID);
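
For context, here's a stripped-down sketch of the service side of that. The real TopicService does a lot more, so treat the constructor and method below as illustrative rather than the actual POP Forums code:

using PopForums.Messaging;
using PopForums.Models;

namespace PopForums.Services
{
	// Illustrative sketch: the broker arrives via constructor injection, and
	// the reply path calls it once the new post has been persisted.
	public class TopicService
	{
		private readonly IBroker _broker;

		public TopicService(IBroker broker)
		{
			_broker = broker;
		}

		public void PostReply(Topic topic, Post post)
		{
			// ... save the post, update topic counts, and so on ...
			_broker.NotifyNewPosts(topic, post.PostID);
		}
	}
}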

The JavaScript side of things wasn’t much harder. When you click the reply button (or quote button), the reply window opens up and fires up a connection to the hub:

var hub = $.connection.topics;
hub.client.notifyNewPosts = function (lastPostID) {
	PopForums.setReplyMorePosts(lastPostID);
};
$.connection.hub.start().done(function () {
	hub.server.listenTo(topicID);
});

The important part to look at here is the creation of the notifyNewPosts function. That’s the method that is called from the server in the Broker class above. Conversely, once the connection is done, the script calls the listenTo method on the server, letting it know that this particular connection is listening for new posts on this specific topic ID.

This whole experiment enables a lot of ideas that would make the forum more Facebook-like, letting you know when stuff is going on around you.
