Contents tagged with Azure

  • The great Azure outage of 2014

    We had some downtime on Tuesday night for our sites, about two hours or so. On one hand, November is the slowest month for the sites anyway, but on the flip side, we had just pushed a new version of PointBuzz that I wanted to monitor, and I did post a few photos from IAAPA that were worthy of discussion. It doesn't matter either way, because the sites were down and there was nothing I could do about it, thanks to a serious failure of protocol on Microsoft's Azure platform.

    I'm going to try and be constructive here. I'll start by talking about the old days of dedicated hardware. Back in the day, if you wanted to have software running on the Internet, you rented servers. Maybe you had virtual networks between machines, but you still had physical and specific hardware you were running stuff on. If you wanted redundancy, you paid a lot more for it.

    I switched to the cloud last summer, after about 16 years in different hosting situations. At one point I had a T-1 and servers at my house (at a grand per month; believe it or not, that was the cheapest solution). Big data centers and cheap bandwidth eventually became normal, and for most of that time I was spending $200 or less per month. Still, as a developer, it required me to spend a lot of time on things I didn't care about: patching software, maintaining backups, configuration tasks, and so on. It also meant that I would encounter some very vanilla failures, like hard disks going bad or a routing problem.

    Indeed, for many years I was at SoftLayer, which is now owned by IBM and was formerly called The Planet. There was usually one instance of downtime every other year. I had a hard drive failure once, a router's configuration broke in a big way, and one time there was even a fire in the data center. Oh, and one time I was down about five hours as they physically moved my aging server between locations (I didn't feel like upgrading... I was getting a good deal). In every case, either support tickets were automatically generated by their monitoring system, or I initiated them (in the case of the drive failure). There was a human I could contact and I knew someone was looking into it.

    I don't like downtime, but I accept that it will happen sometimes. I'm cool with that. In the case of SoftLayer, I was always in the loop and understood what was going on. With this week's Azure outage, that was so far from the case that it was inexcusable. They eventually wrote up an explanation about what happened. Basically they did a widespread rollout of an "improvement" that had a bug, even though they insist that their own protocol prohibits this.

    But it was really the communication failure that frustrated most people. Like I said, I think most people can get over a technical failure: they don't like it, but they deal with it. What we got was vague Twitter posts about what "may" affect customers, and a dashboard that was completely useless. It said everything was fine when it clearly wasn't. Worse, describing a problem with blob storage while declaring websites and VMs all green, even though those services depend on storage, is doing it wrong. Not all customers would know about that dependency. If a dependency is down, then the services built on it are down too.
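    That dependency point can be made concrete: a status dashboard should only show a service as green if everything it depends on is also healthy. Here's a minimal sketch in Python; the service names and the dependency graph are illustrative, not Azure's actual service model.

```python
# Minimal sketch of dependency-aware health rollup: a service is only
# "up" if it and everything it depends on report healthy.
# The services and dependency graph below are hypothetical examples.

DEPENDENCIES = {
    "blob_storage": [],
    "websites": ["blob_storage"],
    "virtual_machines": ["blob_storage"],
}

RAW_STATUS = {
    "blob_storage": "down",
    "websites": "up",           # self-reported healthy...
    "virtual_machines": "up",   # ...but both depend on storage
}

def effective_status(service, deps=DEPENDENCIES, raw=RAW_STATUS):
    """A service is down if it reports down, degraded if a dependency is down."""
    if raw[service] == "down":
        return "down"
    if any(effective_status(d, deps, raw) != "up" for d in deps[service]):
        return "degraded"
    return "up"

for name in DEPENDENCIES:
    print(name, effective_status(name))
```

    With this rollup, websites and virtual machines would have shown degraded the moment blob storage went down, instead of a misleading all-green board.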

    The support situation is also frustrating. Basically, there is no support unless you have a billing issue or you pay for it. Think about that for a minute. If something bad happens beyond your control, you have no recourse unless you pay for it. Even cable companies have better support than that (though not by much).

    Microsoft has to do better. I think what people really wanted to hear was, "Yeah, we messed up really bad, not just in service delivery, but in the way we communicated." The support situation has to change too. I have two friends who had VMs more or less disappear, and they couldn't get them back. They had to buy support, which then failed to "find" them. Talk about adding insult to injury.

    Hopefully this is just a growing pain, but from a communication standpoint, a significant problem can't go down like this again.

    Read more...

  • I moved my Web sites to Azure. You won't believe what happened next!

    TL;DR: I eventually saved money.

    I wrote about the migration of my sites, which is mostly CoasterBuzz and PointBuzz, from a dedicated server to the various Azure services. I also wrote about the daily operation of the sites after the move. I reluctantly wrote about the pain I was experiencing, too. What I haven't really talked about is the cost. Certainly moving to managed services and getting out of the business of feeding and caring for hardware is a plus, but the economics didn't work out for the longest time. That frustrated me, because when I worked at Microsoft in 2010 and 2011, I loved the platform despite its quirks.

    My hosting history started with a site on a shared service that I paid nearly $50/month for back in 1998. It went up to a dedicated server at more than $650, and then they threatened to boot me for bandwidth, so I started paying a grand a month for a T-1 to my house, plus the cost of hardware. Eventually the dedicated servers came down again, and for years were right around $200. The one I had the last three years was $167. That was the target.

    Let me first say that there is some benefit to paying a little more. While you won't get the same amount of hardware (or the equivalent virtual resources) and bandwidth, you are getting a ton of redundancy for "free," and I think that's a hugely overlooked part of the value proposition. For example, your databases in SQL Azure exist physically in three places, and the cost of maintaining and setting that up yourself is enormous. Still, I wanted to spend less instead of more, because market forces being what they are, it can only get cheaper.

    Here's my service mix:

    • Azure Web Sites, 1 small standard instance
    • Several SQL Azure databases, two of which are well over 2 gigs in size (both on the new Standard S1 service tier)
    • Miscellaneous blob storage accounts and queues
    • A free SendGrid account

    My spend went like this:

    • Month 1: $204
    • Month 2: $176
    • Month 3: $143

    So after two and a half months of messing around and making mistakes, I'm finally to a place where I'm beating the dedicated server spend. Combined with the stability after all of the issues I wrote about previously, this makes me happy. I don't expect the spend to increase going forward, but you might be curious to know how it went down.

    During the first month and a half, only the old web/business tiers were available for SQL Azure. The pricing on these didn't make a lot of sense, because they were based on database size instead of performance. Think about that for a minute... a tiny database that had massive use cost less than a big one that was used very little. The CoasterBuzz database, around 9 gigs, was going to cost around $40. Under the new pricing, it was only $20. That was preview pricing, but as it turns out, the final pricing will be $30 for the same performance, or $15 for a little less performance.

    There ended up being another complication when I moved to the new pricing tiers. They were priced such that any instance of a database, spun up for even a minute, incurred a full day's charge. I don't know if it was a technical limitation or what, but it was a terrible idea. You see, when you do an automated export of a database, which I was doing periodically (this was before self-service restore came along), you incurred an entire day's charge for that database. Fortunately, they're switching to hourly pricing starting next month.
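    To put rough numbers on why day-granularity billing hurts for short-lived databases like those export copies, here's a small arithmetic sketch. The rates are invented for the example, not Azure's actual prices.

```python
# Rough illustration of day- vs hour-granularity billing for a
# database that only exists for a minute during a nightly export.
# The $1.00/day rate is made up for the example.
import math

DAILY_RATE = 1.00              # hypothetical $/day for the tier
HOURLY_RATE = DAILY_RATE / 24  # same tier billed by the hour

def cost_day_billing(minutes_running):
    # Any partial day is charged as a whole day.
    days = math.ceil(minutes_running / (24 * 60))
    return days * DAILY_RATE

def cost_hour_billing(minutes_running):
    # Any partial hour is charged as a whole hour.
    hours = math.ceil(minutes_running / 60)
    return hours * HOURLY_RATE

# A one-minute spin-up for a nightly export, 30 nights a month:
print(30 * cost_day_billing(1))   # a full day's charge every night
print(30 * cost_hour_billing(1))  # only an hour's charge every night
```

    Under daily billing, the one-minute export database costs as much as running it all month; under hourly billing it's a rounding error.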

    I also believe there were some price reductions on the Web sites instances, but I'm not sure. There was a reduction in storage costs, but they're not a big component of the cost anyway. Honestly, I always thought bandwidth was my biggest concern, but that's because much of what I used on dedicated hardware was exporting backups. On Azure, I'm using less than 300 gigs out.

    So now that things have evened out and I understand how to deal with all of the unknowns from previous months, coupled with a lot of enhancements the Azure team has been working on, I'm in a good place. It feels like it should not have been so difficult, but Azure has gone through an enormous growth and maturity spurt in the last six months or so. It's really been an impressive thing to see.

    Read more...

  • The indie publisher moving to Azure, part 2: operation

    About a month ago, I wrote all about my experience migrating my sites off of dedicated hardware and into Azure. I figured I would wait awhile before writing about the daily operation of those sites, so I could gather enough experience to make a meaningful assessment. As I said in the previous post, this is a move I had been looking forward to making for a good three years, ever since I actually worked with Azure from within Microsoft. The pricing finally came down to a point where it made sense for an indie publisher, and here we are.

    Read more...

  • The indie publisher moving to Azure, part 1: migration

    I've been a big fan of cloud-based infrastructure for a long time. I was fortunate enough to be on a small team of developers who built the reputation system for MSDN back in 2010, on Microsoft's Azure platform. Back then, there was a serious learning curve because Azure was barely a product. At the end of the day, we built something that could easily handle millions of transactions per month without breaking a sweat, and that was a sweet opportunity. Most people never get to build stuff at that scale.

    Read more...

  • Building a live blog app in Windows Azure

    If you're a technology nerd, then you've probably seen one technology news site or another do a "live blog" at some product announcement. This is basically a page on the Web where text and photo updates stream into the page as you sit there and soak it in. I don't remember which year these started to appear, but you may recall how frequently they failed. The traffic would overwhelm the site, and down it would go.

    Read more...

  • From HttpRuntime.Cache to Windows Azure Caching (Preview)

    I don’t know about you, but the announcement of Windows Azure Caching (Preview) (yes, the parentheses are apparently part of the interim name) made me a lot more excited about using Azure. Why? Because one of the great performance tricks of any Web app is to cache frequently used data in memory, so it doesn’t have to hit the database, a service, or whatever.

    Read more...