Azure: 99.95% SQL Database SLA, 500 GB DB Size, Improved Performance, Self-Service Restore, and Business Continuity

Earlier this month at the Build conference, we announced a number of great new improvements coming to SQL Databases on Azure, including an improved 99.95% SLA, support for databases up to 500 GB in size, self-service restore capability, and new Active Geo-Replication support.  This 3-minute video shows a segment of my keynote where I walked through the new capabilities:

[Video: Build keynote segment]

Last week we made these new capabilities available in preview form, and also introduced new SQL Database service tiers that make it easy to take advantage of them.

New SQL Database Service Tiers

Last week we introduced new Basic and Standard tier options for SQL Databases – these are additions to the existing Premium tier we previously announced.  Collectively these tiers provide a flexible set of offerings that enable you to cost-effectively deploy and host SQL Databases on Azure:

  • Basic Tier: Designed for applications with a light transactional workload. Performance objectives for Basic provide a predictable hourly transaction rate.
  • Standard Tier: Standard is the go-to option for cloud-designed business applications. It offers mid-level performance and business continuity features. Performance objectives for Standard deliver predictable per minute transaction rates.
  • Premium Tier: Premium is designed for mission-critical databases. It offers the highest performance levels and access to advanced business continuity features. Performance objectives for Premium deliver predictable per second transaction rates.

You do not need to buy a SQL Server license in order to use any of these pricing tiers – all of the licensing and runtime costs are built into the price, and the databases are automatically managed (high availability, auto-patching and backups are all built in).  We also now provide the ability to pay for a database at per-day granularity (meaning if you only run the database for a few days, you only pay for the days you had it – not the entire month). 

The price for the new SQL Database Basic tier starts as low as $0.16/day ($4.96 per month) for a 2 GB SQL Database.  During the preview period we are providing an additional 50% discount on top of these prices.  You can learn more about the pricing of the new tiers here.
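To make the per-day billing model concrete, here is a small sketch of what a Basic-tier database costs when run for only part of a month. The rates come from the figures quoted above ($0.16/day and the 50% preview discount); actual Azure pricing varies by region and over time.

```python
# Estimate the cost of running a Basic-tier SQL Database under per-day
# billing. Rates are taken from the post; treat them as illustrative.

BASIC_DAILY_RATE = 0.16   # USD per day for the Basic tier (from the post)
PREVIEW_DISCOUNT = 0.50   # 50% off during the preview period

def estimated_cost(days_running, preview=True):
    """Cost of running the database for a given number of days."""
    rate = BASIC_DAILY_RATE * (1 - PREVIEW_DISCOUNT) if preview else BASIC_DAILY_RATE
    return round(days_running * rate, 2)

print(estimated_cost(31, preview=False))  # full month at the quoted rate -> 4.96
print(estimated_cost(10))                 # ten days during preview -> 0.8
```

Because billing is per day rather than per month, a database you spin up for a ten-day test run costs a fraction of the monthly price.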

Improved 99.95% SLA and Larger Database Sizes

We are extending the availability SLA of all of the new SQL Database tiers to 99.95%.  This SLA applies to the Basic, Standard and Premium tier options – enabling you to deploy and run SQL Databases on Azure with even more confidence.

We are also increasing the maximum sizes of databases that are supported:

  • Basic Tier: Supports databases up to 2 GB in size.
  • Standard Tier: Supports databases up to 250 GB in size.
  • Premium Tier: Supports databases up to 500 GB in size.

Note that the pricing model for our service tiers has also changed so that you no longer need to pay a per-database size fee (previously we charged a per-GB rate) - instead we now charge a flat rate per service tier.

Predictable Performance Levels with Built-in Usage Reports

Within the new service tiers, we are also introducing the concept of performance levels, which are a defined level of database resources that you can depend on when choosing a tier.  This enables us to provide a much more consistent performance experience that you can design your application around.

The resources of each service tier and performance level are expressed in terms of Database Throughput Units (DTUs). A DTU provides a way to describe the relative capacity of a performance level based on a blended measure of CPU, memory, and read and write rates. Doubling the DTU rating of a database equates to doubling the database resources.  You can learn more about the performance levels of each service tier here.
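As an illustration of how you might reason about DTU-relative utilization: the exact blending formula is not spelled out in this post, so the sketch below assumes (as a simplification) that a database's overall utilization is governed by its most constrained resource.

```python
# A sketch of reasoning about DTU utilization. A DTU blends CPU, memory,
# and read/write rates; the bottleneck-resource rule used here is an
# illustrative assumption, not the service's official formula.

def dtu_utilization_pct(cpu_pct, memory_pct, read_pct, write_pct):
    """Percent of the tier's DTU capacity in use, taken as the bottleneck resource."""
    return max(cpu_pct, memory_pct, read_pct, write_pct)

def utilization_after_upgrade(current_pct, dtu_multiplier):
    """Doubling the DTU rating doubles every resource, so the same absolute
    workload consumes proportionally less of the new capacity."""
    return current_pct / dtu_multiplier

print(dtu_utilization_pct(40, 25, 70, 10))  # -> 70 (this workload is read-bound)
print(utilization_after_upgrade(70, 2))     # -> 35.0 after doubling DTUs
```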

Monitoring your resource usage

You can now monitor the resource usage of your SQL Databases via both an API and the Azure Management Portal.  Metrics include CPU, reads/writes, and memory (memory metrics are not available this week, but are coming soon).  You can also track your performance usage as a percentage of the available DTU resources within your service tier level:

[Image: Performance Metrics]

Dynamically Adjusting your Service Tier

One of the benefits of the new SQL Database Service Tiers is that you can dynamically increase or decrease them depending on the needs of your application.  For example, you can start off on a lower service tier/performance level and then gradually increase the service tier levels as your application becomes popular and you need more resources. 

It is quick and easy to change between service tiers or performance levels — it’s a simple online operation.  Because you now pay for SQL Databases by the day (as opposed to the month) this ability to dynamically adjust your service tier up or down also enables you to leverage the elastic nature of the cloud and save money.
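Combining per-day billing with quick tier changes makes a simple scale-up/scale-down policy practical. The sketch below shows one way such a policy could decide on a tier; the tier names and thresholds are illustrative, and the actual tier change would be performed through the Azure Management Portal or the management API.

```python
# A sketch of a tier-adjustment policy driven by average DTU utilization.
# Tier names and thresholds are illustrative assumptions.

TIERS = ["Basic", "S1", "S2", "P1", "P2", "P3"]  # ordered cheapest to most capable

def recommend_tier(current_tier, avg_dtu_pct, high=80, low=20):
    """Suggest the next tier based on average DTU utilization percentage."""
    i = TIERS.index(current_tier)
    if avg_dtu_pct >= high and i < len(TIERS) - 1:
        return TIERS[i + 1]   # scale up before the workload gets throttled
    if avg_dtu_pct <= low and i > 0:
        return TIERS[i - 1]   # scale down to save money
    return current_tier

print(recommend_tier("S1", 90))  # -> S2
print(recommend_tier("S2", 10))  # -> S1
print(recommend_tier("S1", 50))  # -> S1 (no change needed)
```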

Read this article to learn more about how performance works in the new system and the benchmarks for each service tier.

New Self-Service Restore Support

Have you ever had that sickening feeling when you’ve realized that you inadvertently deleted data within a database and might not have a backup?  We now have built-in Self-Service Restore support with SQL Databases that helps you protect against this.  This support is available in all service tiers (even the Basic Tier).

SQL Database now automatically takes database backups daily and log backups every 5 minutes. The daily backups are also stored in geo-replicated Azure Storage (which stores a copy of them at least 500 miles away from your primary region).

Using the new self-service restore functionality, you can now restore your database to a point in time in the past as defined by the specified backup retention policies of your service tier:

  • Basic Tier: Restore from most recent daily backup
  • Standard Tier: Restore to any point in last 7 days
  • Premium Tier: Restore to any point in last 35 days
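The retention policy above can be expressed as a simple check on whether a desired restore point is still reachable. In this sketch, Basic's most-recent-daily-backup policy is simplified to a one-day window; the tier names match the post, but the modeling is an assumption for illustration.

```python
# A sketch that checks whether a requested point-in-time restore falls
# within each tier's retention window, per the policy listed above.
# Basic is modeled as a 1-day window (a simplification of "most recent
# daily backup").

from datetime import datetime, timedelta

RETENTION_DAYS = {"Basic": 1, "Standard": 7, "Premium": 35}

def can_restore(tier, restore_point, now):
    """True if the requested point in time is within the tier's retention."""
    return now - restore_point <= timedelta(days=RETENTION_DAYS[tier])

now = datetime(2014, 5, 1, 12, 0)
print(can_restore("Standard", datetime(2014, 4, 27), now))  # -> True (about 4 days back)
print(can_restore("Standard", datetime(2014, 4, 20), now))  # -> False (about 11 days back)
print(can_restore("Premium", datetime(2014, 4, 20), now))   # -> True
```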

Restores can be accomplished using either an API we provide or via the Azure Management Portal:

[Image: Self-service restore in the Azure Management Portal]

New Active Geo-replication Support

For Premium Tier databases, we are also adding support that enables you to create up to 4 readable secondary databases in any Azure region.  When active geo-replication is enabled, we will ensure that all transactions committed to the database in your primary region are continuously replicated to the databases in the other regions as well:

[Image: Active geo-replication with readable secondaries]

One of the primary benefits of active geo-replication is that it provides application control over disaster recovery at a database level.  Having cross-region redundancy enables your applications to recover in the event of a disaster (e.g. a natural disaster). 

The new active geo-replication support enables you to initiate/control any failovers – allowing you to shift the primary database to any of your secondary regions:

[Image: Initiating a failover to a secondary region]
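Since failover is application-initiated, your application needs a rule for picking which secondary to promote. The sketch below uses one plausible rule: promote the most caught-up secondary. The region names, the lag metric, and the selection rule are all illustrative assumptions, not the service's actual internals.

```python
# A sketch of the failover decision when the primary region becomes
# unavailable: promote one of the readable secondaries to primary.
# Selection rule (least replication lag) is an illustrative assumption.

def choose_failover_target(secondaries):
    """Pick the secondary with the least replication lag to promote."""
    return min(secondaries, key=lambda s: s["lag_seconds"])

secondaries = [
    {"region": "West Europe", "lag_seconds": 12},
    {"region": "East Asia",   "lag_seconds": 45},
    {"region": "West US",     "lag_seconds": 3},
]
print(choose_failover_target(secondaries)["region"])  # -> West US
```

Promoting the least-lagged secondary minimizes the amount of recently committed data at risk during the failover.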

This provides a robust business continuity offering, and enables you to run mission critical solutions in the cloud with confidence.  You can learn more about this support here.

Start Using the Preview of All of the Above Features Today!

All of the above features are now available to start using in preview form. 

You can sign-up for the preview by visiting our Preview center and clicking the “Try Now” button on the “New Service Tiers for SQL Databases” option.  You can then choose which Azure subscription you wish to enable them for.  Once enabled, you can immediately start creating new Basic, Standard or Premium SQL Databases.

Summary

This update of SQL Database support on Azure provides some great new features that enable you to build even better cloud solutions.  If you don’t already have an Azure account, you can sign up for a free trial and start using all of the above features today.  Then visit the Azure Developer Center to learn more about how to build apps with it.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

32 Comments

  • Hello Scott,

    Can we expect some kind of announcement/guidance about the deprecation of Azure SQL Federation?

  • Any updates on supporting TDE? Thanks!

  • In the Active Geo Replication, are write queries executed synchronously across geo separated nodes, and if so, what is the average latency that we can expect to incur?

    Thanks, and keep up the good work :)

  • New SQL DB announcements are awesome!

    But the new Preview Portal although very pretty and shiny is very, very confusing to use! Way too much going on I'm completely lost. Please don't remove the old portal after this thing goes out of preview.

  • Hi Scott

    Great new features.

    However, I would urge you to reconsider the pricing for databases between 2 GB and say 25 GB.

    It seems to me you need a new band (a "mini S1") to take account of these smaller databases to avoid a dramatic relative price increase being forced on existing customers.

    I have databases in the 2-5GB range. The pricing story is not good for databases in this size range. Running these on S1 under the full GA pricing will cost over twice as much money as the present Web database.

    Example:

    1x 2-4 GB Web DB now = approx 8-14 GBP per month (14-20 USD).
    1x 2-4 GB S1 DB with full GA pricing = 26 GBP per month (40 USD).

    I am not the only person noticing this. If Microsoft insist on moving ahead with this pricing I and others at this scale will be forced to look elsewhere. 40 USD for a 2-5 GB DB leaves me feeling way overcharged.

  • Hi Scott,

    This looks like it might be a good fit for one of our clients. I was looking at using database mirroring between SQL VMs in different data-centres, to provide a 'hot spare'. However this turned out to be beyond their budget.

    Am I correct in thinking that with the standard-tier service, in the event of a complete data centre loss we would be able to restore the last backup to another region? And in theory this backup gets us to within five minutes of the outage?

    If so, that might just be a balance between cost and availability that's acceptable.

    The documentation at http://msdn.microsoft.com/library/azure/hh852669.aspx suggests disaster recovery on standard tier is achieved via "Database Copy + Manual Export" but I don't seem to be able to find a thorough explanation of what's involved in that when trouble strikes.

  • Can't wait to use Azure now.

  • Any word on adding Full Text support, in SQL Azure or something similar to amazon CloudSearch?

  • I echo what Chris and Tamas posted - while I welcome these new options, I also have several issues with the attitude towards federations, general scale out patterns and lack of transparency on actual concrete resource allocations.

    My thoughts are distilled on a rant posted here:
    http://tmblr.co/ZVuYOw1EUsp3i

  • When will SQL Azure support SQL 2014 features such as in memory tables?

  • I am an existing Azure SQL DB Customer.
    The GA pricing for S1 between 1 GB and 8 GB isn't competitive, either to the existing Azure SQL Web databases or to your competitors. Amazons RDS service is cheaper in this size range and they offer a whole year of that service free to new customers.
    I would rather stay on Azure but the GA pricing is making me look at other options.
    Sure, the S1 tier offers up to 250 GB but my databases are in the 1 to 10 GB range. This doesn't feel like pay-only-for-what-you-use.
    What would I like to see? A new tier like S1 but with a smaller maximum database size that is more competitive on price.

  • The performance of the new tiers (Basic and Standard) is abysmal. I don't see how MSFT can claim improved performance. The performance of S1 doesn't even meet today's web/business edition (which themselves seemed under-powered).

    If things remain as they are, SQL Database will see a mass exodus towards other platforms. Once you lost the data tier, the compute will follow.

  • I have to agree with the poster about the pricing of database in the 2GB-4GB. The new pricing for S1 is way too expensive in comparison of what we have now.

    Having a site with a 3.6GB that runs just fine with the performance that it gets, I feel that I'm now being forced to pay so much more for the same database size.

    I understand that Microsoft is moving to a more predictable performance model, but I believe Microsoft is missing a tier level. My site in question works well with the performance that is being delivered now. I user cache intensively both on the Azure side and the client (web browser) side to lighten the load on the database. I don't need more predictability/performance, I need an inexpensive place to store my relational data.

  • I am seriously concerned about the lower throughput for the S1 tier compared to what we have today. And also the higher costs for sub-10GB databases.

    Web/Business can easily outperform S1. So I am being asked to pay more for a lot less, right? Seriously now looking at other options.

    PS. To echo what DBUser has said, when I take my relational solution elsewhere, all my Compute is coming with me.

  • Hello everyone,

    Thanks for trying out the new Azure SQL Database service tiers and providing your feedback! As always we are listening carefully to you.

    On the questions here related to the performance I wanted to provide some more information. Based on customer feedback, our first priority for the new service tiers has been performance predictability. In Basic/Standard/Premium we are applying a different design principle than Web/Business. The database’s performance should be as if it is running on a dedicated computer. As you have noticed, each performance level (Basic, S1, S2, P1, P2, and P3) corresponds to a set of defined resources (CPU, memory, IO, and more). This design principle is what delivers predictable performance.
    We are in preview and we do expect to make adjustments to our performance levels during the preview based on customer feedback, our goal is to continue providing great price/performance. We are planning to provide updates when we make these adjustments. You can also expect a blog post with more details about our performance journey shortly.

    Thank you,
    Tobias
    Program Manager, Azure SQL Database

    Links to some additional information:
    • http://msdn.microsoft.com/en-us/library/dn741336.aspx
    • http://msdn.microsoft.com/en-us/library/dn369873.aspx
    • http://msdn.microsoft.com/en-us/library/dn741327.aspx

  • I have raised several concerns with the new SQL pricing and tiers model in a blog post linked on my name url.

    What I am hearing from other Azure customers is echoed by Mike - namely that the Standard tier is an order of magnitude step backwards compared to Web/Business.

    Additionally those of us that have taken a scale OUT option using many small (< 5GB) databases are essentially screwed by the new tier pricing model.

    The key reason we need to scale out is to avoid resource restrictions experienced in more monolithic large databases.
    The new tiers effectively force us into have less and larger dbs, while also gimping us on the resource side for the same money.

    While I'm sure the enterprise types are cheering the new tiers (they can now move their massive ERP databases to Azure), the new model is a serious blow for any legitimate cloud app that uses scale out approaches like custom sharding or Federations.

    Why not keep the Web/Business option available for those of us that want to scale out many small databases at an affordable cost?

    Additionally, trying to apply a generic "one size fits all" performance guarantee in the form of DTUs is completely idiotic because frankly no one will ever fit your abstract vision of a typical SQL database user.

    The truth is that in the real world, database loads vary constantly based on the operations we need to undertake at any point in time.

    Sometimes we need high I/O bursts in order to run an import, sometimes we need high CPU bursts to crunch a big query.

    Giving us a DTU based guarantee is pretty much worthless in the real world.

    What MS should be doing is disclosing the concrete resource guarantees - namely what CPU, I/O and memory allocation we are guaranteed to receive.

    TL;DR:
    - Keep Web/Business option for cloud startups and apps that wish to scale out many small dbs
    - Disclose minimum guarantees on concrete resources like CPU, I/O and memory
    - Don't kill Federations (its actually pretty awesome!)

  • Folks,

    Thank you for the positive comments re the new business continuity features we announced. I want to follow up on the question by Darren Grayson re the restore feature, in particular the ability to restore to other regions. It is correct that our goal is eventually providing that capability, which will be the most economical way to recover in case of a regional failure. However, this capability is not yet available. At this point we only introduced the API and it is limited to restoring to the same logical server. But it does allow our customers to explore it, test the restore performance with the real databases and provide feedback. We will separately announce the cross region support as soon as it is enabled, so stay tuned.

    Re the question about the DR options available for Standard edition at this point the options are the same as what's available for Web and Business edition. Please refer to this article for more details (http://msdn.microsoft.com/en-us/library/azure/hh852669.aspx). In the meantime we are working on other DR options for Standard. In addition, Standard edition supports Point In Time Restore, which is not available with Web and Business editions. See this article for more details (http://msdn.microsoft.com/en-us/library/azure/dn715779.aspx)
    Thank you

    Sasha Nosov
    Program Manager, Azure SQL Database

  • I posted 14 days ago about the much higher cost we, who currently use Web/Business, will have to pay for the new tier when we actually don't need that option.

    I see that others have the same concern. However, I just returned to see if there has been any updates from MS, or at least acknowledge the concern, but I see that our issue has not even been acknowledged. Other issues have been addressed by MS employees. But not the legitimate adverse cost update we're going to incur.

  • John Smith, you are spot on. In my situation I am looking at a 9000% (you read that right) increase in price to get the same performance as the previous Web/Business.

  • I would have loved to try out the new tiers - but as it seems due to the performance limitations of the new tiers, it's virtually impossible to import an existing database into one of these instances. I tried to import a database with about 6 GB of data, couple millions of rows ... and even after multiple hours, the import was stuck at 5 %.

    You need to have a better import workflow for these new tiers, kind of an attach feature.

  • What illustration tool was used for this?
    https://aspblogs.blob.core.windows.net/media/scottgu/Media/image_thumb_278F1829.png

  • Microsoft: please address comments by myself, Paul DB, John Smith, Craig, DBUser, etc.

    We have raised legitimate points re: cost/performance comparison between Web/Business and S1 for small / moderately sized databases. As the saying goes, on these points thus far, "the silence is deafening" from Microsoft.

    It seems the move to performance based pricing actually equates to large or extremely large price increases at these scales and potentially others too.

    There have been no price-point vs. performance comparison metrics provided by Microsoft about this transition from web/business/etc into basic/standard/etc. (e.g. what DTUs does web/business support currently?). My expectation is web/business support quite a lot more DTUs than the new tiers per $, albeit slightly less consistently. If so, why not come out and transparently say that?

    All-in-all, I think through the handling of this transition, Microsoft is deliberately not being honest which is unhelpful at best and immoral at worst.

  • @Chris, we definitely understand your frustration with the lack of a clear answer on the $/perf or other perf numbers (such as our benchmark) between Web/Business and the new service tiers and we are definitely not trying to be unhelpful or dishonest. You may have seen our blog post giving some more information, if not I added a link below.

    The premise of the problem is the fact that for W/B it depends on the activity of the databases that currently share the machine that the database being tested is hosted on, this changes quite frequently as the service load balances databases across machines. So while you may be able to get better throughput on Web database than say even an S2 at one point, you may later get much lower. I think it is fair to say that you are seeing a pretty stable and good performance experience in W/B in many of our locations at the present time. If you look back a few months this wasn't the case and one part of the answer is simply that this is a cycle and as more capacity comes online in a region databases automatically get less densely packed in the Web & Business edition model and stability increases, as this capacity fills up stability degrades again.

    Looking at the long term customer feedback since the inception of the SQLDB service we are confident that we need to provide a model where you can trust the level of performance we provide. We have started this journey with the introduction of Basic, Standard and Premium. Rest assured that we are also absolutely focused on our customers being happy with the combination of the level of service AND the price/performance. Right now we are in preview with our new tiers and one of the main reasons is to make sure we GA with the right performance levels as well as making the right engineering investments for the future.

    I am super happy to schedule a Lync call with you to better understand your concerns if I am missing something, feel free to e-mail me on tobiast at microsoft dot com and we'll set it up.

    Thanks!
    /Tobias Ternstrom
    Program Manager, Azure SQL Database

    Blog post: http://azure.microsoft.com/blog/2014/05/19/performance-in-the-new-azure-sql-database-service-tiers/

  • Why did you kill off Federations? Can we expect something similar in the near future? How can Microsoft build a cloud computing platform without SQL scale out? I'm vastly disappointed by the new changes. I'm going to start looking into a NewSQL solution. You've lost a customer.

  • @Ben -- We understand your concerns around the deprecation of the Federations in SQLDB. The change is necessary because of scaling limits in the current implementation of Federations and incompatibility with the newest features we are introducing in the cloud data platform. We will be investing over time in deeper guidance and capabilities to fill the gap – in the interim we recommend utilizing a “self-sharding” approach that many of our large scale customers (including those that host hundreds or thousands of DBs in Azure use) implement on our platform. Examples of sharding patterns for databases in Azure are covered in the Cloud Service Fundamentals samples and articles from the Azure CAT team -- http://social.technet.microsoft.com/wiki/contents/articles/17987.cloud-service-fundamentals.aspx .

    In any case, we would be interested in learning more about how you are using Federations today and what your highest priority requirements are for future capabilities, so I would welcome the opportunity for a phone discussion or separate email thread. Please reach out to me by email to stuarto at Microsoft dot com.

    Stuart Ozer
    Group Program Manager
    Azure SQL Database

  • I contacted Azure Support about this statement:

    "We are extending the availability SLA of all of the new SQL Database tiers to be 99.95%. This SLA applies to the Basic, Standard and Premium tier options – enabling you to deploy and run SQL Databases on Azure with even more confidence."

    They said that it is not valid as of now. SLA will only apply to new tiers when they are moved from Preview to General Availability.

    I was thinking of using new Tiers in Production apps, but looks like will have to wait.

  • @EricP: Microsoft will guarantee at least a 99.95% uptime SLA for Basic, Standard and Premium at time of general availability for each of these service tiers. While we’re gathering feedback from customers about the new database tiers during the Public Preview period, there are many customers who are already running their production workload on the new database tiers today, including both external, large enterprise customers as well as many critical Microsoft products and services. During Public Preview the new database tiers also have the same level of CSS support as the Generally Available Web and Business Editions. You can find more information in the Support and SLA section here: http://azure.microsoft.com/en-us/pricing/details/sql-database/#basic-standard-and-premium

    At the same time we’re actively working on bringing the new database tiers to GA as soon as possible.

    Thanks!
    Jason Wu
    Group Program Manager, Azure SQL Database

  • Is there a better/faster upgrade path for existing SQL Azure users than restoring from a .bacpac in an Azure blob store? I am working with a client who has an 8 GB production database. We tried loading the bacpac from the management console into a new S2 database to test the upgrade process. It took almost the entire day to do the restore; we want to avoid that much production downtime if possible.

Comments have been disabled for this content.