Azure: New DocumentDB NoSQL Service, New Search Service, New SQL AlwaysOn VM Template, and more

Today we released a major set of updates to Microsoft Azure. These updates include:

  • DocumentDB: Preview of a New NoSQL Document Service for Azure
  • Search: Preview of a New Search-as-a-Service offering for Azure
  • Virtual Machines: Portal support for SQL Server AlwaysOn + community-driven VMs
  • Web Sites: Support for Web Jobs and Web Site processes in the Preview Portal
  • Azure Insights: General Availability of Microsoft Azure Monitoring Services Management Library
  • API Management: Support for API Management REST APIs

All of these improvements are now available to use immediately (note that some features are still in preview).  Below are more details about them:

DocumentDB: Announcing a New NoSQL Document Service for Azure

I’m excited to announce the preview of our new DocumentDB service - a NoSQL document database service designed for scalable and high performance modern applications.  DocumentDB is delivered as a fully managed service (meaning you don’t have to manage any infrastructure or VMs yourself) with an enterprise grade SLA.

As a NoSQL store, DocumentDB is truly schema-free. It allows you to store and query any JSON document, regardless of schema. The service provides built-in automatic indexing support – which means you can write JSON documents to the store and immediately query them using a familiar document oriented SQL query grammar. You can optionally extend the query grammar to perform service side evaluation of user defined functions (UDFs) written in server-side JavaScript as well. 

DocumentDB is designed to linearly scale to meet the needs of your application. The DocumentDB service is purchased in capacity units, each offering a reservation of high performance storage and dedicated performance throughput. Capacity units can be easily added or removed via the Azure portal or REST based management API based on your scale needs. This allows you to elastically scale databases in fine grained increments with predictable performance and no application downtime simply by increasing or decreasing capacity units.

Over the last year, we have used DocumentDB internally within Microsoft for several high-profile services.  We now have DocumentDB databases that are each hundreds of TBs in size, each processing millions of complex DocumentDB queries per day, with predictable low single-digit millisecond latency.  DocumentDB provides a great way to scale applications and solutions like this to an incredible size.

DocumentDB also enables you to tune performance further by customizing the index policies and consistency levels you want for a particular application or scenario, making it an incredibly flexible and powerful data service for your applications.   For queries and read operations, DocumentDB offers four distinct consistency levels - Strong, Bounded Staleness, Session, and Eventual. These consistency levels allow you to make sound tradeoffs between consistency and performance. Each consistency level is backed by a predictable performance level ensuring you can achieve reliable results for your application.
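For example, with the .NET SDK you can ask for the consistency level you want when creating the client. A minimal sketch (the endpoint and key are placeholders, and the optional consistency argument is per the preview SDK's DocumentClient constructor):

// A minimal sketch: asking for Session consistency when creating the client.
// The endpoint/key values are placeholders; the optional consistency argument
// is per the preview .NET SDK's DocumentClient constructor.
var client = new DocumentClient(
    new Uri("https://myaccount.documents.azure.com"),
    authKey,
    null,                        // default connection policy
    ConsistencyLevel.Session);   // trade a little consistency for lower latency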

DocumentDB has made a significant bet on ubiquitous formats like JSON, HTTP and REST – which makes it easy to start taking advantage of from any Web or Mobile applications.  With today’s release we are also distributing .NET, Node.js, JavaScript and Python SDKs.  The service can also be accessed through RESTful HTTP interfaces and is simple to manage through the Azure preview portal.

Provisioning a DocumentDB account

To get started with DocumentDB you provision a new database account. To do this, use the new Azure Preview Portal (http://portal.azure.com), click the Azure gallery and select the Data, storage, cache + backup category, and locate the DocumentDB gallery item.

image

Once you select the DocumentDB item, choose the Create command to bring up the Create blade for it.

In the create blade, specify the name of the service you wish to create, the amount of capacity you wish to scale your DocumentDB instance to, and the location around the world that you want to deploy it to (e.g. the West US Azure region):

image

Once provisioning is complete, you can start to manage your DocumentDB account by clicking the new instance icon on your Azure portal dashboard. 

image

The keys tile can be used to retrieve the security keys to use to access the DocumentDB service programmatically.

Developing with DocumentDB

DocumentDB provides a number of different ways to program against it. You can use the REST API directly over HTTPS, or you can choose from the .NET, Node.js, JavaScript or Python client SDKs.

The JSON data I am going to use for this example are two families:

// AndersonFamily.json file
{
    "id": "AndersenFamily",
    "lastName": "Andersen",
    "parents": [
        { "firstName": "Thomas" },
        { "firstName": "Mary Kay" }
    ],
    "children": [
        { "firstName": "John", "gender": "male", "grade": 7 }
    ],
    "pets": [
        { "givenName": "Fluffy" }
    ],
    "address": { "country": "USA", "state": "WA", "city": "Seattle" }
}

and

// WakefieldFamily.json file
{
    "id": "WakefieldFamily",
    "parents": [
        { "familyName": "Wakefield", "givenName": "Robin" },
        { "familyName": "Miller", "givenName": "Ben" }
    ],
    "children": [
        {
            "familyName": "Wakefield",
            "givenName": "Jesse",
            "gender": "female",
            "grade": 1
        },
        {
            "familyName": "Miller",
            "givenName": "Lisa",
            "gender": "female",
            "grade": 8
        }
    ],
    "pets": [
        { "givenName": "Goofy" },
        { "givenName": "Shadow" }
    ],
    "address": { "country": "USA", "state": "NY", "county": "Manhattan", "city": "NY" }
}

Using the NuGet package manager in Visual Studio, I can search for and install the DocumentDB .NET package into any .NET application. With the URI and authentication keys for the DocumentDB service that I retrieved earlier from the Azure portal, I can then connect to the DocumentDB service I just provisioned, create a database, create a collection, insert some JSON documents, and immediately start querying them:

using (var client = new DocumentClient(new Uri(endpoint), authKey))
{
    var database = new Database { Id = "ScottsDemoDB" };
    database = await client.CreateDatabaseAsync(database);

    var collection = new DocumentCollection { Id = "Families" };
    collection = await client.CreateDocumentCollectionAsync(database.SelfLink, collection);

    //DocumentDB supports strongly typed POCO objects and also dynamic objects
    dynamic andersonFamily = JsonConvert.DeserializeObject(File.ReadAllText(@".\Data\AndersonFamily.json"));
    dynamic wakefieldFamily = JsonConvert.DeserializeObject(File.ReadAllText(@".\Data\WakefieldFamily.json"));

    //persist the documents in DocumentDB
    await client.CreateDocumentAsync(collection.SelfLink, andersonFamily);
    await client.CreateDocumentAsync(collection.SelfLink, wakefieldFamily);

    //very simple query returning the full JSON document matching a simple WHERE clause
    var query = client.CreateDocumentQuery(collection.SelfLink, "SELECT * FROM Families f WHERE f.id = 'AndersenFamily'");
    var family = query.AsEnumerable().FirstOrDefault();

    Console.WriteLine("The Anderson family have the following pets:");
    foreach (var pet in family.pets)
    {
        Console.WriteLine(pet.givenName);
    }

    //select JUST the child record out of the Family record where the child's gender is male
    query = client.CreateDocumentQuery(collection.DocumentsLink, "SELECT * FROM c IN Families.children WHERE c.gender = 'male'");
    var child = query.AsEnumerable().FirstOrDefault();

    Console.WriteLine("The Andersons have a son named {0} in grade {1}", child.firstName, child.grade);

    //cleanup test database
    await client.DeleteDatabaseAsync(database.SelfLink);
}

As you can see above – the .NET API for DocumentDB fully supports the .NET async pattern, which makes it ideal for use with applications you want to scale well. 
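The UDF support mentioned earlier works the same way: you register a JavaScript function with the collection and then reference it from a query. A hedged sketch (the UDF name and body are illustrative, and UDF invocation syntax changed after the preview):

// Register a JavaScript UDF with the collection (name and body are illustrative).
var hasPets = new UserDefinedFunction
{
    Id = "HasPets",
    Body = "function(pets) { return pets && pets.length > 0; }"
};
hasPets = await client.CreateUserDefinedFunctionAsync(collection.SelfLink, hasPets);

// Reference the UDF from a query (preview-era syntax; later SDKs use a udf. prefix).
var petOwners = client.CreateDocumentQuery(collection.SelfLink,
    "SELECT * FROM Families f WHERE HasPets(f.pets)");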

Server-side JavaScript Stored Procedures

If I wanted to perform updates affecting multiple documents within a transaction, I could define a stored procedure using JavaScript that swaps pets between families. In this scenario it is important to ensure that one family doesn’t end up with all the pets and the other with none due to something unexpected happening. If an error occurs during the swap, the database must roll back the transaction and leave things in a consistent state.  I can do this with the following stored procedure that I run within the DocumentDB service:

function SwapPets(family1Id, family2Id) {
    var context = getContext();
    var collection = context.getCollection();
    var response = context.getResponse();

    collection.queryDocuments(collection.getSelfLink(), 'SELECT * FROM Families f WHERE f.id = "' + family1Id + '"', {},
    function (err, documents, responseOptions) {
        var family1 = documents[0];

        collection.queryDocuments(collection.getSelfLink(), 'SELECT * FROM Families f WHERE f.id = "' + family2Id + '"', {},
        function (err2, documents2, responseOptions2) {
            var family2 = documents2[0];

            // swap the pets arrays between the two families
            var itemSave = family1.pets;
            family1.pets = family2.pets;
            family2.pets = itemSave;

            collection.replaceDocument(family1._self, family1,
                function (err3, docReplaced) {
                    collection.replaceDocument(family2._self, family2, {});
                });

            response.setBody(true);
        });
    });
}

 

If an exception is thrown in the JavaScript function, for instance due to a concurrency violation when updating a record, the transaction is rolled back and the system is returned to the state it was in before the function began.

It’s easy to register the stored procedure in code like below (for example: in a deployment script or app startup code):

    //register a stored procedure
    StoredProcedure storedProcedure = new StoredProcedure
    {
        Id = "SwapPets",
        Body = File.ReadAllText(@".\JS\SwapPets.js")
    };

    storedProcedure = await client.CreateStoredProcedureAsync(collection.SelfLink, storedProcedure);

 

And just as easy to execute the stored procedure from within your application:

    //execute stored procedure passing in the two family documents involved in the pet swap
    dynamic result = await client.ExecuteStoredProcedureAsync<dynamic>(storedProcedure.SelfLink, "AndersenFamily", "WakefieldFamily");

If we checked the pets now linked to the Anderson Family we’d see they have been swapped.
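For example, re-running the earlier query against the Andersen document now shows the Wakefield pets:

// Re-query the Andersen family; its pets collection now holds Goofy and Shadow.
var swapped = client.CreateDocumentQuery(collection.SelfLink,
    "SELECT * FROM Families f WHERE f.id = 'AndersenFamily'")
    .AsEnumerable().FirstOrDefault();

foreach (var pet in swapped.pets)
{
    Console.WriteLine(pet.givenName);
}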

Learning More

It’s really easy to get started with DocumentDB and create a simple working application in a couple of minutes.  The above was but one simple example of how to start using it.  Because DocumentDB is schema-less you can use it with literally any JSON document.  Because it performs automatic indexing on every JSON document stored within it, you get screaming performance when querying those JSON documents later. Because it scales linearly with consistent performance, it is ideal for applications you think might get large.

You can learn more about DocumentDB from the new DocumentDB development center here.

Search: Announcing preview of new Search as a Service for Azure

I’m excited to announce the preview of our new Azure Search service.  Azure Search makes it easy for developers to add great search experiences to any web or mobile application.   

Azure Search provides developers with all of the features needed to build out their search experience without having to deal with the typical complexities that come with managing, tuning and scaling a real-world search service.  It is delivered as a fully managed service with an enterprise grade SLA.  We also are releasing a Free tier of the service today that enables you to use it with small-scale solutions on Azure at no cost.

Provisioning a Search Service

To get started, let’s create a new search service.  In the Azure Preview Portal (http://portal.azure.com), navigate to the Azure Gallery, choose the Data, storage, cache + backup category, and locate the Azure Search gallery item.

image

Locate the “Search” service icon and select Create to create an instance of the service:

image

You can choose from two pricing tier options: Standard, which provides dedicated capacity for your search service, and a Free option that allows every Azure subscription to get a small search service in a shared environment at no cost.

The Standard tier can be easily scaled up or down and provides dedicated capacity guarantees to ensure that search performance is predictable for your application.  It also supports indexing tens of millions of documents across multiple indexes.

The Free tier is limited to 10,000 documents, up to 3 indexes, and has no dedicated capacity guarantees. However, it is completely free and provides a great way to learn and experiment with all of the features of Azure Search.

Managing your Azure Search service

After provisioning your Search service, you will land in the Search blade within the portal - which allows you to manage the service, view usage data and tune the performance of the service:

image

I can click on the Scale tile above to bring up the details of the number of resources allocated to my search service. If I had created a Standard search service, I could use this to increase the number of replicas allocated to my service to support more searches per second (or to provide higher availability) and the number of partitions to give me support for higher numbers of documents within my search service.

Creating a Search Index

Now that the search service is created, I need to create a search index that will hold the documents (data) to be searched. To get started, I need two pieces of information from the Azure Portal: the service URL to access my Azure Search service (accessed via the Properties tile) and the Admin Key to authenticate against the service (accessed via the Keys tile).

image

Using this search service URL and admin key, I can start using the search service APIs to create an index and later upload data and issue search requests. I will be sending HTTP requests against the API using that key, so I’ll set up a .NET HttpClient object to do this as follows:

HttpClient client = new HttpClient();
client.DefaultRequestHeaders.Add("api-key", "19F1BACDCD154F4D3918504CBF24CA1F");

I’ll start by creating the search index. In this case I want an index I can use to search for contacts in my dataset, so I want searchable fields for their names and tags. I also want to track the last contact date (so I can filter or sort on it later) and their address as a lat/long location so I can use it in filters as well. To make things easy, I will be using JSON.NET (add the NuGet package to your VS project) to serialize objects to JSON.

var index = new
{
    name = "contacts",
    fields = new[]
    {
        new { name = "id", type = "Edm.String", key = true },
        new { name = "fullname", type = "Edm.String", key = false },
        new { name = "tags", type = "Collection(Edm.String)", key = false },
        new { name = "lastcontacted", type = "Edm.DateTimeOffset", key = false },
        new { name = "worklocation", type = "Edm.GeographyPoint", key = false },
    }
};

var response = client.PostAsync("https://scottgu-dev.search.windows.net/indexes/?api-version=2014-07-31-Preview",
                                new StringContent(JsonConvert.SerializeObject(index), Encoding.UTF8, "application/json")).Result;
response.EnsureSuccessStatusCode();

You can run this code as part of your deployment code or as part of application initialization.

Populating a Search Index

Azure Search uses a push API for indexing data. You can call this API with batches of up to 1000 documents to be indexed at a time. Since it’s your code that pushes data into the index, the original data may be anywhere: in an Azure SQL Database, a DocumentDB database, blob/table storage, etc.  You can even populate it with data stored on-premises or in a non-Azure cloud provider.

Note that indexing is rarely a one-time operation. You will probably have an initial set of data to load from your data source, but then you will want to push new documents as well as update and delete existing ones. If you use Azure Websites, this is a natural scenario for Webjobs that can run your indexing code regularly in the background.
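Such a WebJob can be a plain console app. Here is a minimal, hypothetical skeleton that reuses the same push call shown below (BuildBatchFromSource and the admin key are placeholders for your own data-source query and credentials):

public class IndexerWebJob
{
    // Hypothetical on-demand WebJob that pushes one batch of changed documents
    // into the index; BuildBatchFromSource() stands in for your own source query.
    public static void Main()
    {
        var client = new HttpClient();
        client.DefaultRequestHeaders.Add("api-key", "<your-admin-key>");

        var batch = BuildBatchFromSource();
        var response = client.PostAsync(
            "https://scottgu-dev.search.windows.net/indexes/contacts/docs/index?api-version=2014-07-31-Preview",
            new StringContent(JsonConvert.SerializeObject(batch), Encoding.UTF8, "application/json")).Result;
        response.EnsureSuccessStatusCode();
    }

    static object BuildBatchFromSource()
    {
        // placeholder: pull up to 1000 new/changed records from your data source
        return new { value = new object[0] };
    }
}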

Regardless of where you host it, the code to index data needs to pull data from the source and push it into Azure Search. In the example below I’m just making up data, but you can see how I could be using the result of a SQL or LINQ query or anything that produces a set of objects that match the index fields we identified above.

var batch = new
{
    value = new[]
    {
        new
        {
            id = "221",
            fullname = "Jay Adams",
            tags = new string[] { "work" },
            lastcontacted = DateTimeOffset.UtcNow,
            worklocation = new
            {
                type = "Point",
                coordinates = new[] { -122.131577, 47.678581 }
            }
        },
        new
        {
            id = "714",
            fullname = "Catherine Abel",
            tags = new string[] { "work", "personal" },
            lastcontacted = DateTimeOffset.UtcNow,
            worklocation = new
            {
                type = "Point",
                coordinates = new[] { -121.825579, 47.1419814 }
            }
        }
    }
};

var response = client.PostAsync("https://scottgu-dev.search.windows.net/indexes/contacts/docs/index?api-version=2014-07-31-Preview",
                                new StringContent(JsonConvert.SerializeObject(batch), Encoding.UTF8, "application/json")).Result;
response.EnsureSuccessStatusCode();

Searching an Index

After creating an index and populating it with data, I can now issue search requests against the index. Searches are simple HTTP GET requests against the index, and responses contain the data we originally uploaded as well as accompanying scoring information.

I can do a simple search by executing the code below, where searchText is a string containing the user input, something like abel work for example:

var response = client.GetAsync("https://scottgu-dev.search.windows.net/indexes/contacts/docs?api-version=2014-07-31-Preview&search=" + Uri.EscapeDataString(searchText)).Result;
response.EnsureSuccessStatusCode();

dynamic results = JsonConvert.DeserializeObject(response.Content.ReadAsStringAsync().Result);

foreach (var result in results.value)
{
    Console.WriteLine("FullName:" + result.fullname + " score:" + (double)result["@search.score"]);
}

Learning More

The above is just a simple example of what you can do.  There are a lot of other things we could do with searches. For example, I can use query string options to filter, sort, project and page over the results, use hit-highlighting and faceting to create a richer way to navigate results, and use suggestions to implement auto-complete within my web or mobile UI.
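For example, a filtered, sorted, paged query is just more query string parameters on the same GET request. A sketch reusing the contacts index from above (the $filter/$orderby expressions follow the OData-style syntax the preview API uses):

// Sketch: filter to recent contacts, sort newest first, return the first page of 10.
var url = "https://scottgu-dev.search.windows.net/indexes/contacts/docs" +
          "?api-version=2014-07-31-Preview" +
          "&search=" + Uri.EscapeDataString("work") +
          "&$filter=" + Uri.EscapeDataString("lastcontacted ge 2014-01-01T00:00:00Z") +
          "&$orderby=" + Uri.EscapeDataString("lastcontacted desc") +
          "&$top=10&$skip=0";

var response = client.GetAsync(url).Result;
response.EnsureSuccessStatusCode();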

In this example, I used the default ranking model, which uses statistics of the indexed text and search string to compute scores. You can also author your own scoring profiles that model scores in ways that match the needs of your application.

Check out the Azure Search documentation for more details on how to get started, and some of the more advanced use-cases you can take advantage of.  With the free tier now available at no cost to every Azure subscriber, there is no longer any reason not to have Search fully integrated within your applications.

Virtual Machines: Support for SQL Server AlwaysOn, VM Depot images

Last month we added support for managing VMs within the Azure Preview Portal (http://portal.azure.com).  We also released built-in portal support that enables you to easily create multi-VM SharePoint Server Farms as well as a slew of additional Azure Certified VM images.  You can learn more about these updates in my last blog post.

Today, I’m excited to announce new support for automatically deploying SQL Server VMs with AlwaysOn configured, as well as integrated portal support for community supported VM Depot images.

SQL Server AlwaysOn Template

AlwaysOn Availability Groups, released in SQL Server 2012 and enhanced in SQL Server 2014, guarantee high availability for mission-critical workloads. Last year we started supporting SQL Availability Groups on Azure Infrastructure Services. In such a configuration, two SQL replicas (primary and secondary), each in its own Azure VM, are configured for automatic failover, and a listener (DNS name) is configured for client connectivity. Other required components are a file share witness, to guarantee quorum in the configuration and avoid “split brain” scenarios, and a domain controller to join all VMs to the same domain. The SQL Server replicas and the domain controller replicas are each deployed in an availability set to ensure they land in different Azure failure and upgrade domains.

Prior to today’s release, setting up the Availability Group configuration could be tedious and time consuming. We have dramatically simplified this experience through a new SQL Server AlwaysOn template in the Azure Gallery. This template fully automates the configuration of a highly available SQL Server deployment on Azure Infrastructure Services using an Availability Group.

You can find the template by navigating to the Azure Gallery within the Azure Preview Portal (http://portal.azure.com), selecting the Virtual Machine category on the left, and selecting the SQL Server 2014 AlwaysOn gallery item. In the gallery details page, select Create. All you need to do is provide some basic configuration information, such as the administrator credentials for the VMs; the rest of the settings are defaulted for you. You may want to change the default Listener name, as this is what your applications will use to connect to SQL Server.

image

Upon creation, 5 VMs are created in the resource group: 2 VMs for the SQL Server replicas, 2 VMs for the Domain Controller replicas, and 1 VM for the file share witness.

Once created, you can RDP to one of the SQL Server VMs to see the Availability Group configuration as depicted below:

image
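Applications then connect through the listener rather than to an individual replica. A minimal ADO.NET sketch (the listener, database and credential names are placeholders; note the MultiSubnetFailover flag, which speeds up reconnects after a failover):

// requires System.Data.SqlClient; all names below are placeholders
var connectionString =
    "Server=tcp:myaglistener,1433;Database=MyAppDb;" +
    "User Id=myuser;Password=<password>;MultiSubnetFailover=True";

using (var connection = new SqlConnection(connectionString))
{
    connection.Open();
    // queries run against the current primary; failover is transparent here
}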

Try out the SQL Server AlwaysOn template in the Azure Preview Portal today and give us your feedback!

VM Depot in Azure Gallery

Community-driven VM Depot images have been supported on the Azure platform for a couple of years now. But prior to today’s release they weren’t fully integrated into the mainline user experience.

Today, I’m excited to announce that we have integrated community VMs into the Azure Preview Portal and the Azure gallery. With this release, you will find close to 300 pre-configured Virtual Machine images for Microsoft Azure.

Using these images, fully functional Virtual Machines can be deployed in the Preview Portal in minutes and customized for specific use cases. From base operating system distributions (such as Debian, Ubuntu, CentOS, SUSE and FreeBSD), through developer stacks (such as LAMP, Ruby on Rails, Node and Django), to complete applications (such as WordPress, Drupal and Apache Solr), there is something for everyone in VM Depot.

Try out the VM Depot images in the Azure gallery from within the Virtual Machine category.

image

Web Sites: WebJobs and Process Management in the Preview Portal

Starting with today’s Azure release, Web Site WebJobs are now supported in the Azure Preview Portal.  You can also now drill into your Web Sites and monitor the health of any processes running within them (both the processes hosting your web code and those hosting your WebJobs).

Web Site WebJobs

Using WebJobs, you can now run any code within your Azure Web Sites – and do so in a way that is readily parallelizable, globally scalable, and complete with remote debugging, full VS support and an optional SDK to facilitate authoring. For more information about the power of WebJobs, visit Azure WebJobs recommended resources.
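For example, with the optional WebJobs SDK, a continuous WebJob can bind a function to an Azure Storage queue with a couple of attributes; a minimal sketch (the "orders" queue name is hypothetical):

// Minimal sketch using the preview WebJobs SDK: JobHost discovers the function
// below and invokes it whenever a message lands on the (hypothetical) "orders" queue.
public class Program
{
    public static void Main()
    {
        var host = new JobHost();
        host.RunAndBlock();
    }

    public static void ProcessOrder([QueueTrigger("orders")] string message)
    {
        Console.WriteLine("Processing: " + message);
    }
}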

With today’s Azure release, we now support two types of WebJobs: On Demand and Continuous.  To use WebJobs in the preview portal, navigate to your web site and select the WebJobs tile within the Web Site blade. Notice that the tile also now shows the count of WebJobs available.

image

By drilling into the tile, you can view existing WebJobs as well as create new On Demand or Continuous WebJobs. Scheduled WebJobs are not yet supported in the preview portal, but expect to see this in the near future.

Web Site Processes

I’m excited to announce a new feature in the Azure Web Sites experience in the Preview Portal - Web Site Processes. Using Web Site Processes, you can enumerate the different instances of your site, browse through the different processes on each instance, and even drill down to the handles and modules associated with each process. You can then check for detailed information like version, language and more.

image

In addition, you also get rich monitoring for CPU, Working Set and Thread count at the process level.  Just like with Task Manager for Windows, data collection begins when you open the Web Site Processes blade, and stops when you close it.

image

This feature is especially useful when your site has been scaled out and is misbehaving in some specific instances but not in others. You can quickly identify runaway processes, find open file handles, and even kill a specific process instance.

Monitoring and Management SDK: Programmatic Access to Monitoring Data

The Azure Management Portal provides built-in monitoring and management support that makes it easy for you to track the health of your applications and solutions deployed within Azure.

If you want to programmatically access monitoring and management features in Azure, you can also now use our .NET SDK from NuGet. We are releasing this SDK to general availability today, so you can now use it for your production services!

For example, if you want to build your own custom dashboard that shows metric data from across your services, you can get that metric data via the SDK:

// Create the metrics client by obtaining the certificate with the specified thumbprint.
MetricsClient metricsClient = new MetricsClient(new CertificateCloudCredentials(SubscriptionId, GetStoreCertificate(Thumbprint)));

// Build the resource ID string.
string resourceId = ResourceIdBuilder.BuildWebSiteResourceId("webtest-group-WestUSwebspace", "webtests-site");

// Get the metric definitions.
MetricDefinitionCollection metricDefinitions = metricsClient.MetricDefinitions.List(resourceId, null, null).MetricDefinitionCollection;

// Display the available metric definitions.
Console.WriteLine("Choose metrics (comma separated) to list:");
int count = 0;
foreach (MetricDefinition metricDefinition in metricDefinitions.Value)
{
    Console.WriteLine(count + ":" + metricDefinition.DisplayName);
    count++;
}

// Ask the user which metrics they are interested in.
var desiredMetrics = Console.ReadLine().Split(',').Select(x => metricDefinitions.Value.ToArray()[Convert.ToInt32(x.Trim())]);

// Get the metric values for the last 20 minutes.
MetricValueSetCollection values = metricsClient.MetricValues.List(
    resourceId,
    desiredMetrics.Select(x => x.Name).ToList(),
    "",
    desiredMetrics.First().MetricAvailabilities.Select(x => x.TimeGrain).Min(),
    DateTime.UtcNow - TimeSpan.FromMinutes(20),
    DateTime.UtcNow
).MetricValueSetCollection;

// Display the metric values to the user.
foreach (MetricValueSet valueSet in values.Value)
{
    Console.WriteLine(valueSet.DisplayName + " for the past 20 minutes:");
    foreach (MetricValue metricValue in valueSet.MetricValues)
    {
        Console.WriteLine(metricValue.Timestamp + "\t" + metricValue.Average);
    }
}

Console.Write("Press any key to continue:");
Console.ReadKey();

We support metrics for a variety of services with the monitoring SDK:

Service             Typical metrics                                      Frequencies
-------             ---------------                                      -----------
Cloud services      CPU, Network, Disk                                   5 min, 1 hr, 12 hrs
Virtual machines    CPU, Network, Disk                                   5 min, 1 hr, 12 hrs
Websites            Requests, Errors, Memory, Response time, Data out    1 min, 1 hr
Mobile Services     API Calls, Data Out, SQL performance                 1 hr
Storage             Requests, Success rate, End2End latency              1 min, 1 hr
Service Bus         Messages, Errors, Queue length, Requests             5 min
HDInsight           Containers, Apps running                             15 min

If you’d like to manage advanced autoscale settings that aren’t possible to configure in the Portal, you can also do that via the SDK. For example, you can set up autoscale rules based on custom metrics – you can autoscale on anything that is returned from MetricDefinitions.
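As a rough sketch (hedged: the class and method names below are from the library's Autoscale namespace as best I can reconstruct them, and the resource ID arguments are placeholders):

// Hedged sketch of reading an autoscale setting programmatically; the builder
// arguments are placeholders and the API surface may differ from this outline.
var autoscaleClient = new AutoscaleClient(
    new CertificateCloudCredentials(SubscriptionId, GetStoreCertificate(Thumbprint)));

var autoscaleResourceId = AutoscaleResourceIdBuilder.BuildWebSiteResourceId(
    "webtest-group-WestUSwebspace", "DefaultServerFarm");

var setting = autoscaleClient.Settings.Get(autoscaleResourceId);
// Inspect setting.Setting.Profiles (capacities, rules, metric triggers) and push
// changes back via autoscaleClient.Settings.CreateOrUpdate(...).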

All of the documentation on the SDK is available on MSDN.

API Management: Support for Services REST API

We launched the Azure API Management service into preview in May of this year.  The API Management service enables customers to quickly and securely publish APIs to partners, the public development community, and even internal developers.

Today, I’m excited to announce the availability of the API Management REST API, which opens up a large number of new scenarios. It can be used to manage APIs, products, subscriptions, users and groups, in addition to accessing your API analytics. In fact, virtually any management operation available in the API Management Portal is now accessible programmatically - opening up a host of integration and automation scenarios, including directly monetizing an API with your commerce provider of choice, taking over user or subscription management, automating API deployments and more.

We've even provided an additional SAS (Shared Access Signature) security option. An integrated experience in the publisher portal allows you to generate SAS tokens - so securely calling your API Management service couldn’t be easier. It takes just three easy steps:

  1. Enable the API on the System Settings page in the Publisher Portal
  2. Acquire a time-limited access token, either manually or programmatically
  3. Start sending requests to the API, providing the token with every request (as sketched below)
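As a hedged sketch of step 3 (the token value is the placeholder string generated by the publisher portal, and the hostname, header format and api-version are assumptions based on the preview documentation):

// Hedged sketch: call the API Management REST API with a SAS token.
// Hostname, token and api-version below are placeholders/assumptions.
var client = new HttpClient();
client.DefaultRequestHeaders.Add("Authorization",
    "SharedAccessSignature uid=...&ex=...&sn=...");

var response = client.GetAsync(
    "https://myservice.management.azure-api.net/users?api-version=2014-02-14-preview").Result;
response.EnsureSuccessStatusCode();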

image 

See the REST API reference for full details.

Delegation of user registration and product subscription

The new API Management REST API makes it easy to automate and integrate other processes with API management. Many customers integrating in this way already have a user account system and would prefer to use this existing resource, instead of the built-in functionality provided by the Developer Portal. This feature, called Delegation, enables your existing website or backend to own the user data, manage subscriptions and seamlessly integrate with API Management's dynamically generated API documentation.

image

It's easy to enable Delegation: in the Publisher Portal navigate to the Delegation section and enable Delegated Sign-in and Sign up, provide the endpoint URL and validation key and you're good to go. For more details, check out the how-to guide.

Summary

Today’s Microsoft Azure release enables a ton of great new scenarios, and makes building applications hosted in the cloud even easier.

If you don’t already have an Azure account, you can sign up for a free trial and start using all of the above features today.  Then visit the Microsoft Azure Developer Center to learn more about how to build apps with it.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

62 Comments

  • Oh man, thank you SO much for the Search as a Service feature, it looks absolutely AWESOME. Lucene.NET was always a huge PITA on Azure because it was always incompatible with the SDK and AzureDirectory. The documentation was also horrible and outdated.

  • 'SELECT * FROM Families f where f.id = "' + family1Id + '"'

    So, are we back in SQL injection land?

  • Thank you so much, this is awesome news, Search Service looks amazing. I was looking to implement fast search on my website running jobs against my DB and storing in Table Storage but this approach is really what I needed. Love Azure!! Keep up the good work.

  • Hi Scott,
    Congrats, great features! I was waiting for the AlwaysOn template, but I think that this is still IaaS. Do you know when SQL Premium will be released? I mean it's still in preview with a 99.95% SLA.

    I guess that we have and debit when talking about SQL in Azure. In my company for example, we have to move to Virtual Machines, because of performance degradation, after 3 years using SQL Azure. So, we have to worry about backups, Scalability, performance etc.

    If you go to Amazon RDS, I guess that we have more options using the "ready-to-go" concept. What do you think about it?

    Thank you!

  • Search as a service certainly looks promising. Are there any plans to have any other pricing tiers? There seems to be quite a jump between the two current tiers.

  • Azure is becoming a monster. You have to hire specialists in order to use it.

  • @tobi, DocumentDB's SQL API executes strictly read only operations. The query is scoped to the collection it is executed on, and the permission model does not allow elevation of privileges.

  • The sql server always on template will be released for sql server 2012 too?

  • If you think Azure's a monster, try AWS sometime. (0)_(0)

  • @Thiago: Yes, we'll soon support also SQL12 in the AlwaysOn template.

  • Does DocumentDB play nicely with existing poco classes?

  • DocumentDB is exactly what the doctor ordered! Thanks!

    Is there a way to only partially update a document, instead of submitting the whole document again when editing?

  • Is there any plans for spatial indexes in the DocumentDB? I believe it's the only thing that would make me choose MongoDB over it at present.

  • What is the local development story for DocumentDB? Will there be an emulator or some suggested way we can develop against an instance on our local machines before deploying to Azure?

  • Paging seems to be available with Continuation Tokens. But how does Order By work?

  • Are stored procedures the only way of doing multi-action transactional commits?

  • @Lai - yes, the .NET SDK plays well with POCO classes. Take a look at the samples posted here http://code.msdn.microsoft.com/Azure-DocumentDB-NET-Code-6b3da8af - in the document management sample there are some POCO examples.

  • Hello Scott,

    Is there support for geo queries?

  • Really excited about DocumentDB! Great work guys!

  • I would also like to know the story on partial updates to a document. Also, the ordering example seems to require a stored proc which basically pulls back all records and sorts them in memory which seems, well, bad :)

    Lastly, the guidance on CUs and Collections and scale seem a bit vague. For example, can I have a single massive collection that spans 10 CUs (so 100GB of storage) or do I have to shard my collections? Also is it true that if I have more than one collection the RUs are distributed evenly between them? I would rather just have a pool and whatever collection needs more gets more.

  • @Andrew - we are considering different improvements to the index include support for spatial. Please go to http://feedback.azure.com/forums/263030-documentdb and get it voted up. This will help us prioritize.
    @Eric - currently we do not have a local emulator environment but it's something we want to provide. Same advice as I gave Andrew, please go to feedback.azure.com and vote it up.
    @Andy - Order By is in the pipeline coming soon. For some use cases, you can get the equivalent functionality using range query and pagination with client side ordering of batches
    @Phil - stored procedures and triggers allow you to do multi-document commits through JavaScript code executed on the server. There's more on this here http://azure.microsoft.com/en-us/documentation/articles/documentdb-programming/

  • At the bottom of this post is the text "Then visit the Microosft Azure Developer Center to learn more". Note the "oo" in Microsoft.

  • @RyanLM, Partial updates of documents, a.k.a. PATCH is coming soon. Please vote for the feature in feedback.azure.com.

    Regarding your other question - RUs are pooled, and shared evenly among collections in an account. Collections in the preview offers have a maximum limit of 10GB storage and 2000 RUs of throughput. Please refer to the limits page for the latest values: http://azure.microsoft.com/en-us/documentation/articles/documentdb-limits/.

  • The new Azure portal is really unusable. Not sure why Microsoft feels the need to completely redo the portal so often, but the main goal should be fast access to different services, and with this drill-down methodology it's just too slow to get anywhere.

    AWS management portal is a mess and it seems Microsoft wants theirs to be just as messy

  • Have to agree with EShy, the new portal is terrible compared to the current one which is quite usable.

  • Looking good, had been looking at Qbox.io for a hosted search for a startup app and then Azure Search rocks up. Nice.

    One thing. The pricing tiers at the minute... could do with one or two in between free and standard. 50% preview discount which would mean the standard tier ends up at £160/month. Ouch. Azure itself might be scalable but the price certainly ain't! Case of http://azure.microsoft.com/en-us/pricing/details/search/ vs http://qbox.io/pricing

  • I have to agree with @EShy and @Craig, the new portal is really a pain to use compared with the existing https://manage.windowsazure.com portal. The navigation of this new preview portal is horribly slow, confusing and way too many steps to get to what you want to do.

    Is Microsoft's plan to completely replace the existing portal with the new preview one? I for one sure hope not.

  • I'm so excited about DocumentDB and Search Service. I've been using Amazon's CloudSearch for full text searches and it's very expensive for what you get with it.

    But I agree with everyone else about the new Portal. It's an atrocity. Cancel that project.

  • Hi,

    Something like DocumentDB is something I have been wanting for a long time in Azure - thanks for that :) However, I was wondering how come the starting price is as high as it is? Is there any plan to offer this at an entry level (or play-around-with level) any time soon? 100 MB at a low price or even free for instance?

    At the moment I know of quite a few shoestring projects where DocumentDB would fit perfectly, but the high entry price makes it a no-go :(

  • I also agree on the portal. The existing one is great and usable. The new one I get lost, confused, and the sliding window is always in the way.

  • Hi Christian - thanks for the feedback. We are looking into the addition of a lower end offer. No specifics on time frame at the moment but stay tuned.

  • @Joseph Woodward,

    It is agreed that the jump from free to the paid standard pricing is a large one. We will definitely be considering a price somewhere in the middle assuming there is sufficient demand. The big thing that we will want your feedback on is what would be viable trade-offs to get to the lower price point? Would you be willing to have lower QPS rates and document count than what is available in the current standard offering? Do you have any other ideas of things you would be willing to give up to get to a lower price point?

    Liam Cavanagh (MSFT)

  • @Ion Singh

    Was your question on geo-search in reference to Azure Search? If so, the answer is yes. We support geo queries including intersects (i.e., find all documents within a specified polygon) and distance, where you can search for documents within X KMs of a specified location. In addition, you can use geo-locations to boost item weightings in the results. For example, you might want to boost items that are close to the user executing the search.

    Liam Cavanagh

  • The search service feature is assumed to be able to index documents in azure BLOB storage. Is there any tutorials or samples for such cases ? Can we index things like XML documents or HTML files dropped into a BLOB container.

  • The search service functionality looks fantastic but the pricing really puts it out of touch for small businesses. There is a big jump from the 10,000 document limit to the 15,000,000 of the Standard tier. I would like to see a middle tier that allows around 1,000,000 documents for $20 per month and 5 queries per second.

  • Is there no "MoreLikeThis" feature in Azure Search?

  • I wish that Microsoft would use their own terminology right. AlwaysOn is NOT a feature. Availability Groups is the feature. AlwaysOn is a marketing term that covers FCIs and AGs.

  • David - we don't currently offer "more like this" queries in Azure Search. Can you tell us a bit of your scenario for this feature?

  • Pablo, thanks for letting me know! In my scenario, I use it to narrow down the search for similar documents in an index; however, after evaluating Azure Search a bit more, I've found that it's not the best solution for us because each user has their own set of data and their own index in our system. It wouldn't make sense for us to switch to Azure Search at this time.

  • David - got it, thanks for the extra details. Not sure if this helps in your scenario, but we see many cases where you have many users each with their own dataset. Two ways to approach this in Azure Search are to create an index for each dataset (if you have a small number, e.g. 10s of datasets) or to use a single index and a structured filter. The latter approach can support an arbitrary number of datasets as long as the total number of documents is within the limits of the provisioned capacity. Let me know if you want more details about any of these approaches.

  • David - I ended up signing with your name, sorry about that :)

  • I can't find the package in NuGet for DocumentDB......... help!

  • The NuGet package for DocumentDB is here.
    https://www.nuget.org/packages/Microsoft.Azure.Documents.Client/0.9.0-preview

  • Need to read about DocumentDB more....

  • I agree with several of the comments on here regarding the new portal. It's just plain awful.

    I don't know why Microsoft seems to think that everyone now uses touch devices. I just want to be able to get to my services quickly without having to drill down several times. It takes ages to start up, it's impossible to see what I'm actually using, the pricing models of websites and VMs require the patience of a saint to work through, and deleting items seems to take an age (I deleted a Redis cache but it still appeared on the desktop until I manually unpinned it).

    Azure is great, but you're making the same mistakes with the UI as you made with Windows 8. If you're so focused on touch interfaces, why not just detect the browser type and display a touch enhanced version to people on touch devices whilst leaving us on the desktop with something usable.

  • Great new features. Will definitely be checking out Azure search as a possible replacement for a current solr-based search solution. MoreLikeThis type search though will be key to finding related articles of content for a huge CMS.

    I have to TRIPLE UNDERSCORE the comments about the new portal. Please don't take away the current portal. It has a nice, standard, familiar UI paradigm (left-nav and top-nav with main content pane) that works like the majority of "administration areas" in any system. That's why it's easy to use - it's familiar. Take a look at any of the admin templates on ThemeForest (http://themeforest.net/category/site-templates/admin-templates) and you'll see what I mean. Sure, the portal may not look 100% unique, but it doesn't have to. It only needs to look like "Server Manager" in Windows Server since it's like a cloud equivalent.

    The new portal looks cool but I feel like I'm playing a game more than using an administration area. If anything, take touch users to the new portal and take desktop users to the existing portal. I agree that the new portal feels like Windows 8 Start screen, but productivity users need the desktop (the current portal) 95% of the time. I can't stress that enough. If it ain't broke, don't fix it. And it ain't broke!

  • yeah...not feeling the metro theme. I think you guys really blew it with that one. also it looks like documentdb needs another take as well. otherwise I think azure has a good thing going.

  • Ok, you answered regarding geo queries in the Search service. But what I'm interested in is to be able to efficiently query DocumentDB filtering by geospatial properties (this is what I can do in SQL Server). I could use the Search service to index the content of the DocumentDB, search there, take the found document ids and go to the DocumentDB and get corresponding documents. But this looks too fat, and Search service is probably designed for much richer and more complex searches. So is there any chance that geo indexing will be supported in DocumentDB?

  • Is this service will be available on Windows and Mac too? Waiting for it!

  • @Andriy - we are absolutely looking to support native indexing of geospatial properties in the future. Please go and vote this up at http://feedback.azure.com/forums/263030-documentdb - we review this list regularly as we prioritize upcoming features.

  • My understanding is that, in DocumentDB, we don't have to shard data for the first 50 GBs (10 GB per CU * 5 CUs). Is this true? I'm looking for a solution from Microsoft where my application sees only one logical database (kind of like NewSQL products) so I don't have to deal with sharding in application code. What comes after the 5 CUs?

    Also, we definitely need OrderBy, aggregate functions like Count & Sum, and active geo-replication. I'd also like filtered indexes. Is there any way to do auditing?

  • Questions:
    when I search for “my beach” which is a misspelling I get a bunch of results, but nowhere in the list do I see “my beach”. Notice how there is an i at the end of my? If I search for the correct spelling of “mine beach” it comes up . The problem we have with searches is that many things are misspelled but this systems needs to find near matches. So, do we have any options so that’s it is more like full text search and can find partial words, near spellings, plurals, etc?

    Regards,
    Shakti

  • @Mark each database can scale as you increase the number of CUs. If you need to scale beyond 5 CUs, please submit a support request and we can lift the quota for you. Scaling will adhere to the quotas and limits of each resource type. Today, each collection can grow to support 10GB of data. You can add collections as your DB needs to store more data. As you add collections, your application will need to determine where to place data - note that you can store heterogeneous document types in a single collection.

    For query (and any other) features, please vote up what you need at http://feedback.azure.com/forums/263030-documentdb

Comments have been disabled for this content.