Mike Diehl's WebLog

Much aBlog about nothing...

  • A pseudo-listener for AlwaysOn Availability Groups for SQL Server virtual machines running in Azure

    I am involved in a project that is implementing SharePoint 2013 on virtual machines hosted in Azure. The back end data tier consists of two Azure VMs running SQL Server 2012, with the SharePoint databases contained in an AlwaysOn Availability Group. I used this "Tutorial: AlwaysOn Availability Groups in Windows Azure (GUI)" to help me implement this setup.

    Because Azure DHCP will not assign multiple unique IP addresses to the same VM, having an AG Listener in Azure is not currently supported.  I wanted to figure out another mechanism to support a "pseudo listener" of some sort.

First, I created a CNAME (alias) record in the DNS zone with a short TTL (time to live) of 5 minutes (I may yet make this even shorter). The record represents a logical name (let's say the alias is SPSQL) of the server to connect to for the databases in the availability group (AG). When Server1 was hosting the primary replica of the AG, I would set the CNAME of SPSQL to be SERVER1. When the AG failed over to Server2, I wanted to set the CNAME to SERVER2. Seemed simple enough.

    (It's important to point out that the connection strings for my SharePoint services should use the CNAME alias, and not the actual server name. This whole thing falls apart otherwise.)
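For reference, the initial CNAME record can be created from the command line against one of the DNS servers. This is only a sketch using my example names (the zone, DNS server, and alias will be whatever yours are), with the TTL given in seconds:

dnscmd dc01.contoso.com /RecordAdd contoso.com SPSQL 300 CNAME server1.contoso.com

The DNS Manager console works just as well; the important part is the short TTL.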

    To accomplish this, I created identical SQL Agent Jobs on Server1 and Server2, with two steps:

Job Step 1: Determine if this server is hosting the primary replica.

This is a T-SQL step using this script:

declare @agName sysname = 'AGTest'
set nocount on

declare @primaryReplica sysname

select @primaryReplica = agState.primary_replica
from sys.dm_hadr_availability_group_states agState
join sys.availability_groups ag on agState.group_id = ag.group_id
where ag.name = @agName

if not exists(
    select *
    from sys.dm_hadr_availability_group_states agState
    join sys.availability_groups ag on agState.group_id = ag.group_id
    where @@servername = agState.primary_replica
      and ag.name = @agName)
begin
    raiserror ('Primary replica of %s is not hosted on %s, it is hosted on %s', 17, 1, @agName, @@servername, @primaryReplica)
end

This script determines whether the primary replica of the AG is hosted on this server (you should update the value of the @agName variable to the name of your AG). If it is, I want the DNS alias to point to this server. If the current server is not hosting the primary replica, the script raises an error. Also, if the script can't be executed at all because it cannot connect to the server, that will generate an error too.

    For the job step settings, I set the On Failure option to "Quit the job reporting success". The next step in the job will set the DNS alias to this server name, and I only want to do that if I know that it is the current primary replica, otherwise I don't want to do anything. I also include the step output in the job history so I can see the error message.

    Job Step 2: Update the CNAME entry in DNS with this server's name.

    I used a PowerShell script to accomplish this:

    $cname = "SPSQL.contoso.com"
    $query = "Select * from MicrosoftDNS_CNAMEType"
    $dns1 = "dc01.contoso.com"
    $dns2 = "dc02.contoso.com"
    if ((Test-Connection -ComputerName $dns1 -Count 1 -Quiet) -eq $true)
    {
        $dnsServer = $dns1
    }
elseif ((Test-Connection -ComputerName $dns2 -Count 1 -Quiet) -eq $true)
{
    $dnsServer = $dns2
}
else
{
    $msg = "Unable to connect to DNS servers: " + $dns1 + ", " + $dns2
    Throw $msg
}
    $record = Get-WmiObject -Namespace "root\microsoftdns" -Query $query -ComputerName $dnsServer  | ? { $_.Ownername -match $cname }
    $thisServer = [System.Net.Dns]::GetHostEntry("LocalHost").HostName + "."
    $currentServer = $record.RecordData
    if ($currentServer -eq $thisServer )
    {
        $cname + " CNAME is up to date: " + $currentServer
    }
    else
    {
        $cname + " CNAME is being updated to " + $thisServer + ". It was " + $currentServer
        $record.RecordData = $thisServer
        $record.put()
    }

    This script does a few things:

    • finds a responsive domain controller (Test-Connection does a ping and returns a Boolean value if you specify the -Quiet parameter)
    • makes a WMI call to the domain controller to get the current CNAME record value (Get-WmiObject)
    • gets the FQDN of this server (GetHostEntry)
    • checks if the CNAME record is correct and updates it if necessary

    (You should update the values of the variables $cname, $dns1 and $dns2 for your environment.)

    Since my domain controllers are also hosted in Azure VMs, either one of them could be down at any point in time, so I need to find a DC that is responsive before attempting the DNS call. The other little thing here is that the CNAME record contains the FQDN of a machine, plus it ends with a period. So the comparison of the CNAME record has to take the trailing period into account.

    When I tested this step, I was getting ACCESS DENIED responses from PowerShell for the Get-WmiObject cmdlet that does a remote lookup on the DC. This occurred because the SQL Agent service account was not a member of the Domain Admins group, so I decided to create a SQL Credential to store the credentials for a domain administrator account and use it as a PowerShell proxy (rather than give the service account Domain Admins membership).

    In SQL Management Studio, right click on the Credentials node (under the server's Security node), and choose New Credential...

    Then, under SQL Agent-->Proxies, right click on the PowerShell node and choose New Proxy...

    Finally, in the job step properties for the PowerShell step, select the new proxy in the Run As drop down.
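If you prefer to script those dialogs rather than clicking through them, something along these lines should work (the credential, proxy, and account names here are placeholders for your own):

CREATE CREDENTIAL DnsUpdateCredential WITH IDENTITY = 'CONTOSO\DnsAdminAccount', SECRET = 'StrongPasswordHere'
GO
EXEC msdb.dbo.sp_add_proxy @proxy_name = 'DnsUpdateProxy', @credential_name = 'DnsUpdateCredential'
EXEC msdb.dbo.sp_grant_proxy_to_subsystem @proxy_name = 'DnsUpdateProxy', @subsystem_name = 'PowerShell'
GO

Members of sysadmin can use any proxy; for other job owners you would also call msdb.dbo.sp_grant_login_to_proxy.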

I created this two-step job on both nodes of the Availability Group; if you have more than two nodes, just create the same job on all the servers hosting replicas. I set the schedule for the job to execute every minute.

When the server that is hosting the primary replica runs the job, both steps complete successfully. On the secondary server, the first step reports the raiserror message and the job quits (reporting success, per the On Failure setting above) without touching DNS.

    When a failover occurs, the SQL Agent job on the new primary replica will detect that the CNAME needs to be updated within a minute. Based on the TTL of the CNAME (which I said at the beginning was 5 minutes), the SharePoint servers will get the new alias within five minutes and should be able to reconnect. I may want to shorten up the TTL to reduce the time it takes for the client connections to use the new alias.

Using a DNS CNAME and a SQL Agent job on all servers hosting AG replicas, I was able to create a pseudo-listener that automatically points the alias at whichever server is hosting the primary replica, for a scenario where I cannot use a regular AG listener (in this case, because the servers are all hosted in Azure).


  • Query Logging in Analysis Services

On a project I work on, we capture the queries that get executed on our Analysis Services instance (SQL Server 2008 R2). We use the query log table to help us build aggregations, and we also aggregate the query log daily into a data warehouse of operational data so we can track usage of our Analysis databases by users over time.

    We've learned a couple of helpful things about this logging that I'd like to share here.

    First off, the query log table automatically gets cleaned out by SSAS under a few conditions - schema changes to the analysis database and even regular data and aggregation processing can delete rows in the table. We like to keep these logs longer than that, so we have a trigger on the table that copies all rows into another table with the same structure:

    Here is our trigger code:

CREATE TRIGGER [dbo].[SaveQueryLog] ON [dbo].[OlapQueryLog] AFTER INSERT AS
    INSERT INTO dbo.[OlapQueryLog_History] (MSOLAP_Database, MSOLAP_ObjectPath, MSOLAP_User, Dataset, StartTime, Duration)
    SELECT MSOLAP_Database, MSOLAP_ObjectPath, MSOLAP_User, Dataset, StartTime, Duration FROM inserted
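If you need to create the history table in the first place, one quick way to get a table with the same structure (this copies the column definitions, but not constraints or indexes) is:

SELECT * INTO dbo.OlapQueryLog_History FROM dbo.OlapQueryLog WHERE 1 = 0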

Second, the query logging process is "best effort" - if SSAS cannot connect to the database listed in the QueryLogConnectionString in the Analysis Server properties, it just stops logging - it doesn't generate any errors to the client at all, which is a good thing. Once it stops logging, it doesn't retry later - not an hour, a day, a week, or even a month later - so long as the service doesn't restart. That has burned us a couple of times, when we have made changes to the service account that is used for SSAS and that account doesn't have access to the database we want to log to.

    The last time this happened, we noticed a while later that no logging was taking place, and I determined that the service account didn't have sufficient permissions, so I made the necessary changes to give that service account access to the logging database. I first tried just the db_datawriter role and that wasn't enough, so I granted the service account membership in the db_owner role. Yes, that's a much bigger set of permissions, but I didn't want to search out the specific permissions at the time.

Once I determined that the service account had the appropriate permissions, I wanted to get query logging restarted from SSAS, and I wondered how to do that. Having just used a larger hammer than necessary with the db_owner role membership, I considered simply restarting SSAS to get it logging again. However, this was a production server, it was the middle of business hours, and there were active users connected to that SSAS instance, so I thought better of it.

    As I considered the options, I remembered that the first time I set up query logging, by putting in a valid connection string to the QueryLogConnectionString server property, logging started immediately after I saved the properties. I wondered if I could make some other change to the connection string so that the query logging would start again without restarting the service. I went into the connection string dialog, went to the All page, and looked at the properties I could change that wouldn't affect the actual connection. Aha! The Application Name property would do just nicely - I set it to "SSAS Query Logging" (it was previously blank) and saved the changes to the server properties. And the query logging started up right away. If I need to get this running again in the future, I could just make a small change in the Application Name property again, save it, and even change it back again if I wanted to.
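For illustration, the QueryLogConnectionString ends up looking something like this (the provider, server, and database names are made up for the example; the interesting part is the Application Name attribute at the end):

Provider=SQLNCLI10.1;Data Source=MYSQLSERVER;Initial Catalog=OlapQueryLog;Integrated Security=SSPI;Application Name=SSAS Query Logging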

The other nice side effect of setting the Application Name property is that now I can see (and filter for, or filter out) the SQL activity related to the query logging process when tracing that database in Profiler.

     

     To sum up:

    • The SSAS Query Logging process will automatically delete rows from the QueryLog table, so if you want to keep them longer, put a trigger on the table to copy the rows to another table
    • The SSAS service account requires more than db_datawriter role membership (and probably less than db_owner) in the database specified in the QueryLogConnectionString server property to successfully insert log rows to the QueryLog  table.
    • Query logging will stop quietly whenever it encounters an error. Make a change to the QueryLogConnectionString server property (such as the Application Name attribute) to get query logging to restart and you won't have to restart the service.


  • Joel's Predictions for 2011 (and Mike's comments)

    Joel has posted his predictions for 2011. I find his predictions very interesting, mostly because I am crappy at doing predictions myself. However, I am seldom at a loss for commenting on someone else's work:

     1. The Kanban Influence:

    I have seen a little bit of this, and I like what I see. I would like to try to implement this in the project I am currently on, but I think it will take a lot of education of many involved in the project, as most of them don't even know the term.

    2. Digital Entertainment Crossing the Chasm:

In December 2009, our family purchased a high-def digital cable video recorder (DVR) and a new LCD flatscreen TV. The DVR has completely changed my viewing habits; I seldom watch programs live anymore. I watched the Vancouver Olympics on two channels, in near-real-time, by using the two tuners on the DVR and using pause/skip to split our viewing across the channels and avoid commercials.

    The new TV has USB connections, so we occasionally watch videos and view photos on the TV as well. We recently received an XBox 360 as a Christmas gift and I am probably going to explore its integration with Windows Media player on our home PCs.

    3. Many App Stores:

    I recently noticed an "App Store" tab in the latest version of uTorrent. It lists the uTorrent add-ons available for download.

    4. Kinecting with your PC

    We also received the Kinect with our Xbox 360, and it significantly changes the gaming experience, I think. Still a little lagging in responsiveness in some games and situations, and really hard to be precise, but it's a huge leap over the WiiMote (which we also have).

    In terms of using Kinect on the PC, there is already a burgeoning community for this: http://kinecthacks.net/

6. Mobile Race Really Heats Up:

I am following this somewhat, but our house is certainly not "bleeding edge" with our mobile phones. My daughter is on a pay-as-you-go plan, the cheapest service for what she needs - texting and very few voice calls for $15/month. My wife has a relatively old phone, and she is on a pick-5 plan with unlimited texts (no data). What she likes most about the phone is the large number buttons, which makes me wonder about the aging demographic and when the mobility companies will start catering directly to people who shouldn't need to put on their reading glasses to send/read a text or make a phone call.

    My (non-smart) phone has a full qwerty keyboard (about the size of a Blackberry) and I have still refused to read/send email from my phone. I do find myself using the web occasionally from my phone, but primarily I use my phone for texts and voice (in that order). With those limited phones, I still pay between $120-$150 per month. That seems crazy, and I can't imagine a decent plan for a smart phone would reduce my costs. Add $30-40 per month for my landline that hardly ever gets used anymore, and I have long since concluded that I pay MTS too much (but changing providers probably wouldn't reduce anything either).

    What I would love as a feature on my phone: voice recognition for texts - I speak and the phone types. Or the phone reads aloud the texts that I receive.

     

    7. Cloud Apps will gain momentum:

It is only recently that I have looked at Windows Azure, and it has changed the way I think about software architectures. The development experience "just worked" - things that I thought would be pretty complex to do (configure Visual Studio 2010 to deploy an app to the cloud) worked on the first try, and pretty darn simply. The large multi-national company whose project I am currently working on may never use a public cloud for their applications, but they are starting to use Verizon cloud services, and I have already talked to some people there about using the Azure app fabric in a corporate cloud and how it would change their IT and development processes significantly.

    8. Storage Class Memory:

    I would so love to buy an SSD drive for my laptop. That would be sweet. 'Nuff said.

    9. Another Stage of Social Networking:

    I am a Facebook user. I probably browse it a dozen times a day. I don't post as much on a regular basis, but Facebook has changed the way I keep track of my friends and family and former schoolmates.  

    10. In Vehicle Experience will spark new dimension:

    I have a relatively new vehicle (2007), and I am guessing the next brand new vehicle I buy may not be for another 5 years or more. Since my last purchase, a number of technologies have become more common in vehicles:

    • easily accessible auxiliary connections for MP3 players (mine has an "Aux" selection button on the front and the connections for it in the back)
• GPS-assisted navigation/mapping (I use Microsoft Streets and Trips, but the laptop is awkward and requires power bricks, etc.). The built-in or after-market units are much more convenient. I remember travelling in the 90's in foreign cities in rental cars and being stressed about finding the hotel or the training center. GPS, Google/Bing Maps, and MS Streets and Trips have tamed that stress.
    • Bluetooth integration for mobile phones. I've seen this work, seems pretty nice.
    • Ford/Microsoft Sync - I haven't seen this first hand, but I suspect I would like it.

    11. Learning Content:

     I have recently taken several Microsoft exams, after a hiatus of several years. I can't believe how difficult it is to schedule these things. It is easier for me to schedule an exam when I am travelling for work in St. Louis than it is to take it in Winnipeg. Also, I can't quote my MCT number on the Prometric website to get my MCT exam discount when booking an exam.

    OK, I'm going to add one "pet peeve" of mine here, not as a prediction, but as a "I hope this gets better in 2011":

    Managing usernames and passwords

For all the things I do online or with my credit and debit cards electronically, I have a hard time managing my usernames and passwords and PINs. My recently replaced credit card now has a chip built in, and it required me to set a PIN (this is a step up from some previous cards that have *told me* the PIN to use). However, the PIN was limited to 5 digits, and my "normal" PIN that I use on other cards is 6 digits. Needless to say, I have had that card locked out and the PIN reset a number of times because I couldn't remember the PIN for it. And to access the account online, I cannot use the card number; the provider created a new account number for me to use for web access. That's more secure, I suppose, but now I have to "write down" that account number somewhere, so I have an encrypted file that contains a bunch of those accounts and their passwords, stored (hopefully) securely. The list of those usernames/passwords is growing, and it seems to become more unmanageable for me every month.

    However, the prospect of having "one key to rule them all" doesn't make me feel any better either. I don't know what the solution is here, but I hope this gets better for me in 2011.

     


  • SQL Table stored as a Heap - the dangers within

    Nearly all of the time I create a table, I include a primary key, and often that PK is implemented as a clustered index. Those two don't always have to go together, but in my world they almost always do.

On a recent project, I was working on a data warehouse and a set of SSIS packages to import data from an OLTP database into that warehouse. The data I was importing from the business database was mostly new rows, sometimes updates to existing rows, and sometimes deletes. I decided to use the MERGE statement to implement the insert, update, or delete in the data warehouse. I found it quite performant to have a stored procedure that extracted all the new, updated, and deleted rows from the source database and dumped them into a working table in my data warehouse, and then to run a stored proc in the warehouse containing the MERGE statement that took the rows from the working table and updated the real fact table.

Use Warehouse
GO

CREATE TABLE Integration.MergePolicy (PolicyId int, PolicyTypeKey int, Premium money, Deductible money, EffectiveDate date, Operation varchar(5))
GO

CREATE TABLE fact.Policy (PolicyKey int identity primary key, PolicyId int, PolicyTypeKey int, Premium money, Deductible money, EffectiveDate date)
GO

-- the proc needs a name that doesn't collide with the Integration.MergePolicy work table
CREATE PROC Integration.LoadPolicy as
begin

begin tran

Merge fact.Policy as tgt
Using Integration.MergePolicy as Src
On (tgt.PolicyId = Src.PolicyId)

When not matched by Target then
  Insert (PolicyId, PolicyTypeKey, Premium, Deductible, EffectiveDate)
  values (src.PolicyId, src.PolicyTypeKey, src.Premium, src.Deductible, src.EffectiveDate)

When matched and src.Operation = 'U' then
  Update set
    PolicyTypeKey = src.PolicyTypeKey,
    Premium = src.Premium,
    Deductible = src.Deductible,
    EffectiveDate = src.EffectiveDate

When matched and src.Operation = 'D' then
  Delete

;

delete from Integration.MergePolicy

commit

end
GO

Notice that my worktable (Integration.MergePolicy) doesn't have any primary key or clustered index. I didn't think this would be a problem, since it was a relatively small table and was emptied each time I ran the stored proc.

For one of the work tables, during the initial loads of the warehouse, about 1.5 million rows were getting inserted, processed, then deleted. Also, because of a bug in the extraction process, the same 1.5 million rows (plus a few hundred more each time) were getting inserted, processed, and deleted repeatedly. This was being done on a fairly hefty server that was otherwise unused, and no one was paying any attention to the time it was taking.

    This week I received a backup of this database and loaded it on my laptop to troubleshoot the problem, and of course it took a good ten minutes or more to run the process. However, what seemed strange to me was that after I fixed the problem and happened to run the merge sproc when the work table was completely empty, it still took almost ten minutes to complete.

    I immediately looked back at the MERGE statement to see if I had some sort of outer join that meant it would be scanning the target table (which had about 2 million rows in it), then turned on the execution plan output to see what was happening under the hood. Running the stored procedure again took a long time, and the plan output didn't show me much - 55% on the MERGE statement, and 45% on the DELETE statement, and table scans on the work table in both places. I was surprised at the relative cost of the DELETE statement, because there were really 0 rows to delete, but I was expecting to see the table scans.

    (I was beginning now to suspect that my problem was because the work table was being stored as a heap.)

Then I turned on statistics IO (SET STATISTICS IO ON) and ran the sproc again. The output was quite interesting.

Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'Policy'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
Table 'MergePolicy'. Scan count 1, logical reads 433276, physical reads 60, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

I've reproduced the above from memory, so the details aren't exact, but the essential bit was the very high number of logical reads on the table stored as a heap. Even just doing a SELECT Count(*) from Integration.MergePolicy incurred that sort of I/O, even though the result was always 0.

    I suppose I should research more on the allocation and deallocation of pages to tables stored as a heap, but I haven't, and my original assumption that a table stored as a heap with no rows would only need to read one page to answer any query was definitely proven wrong. It's likely that some sort of physical defragmentation of the table may have cleaned that up, but it seemed that the easiest answer was to put a clustered index on the table.

After doing so, the execution plan showed a clustered index scan, and the IO stats showed only a single page read. (I aborted my first attempt at adding the clustered index because it was taking too long - instead, I ran TRUNCATE TABLE Integration.MergePolicy first and then added the clustered index, both of which took very little time.)
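The fix itself boils down to a couple of statements like these (the index name and key column are my choices here; pick whatever suits your work table):

TRUNCATE TABLE Integration.MergePolicy
GO
CREATE CLUSTERED INDEX IX_MergePolicy_PolicyId ON Integration.MergePolicy (PolicyId)
GO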

    I suspect I may not have noticed this if I had used TRUNCATE TABLE Integration.MergePolicy instead of DELETE FROM Integration.MergePolicy, since I'm guessing that the truncate operation does some rather quick releasing of pages allocated to the heap table.

    In the future, I will likely be much more careful to have a clustered index on every table I use, even the working tables.

    Mike

     


  • SSIS Bulk Insert task that bit me in the butt...

    I've been working on SSIS packages that extract data from production databases and put them into data warehouses, and recently I hit an issue using the Bulk Insert task that bit me real good.

     When you create a Bulk Insert task in the control flow of your package, the properties you generally edit are:

    1. The target connection (which references a connection manager)

    2. The target table

    3. The source file (which references a file-type connection manager).

     I did that, ran the package in Visual Studio with my local file against a dev SQL database on a test server and it all worked just fine.

I ran it again, and it failed due to a primary key violation - so I needed to make the execution of the task conditional: if the table was empty, I would run the task; if it contained anything, I would skip it.

This was harder to do than I thought it would be. I started by creating a variable to hold the row count of the table, then added an Execute SQL Task to run a statement against the target table (select count(*) as RowCount from targetTable) and map the column in the result set of the statement to the variable.

Now I went to look for an IF construct, and there isn't any such thing. The closest was a For Loop container that I went down a rabbit trail trying to use, attempting to have it execute only zero times or once, and I couldn't get that to work. Is there magic between the @variable syntax in the initialize, condition, and iteration expressions and the package's User::variable declarations that makes those work together? I still don't know the answer to that.

Then I thought of using the Expression on the precedence constraint (the dependency arrow) from the task that got the row count from the target table. So I joined the Row Count task to the Bulk Insert task using the green arrow, then edited the constraint to depend both on Success of the row count task and on the value of the User::rowCount variable I had created, as sketched below. That worked.
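To spell that out: the Execute SQL Task runs the count query with its ResultSet set to Single row and the first column mapped to the rowCount variable, and the precedence constraint is set to "Expression and Constraint". The variable and table names here are mine, so adjust to suit:

-- query in the Execute SQL Task (single-row result set, column mapped to User::rowCount)
select count(*) as RowCount from dbo.targetTable

-- expression on the precedence constraint (combined with the Success constraint)
@[User::rowCount] == 0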

     Believe it or not, that isn't really what bit me in the butt.

    Now I had a package that I could execute multiple times and it would work properly. My buddy Jeremy would say that it is "idempotent".

     What bit me was when I went to execute the package in another environment.

     I moved the .dtsx file to a test server and used the Execute Package Utility. I set the values for the connection managers in the package to the new server connections (and the new location of the bulk copy file), and ran the package, and it worked.

    Just to make sure it was "idempotent", I ran it again.

    It failed this time.

    Another PK violation. Why?

    It took me a while to find the problem. Eventually it came down to the target table property of the Bulk Insert Task - the value of this property was not just a two part table name, but it also included the database name.

    It just so happened that the database I was testing with from my Visual Studio is on the same server as when I was testing with the Execute Package utility.

So, the first time I ran it with the Execute Package utility with the modified connection manager settings, it was querying the *real* target database for the number of rows, and getting back 0. Then it executed the bulk insert task into the *original* database I was testing with on that server (from which I happened to have cleared out the rows), and the bulk insert worked. The second time, the number of rows in the real target database was still 0, and it tried to do the bulk insert into the same original database, despite the fact that the connection manager was pointing to a different database.

    I can understand why this was done this way, since when you use the bcp command line utility, you need to database-qualify the table you are moving data into, because bcp doesn't specify the database otherwise. But the Bulk Insert task was using the T-SQL statement BULK INSERT which is already database specific, so you don't generally qualify the table name with the database. With the whole database name in the target table property, the task isn't very responsive to changes to the connection manager at runtime.
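To illustrate the difference (the table, file, and server names here are made up):

-- the T-SQL statement runs in the context of whatever database the connection points at,
-- so a two-part name is normally enough:
BULK INSERT dbo.TargetTable FROM 'C:\extracts\TargetTable.txt' WITH (FIELDTERMINATOR = '\t', ROWTERMINATOR = '\n')

-- bcp is a separate command-line utility, so the table has to be database-qualified:
-- bcp MyDatabase.dbo.TargetTable in C:\extracts\TargetTable.txt -S myserver -T -c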

    Here is how I fixed it, and it's a HACK.

You can't just free-form enter the table name in the Bulk Insert task; it only allows you to pick from a list, which is based off the target connection you specify. So, in the Expressions tab of the Bulk Insert task, I used an expression to set the target table property to the non-database-qualified name of the table. It's kinda hidden, it overrides whatever table you select from the drop down later, and you wouldn't know it.
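Concretely: on the Expressions page of the Bulk Insert task, pick the destination table property (it shows up as DestinationTableName, if I recall correctly) and give it a string expression containing just the schema-qualified name, something like:

"[dbo].[TargetTable]"

The table name is a placeholder; use your own, without the database prefix.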

    I hope that the SSIS Bulk Insert task gets fixed so it doesn't include the database name in the target table; it isn't needed, and it gets in the way of runtime changes to the target connection.

    Mike


  • SQL Database diagramming and VSTS Data Dude

At Imaginet, we use Visual Studio Team Edition for Database Professionals (Data Dude) on our projects to manage database schemas, keep them in source control, run unit tests, and get lots of other nice features.

    But it doesn't do database models well. Or at all, for that matter. I really would like the Database Diagramming tool in SQL Management Studio and Visual Studio to be able to go against a database project. But no, it can only go against an actual database.

    Here is what we do to be able to model our tables and relationships with the diagramming tool and still use Data Dude.

    For every project, we have a number of database "instances" - usually named after the project (I'll use the name Northwind from here on) with a suffix for the "environment", such as Northwind_Dev and Northwind_Test.

    We also have another called Northwind_Schema, which is considered the "gold" standard for the schema of the project database. I'll start by creating that schema database and create tables in it using the database diagramming tool in SSMS. I can fairly quickly create a number of tables, and have a diagram for each subject area of the data. It also means my documentation is getting built at the same time as my database (in my world, the diagram forms the large part of the required database documentation). And these diagrams, like Xml comments in C# or VB, are also very close to "the code", and will keep current with the state of the schema database. Models created in other tools then exported to a database are very hard to keep accurate in the long run. When it comes time to snapshot the documentation for the database, we can fairly quickly embed pictures of the database models in Word or OneNote or some other documentation tool.

    At the same time as I am modelling the database in Northwind_Schema, I create a database project in Visual Studio called Northwind. If I have the Northwind_Schema database in a state that I like (for first draft), I will use the Import Schema from Database wizard when creating the new database project. Otherwise, I'll just create an empty database project.

    When I am happy with Northwind_Schema, I use a Schema Comparison to compare the Northwind_Schema database to the Northwind database project. I will update the database project with the changes that are in Northwind_Schema, then run any local tests against the database project before checking in.

    Upon checkin, we have Team System automatically build the database and deploy it to Northwind_Dev, which is available for any developers on the project to use as they code other areas of the project. In the project I am working on now, we use LINQ and CSLA-based entities for our data access layer, so I will keep our LINQ model synchronized with the database project as well (usually by dragging tables onto the LINQ designer surface from the Northwind_Schema database).

    If we ever lose Northwind_Schema, it is easy to rebuild it from the database project, because the database project in source control is "more true" than the Northwind_Schema instance. (However, we can lose the diagrams by rebuilding Northwind_Schema).

    As I said above, I would actually prefer to do my diagramming in Visual Studio, against a database project rather than a database, and in that way I could also keep the diagrams in source control. But with the Northwind_Schema database, I can model new subject areas or do fairly major refactoring prior to checking out the database project files.

    In my next post, I'll talk about how we build and manage stored procedures in project databases.


  • MS BI Conference: Monday Keynote

    Here are my notes on the Monday morning keynote:

    • About 3000 attendees at the conference, over 60 countries represented.
    • There is BI in Halo3: whenever you look at competitor stats or weapon effectiveness, this is implemented using BI tech

Madison - MS has acquired DATAllegro, a company that was accomplishing low-TCO MPP (massively parallel processing) scale-out of BI. Using standard enterprise servers, you can process queries on very large data warehouse databases very quickly. They demonstrated a hardware setup of an MPP cluster: one control node, 24 compute nodes, and at least as many storage nodes (i.e. shared disks). They loaded 1 trillion (yes, trillion) rows into the fact table, plus a bunch of dimension tables, such that the data warehouse contained over 150 terabytes of data. Then they sliced the fact table up onto the 24 SQL instances on the compute nodes (each compute node then had 1/24 of the trillion rows) and replicated the dimension tables to all compute nodes. Using SQL 2008 (and its new star join optimization) they then issued a query on the fact table and the related dimension tables to the cluster, where the control node passed the query along to the compute nodes, each of which processed it and returned its results back to the client.

    On one screen they had Reporting Services (the client app) and on another, a graphic display of the CPU and disk stats for the control node and all 24 compute nodes, each node having 8 CPUs. When the Report was being displayed, the query got processed, and you could see the CPU usage go up on many of the nodes, then disk usage on each of the nodes, then the activity would subside and the reporting view would then display the results. It was all done in under 10 seconds. It was truly impressive. Now, that was with essentially read only data, so you could probably "roll your own" MPP system, given the time and hardware. It's not a huge technical problem to scale out read-only data. If they could show the same demonstration except with a SSIS package *loading* a trillion rows into the cluster, that would have been astounding - it's a much different and more difficult problem. Still, I was impressed.

    Gemini - this is "BI Self Service" - the first evidence of this is an Excel addin that the always-entertaining Donald Farmer demonstrated. He used the addin to connect to a data warehouse and in a spreadsheet showed 20 million rows. We didn't *see* all 20 million rows, but he did sort it in under a second, and then filtered it (to UK sales only, about 1.5 million rows) in under a second. That performance and capacity was on what he said was a <$1000 computer with 8 GB RAM, similar to what he purchased for home a few weeks ago.

Aside from the jaw-dropping performance, he used the add-in to dynamically link the data from Analysis Services with another spreadsheet of user-supplied data (I think it was "industry standard salary" or something). The add-in was able to build a star schema in the background automatically and then make it available in the views they wanted in Excel (a graph or something? I can't remember). So it was demonstrating the fact that sometimes the data warehouse doesn't have all the data needed for users to make decisions, so they got the data themselves rather than wait for IT to get it into the DW. OK, cool. He then published that view into SharePoint using Excel Services, and the user-supplied data went along with it. Centrally publishing that view means it can be utilized by others in the enterprise, rather than being shared via email or a file share or something.

    From the IT perspective, he showed a management view (dashboard) in SharePoint showing usage stats of "Sandboxes" (the thing they are currently calling these publications) and they could see how popular this particular sandbox was, and then take steps to formalize it into the enterprise. The tantalizing link on that web page was "Convert to Performance Point" - the idea was that you could take the sandbox view and convert it into a PPS web part. That looked cool too.

    So Gemini looks very interesting.

    Timeframes: the next major release of SQL will be 24-36 months from release of SQL 2008, but in the meantime, there are a number of releases coming: Madison and Gemini will be coming in the first half of 2010, and CTP's will be available sometime early next year. There are some incremental releases of Analysis Services, Integration Services and Reporting Services coming - the next gen of Reporting Services in particular will become available in a Feature Pack "real soon now".

     


  • MS BI Conference 2008 - First Impressions

    It has been over a year since I last blogged, but I want to restart with some posts about the BI Conference I am attending this week.

    Chris and I flew to Vancouver yesterday and drove down to Seattle in a Camry Hybrid. Sitting in the lineup at the US border for an hour drained the batteries on the Camry so it had to restart the engine to recharge a couple of times, for about 10 minutes each time. Seemed odd to discharge so much battery just sitting in a lineup and moving 10 feet every five minutes. Anyway...the trip display shows that our fuel efficiency was under 8 liters/100km on the trip down. That also seems a little poor compared to my Golf TDI that gets 4.5 liters/100 km regularly.

    We registered last night and wandered the Company Store for a bit - saw uber-geek stuff there and we thought of getting something for Cam, our uber-geek on the team at Imaginet. The conference package was predictable: a nice back-pack, a water bottle, a pen, a 2 GB USB stick, a SQL Server magazine, not as many sales brochures as last year, and a conference guidebook.

    Last year's guide book was a small coil bound notebook with a section of blank pages at the end for taking notes. This year's edition has the same content - a description of all the sessions and keynotes and speakers, as well as sponsor ads, but it is missing the note-taking section. I really liked that section last year, so today I found myself scribbling notes on loose paper, and running out. I specifically left my (paper) notebook at home because I liked the smaller conference book instead, but now I am going back to the Company Store to buy a small notebook for the rest of the sessions.

    The conference is trying to be more environmentally friendly - in the backpack was a water bottle and they encouraged you to refill that at the water stations rather than having bottled water. That's cool. For me, I would have preferred a coffee mug, since I had three cups of coffee over the day (in paper cups, and no plastic lids). In a strange twist, the breakfast and lunch dishes were on paper plates and not the real dishes like last year - one step forward, two steps back I guess. I can't figure it out.

In the main hall before the keynote address, there was a live band playing 80's hits. They were pretty good, but it seemed odd to have a bouncy, energetic group on stage at 8:30 on a Monday morning while everyone was filing in and sitting down, morning coffee still just starting to kick in. The bass player was one or two steps beyond bouncy-happy. It reminded me of someone on a Japanese game show.

     


  • How to rename a Build Type in Team System (and a suggested naming convention)

    I suppose this might be in the manual, but...

     If you want to rename a Build Type that you have created in a Team System Project, you need to open the Source Control Explorer window, dig down into the TeamBuildTypes folder under the project, and rename the folder that corresponds to the build type you want to change. After you check in that change, refresh the Team Builds folder in Team Explorer and you'll see your newly named Build Type.

     Remember to change any scheduled tasks you may have created to run your builds automatically.

One more thing about naming Build Types - because we like to have an email sent out to the team members after a build, we have found that a naming convention for the build types makes it easier to recognize and organize the build notifications. We use a standard that includes the environment, the Team Project name, and the sub-solution as the name of the build. So we have build names like:

     DEV Slam Customer Website - This builds the CustomerWebsite.Sln in the $\Slam\DEV branch.

    QA Slam Customer Website - This builds the CustomerWebsite.Sln in the $\Slam\QA branch.

    DEV Slam Monitor Service - This builds the MonitorService.Sln in the $\Slam\DEV branch.

    QA Slam Monitor Service  - This builds the MonitorService.Sln in the $\Slam\QA branch.

Having the project name in the build type name helps because if you subscribe to lots of different builds for different projects, you cannot tell by looking at the email (other than by this naming convention) which project the build is from.

     
