Mathew Nolton Blog

Software dementia unleashed...
sp_lock2 and sp_lockcount

Here are two stored procedures that are helpful for SQL Server database work.

  1. The code for both of these is based on the system stored procedure sys.sp_lock that comes built into the master database.
  2. Build both of these in your master database if you want to use them in all databases across your database instance.
  • sp_lock2 -- provides the object name for the object_id normally presented by sp_lock.
  • sp_lockcount -- provides a count of the locks. Useful when you are monitoring locks during ETL processes and do not care to receive the tens of thousands of rows of lock data coming back...when all you want to know is the count.

sp_lock2 code

USE [master]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
-- ==================================================================
-- Author:        Mathew Nolton (based on sp_lock in sqlserver)
-- create date: 05/15/2009
-- Description:    Returns lock information similar to sp_lock but
--                with additional object name information.
-- ==================================================================
create procedure [dbo].[sp_lock2]
    @spid1 int = NULL,        /* server process id to check for locks */
    @spid2 int = NULL        /* other process id to check for locks */
as
    -- ======================================================================================================
    -- set options
    -- ======================================================================================================
    set nocount on
    set transaction isolation level read committed

    -- ======================================================================================================
    -- do the work.
    -- ======================================================================================================
    select     convert (smallint, [req_spid])    as [spid],
            [rsc_dbid]                        as [dbid],
            db_name([rsc_dbid])                as [dbname],
            [rsc_objid]                        as [ObjId],
            object_name([rsc_objid])        as [ObjName],
            [rsc_indid]                        as [IndId],
            -- rsc_indid is an index id, not an object id, so resolve the name
            -- via sysindexes (current database scope) rather than object_name().
            (select [i].[name] from [sysindexes] [i]
             where [i].[id]    = [s].[rsc_objid]
             and   [i].[indid] = [s].[rsc_indid])    as [IndName],
            substring ([v].[name], 1, 4)    as [Type],
            substring ([rsc_text], 1, 32)    as [Resource],
            substring ([u].[name], 1, 8)    as [Mode],
            substring ([x].[name], 1, 5)    as [Status]
    from     [master].[dbo].[syslockinfo] [s]
            inner join [master].[dbo].[spt_values] [v]
    on        [s].[rsc_type]        = [v].[number]
            inner join [master].[dbo].[spt_values] [x]
    on        [s].[req_status]    = [x].[number]
            inner join [master].[dbo].[spt_values] [u]
    on        [s].[req_mode] + 1    = [u].[number]
    where    [v].[type]            = 'LR'
    and        [x].[type]            = 'LS'
    and        [u].[type]            = 'L'
    and        ( @spid1 is null or [req_spid] in (@spid1, @spid2) )
    order by [spid]
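
To use it, call it just like sp_lock; for example (the spid values here are only illustrative):

exec [dbo].[sp_lock2]                           -- all locks, with object names
exec [dbo].[sp_lock2] @spid1 = 53               -- locks held by spid 53
exec [dbo].[sp_lock2] @spid1 = 53, @spid2 = 61  -- locks held by either spid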

sp_lockcount code

-- ==================================================================
-- Author:        Mathew Nolton (based on sp_lock in sqlserver)
-- create date: 05/15/2009
-- Description:    Returns count of locks.
-- ==================================================================
create procedure [dbo].[sp_lockcount]
    @spid1 int = NULL,        /* server process id to check for locks */
    @spid2 int = NULL        /* other process id to check for locks */
as
    -- ======================================================================================================
    -- set options
    -- ======================================================================================================
    set nocount on
    set transaction isolation level read committed

    -- ======================================================================================================
    -- do the work.
    -- ======================================================================================================
    select     count(*)
    from     [master].[dbo].[syslockinfo] [s]
            inner join [master].[dbo].[spt_values] [v]
    on        [s].[rsc_type]        = [v].[number]
            inner join [master].[dbo].[spt_values] [x]
    on        [s].[req_status]    = [x].[number]
            inner join [master].[dbo].[spt_values] [u]
    on        [s].[req_mode] + 1    = [u].[number]
    where    [v].[type]            = 'LR'
    and        [x].[type]            = 'LS'
    and        [u].[type]            = 'L'
    and        ( @spid1 is null or [req_spid] in (@spid1, @spid2) )
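
A typical use while monitoring an ETL batch (again, the spid value is only illustrative):

exec [dbo].[sp_lockcount]               -- count of all locks on the instance
exec [dbo].[sp_lockcount] @spid1 = 53   -- count of locks held by the ETL session
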
SQL script for dropping a set of procedures, user-defined functions or views

I have been wanting to write some posts lately about productivity tools and tricks. My last post was for a VS add-in for updating the reference paths of all "selected" projects in your Visual Studio solution.

This post is for a SQL script that enables you to drop a series of stored procedures, UDFs or views. I thought about adding tables, but then I would have to handle keys/constraints as well...maybe next time.

PRINT N'Dropping procedures/functions...'
GO
PRINT N''
GO
declare @name varchar(256), @type varchar(2), @sql nvarchar(1000), @printText varchar(1000)

-- declare a cursor over the objects to drop
declare localCursor cursor local fast_forward read_only for
select  [name], [type]
from    [sysobjects]
where   [type] in ('P','TF','FN','V')
and     [name] in
        (
            '[Put procedure/view or udf name here]',
            '[Put another procedure/view or udf name here]'
        )
order by [type], [name]

-- now open the cursor
open localCursor
fetch next from localCursor into @name, @type

-- now loop through, building and executing a drop statement for each object.
-- quotename() protects against names that need bracket-quoting.
while @@fetch_status = 0
begin
    if( @type = 'P' )
    begin
        set @printText = 'Dropping procedure ' + @name + ' ...'
        set @sql = 'drop procedure ' + quotename(@name)
    end
    else if( @type in ('TF','FN') )
    begin
        set @printText = 'Dropping function ' + @name + ' ...'
        set @sql = 'drop function ' + quotename(@name)
    end
    else if( @type in ('V') )
    begin
        set @printText = 'Dropping view ' + @name + ' ...'
        set @sql = 'drop view ' + quotename(@name)
    end

    print @printText
    exec sp_executesql @sql
    fetch next from localCursor into @name, @type
end

close localCursor
deallocate localCursor

 

HTH,

-Mathew Nolton

Add-In for updating all reference paths in a solution

Update Project References Add-In 

Visual Studio doesn't natively support the ability to update all selected projects within a solution with the same reference paths. This matters (at least to me) when you have to download a different version of your source code from your source control system. Since you typically do not check your per-user project preferences into source control, you are stuck with the mundane task of selecting a project, setting the reference path, selecting the next project, and so on.

This add-in enables you to select a given set of projects within your solution, set your reference paths and make updates across all selected projects. For example:
[Screenshot: the UpdateReferencePaths dialog]

You have a few options:

  1. Completely Replace Existing References...enough said.
  2. Do nothing if a reference exists (and append it to the end of the list if it doesn't).
  3. Prepend to List...(note: if the reference already exists, it will be moved to the front).
  4. Append to List...(note: if the reference already exists, it will be moved to the end; the sketch below shows the "move" semantics).
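
As a rough sketch of what "moved" means for the prepend case, here is how the list manipulation might look against the EnvDTE automation model. It assumes the per-user "ReferencePath" project property (a semicolon-delimited list that C#/VB projects persist in the .user file); the method and variable names are mine, not the add-in's actual code:

// a minimal sketch; assumes: using System; using System.Collections.Generic; using EnvDTE;
static void PrependReferencePath( Project project, string newPath )
{
    // the per-user "ReferencePath" property holds a semicolon-delimited path list.
    Property refPathProperty = project.Properties.Item( "ReferencePath" );
    string existing = (string)refPathProperty.Value;
    List<string> paths = new List<string>(
        existing.Split( new char[] { ';' }, StringSplitOptions.RemoveEmptyEntries ) );

    // "moved" = removed from wherever it was, then re-inserted at the front.
    paths.RemoveAll( delegate( string p )
        { return string.Equals( p, newPath, StringComparison.OrdinalIgnoreCase ); } );
    paths.Insert( 0, newPath );

    refPathProperty.Value = string.Join( ";", paths.ToArray() );
}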

To use the add-in, copy the files UpdateReferencePaths.AddIn and UpdateReferencePaths.dll to your add-in directory. Typically, this is located at: C:\Documents and Settings\[Your User Name Here]\My Documents\Visual Studio 2005\Addins

Get the Add-In and the source code here: http://www.cybral.com/tools.aspx#updatereferencepaths

-Mathew Nolton
http://www.cybral.com

Using skmRss.RssFeed

I have been evaluating the use of Scott Mitchell's RssFeed control for a new version of my website (it's not done yet) for my consulting company, which focuses on the Cable Sector of the Telecommunications Industry; I wanted to display relevant information for both Web Services and the Cable Sector. The control is very nice (and free). During my investigation I wanted to perform a couple of actions that were not immediately apparent but still quite doable once I dug a bit deeper. Specifically, I wanted to be able to:

    1. Keep all items to a single row in the control (without wordwrap).
    2. Display ToolTips

To put all items on a single row, I noticed that the ItemStyle has a boolean called 'Wrap'; however, I could not get this to work, so I took another approach: I modified the RssDocument to shorten each title so that it displays correctly. Here is how I did it:

First the code to wire it up:

cableNewsFeed.DataSource = getList( [DataSource], __maxTitleLength, cableNewsFeed.MaxItems );
cableNewsFeed.DataBind();

Next the code to create the list:

protected RssDocument getList( string dataSource, int maxTitleLength, int maxFeedLength )
{
      // create the engine and get the xml document
      RssEngine engine = new RssEngine();
      RssDocument sourceDocument = engine.GetDataSource( dataSource );

      // all of the properties on the document are 'Get' only, so I created a copy.
      RssDocument results = new RssDocument( sourceDocument.Title, sourceDocument.Link,
            sourceDocument.Description, sourceDocument.Version, sourceDocument.FeedType,
            sourceDocument.PubDate );

      // get the max offset (I pass this in to the method based on the max items of the control).
      int countOffset = sourceDocument.Items.Count - 1;
      for( int i = 0; i < maxFeedLength && countOffset >= i; i++ )
      {
            // get the item to clone
            RssItem item = sourceDocument.Items[i];

            // shorten the title.
            string title = getTitle( item.Title, maxTitleLength );

            // add the 'new' item.
            results.Items.Add( new RssItem( title, item.Link, item.Description,
                  item.Author, item.Category, item.Guid, item.PubDate, item.RssEnclosure ) );
      }

      // return the results.
      return results;
}

Method to create the title:

protected string getTitle( string title, int maxLength )
{
      // only truncate (and append an ellipsis) when the title is too long.
      if( title.Length > maxLength )
            title = title.Substring( 0, maxLength ) + "...";
      return title;
}

If I were really industrious I would have based the method on the GDI+ MeasureString method. However, I am not guaranteed that everyone will have the font on their machine. I still think it is worth doing, but maybe later.
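
For what it's worth, here is a sketch of that MeasureString idea, trimming by rendered width instead of character count. The font name and size are illustrative assumptions, and it measures against a throwaway bitmap:

// assumes: using System.Drawing;
protected string getTitleByWidth( string title, float maxWidthPixels )
{
      using( Bitmap bmp = new Bitmap( 1, 1 ) )
      using( Graphics g = Graphics.FromImage( bmp ) )
      using( Font font = new Font( "Verdana", 8.25f ) )   // assumed font/size
      {
            // fits as-is? then no ellipsis is needed.
            if( g.MeasureString( title, font ).Width <= maxWidthPixels )
                  return title;

            // trim one character at a time until the ellipsized title fits.
            while( title.Length > 0 && g.MeasureString( title + "...", font ).Width > maxWidthPixels )
                  title = title.Substring( 0, title.Length - 1 );
            return title + "...";
      }
}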

Next, I wanted the tool tip to display the description of the RSSFeed. To do this, I wired up the ItemDataBound event:

cableNewsFeed.ItemDataBound+=new RssFeedItemEventHandler(cableNewsFeed_ItemDataBound);

Now the method:

private void cableNewsFeed_ItemDataBound(object sender, RssFeedItemEventArgs e)
{
      ( (System.Web.UI.WebControls.WebControl)e.Item ).ToolTip = e.Item.DataItem.Description;
}

The results (btw, it will probably be under a different company name with different verbiage):

Posted: Dec 27 2005, 09:15 AM by MatiasN | with 1 comment(s)
Creating your own XmlSerializer

Very recently I came across an issue that required the creation of a new class derived from XmlSerializer. For reasons I don't want to get into here, we serialize an object instance into XML and store it into a database column so that we can reconstitute it later. This is a great approach except for the issue of changing class definitions.

If you are just changing the definition of a top-level class, then I suggest taking a look at XmlAttributeOverrides on MSDN; however, if you are changing the definition of a class that aggregates other classes, and one of those contained classes has a different class definition, you need to look at using:

XmlAttributeEventHandler, XmlElementEventHandler or XmlNodeEventHandler

 

These events allow you to control the creation of these internal aggregated classes.

For example, if you have the following class definition for ItemOption that contains a collection of objects of type ProductAttribute and your ProductAttribute definition has changed, then this is a good candidate for creating your own XmlSerializer derived class.

(Simplified For Brevity)

      [Serializable]
      public class ItemOption
      {
            protected string _name=null;
            protected string _description=null;
            protected ProductAttributeCollection _generalAttributes=new ProductAttributeCollection();

            public ItemOption(){}

            #region properties
            [XmlElement("Name")]
            public string Name
            {
                  get{return _name;}
                  set{_name=value;}
            }
            [XmlElement("Description")]
            public string Description
            {
                  get{return _description;}
                  set{_description=value;}
            }
            [XmlArray("GeneralAttributes")]
            [XmlArrayItem("Attribute")]
            public ProductAttributeCollection GeneralAttributes
            {
                  get{return _generalAttributes;}
                  set{_generalAttributes=value;}
            }
            #endregion
      }

 

For the purposes of brevity, let's say that all you did was change your ProductAttribute definition from using XmlAttribute to XmlElement. For example:

      [Serializable]
      public abstract class ProductAttribute
      {
            #region Member Variables
            protected int _id = 0;   // backing field for the id passed to the constructor below
            protected string _name = null;
            protected object _value;
            protected bool _canOverride = false;
            #endregion

            [XmlElement("Name")]        // <-- used to be XmlAttribute
            public string Name
            {
                  get{return _name;}
                  set{_name = value;}
            }
            [XmlIgnore()]
            public object Value
            {
                  get{return _value;}
                  set{_value = value;}
            }
            [XmlElement("CanOverride")] // <-- used to be XmlAttribute
            public bool CanOverride
            {
                  get{return _canOverride;}
                  set{_canOverride = value;}
            }

            public ProductAttribute(){}
            public ProductAttribute(int id, string name, bool canOverride, object attributeValue)
            {
                  _id = id;
                  _name = name;
                  _value = attributeValue;
                  _canOverride = canOverride;
            }
      }

If you make this seemingly innocuous change, your code will no longer work as expected. In this particular example, the deserialization process will NOT throw an exception, and it will fill ItemOption.ProductAttributeCollection with the correct number of ProductAttributes; however, each ProductAttribute will contain its default values and NOT the values stored in your RDBMS.

 

The reason is that the Xml stream will contain XmlAttributes while your class definition is expecting XmlElements. This also means that you will have stored data that corresponds to two different versions of your class structure. To resolve this issue, I suggest creating your own XmlSerializer class. You want to do this because XmlSerializer will notify you when it encounters an unknown node/element or attribute. You just need to wire it up.

 

For example, I chose to use the XmlNodeEventHandler and to wire it up, you merely need to:

 

UnknownNode += new XmlNodeEventHandler(_unknownNode);
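
For context, here is a minimal sketch of where that wiring can live. The class name is hypothetical, but XmlSerializer is not sealed and exposes the UnknownNode event, so you can derive from it and subscribe in the constructor:

// assumes: using System; using System.Xml.Serialization;
public class VersionTolerantSerializer : XmlSerializer
{
    public VersionTolerantSerializer( Type type ) : base( type )
    {
        UnknownNode += new XmlNodeEventHandler( _unknownNode );
    }

    protected void _unknownNode( object sender, XmlNodeEventArgs e )
    {
        // full implementation shown below.
    }
}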

 

Then you need to create the _unknownNode function to handle these events. In the following example, I have other classes derived from ProductAttribute, and I omitted some of the method code for brevity, but the example illustrates how to handle this:

 

        protected void _unknownNode(object sender, XmlNodeEventArgs e)
        {
            object o = e.ObjectBeingDeserialized;
            if (o is ProductAttribute)
            {
                ProductAttribute productAttribute = (ProductAttribute)o;
                switch (e.Name)
                {
                    case "xsi:type":
                        break;
                    case "Name":
                        productAttribute.Name = e.Text;
                        break;
                    case "CanOverride":
                        productAttribute.CanOverride = Convert.ToBoolean(e.Text);
                        break;
                    default:
                        if (o is BooleanAttribute && e.Name == "Value")
                            productAttribute.Value = Convert.ToBoolean(e.Text);
                        else if (o is DoubleAttribute && e.Name == "Value")
                            productAttribute.Value = Convert.ToDouble(e.Text);
                        else if (o is IntegerAttribute && e.Name == "Value")
                            productAttribute.Value = Convert.ToInt32(e.Text);
                        else if (o is LongAttribute && e.Name == "Value")
                            productAttribute.Value = Convert.ToInt64(e.Text);
                        else if (o is PresentationAttribute && e.Name == "Value")
                            productAttribute.Value = e.Text;
                        else if (o is LocationAttribute && e.Name == "Value")
                            productAttribute.Value = e.Text;
                        else if (o is ImageAttribute && e.Name == "Value")
                            productAttribute.Value = e.Text;
                        else if (o is StringAttribute && e.Name == "Value")
                            productAttribute.Value = e.Text;
                        break;
                }
            }
        }

The key thing to remember is that e.ObjectBeingDeserialized contains a reference to the object having difficulty with the deserialization process. In the example above, you just cast it and set the correct properties depending upon the other values contained in XmlNodeEventArgs. Also, you will notice that I have a check for xsi:type that does nothing. I did this because even if your class definition and the Xml jibe, the XmlSerializer does not recognize this attribute name and will raise the event; therefore, I want to break out of the method as soon as possible.

 

Hope this helps

Mathew Nolton

Posted: Dec 20 2005, 06:25 PM by MatiasN | with 1 comment(s)
New Version of XmlPreCompiler

Well, I finally got around to making the XmlPreCompiler even easier to use. As some of you may or may not know, the XmlPreCompiler is a tool based on one Chris Sells originally created to help developers handle the xml serialization error:

      "File or assembly name ctewkx4b.dll, or one of its dependencies, was not found" exception?

The new version of the tool has the following improvements:

  1. AssemblyResolve. Typically, the assembly you want to check for serializability lives outside of the XmlPreCompiler's directory structure, so its dependencies cannot be resolved. To get around this, some of you would copy the assemblies into the XmlPreCompiler's bin directory. Now I have wired up the AssemblyResolve event and make a best guess to find the dependency for you (see the sketch after this list).
  2. Added the ability to test all types in your assembly with a single button click. I forgot who requested this, but it was a good idea.
  3. Provided a TypeDetail window to show you different attributes of the class...e.g. IsAbstract, IsAnsi, IsClass, IsInterface...
  4. Provided a Referenced Assembly window to show you all of the assemblies that the selected assembly references.
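
The probing logic below is a minimal sketch of that AssemblyResolve idea, not the tool's actual code; inspectedAssemblyDirectory (the folder of the assembly being checked) is an assumed variable:

// assumes: using System; using System.IO; using System.Reflection;
AppDomain.CurrentDomain.AssemblyResolve += delegate( object sender, ResolveEventArgs args )
{
    // args.Name is the full assembly name; the simple name is enough for probing.
    string simpleName = args.Name.Split( ',' )[0];
    string candidate = Path.Combine( inspectedAssemblyDirectory, simpleName + ".dll" );
    // load the dependency from beside the inspected assembly if it exists; otherwise give up.
    return File.Exists( candidate ) ? Assembly.LoadFrom( candidate ) : null;
};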

I am also in the process of making it an add-in for the new Visual Studio 2005 (half-way done). I am also evaluating doing some disassembling of the code in a manner similar to Reflector...mostly for my own understanding, but also for some things that I want to be able to do with classes in the future.

Anyway, download it and enjoy,

http://www.cybral.com/solutions/tools.htm#XmlPrecompiler

-Mathew Nolton

Posted: Jul 28 2005, 02:20 PM by MatiasN | with 1 comment(s)
Part 2: Consulting in the Cable and Wireless Sectors....The Power of the Bundle....

The Cable sector as well as the entire Telecommunications industry has been expending a large amount of resources to sell bundled products and services. This means that instead of just offering Analog and Digital Cable Services, Cable and Telecommunication Companies are offering Cable, High Speed Internet, Digital Telephone, Digital Video Recorders, High Definition TV, etc. in a "Bundled" product.

The sale of bundled products and services is easier for the customer, requiring fewer mouse clicks, a faster checkout and a single monthly bill. The benefit to the Cable and Telecommunications companies is that it greatly enhances customer retention, revenue and market penetration, as can be evidenced by this article.

 Cox Communications Announces First Quarter Financial Results for 2005

-Mathew Nolton

Previous Posts on Consulting in the Cable and Wireless Sectors:
  
Part 1: Consulting in the Cable and Wireless Sectors

Posted: May 11 2005, 08:50 AM by MatiasN | with no comments
Part 1: Consulting in the Cable and Wireless Sectors

Part 1...Commerce Solutions:

The Cable and Wireless sectors are unique. Over the last several years I have done considerable consulting in these sectors. Most recently I have led the design and architecture of a Channel Sales System that now handles 65% of the Web presence for one of the larger companies in the cable sector. Much of the remaining 35% will convert to and/or continue to leverage the web services of this same system. We are also using this system for Interactive TV and Telephony. It is a blast.

Prior to building this system, and during the course of building it, we evaluated a number of packaged commerce solutions. Most of this packaged "commerce" software rarely meets the needs of a cable company. That is why "Build" instead of "Buy" became such a strong consideration. Vendors often state that a Cable Company is not in the business of building software. True, but cable (and wireless) companies are in very competitive sectors, and it is absolutely crucial that they control their own destiny. So, although a cable company is not in the business of writing software, it is in charge of remaining competitive. Some cable companies have gone down the vendor path only to do an about-face.

The issue is that being beholden to a vendor can really cost the company time to market...and Time-To-Market is crucial, and a driving factor for many businesses...not cost. For example, let's say that a vendor is 6-12 months behind the curve, or the vendor panders to many industries instead of just Cable. This means that it is very difficult for a company to react quickly to changing market conditions. Nothing kills a marketing campaign like a commerce solution that does not meet the needs of the business, or that does but only with considerable customization. Furthermore, if you customize vendor software, you further exacerbate the problem...because you will still have to upgrade, and customizations typically have to be reapplied after an upgrade...it is a vicious cycle...one that some vendors enjoy because it leads to continued consulting service sales.

Typically, packaged commerce solutions fall short in one or more of the following ways:

a) They handle the marketing/sales well but do not handle fulfillment/billing elegantly (e.g. bundling is big in these sectors, but not all vendors really handle it well).

b) They handle the fulfillment/billing fairly well but rarely handle the marketing/sales.

c) Deployment. Vendors rarely take into account what it takes to get software deployed across a large organization. For example, Microsoft and PUP files...argggg...what a mess. This is exacerbated by the fact that many large companies maintain many environments...Development, QA, User Acceptance Testing, Preview, Staging, Production...to name a few. Any pain points with deployment are therefore magnified.

d) Cable and wireless companies are really service companies with some product. They are shipping bits/bytes along with some equipment and professional services (installation and maintenance). Furthermore, products and services are intertwined. This means that the typical "Amazon" sales metaphor really does not hold.

e) Cable companies are made up of numerous business units, often called franchises...they can almost be thought of as separate companies. They are typically selling the same basic products and services but with a twist (e.g. different movie channels...different bundles of service). They also must be able to react quickly to local competitive market conditions. The key here is that they are selling the same basic products and services...but they are marketed differently. So many vendors say...Hey, just create another catalog item! Huh? This means that if you are going to have a common commerce solution, you are going to have many dimensions to your data and/or multiple implementations...possibly down to the household. This also means that maintaining 50 different catalogs is not the answer, as some vendors suggest and some companies implement. A company must be able to maintain its data. The more catalogs you have, the more data you have to maintain.

I hope to continue with further discussions and rants about consulting in the Cable and Wireless Sectors....

-Mathew Nolton

Posted: May 04 2005, 09:34 AM by MatiasN | with 3 comment(s)
Filed under: ,
Fail Fast

I found this paper about "Failing Fast" while reading Paul Lockwood's blog. It refers to Martin Fowler's wiki about writing code to Fail Fast. This technique has been around for a while; however,  I honestly have always been more of a proponent of writing defensive code. 

However, after writing defensive code for years and then reading this paper, I am starting to rethink my approach. I am not saying that I am doing away with defensive coding; I still believe that writing defensive code makes for very robust code. However, there are times when the Fail Fast technique makes sense. For example, I wrote a piece of code that stores some common lookup information in a cache on the server. The base code looks like this:

try
{
    _leaseDuration = Convert.ToInt32( ConfigurationSettings.AppSettings[ _leaseDurationLookupKey ] );
    if( _leaseDuration == 0 ) _leaseDuration = __defaultLeaseDuration;  // <-- defensive technique: add this line of code
}
catch
{
    // an error occurred. this means that the lease property is not
    // configured properly in the configuration file. so set it to
    // its default value.
    _leaseDuration = __defaultLeaseDuration;  // <-- this line would rarely get hit.
}

The issue is that Convert.ToInt32() will convert a null value to 0...and it is one of the reasons I typically advocate using the Convert.ToXXX() methods over Int32.Parse(). It is much more forgiving; however, it has the side-effect of masking potential issues. Following a Fail Fast technique, I should opt for Int32.Parse(), which throws on a null or malformed value, e.g.

 _leaseDuration = Int32.Parse( ConfigurationSettings.AppSettings[ _leaseDurationLookupKey ] );

and then in my catch block I should throw an exception as opposed to setting the default value.

In other words:

try
{
    _leaseDuration = Int32.Parse( ConfigurationSettings.AppSettings[ _leaseDurationLookupKey ] );  // <-- Fail Fast: throws on null or garbage
}
catch
{
    throw new ConfigurationException( "Could not lookup Lease Duration!!!" );  // <-- Fail Fast
}

The question then arises: when do you use which technique? Some people would advocate always using one versus the other. However, I believe you must always be pragmatic; being a purist is a form of dogma and doesn't always lead to the best overall solution. If you think about it, defensive coding and Fail Fast are not mutually exclusive (see the sketch after the list below). Defensive coding means that you are evaluating different execution paths and handling them accordingly. A pure defensive technique might force the code down a different execution path when certain situations occur, whereas a Fail Fast technique would throw its hands up and state "fix the problem". The question you then ask is: when do you fail fast?

  • Configuration Data. If you don't get it right, Fail Fast and let everyone know.
  • When you come across a condition that, if it occurs, leaves you with no idea what to do. In other words, when writing defensive code you find a situation that could occur but for which you are at a complete loss as to how to handle it. The answer...Fail Fast.
  • Others, still considering this....
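
To make the "not mutually exclusive" point concrete, here is a minimal sketch reusing the names from the example above: fail fast on missing or garbled configuration, but stay defensive about a legal-but-unreasonable value.

string raw = ConfigurationSettings.AppSettings[ _leaseDurationLookupKey ];
if( raw == null )
    throw new ConfigurationException( "Lease Duration is not configured!" );  // fail fast on missing config
_leaseDuration = Int32.Parse( raw );                                          // fail fast on garbage
if( _leaseDuration <= 0 )
    _leaseDuration = __defaultLeaseDuration;                                  // defensive: clamp a nonsensical value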

Just a few thoughts,

-Mathew Nolton

Posted: Apr 26 2005, 09:34 AM by MatiasN | with 4 comment(s)
Too Funny. One of the best pranks ever.

A friend of mine, who always seems to find pranks and funny minutiae on the web, came across this and sent it to me. It takes a few minutes to read but is well worth the time.

http://www.zug.com/pranks/powerbook

-Mathew Nolton

Posted: Mar 30 2005, 03:50 PM by MatiasN | with 1 comment(s)