Development With A Dot

Blog on development in general, and specifically on .NET

  • Entity Developer Review, Part 3

    Here is part 3 of my series exploring Entity Developer, this time focused on Entity Framework. Read the first part here and the second here.

    So, this time I decided to use Entity Developer to generate a database from an Entity Framework Code First model:


    I was then asked if I wanted to synchronize my model with a database later on, which seems like a good idea:


    Next, I am asked about the target server; interestingly, all databases for which I have an ADO.NET provider installed are present:


    A naming configuration screen for classes and properties is shown, similar to its NHibernate counterpart, with the curious exception of the option to remove invalid characters:


    Next, some namespace and Entity Framework version options:


    I chose Entity Framework 6 and then got to choose the template:


    Again, a number of templates exist, covering POCOs, self-tracking entities (Entity Framework 4), the repository pattern, MVC, WCF RIA Services, etc., for C# as well as Visual Basic:


    Because I started with model first (or code first), I am presented with an empty diagram. Oddly, I got a validation message about not being able to find the Entity Framework 6 assembly, but it didn’t seem to affect the generation of the entities:


    While generating the model, we can add spatial types as properties (if the target database server supports them), which is nice:


    Everything seemed to work, from generating the database from the model, to the actual classes. We get to tune some of the DbContext options such as LazyLoadingEnabled through the model settings:


    It is also possible to specify stored procedures for the DELETE, INSERT and UPDATE operations, but only if at least one stored procedure exists in the model.

    So, to sum it up, everything worked as expected; the functionality is similar to that available when using an NHibernate model.

    Coming next, advanced usages of Entity Developer.


  • Lesser-Known NHibernate Features: Validation and Lifecycle Interfaces

    NHibernate offers two interfaces that can be used to validate an entity before it is saved or updated, or to cancel its saving, updating and deleting: IValidatable and ILifecycle. They are an alternative to events, and don’t need anything else to work other than being implemented by an entity.

    Here’s what a possible implementation of IValidatable looks like:

    public class Product : IValidatable
    {
        public String Name { get; set; }
        public Decimal Price { get; set; }

        void IValidatable.Validate()
        {
            if (String.IsNullOrWhiteSpace(this.Name) == true)
            {
                throw new InvalidOperationException("Name must be set");
            }

            if (this.Price <= 0)
            {
                throw new InvalidOperationException("Price needs to be greater than 0");
            }
        }
    }

    As you can see, you can cancel the current operation - save or update - by throwing an exception.

    As for ILifecycle, it not only allows us to cancel a save, update or delete operation, but also lets us be notified as soon as the entity is loaded:

    public class Customer : ILifecycle
    {
        public IList<Order> Orders { get; set; }
        public Int32 Id { get; set; }
        public String Name { get; set; }

        LifecycleVeto ILifecycle.OnDelete(ISession s)
        {
            return ((this.Orders.Any() == true) ? LifecycleVeto.Veto : LifecycleVeto.NoVeto);
        }

        void ILifecycle.OnLoad(ISession s, Object id)
        {
            Trace.WriteLine(String.Format("A customer with id {0} was loaded", id));
        }

        LifecycleVeto ILifecycle.OnSave(ISession s)
        {
            return ((String.IsNullOrWhiteSpace(this.Name) == true) ? LifecycleVeto.Veto : LifecycleVeto.NoVeto);
        }

        LifecycleVeto ILifecycle.OnUpdate(ISession s)
        {
            using (var childSession = s.GetSession(s.ActiveEntityMode))
            {
                var any = childSession.CreateQuery("select 1 from Customer where Id != :id and Name = :name").SetParameter("id", this.Id).SetParameter("name", this.Name).List<Int32>();
                return ((any.Count != 0) ? LifecycleVeto.Veto : LifecycleVeto.NoVeto);
            }
        }
    }

    You even have access to the current session.

    This is a classic feature of NHibernate, but some people don’t like it because it “pollutes” our POCO entities with NHibernate-specific features, which makes them less reusable and forces us to reference the NHibernate DLL. It may, however, come in handy sometimes as a “poor man’s” event system.


  • Entity Developer Review, Part 2

    Here’s my second post on Entity Developer, this time, focused on NHibernate. You can find the first part here.

    I created a new model item of type NHibernate and chose the option to generate from the database:


    Mind you, you need to have the database providers installed for the databases you want to access; Entity Developer does not include them. I was then presented with a list of database objects:


    After I selected a few, Entity Developer shows the class and properties naming rules dialog:


    Next, some more options, a pretty complete set. Very useful is the option to detect many-to-many associations and table-per-type inheritances. These will be the default settings, but we can override them on a class-by-class basis:


    Interestingly, the next dialog gives us the opportunity to split entities among different diagrams according to the database objects’ schemas:


    Finally, we get to choose the template that will be used to generate the classes (the only included one is for HBM.XML) and set some of its properties: generate a single file or one file per entity, implement standard methods such as Equals or GetHashCode, implement standard interfaces like NHibernate’s IValidatable, ICloneable and INotifyPropertyChanged, add data contract serialization attributes for WCF, and add support for a validation framework (NHibernate Validator, Data Annotations or Enterprise Library Validation Application Block):


    There are several templates available, in both C# and Visual Basic, which we can browse and select (others can be added manually):

    The code is immediately generated, unless there are validation errors in the model, such as a table without a primary key:


    In the Model window, by selecting a class or one of its properties, we can set its properties; I think all of NHibernate’s features are there (collection types, cascading options, id generators, etc.), plus validation settings, which will be translated to the validation framework of choice.

    The HBM.XML files are set as embedded resources, which is also nice, and the Web.config file is updated with the connection string to the selected database. Oddly, the dialect was set to MsSql2005Dialect, which doesn’t make much sense since I was connecting to SQL Server 2014. It is possible to select another dialect through the user interface, but dialects and drivers appear as plain text boxes, which is not much help.
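    For reference, the dialect and driver are regular NHibernate configuration properties, so they can also be corrected by hand. A hedged sketch of what the relevant Web.config fragment might look like for a newer SQL Server (the class names are standard NHibernate ones; the connection string name is made up):

    ```xml
    <hibernate-configuration xmlns="urn:nhibernate-configuration-2.2">
      <session-factory>
        <!-- points to a connection string defined in connectionStrings -->
        <property name="connection.connection_string_name">MyConnection</property>
        <property name="connection.driver_class">NHibernate.Driver.Sql2008ClientDriver</property>
        <!-- MsSql2012Dialect covers SQL Server 2012 and later, including 2014 -->
        <property name="dialect">NHibernate.Dialect.MsSql2012Dialect</property>
      </session-factory>
    </hibernate-configuration>
    ```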

    All in all, it’s pretty impressive: it seems that all NHibernate features are present – did I mention that it is even possible to configure filters? – making Entity Developer a very interesting product. Another strong point is the T4 template system, which, with some effort, allows us to generate code mappings or whatever we want. The only problem I see is that no template for NHibernate's built-in mapping by code is supplied, although one exists for Fluent NHibernate.

    Next, Entity Framework.


  • Entity Developer Review, Part 1


    I recently started to use Entity Developer, by Devart, to generate entities from my databases. It is a commercial product, mind you, and I am in no way affiliated with them; I just happen to like it, so I am going to talk a bit about it.

    A very strong point is that it supports NHibernate, Entity Framework and the now half-forgotten LINQ to SQL out of the box, which makes it stand out from other products. This is not going to be a thorough review; I will post more as soon as I explore things in more depth.

    Look and Feel

    In general, the UI has a classic Windows appearance, with multiple floating windows, and there are a couple of skins available, but I didn’t pay much attention to it.


    There’s a Database Explorer window which allows us to browse our databases and see what’s in there (tables, views, functions and stored procedures); plus, we can drag objects from it into the Diagram window, which will add these objects to the model.

    The Diagram window allows automatic layout of the objects in it, giving each different colors, adding notes (nice!) and new model items (classes, inheritances, associations, enumerations). A diagram can be “stamped” with the author’s details (name, company, version, date, etc.), which is good if we are to print it or generate a bitmap from it (supported natively). It can be zoomed, and there’s a Diagram Overview window that shows the full model at once, so that we can easily spot objects outside of the visible area.

    There may be several diagrams for an entity model, the core of an Entity Developer project, and this model has its own window, in which we can explore it and see all the mapped database objects and corresponding model items, including inheritances, associations, components (complex/value types), etc. We also have there the list of templates available for the current project type.

    There’s the possibility to generate a SQL script from the model but, in NHibernate projects, only for SQL Server, SQL Server Compact Edition, Oracle, MySQL and PostgreSQL, and this requires that the proper .NET providers are present. For Entity Framework projects, I could only find the option to generate the script for SQL Server, and, of course, LINQ to SQL only supports SQL Server as well. There’s also an option for updating the model from the database, in case we chose to generate it from the database.

    It is of course possible to validate the model, and the Error List window will present all outcomes of this validation, which can either be errors or warnings. A nice touch: double-clicking a notification jumps directly to the model item that caused it. It would be interesting to also have the option to quick-fix the problem, but I reckon it’s not always possible or easy to do in an automated fashion.

    When it comes to actually generating the code, there is an endless number of options, from the capitalization of properties, to lazy loading of associations, to the inclusion of foreign keys, etc. These settings are of course dependent on the project type (NHibernate, Entity Framework or LINQ to SQL), because not all libraries have the same features. The generation is based on T4 templates, which is nice, because they are a standard in .NET. Only one template for each API is included, but it is very easy to write our own, by copying from an existing one and modifying it to our liking. I have yet to find a T4 debugger; Entity Developer, unsurprisingly, does not offer one.


    OK, so far I have talked about Entity Developer in general; in my next posts, I will talk about NHibernate and Entity Framework specifics and also about advanced functionality.


  • Filtering and Sorting SharePoint List Views


    How do I filter thee? Let me count the ways!

    You should know that SharePoint usually offers lots of ways to do more or less the same thing and, of course, filtering a list view is no exception! In this post I am going to talk a bit about the ways you can do this.

    Built-in URL Filtering

    SharePoint’s list view controls (XsltListViewWebPart, ListViewWebPart, etc.) recognize a number of parameters from the query string that can be used to automatically filter and sort a list view, without requiring any additional effort.

    Filter by single value:


    FilterField must be an internal field’s name and FilterValue is case insensitive.
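    For instance (the list URL and the Status field are made up; FilterField1/FilterValue1 are the actual parameter names):

    ```
    /Lists/Tasks/AllItems.aspx?FilterField1=Status&FilterValue1=Completed
    ```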

    Filter by single value and an operator:


    The list of possible operators (FilterOp) is:

    • Eq: equal (default);
    • Neq: not equal;
    • Gt: greater than;
    • Lt: less than;
    • Geq: greater or equal to;
    • Leq: less or equal to;
    • BeginsWith: the start of a string;
    • Contains: part of a string.
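    For example, to show only items whose (hypothetical) Amount field is greater than 100:

    ```
    /Lists/Tasks/AllItems.aspx?FilterField1=Amount&FilterValue1=100&FilterOp1=Gt
    ```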

    You can add up to 10 FilterField/FilterValue/FilterOp entries; the items must match all conditions (AND operator). If you wish to filter on an additional field of a lookup field, use this syntax:


    Filter by one of multiple exact values:


    FilterName is also the internal name of the field, and the values in FilterMultiValue are treated as case-insensitive. Separate them with ;.
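    For example (again with a made-up Status field), to match items whose status is either Completed or Deferred:

    ```
    /Lists/Tasks/AllItems.aspx?FilterName=Status&FilterMultiValue=Completed;Deferred
    ```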

    Filter by one of multiple partial values:



    The * character acts as a wildcard, matching anything before or after. Placing a term between * is the same as using FilterField/FilterValue with the Contains operator in FilterOp.
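    For example, to match items whose Title contains the word report anywhere:

    ```
    /Lists/Tasks/AllItems.aspx?FilterName=Title&FilterMultiValue=*report*
    ```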

    Filter by a taxonomy field:

    If you want to filter by a managed metadata field, you can either search by the display name using FilterField and FilterValue, or you can use the ids of the terms. A little-known fact is that all usages of managed metadata values – values of fields of managed metadata type – are stored in the hidden TaxonomyHiddenList list. If you want to filter by the id of a term, you need to use the following syntax (searching by one of two values with FilterOp=In, values separated by commas):


    Or by a single value (no need for FilterOp):


    The FilterLookupId=1 is only there to tell SharePoint that it needs to search not by the value (FilterValue) passed in the query string, but by the term it points to. You can find the ids for FilterValue in the TaxonomyHiddenList list.
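    To illustrate (the Category field is made up; 5 would be the id of the term’s row in TaxonomyHiddenList):

    ```
    /Lists/Tasks/AllItems.aspx?FilterField1=Category&FilterValue1=5&FilterLookupId=1
    ```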



    Sorting is done through the SortField and SortDir parameters; possible values for SortDir are pretty obvious:

    • Asc: ascending (default);
    • Desc: descending.

    You can only sort by a single field.
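    For example, to sort the view by the Modified field in descending order (SortField names the field, SortDir the direction):

    ```
    /Lists/Tasks/AllItems.aspx?SortField=Modified&SortDir=Desc
    ```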

    Filtering a XSLT Web Part by a Query String Parameter

    The idea is to add a ParameterBinding whose Location is QueryString and to use this parameter in a CAML query:

    <WebPartPages:XsltListViewWebPart runat="server" ListName="{A1ADD3D1-B21F-4F93-9B86-B1FE332424D0}" DisplayName="My List">
        <XmlDefinition>
            <View Type="HTML" DisplayName="All Items" Url="/Lists/MyList/AllItems.aspx">
                <Query>
                    <Where>
                        <Or>
                            <Gt>
                                <FieldRef Name='SomeNumber'/>
                                <Value Type='Number'>{param}</Value>
                            </Gt>
                            <Contains>
                                <FieldRef Name='Author'/>
                                <Value Type='Text'><![CDATA[{param}]]></Value>
                            </Contains>
                        </Or>
                    </Where>
                </Query>
                <ViewFields>
                    <FieldRef Name="SomeNumber"/>
                    <FieldRef Name="Title"/>
                </ViewFields>
                <!-- all other properties omitted for brevity -->
            </View>
        </XmlDefinition>
        <ParameterBindings>
            <ParameterBinding Name="param" Location="QueryString(param)"/>
            <!-- all other parameter bindings omitted for brevity -->
        </ParameterBindings>
    </WebPartPages:XsltListViewWebPart>

    Notice that if the param value is not present in the query string, the second condition in the Or disjunction will apply, which will try to match an empty string inside the Author (Created By) field, and that will always be true. Be careful, however: while this works most of the time, if you specify a value for param that is indeed part of the field’s creator, it will be matched. As of now, I don’t have a 100% foolproof solution for this, so if you know of any, do let me know!

    Filtering by a Form Field

    This is a variation of the previous technique, where we use a Location of Form in a ParameterBinding. In this case, however, the filtering will only be applied when the page is posted back. Add the following HTML to your page:

    <input type="text" name="param"/>
    <input type="submit" value="Search"/>
    <input type="button" value="Clear" onclick="document.location.href = document.location.href"/>

    And change the previous ParameterBinding to use the param input:

    <ParameterBinding Name="param" Location="Form(param)"/>

    The Search button will cause a postback, thus applying the filter, and the Clear button will navigate to the current page, using the GET verb, thus clearing the filter.

    If instead of HTML we want to use a server side control, we certainly can:

    <asp:TextBox runat="server" ID="param"/>

    Just change the ParameterBinding to point to the Text property of this control instead:

    <ParameterBinding Name="param" Location="Control(param,Text)"/>

    Filtering by a Web Part Connection

    Filter web parts:

    Another option is to filter by a web part that implements any of the filtering interfaces, used for supplying a value to another web part.

    SharePoint Enterprise contains a collection of filtering web parts. These cover a number of scenarios:

    • Filtering from a query string parameter;
    • Filtering from a text box;
    • Filtering from a date picker;
    • Filtering from a choice of values;
    • Filtering from a SharePoint list;
    • Filtering from a Business Connectivity Service;
    • Filtering from a SQL Server Analysis Services cube;
    • Filtering from current user properties (id, profile property);
    • Filtering from a field in the current page.

    For the sake of simplicity, let’s pick filtering from the query string. In a web part page’s edit mode, add the Query String (URL) Filter web part to a web part zone:


    If you prefer, you can also add it directly in the markup; just replace QueryStringParameterName and FilterName with whatever you want:

    <%@ Register Tagprefix="SPSWC" Namespace="Microsoft.SharePoint.Portal.WebControls" Assembly="Microsoft.Office.Server.FilterControls, Version=, Culture=neutral, PublicKeyToken=71e9bce111e9429c" %>

    <SPSWC:QueryStringFilterWebPart runat="server" QueryStringParameterName="Status" FilterName="Status" />

    Next, add a connection from a list view web part in your page:


    Again, you are free to do it in markup:

    <WebPartPages:SPProxyWebPartManager runat="server">
        <SPWebPartConnections>
            <WebPartPages:SPWebPartConnection ConsumerConnectionPointID="DFWP Filter Consumer ID" ConsumerID="g_ef253891_70bd_4996_bf8c_eb19d3d83012" ProviderConnectionPointID="ITransformableFilterValues" ProviderID="g_3c7bedb8_d6d7_42db_85d1_98feabfe5a63">
                <WebPartPages:TransformableFilterValuesToParametersTransformer ConsumerFieldNames="Status" ProviderFieldNames="Status"/>
            </WebPartPages:SPWebPartConnection>
        </SPWebPartConnections>
    </WebPartPages:SPProxyWebPartManager>
    <asp:ScriptManagerProxy runat="server" />

    Do replace ConsumerID and ProviderID with the actual IDs of your web parts – ProviderID is the ID of the QueryStringFilterWebPart and ConsumerID is the ID of the list view web part – and also set ConsumerFieldNames to the list field you want to filter on and ProviderFieldNames to the value you used in FilterName.

    Filter by a custom web part:

    All you need to do is implement the IWebPartField interface. Here’s a sample implementation of a web part that filters by a query string value; think of it as a poor man’s Query String Filter web part:

    public class QueryStringFilterWebPart : WebPart, IWebPartField
    {
        [WebBrowsable(true)]
        [WebDisplayName("Field")]
        [WebDescription("The query string field")]
        [Personalizable(PersonalizationScope.Shared)]
        public String QueryStringField { get; set; }

        PropertyDescriptor IWebPartField.Schema
        {
            get
            {
                return (TypeDescriptor.GetProperties(this)["QueryStringField"]);
            }
        }

        void IWebPartField.GetFieldValue(FieldCallback callback)
        {
            callback(this.Context.Request.QueryString[this.QueryStringField]);
        }

        public String Value
        {
            get
            {
                return (this.Context.Request.QueryString[this.QueryStringField]);
            }
        }
    }

    After you deploy this web part and add it to a page, you will be able to create a connection to a list view web part so as to enable its filtering.

    Filtering an External List

    An external list is filterable by a finder operation, which is an operation of type Read List configured on the external content type. You can use SharePoint Designer to create views on the external list on top of an existing Read List operation. In the view’s page, the finder method – the name of the operation, not its display name – is specified as a Method entry in the CAML for the View element of the XSLT list view web part, for example:

    <View Name="{A3B221F8-A146-4880-9740-433195D4D788}" DisplayName="My Custom Finder" Url="/Lists/MyExternalList/MyCustomFinder.aspx">
        <Method Name="MyCustomFinder"/>
        <Query/>
        <ViewFields/>
    </View>

    Of course, you can also specify parameters to a finder method (a filter):

    <Method Name="MyCustomFinder">
        <Filter Name="MyParameter" Value="{0}"/>
    </Method>

    Filtering by XSLT

    The last technique is done on the “client-side”, through XSLT. This offers possibly the worst performance, since all records are retrieved before they are filtered, but the greatest flexibility, because you can apply conditional rules that would be very difficult, if not impossible, using the previous techniques. XSLT filtering can be applied on top of web part connections, external lists, etc.

    So, in your XSLT list view web part, inside xsl:stylesheet, add something like:

    <xsl:template match="/">
        <xsl:for-each select="dsQueryResponse/Rows/Row">
            <xsl:choose>
                <xsl:when test="PercentComplete. &gt; .5">
                    <p>Completion above 50%!</p>
                </xsl:when>
                <xsl:when test="PercentComplete. &lt; .5">
                    <p>Completion below 50%!</p>
                </xsl:when>
                <xsl:otherwise>
                    <p>Completion exactly 50%!</p>
                </xsl:otherwise>
            </xsl:choose>
        </xsl:for-each>
    </xsl:template>

    This goes through all the returned records and outputs a message depending on the value of the PercentComplete field. Because it is a number field, we must use the syntax PercentComplete. (a dot in the end).


    As you can see, a lot of functionality is available without having to resort to programming; SharePoint is very good at that. XSLT offers the greatest flexibility, because you can use conditional logic to express your exact intents. Whenever possible, however, try to filter records at the CAML level, because less data will be returned.


  • HTML Tags in XSLT Variables

    Sometimes there is the need to show just the text contained inside HTML tags. For instance, on one occasion, I wanted to show just a portion of the contents of a Wiki field, but only the text. xsl:value-of will return the text contents inside the HTML tags:

    <xsl:variable name="html">
        This <div style="something"><span>is some</span> HTML <strong>contents</strong></div>
    </xsl:variable>

    <xsl:value-of select="$html"/>

    This will return This is some HTML contents.

    If, on the other hand, you want to show the HTML contents as is, use xsl:copy-of instead:

    <xsl:copy-of select="$html"/>

    Which will return “This

    is some HTML contents” (line break because of the DIV element).

    However, if your variable is HTML encoded (> appears as &gt;, < as &lt;, etc.), it’s a whole different matter. For that, you need to use a recursive template:

    <xsl:template name="StripHtml">
        <xsl:param name="html"/>
        <xsl:choose>
            <xsl:when test="contains($html, '&amp;lt;')">
                <xsl:value-of select="substring-before($html, '&amp;lt;')"/>
                <xsl:call-template name="StripHtml">
                    <xsl:with-param name="html" select="substring-after($html, '&amp;gt;')"/>
                </xsl:call-template>
            </xsl:when>
            <xsl:otherwise>
                <xsl:value-of select="$html"/>
            </xsl:otherwise>
        </xsl:choose>
    </xsl:template>

    <xsl:variable name="html_encoded">
        This &amp;lt;div&amp;gt;&amp;lt;span&amp;gt;is some&amp;lt;/span&amp;gt; HTML &amp;lt;strong&amp;gt;contents&amp;lt;/strong&amp;gt;&amp;lt;/div&amp;gt;
    </xsl:variable>

    <xsl:call-template name="StripHtml">
        <xsl:with-param name="html" select="$html_encoded"/>
    </xsl:call-template>

    This will return This is some HTML contents as well. The logic is this:

    1. Check if the parameter contains a < (&lt;);
    2. If it does, output anything before that first <, then call the template recursively, passing as a parameter the text after the following > (&gt;), which is supposed to exist;
    3. Otherwise, just output the parameter.


  • Broadcasting Database Changes Through SignalR

    There are several posts out there on using SignalR to signal changes in a database, but because I really didn’t quite like any of them, I decided to write my own.

    The possibility to detect changes in a database comes from the SqlDependency class. Other implementations exist, such as SqlCacheDependency and SqlNotificationRequest, but with these we don’t really receive a notification event. The process is a bit tricky, and lots of people out there have had problems with it. These classes only work with the non-Express editions of SQL Server, but there are equivalents for other databases, such as OracleDependency for Oracle.

    Because I like to make things generic and reusable, I decided to encapsulate things in a class of my own. Unfortunately, since SqlDependency does not have an ADO.NET base class, we have to use it directly.

    So, my purpose is to:

    • Have a generic class that, given a database connection and a SQL string describing the changes I would like to detect, starts monitoring them and raises an event whenever it sees changes;
    • Have a SignalR hub that I can call from the outside – in this case, from the changes event – and that broadcasts a message about these changes.

    First things first, here is the ChangeNotifier class:

    public sealed class ChangeNotifier : IDisposable
    {
        private SqlConnection con;
        private SqlCommand cmd;
        private SqlDependency dependency;
        private String connectionString;
        private String dependencyCheckSql;

        ~ChangeNotifier()
        {
            this.Dispose();
        }

        public event EventHandler<ChangeEventArgs> Change;

        public Boolean Start(String connectionStringName, String dependencyCheckSql)
        {
            new SqlClientPermission(PermissionState.Unrestricted).Demand();

            this.connectionString = ConfigurationManager.ConnectionStrings[connectionStringName].ConnectionString;
            this.dependencyCheckSql = dependencyCheckSql;

            var result = SqlDependency.Start(this.connectionString);

            this.con = new SqlConnection(this.connectionString);
            this.con.Open();

            this.cmd = this.con.CreateCommand();
            this.cmd.CommandText = this.dependencyCheckSql;

            this.Setup(true);

            return (result);
        }

        public Boolean Stop()
        {
            var result = false;

            if (this.cmd != null)
            {
                this.cmd.Notification = null;
                this.cmd.Dispose();
                this.cmd = null;
            }

            if (this.con != null)
            {
                this.con.Close();
                this.con = null;
            }

            if (this.dependency != null)
            {
                result = SqlDependency.Stop(this.connectionString);
                this.dependency.OnChange -= this.OnChange;
                this.dependency = null;
            }

            this.Change = null;

            return (result);
        }

        private void Setup(Boolean initial)
        {
            if (initial == false)
            {
                this.dependency.OnChange -= this.OnChange;
            }

            this.cmd.Notification = null;
            this.dependency = new SqlDependency(this.cmd);
            this.dependency.OnChange += this.OnChange;

            this.cmd.ExecuteScalar();
        }

        private void OnChange(Object sender, SqlNotificationEventArgs e)
        {
            this.Setup(false);

            var handler = this.Change;

            if (handler != null)
            {
                handler(sender, new ChangeEventArgs((ChangeInfo)(Int32)e.Info, (ChangeSource)(Int32)e.Source, (ChangeType)(Int32)e.Type));
            }
        }

        public void Dispose()
        {
            this.Stop();
        }
    }

    It all starts with the Start method, which receives the name of a connection string stored in Web.config and the SQL to use for detecting changes. When changes occur, it raises the Change event. Action stops when the Stop or Dispose methods are called – do note that this class implements the Dispose Pattern, but in a particular way, because it is not meant to be subclassed. Whenever the OnChange event is raised, we need to set the dependency up again; that’s how SqlDependency works. Because I don’t want to expose .NET classes directly, I created my own event classes and enumerations:

    public enum ChangeInfo
    {
        AlreadyChanged = -2,
        Unknown = -1,
        Truncate = 0,
        Insert = 1,
        Update = 2,
        Delete = 3,
        Drop = 4,
        Alter = 5,
        Restart = 6,
        Error = 7,
        Query = 8,
        Invalid = 9,
        Options = 10,
        Isolation = 11,
        Expired = 12,
        Resource = 13,
        PreviousFire = 14,
        TemplateLimit = 15,
        Merge = 16,
    }

    public enum ChangeSource
    {
        Client = -2,
        Unknown = -1,
        Data = 0,
        Timeout = 1,
        Object = 2,
        Database = 3,
        System = 4,
        Statement = 5,
        Environment = 6,
        Execution = 7,
        Owner = 8,
    }

    public enum ChangeType
    {
        Unknown = -1,
        Change = 0,
        Subscribe = 1,
    }

    [Serializable]
    public sealed class ChangeEventArgs : EventArgs
    {
        public ChangeEventArgs(ChangeInfo info, ChangeSource source, ChangeType type)
        {
            this.Info = info;
            this.Source = source;
            this.Type = type;
        }

        public ChangeInfo Info { get; private set; }
        public ChangeSource Source { get; private set; }
        public ChangeType Type { get; private set; }
    }

    We start the notifier in Global.asax.cs, typically in the Application_Start method:

    var notifier = new ChangeNotifier();
    notifier.Change += this.OnChange;
    notifier.Start("MyConnection", "SELECT SomeColumn FROM dbo.SomeTable");

    A connection string named MyConnection should exist in Web.config, as well as a table SomeTable with a column SomeColumn; do change these to match your own environment.

    Next, a SignalR notification hub:

    public class ChangeNotificationHub : Hub
    {
        public void OnChange(Int32 info, Int32 source, Int32 type)
        {
            this.Clients.All.onChange(info, source, type);
        }
    }

    All it does is broadcast an onChange message to the connected clients. We can get a reference to the hub context by calling GetHubContext:

    private void OnChange(Object sender, ChangeEventArgs e)
    {
        var context = GlobalHost.ConnectionManager.GetHubContext<ChangeNotificationHub>();
        context.Clients.All.onChange(e.Info, e.Source, e.Type);
    }

    Notice that we call an onChange method (lowercase first letter), not OnChange; this is to respect JavaScript naming conventions. We hook our page to SignalR using the following script:

    <script type="text/javascript">
        function onChange(info, source, type)
        {
            //do something
        }

        window.addEventListener('load', function() {
            var notifier = $.connection.changeNotificationHub;
            notifier.client.onChange = onChange;
            $.connection.hub.start();
        });
    </script>

    The call to $.connection should match your hub class name, in camel case. The callback function onChange is the one called from ChangeNotificationHub and from the OnChange event handler.

    I take it for granted that you know how to set up SignalR; if not, do read the introduction on the ASP.NET site.

    And that’s basically it!

    Some common gotchas:

    • It does not work with SQL Server Express;
    • There is a specific order in which the connection and SqlDependency.Start must be called; if you stick to my code, it should work;
    • It needs SQL Server Service Broker enabled for the database that we are monitoring, but it does not require creating a queue or service: SqlDependency does that for us automatically;
    • Your application pool identity needs permissions to connect to the notification queue, see them here;
    • It won't work with just any SQL query, a number of restrictions apply; see here;
    • SqlDependency is very sensitive, and you can easily run into problems if you try different things.
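    Regarding the ordering gotcha, here is a minimal sketch of how the pieces could be wired together in Global.asax.cs. The connection string name and query are the same placeholders used above; whether ChangeNotifier.Start also calls SqlDependency.Start internally depends on your implementation, so adjust accordingly:

    ```csharp
    // Sketch only: SqlDependency.Start must run before any SqlDependency
    // is created for that connection string, and Stop should mirror it.
    // Requires System.Configuration and System.Data.SqlClient.
    protected void Application_Start()
    {
        var cs = ConfigurationManager.ConnectionStrings["MyConnection"].ConnectionString;
        SqlDependency.Start(cs); // sets up the notification infrastructure

        var notifier = new ChangeNotifier();
        notifier.Change += this.OnChange;
        notifier.Start("MyConnection", "SELECT SomeColumn FROM dbo.SomeTable");
    }

    protected void Application_End()
    {
        var cs = ConfigurationManager.ConnectionStrings["MyConnection"].ConnectionString;
        SqlDependency.Stop(cs);
    }
    ```
    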


  • SharePoint References

    I just updated the Excel file I mentioned before with lots of hopefully useful SharePoint references:

    • Fields (enumeration, display name, internal name, id, type, CLR type, resource, group, description, etc);
    • Content types (enumeration, id, parent, group, description, etc);
    • List templates (enumeration, id, name, internal name, base type, category, description, etc);
    • Site templates (enumeration, name, code, id, description, etc);
    • List fields: fields of lists;
    • Document library fields: fields of document libraries;
    • SPDataSource fields: fields returned by SPDataSource.

    Get it here and give me your feedback.


  • The SPDataSource Control


    SPDataSource, together with XsltListViewWebPart, DataFormWebPart and ContentByQueryWebPart, is one of the most powerful tools that SharePoint provides. Because it can be used in different ways, it can also get complicated. In this post I am going to talk about its different modes and what they can give us. First, you need to know that the mode comes from the DataSourceMode property and can be one of:

    • Webs;
    • ListOfLists;
    • CrossList;
    • List;
    • ListItem.

    Let's see them one by one.


    Webs mode returns subsites of the current site or the root site. You need to supply a select parameter to SPDataSource named WebID, which can take one of the following values:

    • RootWeb: returns subsites of the root site;
    • SubWeb: returns subsites of the current site;
    • <ID>: returns subsites of a given site, indicated by its ID GUID.

    You can also pass a WebURL parameter, taking the relative URL of a site, or {sitecollectionroot} for the root site of the current site collection.
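    For instance, a Webs-mode SPDataSource could be declared in markup like the following sketch (the control prefix registration is assumed, and RootWeb is just one of the values listed above):

    ```aspx
    <SharePoint:SPDataSource runat="server" ID="subsites" DataSourceMode="Webs">
        <SelectParameters>
            <asp:Parameter Name="WebID" DefaultValue="RootWeb" />
        </SelectParameters>
    </SharePoint:SPDataSource>
    ```
    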

    SPDataSource will return one row for each site found, and for each site, it will return attributes that correspond to the properties of SPWeb, with the __sp prefix. They are:



    ListOfLists returns the lists of a given site. It expects a WebID (same values as in the Webs mode) or a WebURL (relative address of a subsite) parameter; if none is supplied, it defaults to the current site. This mode allows restricting the lists to return by applying a CAML query to the SelectCommand property. For example, the following query filters lists of template 101 (Document Library) in any of the subsites:

    <Webs Scope='Recursive'></Webs> <Lists ServerTemplate='101'></Lists>
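    Putting that query into markup could look like this sketch; note that the CAML has to be HTML-encoded inside the attribute:

    ```aspx
    <SharePoint:SPDataSource runat="server" ID="lists" DataSourceMode="ListOfLists"
        SelectCommand="&lt;Webs Scope='Recursive'&gt;&lt;/Webs&gt;&lt;Lists ServerTemplate='101'&gt;&lt;/Lists&gt;">
        <SelectParameters>
            <asp:Parameter Name="WebID" DefaultValue="RootWeb" />
        </SelectParameters>
    </SharePoint:SPDataSource>
    ```
    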

    SPDataSource returns the following fields, which are equivalent to the properties with the same name in the SPList class, minus the __sp prefix:



    CrossList performs a query in a number of lists, in pretty much the same way as SPSiteDataQuery does, returning items from these lists. It needs a CAML query in SelectCommand, one that you would pass to SPQuery, and you should specify there the fields to return and an optional query, like the following, which returns all items created by the current user:

    <Query><Where><Eq><FieldRef Name='Author'/><Value Type='Number'><UserID/></Value></Eq></Where></Query>
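    Configuring this in code-behind avoids having to HTML-encode the CAML inside a markup attribute; here is a sketch using the server object model (the Webs and Lists elements, with 101 = Document Library, are illustrative restrictions on which lists are searched):

    ```csharp
    // Sketch: a CrossList SPDataSource built in code-behind.
    var source = new SPDataSource
    {
        DataSourceMode = SPDataSourceMode.CrossList,
        SelectCommand =
            "<Webs Scope='Recursive'></Webs>" +
            "<Lists ServerTemplate='101'></Lists>" +
            "<Query><Where><Eq><FieldRef Name='Author'/>" +
            "<Value Type='Number'><UserID/></Value></Eq></Where></Query>"
    };
    ```
    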

    Regardless of those, it will always return the following fields:



    List returns items from a single list. It expects a WebID or WebURL parameter, defaulting to the current site if none is present, plus ListName, which is, of course, mandatory, and optionally RootFolder or FolderID (for a starting folder name, from SPFolder.Name, like Path/SubPath, or an id, from SPFolder.UniqueId). Do set the UseInternalName and IncludeHidden properties to true in SPDataSource. There's another property that affects the results of List, which is Scope; it is similar to its SPQuery counterpart. Its possible values are:

    • Default: shows files and subfolders in the starting folder;
    • Recursive: shows all files in all folders;
    • RecursiveAll: shows all files and all folders;
    • FilesOnly: shows all files in the starting folder.
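    Tying these together, a List-mode declaration could look like the following sketch (the list name and the choice of Recursive scope are illustrative):

    ```aspx
    <SharePoint:SPDataSource runat="server" ID="listItems" DataSourceMode="List"
        UseInternalName="true" IncludeHidden="true" Scope="Recursive">
        <SelectParameters>
            <asp:Parameter Name="ListName" DefaultValue="Documents" />
        </SelectParameters>
    </SharePoint:SPDataSource>
    ```
    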

    The fields that are returned depend on the actual list, but the following should always be present:



    Finally, ListItem returns the fields of a single list item. You need to feed SPDataSource the WebID/WebURL, ListName and ListItemID/ListItemGUID parameters, where ListItemGUID corresponds to the UniqueId property of SPListItem and ListItemID to ID. The fields that are returned depend on the list and content type, of course, but the following are always returned, in the case of a list:


    Or of a document library:


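    A ListItem-mode declaration could look like this sketch (the list name and item id are illustrative):

    ```aspx
    <SharePoint:SPDataSource runat="server" ID="singleItem" DataSourceMode="ListItem">
        <SelectParameters>
            <asp:Parameter Name="ListName" DefaultValue="Documents" />
            <asp:Parameter Name="ListItemID" DefaultValue="1" />
        </SelectParameters>
    </SharePoint:SPDataSource>
    ```
    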
    Parameters are supplied in the SelectParameters collection and are normally one or more of:

    In SelectCommand, you can use these parameters by referring to their Name inside {}, for example:

    <View><RowLimit>{Count}</RowLimit></View>

    Accessing Results

    You can access the results in one of several ways:

    • Bind the SPDataSource to a data web part that works with XSLT, such as DataFormWebPart;
    • Use a grid control, such as SPGridView, to display fields, but you have to add the columns yourself; one disadvantage is that you cannot see the contents of complex properties, such as __spParentWeb (the ParentWeb property of SPWeb);
    • Use a data-bound control instead, such as Repeater, where you can access public properties of the returned fields using Eval, for example <%# Eval("__spParentWeb.Url") %>.
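    As a sketch of the last option, a Webs-mode source bound to a Repeater might look like this (the __spParentWeb.Url expression is the one mentioned above; the rest is illustrative):

    ```aspx
    <SharePoint:SPDataSource runat="server" ID="subsites" DataSourceMode="Webs">
        <SelectParameters>
            <asp:Parameter Name="WebID" DefaultValue="SubWeb" />
        </SelectParameters>
    </SharePoint:SPDataSource>
    <asp:Repeater runat="server" DataSourceID="subsites">
        <ItemTemplate>
            <%# Eval("__spParentWeb.Url") %><br />
        </ItemTemplate>
    </asp:Repeater>
    ```
    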


    Together with DataFormWebPart, SPDataSource is a precious tool, one that offers great flexibility without requiring code. Learn to use it and it will become a great friend!