Development With A Dot

Blog on development in general, and specifically on .NET. Created and maintained by Ricardo Peres.

  • Entity Framework Pitfalls: Command Interceptors and Identity Keys

    The IDbCommandInterceptor (sorry, no official documentation) interface was introduced in Entity Framework 6 as part of the new interception and logging API, and allows the interception of the SQL and its parameters that are sent to the database as the result of DbContext CRUD operations.

    It offers two methods for each of the basic ADO.NET operations, those defined in DbCommand, one called before the operation is executed, and the other called afterwards:

    • ExecuteNonQuery (UPDATEs, INSERTs, DELETEs) –> NonQueryExecuting, NonQueryExecuted;
    • ExecuteScalar (SELECTs returning a single value) –> ScalarExecuting, ScalarExecuted;
    • ExecuteReader (SELECTs returning rows) –> ReaderExecuting, ReaderExecuted.


    As usual, the methods ending with “ing” are executed before and those ending with “ed” are executed afterwards, always synchronously.

    One might naively assume that INSERTs would always trigger a NonQueryExecuting/NonQueryExecuted call, and indeed it is so, unless we have an IDENTITY primary key, in which case, Entity Framework will instead call ReaderExecuting/ReaderExecuted. It’s easy to understand why: when we use an IDENTITY, we need to retrieve the generated value immediately after the INSERT, hence Entity Framework will generate code like:

       INSERT INTO [dbo].[SomeTable] (...)
       VALUES (...)
       SELECT [Id]
       FROM [dbo].[SomeTable]
       WHERE @@ROWCOUNT > 0 AND [Id] = SCOPE_IDENTITY()

    The INSERT and SELECT are combined in the same command, which justifies the choice of ReaderExecuting/ReaderExecuted. Because in Entity Framework we always use IDENTITY – don’t we? –, it’s ReaderExecuting/ReaderExecuted that you should be implementing if you want to change the INSERT SQL or its parameters.
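
    As an illustration, here is a minimal sketch of such an interceptor (the class is mine, not from the post); note that it implements ReaderExecuting precisely because of the IDENTITY behavior described above:

       using System;
       using System.Data.Common;
       using System.Data.Entity.Infrastructure.Interception;

       public sealed class InsertInterceptor : IDbCommandInterceptor
       {
           public void ReaderExecuting(DbCommand command, DbCommandInterceptionContext<DbDataReader> interceptionContext)
           {
               //INSERTs on tables with IDENTITY keys arrive here as INSERT + SELECT batches
               if (command.CommandText.StartsWith("INSERT", StringComparison.OrdinalIgnoreCase))
               {
                   //inspect or tweak command.CommandText and command.Parameters here
               }
           }

           public void ReaderExecuted(DbCommand command, DbCommandInterceptionContext<DbDataReader> interceptionContext) { }
           public void NonQueryExecuting(DbCommand command, DbCommandInterceptionContext<Int32> interceptionContext) { }
           public void NonQueryExecuted(DbCommand command, DbCommandInterceptionContext<Int32> interceptionContext) { }
           public void ScalarExecuting(DbCommand command, DbCommandInterceptionContext<Object> interceptionContext) { }
           public void ScalarExecuted(DbCommand command, DbCommandInterceptionContext<Object> interceptionContext) { }
       }

    The interceptor is registered once, for example at application startup, with DbInterception.Add(new InsertInterceptor()).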


  • SQL Server Auditing

    Probably the best mechanism for auditing changes to tables in SQL Server is Change Data Capture, which I already covered here. Unfortunately, it only works with the Enterprise editions of SQL Server, something that not everyone has access to.

    I once had to implement a similar solution that had to work on any edition of SQL Server, which is what I am going to talk about. Its only requirement was that, for any changes made to target tables, the old values would be stored in a shared history table, together with the timestamp and the user who made the change.

    Let’s imagine we have some table, anything will do:

       CREATE TABLE dbo.audit_target
       (
           id INT IDENTITY PRIMARY KEY,
           a NVARCHAR(50),
           b INT
       )

    As you can see, it’s just a regular table with a single-column primary key and a couple of other columns; any number and type of columns will do.

    Now let’s create a table for holding the change history:

       CREATE TABLE dbo.history
       (
           id INT IDENTITY PRIMARY KEY,
           [table_name] NVARCHAR(30) NOT NULL,
           table_id INT NOT NULL,
           [changes] NVARCHAR(MAX),
           [timestamp] SMALLDATETIME NOT NULL DEFAULT GETDATE(),
           [user] NVARCHAR(30) NOT NULL
       )

    The table_name column will hold the name of the table whose changes we are tracking; table_id, the id of the updated row; changes will get an XML fragment containing all the old values; timestamp, the date and time the change occurred; and user, the person who made the change (more on this later).

    OK, next we need a stored procedure to populate the changes table:

       CREATE PROCEDURE dbo.generic_audit_procedure
       (
           @procid INT,
           @xml XML
       )
       AS
       BEGIN
           SET NOCOUNT ON

           -- check if there are changes, in which case, xml will not be empty
           IF ((@xml IS NULL) OR (CAST(@xml AS NVARCHAR(MAX)) = N''))
           BEGIN
               RETURN 0
           END

           -- get the current table id for the current process (trigger)
           DECLARE @tablename NVARCHAR(30)

           SELECT @tablename = OBJECT_NAME(t.id)
           FROM sys.sysobjects p
           INNER JOIN sys.sysobjects t
           ON p.parent_obj = t.id
           WHERE p.id = @procid

           IF ((@tablename IS NULL) OR (@tablename = N''))
           BEGIN
               RAISERROR(N'Could not get table name', 16, 16)
               ROLLBACK TRAN
               RETURN
           END

           -- get the primary key column for the current table (assumes it is the IDENTITY column)
           DECLARE @idname NVARCHAR(30)

           SELECT @idname = name
           FROM sys.syscolumns
           WHERE id = OBJECT_ID(@tablename)
           AND colstat = 1

           IF ((@idname IS NULL) OR (@idname = ''))
           BEGIN
               RAISERROR(N'Could not get id column for the current table', 16, 16)
               ROLLBACK TRAN
               RETURN
           END

           -- get the current user from the context
           DECLARE @username NVARCHAR(30)

           SELECT @username = CONVERT(NVARCHAR(30), CONTEXT_INFO())

           IF ((@username IS NULL) OR (@username = ''))
           BEGIN
               RAISERROR(N'Could not get current username', 16, 16)
               ROLLBACK TRAN
               RETURN
           END

           DECLARE @tmp TABLE([key] NVARCHAR(MAX), [value] XML)

           INSERT INTO @tmp SELECT q.[key], q.[value]
           FROM
           (
               SELECT T.N.value(N'(*[local-name(.)=sql:variable("@idname")])[1]', N'INT') AS [key], T.N.query(N'*') AS [value]
               FROM @xml.nodes('/row') AS T(N)
           ) q

           DECLARE @ROWS INT
           SET @ROWS = 0

           -- loop all modified records
           WHILE (1 = 1)
           BEGIN
               DECLARE @id NVARCHAR(MAX)
               DECLARE @value XML

               -- pick the first record
               SELECT TOP 1 @id = [key], @value = [value]
               FROM @tmp

               IF (@@ROWCOUNT = 0)
               BEGIN
                   BREAK
               END

               -- insert into the shared table
               INSERT INTO dbo.history (table_name, table_id, [user], [changes])
               VALUES (@tablename, @id, @username, CAST(@value AS NVARCHAR(MAX)))

               -- increment the updated rows
               SET @ROWS = @ROWS + @@ROWCOUNT

               -- remove the processed record
               DELETE FROM @tmp
               WHERE [key] = @id
           END

           RETURN @ROWS
       END

    The generic_audit_procedure will get the current table from the current process id, as passed in the @procid parameter (populated from @@PROCID by the calling trigger), the current user from CONTEXT_INFO (again, more on this later), and the changes from the @xml parameter, and will insert everything into the history table.

    And the final part, a trigger to “glue” the changes made to a table to the stored procedure:

       CREATE TRIGGER dbo.generic_audit_trigger
       ON dbo.audit_target -- replace this with other target tables' names
       AFTER UPDATE
       AS
       BEGIN
           SET NOCOUNT ON

           -- get the old values
           -- this needs to be done in the trigger
           DECLARE @xml XML
           SET @xml = (SELECT * FROM deleted FOR XML PATH('row'))

           -- call the generic stored procedure
           EXEC dbo.generic_audit_procedure @@PROCID, @xml
       END

    This trigger will run after changes are accepted (AFTER UPDATE) and will cast the old values (the DELETED pseudo-table, which holds the rows as they were before the update) as XML, and then invoke the generic_audit_procedure stored procedure with it as an argument, together with the current process id (@@PROCID). Of course, you need to create this trigger for every table that you wish to audit.

    The only thing left is to provide a way for SQL Server to know the name of the current user. We do that by executing the following statements (in my case, this was an automated process performed by the data access library):

       DECLARE @username VARBINARY(128) = CAST(N'rjperes' AS VARBINARY(128))
       SET CONTEXT_INFO @username
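
    As a sketch of how a data access library might automate this (the AuditContext helper and its names are hypothetical), assuming an already-open SqlConnection:

       using System;
       using System.Data;
       using System.Data.SqlClient;
       using System.Text;

       public static class AuditContext
       {
           public static void SetUser(SqlConnection connection, String username)
           {
               using (var command = connection.CreateCommand())
               {
                   //SET CONTEXT_INFO accepts a variable, and a parameter behaves as one
                   command.CommandText = "SET CONTEXT_INFO @username";

                   var parameter = command.CreateParameter();
                   parameter.ParameterName = "@username";
                   parameter.DbType = DbType.Binary;
                   //Unicode bytes, to match the CAST(N'...' AS VARBINARY(128)) above
                   parameter.Value = Encoding.Unicode.GetBytes(username);
                   command.Parameters.Add(parameter);

                   command.ExecuteNonQuery();
               }
           }
       }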

    So, whenever a change is made to a table, it will appear as this:

    [image: rows in the history table showing the audited changes]

    You can see that the changes column will contain an XML node with all the old values that were changed by the update made to the target table.

    In a nutshell:

    1. We add to each target table a trigger that calls a generic stored procedure;
    2. The application saves the current user in CONTEXT_INFO;
    3. Data in one of the target tables is modified;
    4. The old values end up in the shared auditing (history) table.


    There are some limitations (and possible improvements) to this implementation:

    • Only single-column primary key target tables are supported;
    • All of the “old” values are recorded, not just those that changed;
    • It only keeps the “old” values, not the new ones, although this is by design and easy to change;
    • Binary columns could use a better treatment;
    • Doesn’t keep track of deleted or inserted records;
    • Doesn’t track schema changes (add/drop/modify columns, etc);
    • It is necessary to set CONTEXT_INFO explicitly with the application user that we want to record;
    • Although I didn’t implement it, it would be easy to add a version column to the history table, to keep track of how many changes were made to each target table.

    However, it suited what I needed perfectly; maybe it will work for you too!


  • Custom Entity Framework Code First Convention for Discriminator Values

    Since version 6, Entity Framework Code First allows the injection of custom conventions. These conventions define rules that will be applied by default to all mapped entities and properties, unless explicitly changed.

    The conventions API includes a couple of interfaces: IConvention (a marker only, which should always be included), IConceptualModelConvention<T> (for the conceptual space of the model) and IStoreModelConvention<T> (for the store, or physical, side of the model). Also worthy of mention are the convenience class Convention, which allows access to all mapped types and properties and doesn’t override any of the other conventions, and TypeAttributeConfigurationConvention<T>, for tying a convention to a custom attribute. Some of the included attributes leverage these interfaces to configure some aspects of the mappings at design time; other configuration needs to be done explicitly in an override of OnModelCreating.

    Entity Framework permits using a column for distinguishing between different types when the Table Per Class Hierarchy / Single Table Inheritance pattern (please see Entity Framework Code First Inheritance for more information) is used for mapping a hierarchy of classes to a single table, as part of “soft delete” solutions, or, less commonly, for differentiating between multiple tenants. This column is called a discriminator.

    In order to configure an entity to use a discriminator column, there is no out-of-the-box attribute, so we must resort to code configuration:

       protected override void OnModelCreating(DbModelBuilder modelBuilder)
       {
           modelBuilder.Entity<MyMultiTenantEntity>().Map(m => m.Requires("tenant_id").HasValue("first_tenant"));

           base.OnModelCreating(modelBuilder);
       }

    Because there’s really no need to keep repeating this code, let’s implement an attribute for indicating a discriminator column in an entity:

       [Serializable]
       [AttributeUsage(AttributeTargets.Class, AllowMultiple = false, Inherited = true)]
       public sealed class DiscriminatorAttribute : Attribute
       {
           public DiscriminatorAttribute(String columnName, Object discriminatorValue)
           {
               this.ColumnName = columnName;
               this.DiscriminatorValue = discriminatorValue;
           }

           public String ColumnName { get; private set; }

           public Object DiscriminatorValue { get; private set; }

           public override Boolean Equals(Object obj)
           {
               var other = obj as DiscriminatorAttribute;

               if (other == null)
               {
                   return (false);
               }

               return ((this.ColumnName == other.ColumnName) && (Object.Equals(this.DiscriminatorValue, other.DiscriminatorValue) == true));
           }

           public override Int32 GetHashCode()
           {
               return (String.Concat(this.ColumnName, ":", this.DiscriminatorValue).GetHashCode());
           }
       }

    As you can see, the DiscriminatorAttribute attribute can only be applied to a class, at most once. This makes sense, because most likely you will only have a single discriminator column per entity:

       [Discriminator("tenant_id", "first_tenant")]
       public class MyMultiTenantEntity
       {
           //...
       }

    You need to specify both a column name and a discriminator value, which can be of any type: usually, a string or an integer.

    Now, let’s write a custom convention that knows how to handle our custom attribute and perform the mapping:

    WARNING! DYNAMICS AND REFLECTION AHEAD!

    PROCEED WITH CAUTION!

       public sealed class DiscriminatorConvention : TypeAttributeConfigurationConvention<DiscriminatorAttribute>
       {
           private static readonly MethodInfo entityMethod = typeof(DbModelBuilder).GetMethod("Entity");
           private static readonly MethodInfo hasValueMethod = typeof(ValueConditionConfiguration).GetMethods().Single(m => (m.Name == "HasValue") && (m.IsGenericMethod == false));

           private readonly DbModelBuilder modelBuilder;
           private readonly ISet<Type> types = new HashSet<Type>();

           public DiscriminatorConvention(DbModelBuilder modelBuilder)
           {
               this.modelBuilder = modelBuilder;
           }

           public override void Apply(ConventionTypeConfiguration configuration, DiscriminatorAttribute attribute)
           {
               if (this.types.Contains(configuration.ClrType) == true)
               {
                   //if the type has already been processed, bail out
                   return;
               }

               //add the type to the list of processed types
               this.types.Add(configuration.ClrType);

               dynamic entity = entityMethod.MakeGenericMethod(configuration.ClrType).Invoke(modelBuilder, null);

               Action<dynamic> action = arg =>
               {
                   var valueConditionConfiguration = arg.Requires(attribute.ColumnName);
                   hasValueMethod.Invoke(valueConditionConfiguration, new Object[] { attribute.DiscriminatorValue });
               };

               entity.Map(action);
           }
       }

    This class uses a bit of dynamics and reflection because types are not known at compile time, and hence we cannot use generics directly. Because the Apply method will be called multiple times, we need to keep track of which entities have already been processed by this convention, so as to avoid reprocessing them. We need to pass in the instance of DbModelBuilder, because otherwise our custom convention would have no way to apply the mapping, but I think it is a reasonable trade-off.

    Et voilà! In order to make use of it, we need to register the convention in OnModelCreating:

       protected override void OnModelCreating(DbModelBuilder modelBuilder)
       {
           modelBuilder.Conventions.Add(new DiscriminatorConvention(modelBuilder));

           base.OnModelCreating(modelBuilder);
       }

    And that’s it! Happy conventions!


  • Entity Framework Pitfalls: Mapping Discriminator Columns

    When you use a discriminator column, that is, a column that holds a value that tells Entity Framework what type the row refers to, or what restriction it shall use when querying the entity, you cannot map this column as a property in your entity.

    For example, imagine you want to use “soft deletes”, that is, use a database column to represent the “deleted” state of a record instead of actually physically deleting it from the table; you might have an IS_DELETED column of some integer type, that would hold either a 1 or a 0, depending on whether the record is deleted (1) or not (0). In that case, you would build a mapping like this in OnModelCreating:

       modelBuilder.Entity<MySoftDeletableEntity>().Map<MySoftDeletableEntity>(m => m.Requires("is_deleted").HasValue(0));

    However, you won’t be able to have a corresponding IsDeleted property in the MySoftDeletableEntity, because Entity Framework will throw an exception complaining about it. It is somewhat sad, because it might be useful to refer to it, but that’s the way it is.
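
    If you do need to read the value, one possible workaround (a sketch only; ctx is a DbContext instance, and the table and column names are illustrative) is to bypass the mapping and query it directly:

       //is_deleted cannot be mapped, but it can still be read with raw SQL
       var isDeleted = ctx.Database
           .SqlQuery<Int32>("SELECT is_deleted FROM MY_TABLE WHERE id = @p0", id)
           .Single();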


  • Lesser-Known NHibernate Features: Result Transformers

    A result transformer, in NHibernate, is some class that implements the IResultTransformer interface:

       public interface IResultTransformer
       {
           IList TransformList(IList collection);
           Object TransformTuple(Object[] tuple, String[] aliases);
       }

    Most query APIs, except LINQ, support specifying a result transformer. So, what is a result transformer used for? Just what the name says: it turns the values obtained from a query into some object. Normally, we just let NHibernate transform these values into instances of our entities, but we may want to do something different, either because we haven’t mapped some class that we want to use, or because we are not returning all of the entity’s properties, etc.

    NHibernate includes some result transformers:

    • AliasToBeanResultTransformer: allows transforming a result into a user-specified class, which will be populated via setter methods or fields matching the alias names;
    • AliasToBeanConstructorResultTransformer: identical to AliasToBeanResultTransformer, but we specify a constructor for creating new instances of the target class;
    • AliasToEntityMapResultTransformer: returns a dictionary where the keys are the aliases and the values the corresponding columns;
    • AliasedTupleSubsetResultTransformer: ignores a tuple element if its corresponding alias is null;
    • CacheableResultTransformer: used to transform tuples into values that can be cached;
    • DistinctRootEntityResultTransformer: for joined queries, returns distinct root entities only;
    • PassThroughResultTransformer: just returns the row as it was obtained from the database;
    • RootEntityResultTransformer: returns the root entity of a joined query;
    • ToListResultTransformer: transforms each result row from a tuple into an IList, such that what you end up with is an IList of ILists.


    All of these can be obtained from static properties in the NHibernate.Transform.Transformers class. NHibernate implicitly uses some of these; for example, LINQ queries always use DistinctRootEntityResultTransformer.
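
    For instance, this is how one of the built-in transformers might be used (ProductDTO being a hypothetical unmapped class with properties matching the aliases):

       var dtos = session
           .CreateSQLQuery("select p.Name as Name, p.Price as Price from Product p")
           .SetResultTransformer(Transformers.AliasToBean<ProductDTO>())
           .List<ProductDTO>();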

    It is easy to build our own transformer. Have a look at the following example:

       public class ExpressionsResultTransformer : IResultTransformer
       {
           private readonly Type type;
           private readonly Func<Object> constructorFunc;
           private readonly ConstructorInfo constructor;
           private readonly Object[] parameters;
           private readonly Dictionary<Int32, String> expressions = new Dictionary<Int32, String>();
           private PropertyDescriptorCollection props;

           public ExpressionsResultTransformer(ConstructorInfo constructor, params Object[] parameters)
           {
               this.constructor = constructor;
               this.parameters = parameters;
           }

           public ExpressionsResultTransformer(Func<Object> constructorFunc)
           {
               this.constructorFunc = constructorFunc;
           }

           public ExpressionsResultTransformer(Type type)
           {
               this.type = type;
           }

           private Object CreateInstance()
           {
               if (this.type != null)
               {
                   return (Activator.CreateInstance(this.type));
               }
               else if (this.constructorFunc != null)
               {
                   return (this.constructorFunc());
               }
               else
               {
                   return (this.constructor.Invoke(this.parameters));
               }
           }

           public ExpressionsResultTransformer Add(Int32 index, String property)
           {
               this.expressions[index] = property;
               return (this);
           }

           public ExpressionsResultTransformer Add(params String[] properties)
           {
               foreach (var property in properties)
               {
                   this.Add(property);
               }

               return (this);
           }

           public ExpressionsResultTransformer Add(String property)
           {
               var max = this.expressions.Keys.DefaultIfEmpty(-1).Max();
               return (this.Add(max + 1, property));
           }

           public static ExpressionsResultTransformer Add<T>(params String[] properties) where T : new()
           {
               return (new ExpressionsResultTransformer<T>().Add(properties));
           }

           #region IResultTransformer Members

           IList IResultTransformer.TransformList(IList collection)
           {
               return (collection);
           }

           Object IResultTransformer.TransformTuple(Object[] tuple, String[] aliases)
           {
               var entity = this.CreateInstance();

               if (this.props == null)
               {
                   this.props = TypeDescriptor.GetProperties(entity);
               }

               foreach (var expression in this.expressions)
               {
                   this.props[expression.Value].SetValue(entity, tuple[expression.Key]);
               }

               return (entity);
           }

           #endregion
       }

       public sealed class ExpressionsResultTransformer<T> : ExpressionsResultTransformer where T : new()
       {
           public ExpressionsResultTransformer() : base(typeof(T))
           {
           }

           public ExpressionsResultTransformer(Func<T> constructor) : base(new Func<Object>(() => (Object) constructor()))
           {
           }

           public ExpressionsResultTransformer(ConstructorInfo constructor, params Object[] parameters) : base(constructor, parameters)
           {
           }
       }

    The TransformTuple method is the one used to turn each returned record into an instance of something. TransformList is called at the end, when all the records have been processed.

    The ExpressionsResultTransformer class allows us to select which indexes, in the database record, map to which properties in some entity. For our convenience, it offers a number of options to construct an instance (type, constructor plus parameters, or delegate). We would use it like this:

       var products = session.CreateSQLQuery("select p.Name, p.Price, p.ProductId from Product p").SetResultTransformer(ExpressionsResultTransformer.Add<Product>("Name", "Price", "ProductId")).List<Product>();


  • Getting the SQL for HQL and Criteria Queries

    OK, so, I already showed how we can get the SQL that was generated from a LINQ query. Of course, we can do the same for both HQL and Criteria APIs as well (QueryOver is just a wrapper around Criteria, mind you).

    So, for HQL (and SQL), it goes like this:

       public static class QueryExtensions
       {
           private static readonly PropertyInfo sessionProperty = typeof(QueryImpl).GetProperty("Session", BindingFlags.NonPublic | BindingFlags.Instance);

           public static String ToSql(this IQuery query)
           {
               if (query is QueryImpl)
               {
                   var session = sessionProperty.GetValue(query, null) as ISession;
                   var sessionImpl = session.GetSessionImplementation();
                   var translatorFactory = new ASTQueryTranslatorFactory();
                   var translators = translatorFactory.CreateQueryTranslators(query.QueryString, null, false, sessionImpl.EnabledFilters, sessionImpl.Factory);

                   return translators[0].SQLString;
               }
               else if (query is SqlQueryImpl)
               {
                   return (query.QueryString);
               }

               throw (new NotSupportedException("Query type is not supported."));
           }
       }

    You can pass any implementation of IQuery, such as one produced from ISession.CreateQuery() or ISession.CreateSQLQuery(). The static field is merely there for performance reasons.
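
    Usage might look like this (the entity and the HQL are illustrative):

       var sql = session.CreateQuery("from Product p where p.Price > :price").ToSql();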

    As for Criteria:

       public static class CriteriaExtensions
       {
           public static String ToSql(this ICriteria criteria)
           {
               var criteriaImpl = criteria as CriteriaImpl;
               var sessionImpl = criteriaImpl.Session;
               var factory = sessionImpl.Factory;
               var implementors = factory.GetImplementors(criteriaImpl.EntityOrClassName);
               var loader = new CriteriaLoader(factory.GetEntityPersister(implementors[0]) as IOuterJoinLoadable, factory, criteriaImpl, implementors[0], sessionImpl.EnabledFilters);

               return loader.SqlString.ToString();
           }
       }

    And finally, QueryOver, just a small wrapper around the Criteria version:

       public static class QueryOverExtensions
       {
           public static String ToSql(this IQueryOver queryOver)
           {
               var criteria = queryOver.UnderlyingCriteria;
               return (criteria.ToSql());
           }
       }

    Hope you find this useful!


  • Adding Support for ISupportInitialize in NHibernate

    The .NET ISupportInitialize interface is used when we want to support staged initialization for objects. Its BeginInit method is called when initialization is about to start and EndInit when it is finished.

    If we want, it is easy to add support for it in NHibernate. An option would be:

    • BeginInit is called when the object is instantiated, like when NHibernate has loaded a record from the database and is about to hydrate the entity, and immediately after the Id property is set;
    • EndInit is called after all properties are set.
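
    For example, an entity could use these calls to know when it is being hydrated (a hypothetical Product class; the hydrating flag is mine):

       using System;
       using System.ComponentModel;

       public class Product : ISupportInitialize
       {
           //true while NHibernate is setting the entity's properties
           private Boolean hydrating;

           public virtual Int32 ProductId { get; set; }
           public virtual String Name { get; set; }

           //called right after instantiation, before the properties are set
           public virtual void BeginInit() { this.hydrating = true; }

           //called after all properties are set
           public virtual void EndInit() { this.hydrating = false; }
       }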

    We do this by using a custom interceptor, as we have done in the past. We start by writing a class that inherits from EmptyInterceptor and implements the listener interface for the PostLoad event, IPostLoadEventListener:

       public sealed class SupportInitializeInterceptor : EmptyInterceptor, IPostLoadEventListener
       {
           private ISession session = null;

           public override void SetSession(ISession session)
           {
               this.session = session;
               base.SetSession(session);
           }

           public override Object Instantiate(String clazz, EntityMode entityMode, Object id)
           {
               var listeners = (this.session.SessionFactory as SessionFactoryImpl).EventListeners;
               var metadata = this.session.SessionFactory.GetClassMetadata(clazz);
               var proxy = metadata.Instantiate(id, entityMode);
               var initializable = proxy as ISupportInitialize;

               if (initializable != null)
               {
                   initializable.BeginInit();
               }

               if (listeners.PostLoadEventListeners.OfType<SupportInitializeInterceptor>().Any() == false)
               {
                   listeners.PostLoadEventListeners = listeners.PostLoadEventListeners.Concat(new IPostLoadEventListener[] { this }).ToArray();
               }

               return (proxy);
           }

           #region IPostLoadEventListener Members

           void IPostLoadEventListener.OnPostLoad(PostLoadEvent @event)
           {
               var initializable = @event.Entity as ISupportInitialize;

               if (initializable != null)
               {
                   initializable.EndInit();
               }
           }

           #endregion
       }

    Then, before creating a session factory, we need to register it in the Configuration instance:

       var sessionFactory = cfg.SetInterceptor(new SupportInitializeInterceptor()).BuildSessionFactory();

    Now, if your entity implements ISupportInitialize, NHibernate will automagically call its methods at the proper time. As simple as this!


  • Lesser-Known NHibernate Features – Generating Database Scripts

    As you may know, NHibernate knows how to generate the database for you from its model. Any of the supported databases will do.

    What you may not know is that you can simply generate the SQL scripts that would be used to either generate the database from scratch or just update it to match the model as it is. For that you use the SchemaExport and SchemaValidator classes.

    SchemaValidator takes a Configuration instance and basically throws an exception if, when its Validate method is called, the database does not match the model.

    SchemaExport has far more options; it can:

    • Create or drop a database model based on the entity model;
    • Output the SQL to a file, either executing it at the same time or not;
    • Execute the SQL to another arbitrary database connection;
    • Execute a custom action after each line of the generated SQL script.

    An example:

       var validator = new SchemaValidator(cfg);

       try
       {
           validator.Validate();
       }
       catch
       {
           var export = new SchemaExport(cfg).SetOutputFile("Script.sql");

           //send the script to standard output and execute it
           export.Execute(useStdOut: true, execute: true, justDrop: false);
           //or
           //send all lines of the script to the System.Console.WriteLine method
           export.Execute(scriptAction: System.Console.WriteLine, execute: false, justDrop: false);
       }
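
    Not covered by the example above, but related: for incremental updates, NHibernate also ships the SchemaUpdate class, which only emits the statements needed to bring the database in line with the model. A minimal sketch, reusing the same cfg Configuration instance:

       var update = new SchemaUpdate(cfg);

       //write the pending statements to standard output without applying them
       //(the arguments are useStdOut and doUpdate)
       update.Execute(true, false);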


  • Unity, Part 9: Integration With Managed Extensibility Framework

    This time, I will be talking about integrating Unity with Managed Extensibility Framework (MEF). You can find the other posts in the series here (how to use Unity in a web application), here (adding Interfaces), here (registration by convention), here (injecting values), here (extensions), here (aspect-oriented programming), here (dependency injection) and the first one here (introduction).

    The Managed Extensibility Framework (MEF) has been around since the release of .NET 4.0, and even before that as a beta, stand-alone package. Basically, it provides an extensible mechanism for detecting and loading plugins. It’s easier to use than the similarly-named Managed Add-In Framework (MAF), and even though it’s not as feature-rich (it doesn’t support sandboxing, for one), unlike MAF, it is still very much alive!

    So, what does MEF offer that can be of use to Unity? Well, MEF knows how to locate exports/plugins from a number of locations, like assemblies and file system directories. It’s just a matter of finding the exports we’re interested in and registering them with Unity.

    An export in MEF is some class that is decorated with an ExportAttribute (technically speaking, this is just when using the Attributed Programming Model; since .NET 4.5 there is also the Convention-Based Programming Model). This attribute allows specifying the type to export (ContractType) and also the contract name (ContractName). This matches closely the Unity/IoC concepts of contract type and name.

    We can find all the exports in a given assembly using an AssemblyCatalog, a particular implementation of ComposablePartCatalog:

       var catalog = new AssemblyCatalog("some path");
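
    And if we want to scan all the assemblies in a folder instead, there is also DirectoryCatalog (the path here is illustrative):

       var catalog = new DirectoryCatalog(@"..\Plugins");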

    A couple of helper functions for picking up the export’s contract type and name, by leveraging the ReflectionModelServices class:

       public static IDictionary<String, Type> GetExportedTypes<T>(this ComposablePartCatalog catalog)
       {
           return (GetExportedTypes(catalog, typeof(T)));
       }

       public static IDictionary<String, Type> GetExportedTypes(this ComposablePartCatalog catalog, Type type)
       {
           return (catalog.Parts.Where(part => IsComposablePart(part, type) == true).ToDictionary(part => part.ExportDefinitions.First().ContractName, part => ReflectionModelServices.GetPartType(part).Value));
       }

       private static Boolean IsComposablePart(ComposablePartDefinition part, Type type)
       {
           return (part.ExportDefinitions.Any(def => (def.Metadata.ContainsKey("ExportTypeIdentity") == true) && (def.Metadata["ExportTypeIdentity"].Equals(type.FullName) == true)));
       }

    This will return a collection of key-value pairs, where the key is the contract name and the value the contract type; this is so there can be multiple contract names for a given contract type. After we have this, it’s just a matter of iterating the results and registering each occurrence:

       var type = typeof(ISomeType);
       var exports = catalog.GetExportedTypes(type);

       foreach (var entry in exports)
       {
           unity.RegisterType(type, entry.Value, entry.Key);
       }

    So, given the following contract and implementations:

       public interface ISomeType
       {
           void SomeMethod();
       }

       [Export("Some", typeof(ISomeType))]
       public class SomeImplementation : ISomeType
       {
           public void SomeMethod() { }
       }

       [Export("Another", typeof(ISomeType))]
       public class AnotherImplementation : ISomeType
       {
           public void SomeMethod() { }
       }

    We can obtain a specific contract type implementation given its name:

       var myImplementation = unity.Resolve<ISomeType>("Some");

    And also all implementations of the contract that were found:

       var all = unity.ResolveAll<ISomeType>();

    This can be enhanced in a couple of ways:

    • Use a Unity extension to automatically find and register exports at runtime;
    • Make use of MEF metadata to tell Unity which lifetime managers to use, and other useful properties, such as the default implementation for the contract type.

    As usual, I’m looking forward to your comments!


  • ASP.NET Web Forms Prompt Validator

    For those still using Web Forms and Microsoft’s validation framework, like yours truly - and I know you’re out there! -, it is very easy to implement custom validation by leveraging the CustomValidator control. It allows us to specify both a client-side validation JavaScript function and a server-side validation event handler.

    In the past, I had to ask for confirmation before a form was actually submitted; the native way to ask for confirmation is through the browser’s confirm function, which basically displays a user-supplied message and two buttons, OK and Cancel. I wrapped it in a custom reusable validation control, which I am providing here:

       [DefaultProperty("PromptMessage")]
       public sealed class PromptValidator : CustomValidator
       {
           [DefaultValue("")]
           public String PromptMessage { get; set; }

           protected override void OnPreRender(EventArgs e)
           {
               var message = String.Concat("\"", this.PromptMessage, "\"");

               if ((this.PromptMessage.Contains("{0}") == true) && (this.ControlToValidate != String.Empty))
               {
                   message = String.Concat("String.format(\"", this.PromptMessage, "\", args.Value)");
               }

               this.ClientValidationFunction = String.Concat("new Function('sender', 'args', 'args.IsValid = confirm(", message, ")')");
               this.EnableClientScript = true;

               base.OnPreRender(e);
           }
       }

    A sample usage without any target control might be:

       <web:PromptValidator runat="server" PromptMessage="Do you want to submit your data?" ErrorMessage="!"/>

    And if you want to specifically validate a control’s value:

       <web:PromptValidator runat="server" PromptMessage="Do you want to accept {0}?" ErrorMessage="!" ControlToValidate="text" ValidateEmptyText="true"/>

    When submitting your form, you will get a confirmation prompt similar to this (Chrome):

    [image: browser confirmation dialog showing the prompt message]
