Contents tagged with Extensibility
A result transformer, in NHibernate, is some class that implements the IResultTransformer interface:
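The interface itself, from the NHibernate.Transform namespace, is very simple:

```csharp
using System.Collections;

namespace NHibernate.Transform
{
    public interface IResultTransformer
    {
        // called once per result row, with the row's values and their aliases
        object TransformTuple(object[] tuple, string[] aliases);

        // called once at the end, with the whole list of transformed rows
        IList TransformList(IList collection);
    }
}
```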
Most query APIs, except LINQ, support specifying a result transformer. So, what is a result transformer used for? Just what the name says: it turns the values obtained from a query into some object. Normally, we just let NHibernate transform these values into instances of our entities, but we may want to do something different, either because we haven’t mapped some class that we want to use, or because we are not returning all of the entity’s properties, etc.
NHibernate includes some result transformers:
- AliasToBeanResultTransformer: transforms a result into a user-specified class, which will be populated via setter methods or fields matching the alias names;
- AliasToBeanConstructorResultTransformer: identical to AliasToBeanResultTransformer, but we specify a constructor for creating new instances of the target class;
- AliasToEntityMapResultTransformer: returns a dictionary where the keys are the aliases and the values the corresponding columns;
- AliasedTupleSubsetResultTransformer: ignores a tuple element if its corresponding alias is null;
- CacheableResultTransformer: used to transform tuples into values that can be cached;
- DistinctRootEntityResultTransformer: for joined queries, returns distinct root entities only;
- PassThroughResultTransformer: just returns the row as it was obtained from the database;
- RootEntityResultTransformer: returns the root entity of a joined query.
All of these can be obtained from static properties in class NHibernate.Transform.Transformers. NHibernate implicitly uses some of these, for example, LINQ queries always use DistinctRootEntityResultTransformer.
It is easy to build our own transformer. Have a look at the following example:
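The original listing is not reproduced here; a simplified sketch of such a transformer, taking a delegate that maps each row to an instance, might look like this:

```csharp
using System;
using System.Collections;
using NHibernate.Transform;

public class ExpressionResultTransformer<T> : IResultTransformer
{
    private readonly Func<object[], T> transformer;

    public ExpressionResultTransformer(Func<object[], T> transformer)
    {
        this.transformer = transformer;
    }

    // turn each returned record into an instance of T
    public object TransformTuple(object[] tuple, string[] aliases)
    {
        return this.transformer(tuple);
    }

    // called at the end, when all records have been processed
    public IList TransformList(IList collection)
    {
        return collection;
    }
}
```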
The TransformTuple method is the one used to turn each returned record into an instance of something. TransformList is called at the end, when all the records have been processed.
The ExpressionResultTransformer class allows us to select which indexes, in the database record, map to which properties in some entity. For our convenience, it offers a number of options to construct an instance (type, constructor + parameters and delegate). We would use it like this:
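For example, something along these lines (Product and the column layout are illustrative):

```csharp
var products = session
    .CreateSQLQuery("select Id, Name from Product")
    .SetResultTransformer(new ExpressionResultTransformer<Product>(
        t => new Product { Id = (int)t[0], Name = (string)t[1] }))
    .List<Product>();
```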
OK, so, I already showed how we can get the SQL that was generated from a LINQ query. Of course, we can do the same for both HQL and Criteria APIs as well (QueryOver is just a wrapper around Criteria, mind you).
So, for HQL (and SQL), it goes like this:
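A sketch of the extension method (the exact signature of CreateQueryTranslators varies between NHibernate versions; in more recent ones it takes an IQueryExpression instead of the raw HQL string):

```csharp
using NHibernate;
using NHibernate.Hql.Ast.ANTLR;

public static class QueryExtensions
{
    private static readonly ASTQueryTranslatorFactory TranslatorFactory =
        new ASTQueryTranslatorFactory();

    public static string ToSql(this IQuery query, ISession session)
    {
        var sessionImpl = session.GetSessionImplementation();

        var translators = TranslatorFactory.CreateQueryTranslators(
            query.QueryString, null, false,
            sessionImpl.EnabledFilters, sessionImpl.Factory);

        return translators[0].SQLString;
    }
}
```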
You can pass any implementation of IQuery, such as one produced from ISession.CreateQuery() or ISession.CreateSQLQuery(). The static field is merely there for performance reasons.
As for Criteria:
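A sketch of the Criteria version, which digs into NHibernate's internals (CriteriaImpl, CriteriaLoader):

```csharp
using NHibernate;
using NHibernate.Engine;
using NHibernate.Impl;
using NHibernate.Loader.Criteria;
using NHibernate.Persister.Entity;

public static class CriteriaExtensions
{
    public static string ToSql(this ICriteria criteria)
    {
        var criteriaImpl = (CriteriaImpl)criteria;
        var sessionImpl = (ISessionImplementor)criteriaImpl.Session;
        var factory = sessionImpl.Factory;

        var implementors = factory.GetImplementors(criteriaImpl.EntityOrClassName);
        var loader = new CriteriaLoader(
            (IOuterJoinLoadable)factory.GetEntityPersister(implementors[0]),
            factory, criteriaImpl, implementors[0], sessionImpl.EnabledFilters);

        return loader.SqlString.ToString();
    }
}
```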
And finally, QueryOver, just a small wrapper around the Criteria version:
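Since IQueryOver exposes its underlying ICriteria, this one is trivial:

```csharp
public static string ToSql<T>(this IQueryOver<T> queryOver)
{
    // QueryOver is just a wrapper around Criteria
    return queryOver.UnderlyingCriteria.ToSql();
}
```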
Hope you find this useful!
The .NET ISupportInitialize interface is used when we want to support staged initialization for objects. Its BeginInit method is called when initialization is about to start and EndInit when it is finished.
If we want, it is easy to add support for it in NHibernate. An option would be:
- BeginInit is called when the object is instantiated, like when NHibernate has loaded a record from the database and is about to hydrate the entity, and immediately after the Id property is set;
- EndInit is called after all properties are set.
We do this by using a custom interceptor, like we have in the past. We start by writing a class that inherits from EmptyInterceptor and implements IPostLoadEventListener, the listener interface for the PostLoad event. Then, before creating a session factory, we need to register it in the Configuration instance. Now, if your entity implements ISupportInitialize, NHibernate will automagically call its methods at the proper times. As simple as that!
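The original code is not shown here; a simplified sketch of such an interceptor could look like this (resolving the entity type from its name and setting the id property are glossed over):

```csharp
using System;
using NHibernate;
using NHibernate.Event;

public class SupportInitializeInterceptor : EmptyInterceptor, IPostLoadEventListener
{
    // create instances ourselves, so BeginInit can run before any property is set
    public override object Instantiate(string entityName, EntityMode entityMode, object id)
    {
        var type = Type.GetType(entityName); // may need the factory's metadata instead

        if (type == null)
        {
            return base.Instantiate(entityName, entityMode, id); // let NHibernate handle it
        }

        var instance = Activator.CreateInstance(type, true);
        var init = instance as ISupportInitialize;

        if (init != null)
        {
            init.BeginInit();
        }

        // here the id property would be set, using the factory's IClassMetadata

        return instance;
    }

    // raised after all properties have been hydrated
    public void OnPostLoad(PostLoadEvent @event)
    {
        var init = @event.Entity as ISupportInitialize;

        if (init != null)
        {
            init.EndInit();
        }
    }
}
```

And the registration, on the Configuration instance, before the session factory is built:

```csharp
var interceptor = new SupportInitializeInterceptor();
cfg.SetInterceptor(interceptor);
cfg.SetListener(ListenerType.PostLoad, interceptor);
```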
It is possible to register custom functions that exist in the database so that they can be called by Entity Framework Code First LINQ queries.
If we want this method to be callable by a LINQ query, we need to add the DbFunctionAttribute to it, specifying the name of the database function we wish to call, because the .NET method and the database function names can be different:
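For example, a SOUNDEX stub (the class name SqlFunctionsEx is illustrative; "SqlServer" is the namespace of the SQL Server provider's function manifest):

```csharp
using System;
using System.Data.Entity;

public static class SqlFunctionsEx
{
    [DbFunction("SqlServer", "SOUNDEX")]
    public static string Soundex(string input)
    {
        // never runs locally; only translated to SQL in LINQ-to-Entities queries
        throw new NotSupportedException();
    }
}
```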
And for calling it:
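Something like this (entity and property names are illustrative):

```csharp
var similar = ctx.Customers
    .Where(c => SqlFunctionsEx.Soundex(c.Name) == SqlFunctionsEx.Soundex("Smith"))
    .ToList();
```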
However, for certain database functions, it requires a bit more work to get done. Let us consider now the FORMAT function and a .NET implementation:
Besides adding the DbFunctionAttribute attribute:
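Here the attribute points at the Code First store schema rather than the provider manifest ("CodeFirstDatabaseSchema" is the namespace Code First uses for store functions; method and class names are illustrative):

```csharp
using System;
using System.Data.Entity;

public static class SqlFunctionsEx
{
    [DbFunction("CodeFirstDatabaseSchema", "FORMAT")]
    public static string Format(DateTime? value, string format)
    {
        // only meant to be translated to SQL, never executed locally
        throw new NotSupportedException();
    }
}
```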
it also requires that we register it explicitly in our model. For that, we override the OnModelCreating method and add a custom convention:
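In our DbContext-derived class (the convention class name is illustrative):

```csharp
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    base.OnModelCreating(modelBuilder);

    // register the convention that adds the FORMAT store function to the model
    modelBuilder.Conventions.Add(new FormatFunctionConvention());
}
```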
The convention being:
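A sketch of such a convention, which adds a FORMAT store function taking a date and a format string (parameter shapes are an assumption; SQL Server's FORMAT also accepts an optional culture):

```csharp
using System.Data.Entity.Core.Metadata.Edm;
using System.Data.Entity.Infrastructure;
using System.Data.Entity.ModelConfiguration.Conventions;

public class FormatFunctionConvention : IStoreModelConvention<EdmModel>
{
    public void Apply(EdmModel item, DbModel model)
    {
        var dateTimeType = PrimitiveType.GetEdmPrimitiveType(PrimitiveTypeKind.DateTime);
        var stringType = PrimitiveType.GetEdmPrimitiveType(PrimitiveTypeKind.String);

        var function = EdmFunction.Create("FORMAT", "CodeFirstDatabaseSchema",
            DataSpace.SSpace,
            new EdmFunctionPayload
            {
                IsComposable = true,
                IsBuiltIn = true, // FORMAT is a built-in function, no schema needed
                StoreFunctionName = "FORMAT",
                ParameterTypeSemantics = ParameterTypeSemantics.AllowImplicitConversion,
                Parameters = new[]
                {
                    FunctionParameter.Create("value", dateTimeType, ParameterMode.In),
                    FunctionParameter.Create("format", stringType, ParameterMode.In)
                },
                ReturnParameters = new[]
                {
                    FunctionParameter.Create("result", stringType, ParameterMode.ReturnValue)
                }
            }, null);

        item.AddItem(function);
    }
}
```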
Now we have the FORMAT function available to LINQ:
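For example (entity and property names are illustrative):

```csharp
var dates = ctx.Orders
    .Select(o => SqlFunctionsEx.Format(o.Date, "yyyy-MM-dd"))
    .ToList();
```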
Now, I hear you ask: why does SOUNDEX just need a simple attribute while FORMAT needs so much more? Well, it just happens that SOUNDEX is defined in the Entity Framework SQL Server provider manifest - see it here. All of the functions in SqlFunctions are present in the manifest, but not all functions in the manifest are in SqlFunctions - that's just the way it is! Thanks to @divega for the explanation.
Some things worth mentioning:
You cannot specify two functions with the same name and different parameters.
There is an open request to add the FORMAT function, through the SqlFunctions class, to the list of functions supported out of the box by Entity Framework Code First: https://entityframework.codeplex.com/workitem/2586. In the meantime, this might be useful!
Since version 4, ASP.NET offers an extensible mechanism for encoding the output, that is, the content that will be returned to the browser. I already referred to it in Providers.
The actual implementation to use can be configured by code or XML configuration (the Web.config file).
The default implementation is always available through the read-only property HttpEncoder.Default.
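A custom encoder is just a class inheriting from HttpEncoder; a minimal sketch (the class name is illustrative):

```csharp
using System.IO;
using System.Web.Util;

public class MyEncoder : HttpEncoder
{
    protected override void HtmlEncode(string value, TextWriter output)
    {
        // add custom logic here, or delegate to the default behavior
        base.HtmlEncode(value, output);
    }
}
```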
If you prefer to change the Web.config file, you need to set the encoderType attribute of the httpRuntime section:
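Something like this (the type and assembly names are illustrative):

```xml
<system.web>
  <httpRuntime encoderType="MyNamespace.MyEncoder, MyAssembly" />
</system.web>
```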
It is a nice addition, especially together with the validation provider model introduced with ASP.NET 4, which will be the topic of my next post on ASP.NET Web Forms extensibility.
In case you ever want to have a look at the generated SQL before it is actually executed, you can use this extension method:
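A sketch of it, built on NHibernate's LINQ internals (the CreateQueryTranslators signature may differ slightly between NHibernate versions):

```csharp
using System.Linq;
using NHibernate.Hql.Ast.ANTLR;
using NHibernate.Linq;

public static class LinqExtensions
{
    public static string ToSql<T>(this IQueryable<T> queryable)
    {
        var provider = (DefaultQueryProvider)queryable.Provider;
        var sessionImpl = provider.Session;
        var factory = sessionImpl.Factory;

        // turn the LINQ expression tree into an NHibernate LINQ expression
        var nhLinqExpression = new NhLinqExpression(queryable.Expression, factory);

        var translatorFactory = new ASTQueryTranslatorFactory();
        var translators = translatorFactory.CreateQueryTranslators(
            nhLinqExpression, null, false, sessionImpl.EnabledFilters, factory);

        return translators[0].SQLString;
    }
}
```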
Parameters such as 100 will be located in the nhLinqExpression.ParameterValuesByName collection.
This is part of a series of posts about NHibernate Pitfalls. See the entire collection here.
Suppose you have some properties of an entity, including its id, and you want to refresh its state from the database:
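The scenario might look something like this (Order is the entity from this example, with a lazy Image property):

```csharp
// we only know the id; we build an instance and ask NHibernate for its state
var order = new Order { Id = 100 };
session.Refresh(order);

var image = order.Image; // pitfall: Image cannot be lazily loaded here
```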
Image is a Byte[] and is configured as lazy. The problem is that NHibernate is not capable of returning a proxy for it, because the entity itself is not a proxy.
This does not happen for associated entities (one-to-one, many-to-one, one-to-many and many-to-many):
In this case, Customer is another entity, and NHibernate can assign the Order.Customer property a proxy to it.
Because of this problem, I created a simple extension method that loads all properties. It is even smart enough to use proxies, if we so require it:
Of course, it requires that at least the id property is set. It can be used as:
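Along these lines (the extension method's actual name is not shown in this excerpt; LoadProperties is used here for illustration):

```csharp
var order = new Order { Id = 100 };

// loads all of the entity's properties, optionally using proxies where possible
session.LoadProperties(order);
```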
ASP.NET includes a valuable yet not well known extension point called the page parser filter. I once briefly talked about it in the context of SharePoint, but I feel a more detailed explanation is in order.
A page parser filter is a class with a public parameterless constructor that inherits from PageParserFilter and is registered in the Web.config file, in the pages section, through the pageParserFilterType attribute. It is called whenever ASP.NET compiles a page, which happens the first time each page is requested. There can be only one page parser filter per web application.
Why is it a parser? Well, it parses - or, better, receives a notification for - every control declared in the markup of a page (those with runat=”server” or contained inside one), as well as all of the page’s directives (<%@ … %>). The control declarations include all of their attributes and properties, the recognized control type and any complex properties that the markup contains. This allows us to do all kinds of crazy stuff:
- Inspect, add and remove page directives;
- Set the page’s compilation mode;
- Insert or remove controls or text literals dynamically at specific places;
- Add, change or remove a control’s properties or attributes;
- Even (with some reflection magic) change a control’s type or tag.
So, how do we do all this? First, the parser part. We can inspect all page directives by overriding the PreprocessDirective method, which is called for each of them:
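A minimal filter class (the class name is illustrative):

```csharp
using System.Collections;
using System.Web.UI;

public class MyPageParserFilter : PageParserFilter
{
    public override void PreprocessDirective(string directiveName, IDictionary attributes)
    {
        base.PreprocessDirective(directiveName, attributes);
        // inspect, add or remove directive attributes here
    }
}
```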
The page’s compilation mode is controlled by GetCompilationMode:
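For example:

```csharp
public override CompilationMode GetCompilationMode(CompilationMode current)
{
    // return CompilationMode.Never to prevent the page from being compiled
    return current;
}
```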
As for adding controls dynamically, we make use of the ParseComplete method:
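A sketch; ParseComplete receives the root ControlBuilder of the parsed page:

```csharp
public override void ParseComplete(ControlBuilder rootBuilder)
{
    base.ParseComplete(rootBuilder);

    // append a literal at the end of the page, as an example
    rootBuilder.AppendLiteralString("<!-- processed by a page parser filter -->");
}
```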
Same for changing a control’s properties:
And even changing the control’s output tag or instance type:
Why would we want to change a control’s type? Well, think about generics, for one.
And now the filtering part: why is it a filter? Because it allows us to filter and control a number of things:
- The allowed master page, base page class and source file;
- The allowed controls;
- The total number of controls allowed on a page;
- The total number of direct and indirect references on a page;
- Allow or disallow code and event handler declarations;
- Allow or disallow code blocks (<%= … %>, <%: … %>, <% … %>);
- Allow or disallow server-side script tags (<script runat=”server”>…</script>);
- Allow, disallow and change data binding expressions (<%# … %>);
- Add, change or remove event handler declarations.
All of the filtering methods and properties described below return a Boolean value; their base implementations may or may not be called, depending on the logic that we want to impose.
Allowing or disallowing a base page class is controlled by the AllowBaseType method (the default is to accept):
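For example (MyBasePage is a hypothetical base class):

```csharp
public override bool AllowBaseType(System.Type baseType)
{
    // only allow pages inheriting from our own base class
    return typeof(MyBasePage).IsAssignableFrom(baseType);
}
```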
For master pages, user controls or source files we have the AllowVirtualReference virtual method (again, the default is true):
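For example, restricting pages to a single master page (the path is illustrative):

```csharp
public override bool AllowVirtualReference(string referenceVirtualPath,
    VirtualReferenceType referenceType)
{
    if (referenceType == VirtualReferenceType.Master)
    {
        return string.Equals(referenceVirtualPath, "~/Site.Master",
            System.StringComparison.OrdinalIgnoreCase);
    }

    return true;
}
```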
Controls are controlled (pun intended) by AllowControl, which also defaults to accept:
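For example, to disallow server-side forms:

```csharp
public override bool AllowControl(System.Type controlType, ControlBuilder builder)
{
    return controlType != typeof(System.Web.UI.HtmlControls.HtmlForm);
}
```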
This may come in handy to disallow the usage of controls in ASP.NET MVC ASPX views!
The number of controls and dependencies on a page is defined by NumberOfControlsAllowed, NumberOfDirectDependenciesAllowed and TotalNumberOfDependenciesAllowed. Interestingly, the default for all of these properties is 0, so we have to return -1:
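Like this:

```csharp
public override int NumberOfControlsAllowed
{
    get { return -1; } // unlimited
}

public override int NumberOfDirectDependenciesAllowed
{
    get { return -1; }
}

public override int TotalNumberOfDependenciesAllowed
{
    get { return -1; }
}
```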
Direct dependencies are user controls directly declared in the page and indirect ones are those declared inside other user controls.
Code itself, including event handler declarations, is controlled by AllowCode (the default is true):
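For example:

```csharp
public override bool AllowCode
{
    get { return true; } // allow inline code and event handlers
}
```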
If we want to change a data binding expression, we resort to ProcessDataBindingAttribute, which also returns true by default:
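For example:

```csharp
public override bool ProcessDataBindingAttribute(string controlId, string name,
    string value)
{
    // return false to reject the data binding expression
    return base.ProcessDataBindingAttribute(controlId, name, value);
}
```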
For intercepting event handlers, there’s ProcessEventHookup, which likewise returns true by default:
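For example:

```csharp
public override bool ProcessEventHookup(string controlId, string eventName,
    string handlerName)
{
    // return false to reject the event handler declaration
    return base.ProcessEventHookup(controlId, eventName, handlerName);
}
```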
And finally, for code blocks, server-side scripts and data binding expressions, there’s the ProcessCodeConstruct method, which also allows everything by default:
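For example, disallowing server-side script tags only:

```csharp
public override bool ProcessCodeConstruct(CodeConstructType codeType, string code)
{
    if (codeType == CodeConstructType.ScriptTag)
    {
        return false; // no <script runat="server"> blocks
    }

    return base.ProcessCodeConstruct(codeType, code);
}
```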
This was by no means an in-depth description of page parser filters; I just meant to give you an idea of their (high) potential. They are very useful for restricting what end users can place on their pages (SharePoint style), as well as for adding controls programmatically at specific locations of the page, before it is actually built.
As usual, let me hear your thoughts!
NHibernate’s HiLo (High-Low) id generation algorithm is one of the most commonly used, and for good reasons:
- It is database-independent, that is, does not rely on any database-specific functionality such as SQL Server’s IDENTITY and Oracle’s SEQUENCE;
- It allows batching of inserts;
- It complies with the Unit of Work pattern, because it sends all writes at the same time (when the session is flushed);
- Your code does not need to know or care about it.
Now, this post does not intend to explain this algorithm in depth; for that, I recommend the NHibernate HiLo Identity Generator article or, for a more in-depth discussion of id generation strategies, Choosing a Primary Key: Natural or Surrogate?. Here I will talk about how to make better use of the NHibernate implementation.
First of all, you can configure the max low value for the algorithm, using mapping by code, like this:
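Something like this (Order and the max_lo value are illustrative):

```csharp
mapper.Class<Order>(c =>
{
    c.Id(x => x.Id, id =>
    {
        // max_lo is the generator parameter that controls the range size
        id.Generator(Generators.HighLow, g => g.Params(new { max_lo = 100 }));
    });
});
```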
The default max low value is 32767. When choosing a lower or a higher value, you should take into consideration:
- The next high value is updated whenever a new session factory is created, or the current low reaches the max low value;
- If you have a big number of inserts, it might pay off to have a higher max low, because NHibernate won’t have to go to the database when the current range is exhausted;
- If the session factory is frequently restarted, a lower value will prevent gaps.
There is no magical number, you will need to find the one that best suits your needs.
One Value for All Entities
With the default configuration of HiLo, a single table, row and column will be used to store the next high value for all entities using HiLo. The by code configuration is as follows:
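That is, the generator without any parameters:

```csharp
c.Id(x => x.Id, id =>
{
    // no parameters: a single table, row and column shared by all entities
    id.Generator(Generators.HighLow);
});
```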
The default table is called HIBERNATE_UNIQUE_KEY, and its schema is very simple:
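A single row with a single column:

```sql
create table HIBERNATE_UNIQUE_KEY (
    next_hi INT
)
```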
Whenever NHibernate wants to obtain and increment the current next high value, it will issue SQL like this (for SQL Server):
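Roughly this (parameter names are illustrative):

```sql
select next_hi
from HIBERNATE_UNIQUE_KEY with (updlock, rowlock)

update HIBERNATE_UNIQUE_KEY
set next_hi = @p0
where next_hi = @p1
```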
There are pros and cons to this default approach:
- Each record will have a different id; there will never be two entities with the same id;
- Because of the sharing between all entities, the ids will grow much faster;
- When used simultaneously by several applications, there will be some contention on the table, because it is locked whenever the next high value is obtained and incremented;
- The HIBERNATE_UNIQUE_KEY table is managed automatically by NHibernate (created, dropped and populated).
One Row Per Entity
Another option to consider, which is supported by NHibernate’s HiLo generator, consists of having each entity storing its next high value in a different row. You achieve this by supplying a where parameter to the generator:
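For example (the discriminator column and value are illustrative):

```csharp
c.Id(x => x.Id, id =>
{
    id.Generator(Generators.HighLow, g => g.Params(new
    {
        where = "entity_type = 'Order'"
    }));
});
```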
In it, you would specify a restriction on an additional column. The problem is, NHibernate knows nothing about this other column, so it won’t create it.
One way to go around this is by using an auxiliary database object (maybe a topic for another post). This is a standard NHibernate functionality that allows registering SQL to be executed when the database schema is created, updated or dropped. Using mapping by code, it is applied like this:
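A sketch of how this could look, using SimpleAuxiliaryDatabaseObject (column and entity names are illustrative, and the batched statement may need adjusting for some databases):

```csharp
using NHibernate.Dialect;
using NHibernate.Mapping;
using NHibernate.SqlTypes;

// must be applied to the Configuration before the session factory is built
var dialect = Dialect.GetDialect(cfg.Properties);
var columnType = dialect.GetTypeName(SqlTypeFactory.GetAnsiString(50));

cfg.AddAuxiliaryDatabaseObject(new SimpleAuxiliaryDatabaseObject(
    // add the discriminator column and seed one row per entity
    "ALTER TABLE HIBERNATE_UNIQUE_KEY ADD entity_type " + columnType + "; " +
    "INSERT INTO HIBERNATE_UNIQUE_KEY (next_hi, entity_type) VALUES (1, 'Order')",
    null));
```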
Keep in mind that this needs to go before the session factory is built. Basically, we are issuing a SQL ALTER TABLE followed by an INSERT statement, which change the default HiLo table and add another column that will serve as the discriminator. To make it cross-database, I used the registered Dialect class.
Its schema will then look like this:
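The default table plus the discriminator column (the column name is illustrative):

```sql
create table HIBERNATE_UNIQUE_KEY (
    next_hi INT,
    entity_type VARCHAR(50)
)
```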
When NHibernate needs the next high value, this is what it does:
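The same select/update pair as before, now restricted by the where clause:

```sql
select next_hi
from HIBERNATE_UNIQUE_KEY with (updlock, rowlock)
where entity_type = 'Order'

update HIBERNATE_UNIQUE_KEY
set next_hi = @p0
where next_hi = @p1 and entity_type = 'Order'
```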
This approach only has advantages:
- The HiLo table is still managed by NHibernate;
- You have different id generators per entity (of course, you can still combine multiple entities under the same where clause), which will make them grow more slowly;
- No contention occurs, because each entity is using its own record in the HIBERNATE_UNIQUE_KEY table.
One Column Per Entity
Yet another option is to have each entity using its own column for storing the high value. For that, we need to use the column parameter:
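For example (the column name is illustrative):

```csharp
c.Id(x => x.Id, id =>
{
    id.Generator(Generators.HighLow, g => g.Params(new
    {
        column = "order_next_hi"
    }));
});
```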
Like in the previous option, NHibernate does not know and therefore does not create this new column automatically. For that, we resort to another auxiliary database object:
The schema, with an additional column, would look like this:
And NHibernate executes this SQL for getting/updating the next high value:
The only advantage of this model is having separate ids per entity; contention on the HiLo table will still occur.
One Table Per Entity
The final option to consider is having a separate table per entity (or group of entities). For that, we use the table parameter:
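For example (the table name is illustrative):

```csharp
c.Id(x => x.Id, id =>
{
    id.Generator(Generators.HighLow, g => g.Params(new
    {
        table = "ORDER_UNIQUE_KEY"
    }));
});
```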
In this case, NHibernate generates the new HiLo table for us (together with the default HIBERNATE_UNIQUE_KEY, if any entity still uses it), with exactly the same schema:
And the SQL is, of course, also identical, except for the table name:
Again, all pros and no cons:
- The table is still fully managed by NHibernate;
- Different ids per entity or group of entities means they will grow more slowly;
- Contention will only occur if more than one entity uses the same HiLo table.
As you can see, NHibernate is full of extensibility points. Even when it does not offer out of the box what we need, we usually have a way around it.
Let me hear from you!
Entity Framework 6 included a feature known as connection resiliency. Basically, when EF is trying to connect to a database, it will try a number of times before giving up; after each unsuccessful attempt, it will wait some time and then try again. As you can imagine, this is very useful, especially when we are dealing with cloud storage.
NHibernate does not natively offer this; however, because it is highly extensible, it isn’t too hard to build such a mechanism, which is what I did.
The code is below. As you can see, it consists of a custom implementation of DriverConnectionProvider, the component of NHibernate that opens connections for us. It wraps the attempt to open a connection and retries it a number of times, with some delay in between.
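A sketch of such a provider (class, property and setting names are illustrative; in NHibernate 5+, GetConnection returns DbConnection instead of IDbConnection):

```csharp
using System;
using System.Collections.Generic;
using System.Data;
using System.Threading;
using NHibernate.Connection;

public class ResilientDriverConnectionProvider : DriverConnectionProvider
{
    public int MaxTries { get; private set; }
    public TimeSpan DelayBetweenTries { get; private set; }

    public ResilientDriverConnectionProvider()
    {
        this.MaxTries = 3;
        this.DelayBetweenTries = TimeSpan.FromSeconds(5);
    }

    public override void Configure(IDictionary<string, string> settings)
    {
        base.Configure(settings);

        string value;

        // hypothetical setting names, read from the NHibernate configuration
        if (settings.TryGetValue("connection.max_tries", out value))
        {
            this.MaxTries = int.Parse(value);
        }

        if (settings.TryGetValue("connection.delay_between_tries", out value))
        {
            this.DelayBetweenTries = TimeSpan.Parse(value);
        }
    }

    public override IDbConnection GetConnection()
    {
        for (var attempt = 1; ; attempt++)
        {
            try
            {
                return base.GetConnection();
            }
            catch (Exception) when (attempt < this.MaxTries)
            {
                // wait before the next attempt
                Thread.Sleep(this.DelayBetweenTries);
            }
        }
    }
}
```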
The way to configure this, in fluent configuration, would be:
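Assuming the provider class is named ResilientDriverConnectionProvider (a name used here for illustration):

```csharp
cfg.DataBaseIntegration(db =>
{
    db.ConnectionProvider<ResilientDriverConnectionProvider>();
});
```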
Or if you prefer to use string properties, in either XML or fluent configuration, you can do:
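Again assuming the illustrative provider class name:

```csharp
cfg.SetProperty(NHibernate.Cfg.Environment.ConnectionProvider,
    typeof(ResilientDriverConnectionProvider).AssemblyQualifiedName);
```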
From looking at the class, you can see that it supports two properties:
- MaxTries: the maximum number of connect attempts;
- DelayBetweenTries: the amount of time to wait between two connection attempts.
It is possible to supply these values by configuration:
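For example, in the XML configuration (the setting names are hypothetical; they must match whatever the provider's Configure method reads):

```xml
<property name="connection.max_tries">5</property>
<property name="connection.delay_between_tries">00:00:10</property>
```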
As usual, hope you find this useful!