Contents tagged with Extensibility
It is possible to register custom functions that exist in the database so that they can be called by Entity Framework Code First LINQ queries.
If we want this method to be callable by a LINQ query, we need to add the DbFunctionAttribute to it, specifying the name of the database function we wish to call, because the .NET method and the database function names can be different:
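The elided code probably looked something like this; since LINQ-to-Entities only translates the call to SQL, the method body never actually runs:

```csharp
public static class StringFunctions
{
    // "SqlServer" is the namespace of the Entity Framework SQL Server provider manifest
    [DbFunction("SqlServer", "SOUNDEX")]
    public static string Soundex(string input)
    {
        // never executed directly; it exists only to be translated to SQL
        throw new NotSupportedException();
    }
}
```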
And for calling it:
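A usage sketch (the context and entity names are hypothetical):

```csharp
// translated to SOUNDEX(...) = SOUNDEX('Smith') in the generated SQL
var matches = ctx.Customers
    .Where(c => StringFunctions.Soundex(c.Name) == StringFunctions.Soundex("Smith"))
    .ToList();
```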
However, certain database functions require a bit more work. Let us now consider the FORMAT function and a .NET implementation:
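A minimal sketch; the signature is an assumption (SQL Server's FORMAT also accepts numeric values and an optional culture argument):

```csharp
public static class DateFunctions
{
    // never executed directly; it exists only to be translated to SQL
    public static string Format(DateTime date, string format)
    {
        throw new NotSupportedException();
    }
}
```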
Besides adding the DbFunctionAttribute attribute:
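Something like the following; "CodeFirstDatabaseSchema" is the namespace Entity Framework Code First uses for store functions registered directly in the model:

```csharp
[DbFunction("CodeFirstDatabaseSchema", "FORMAT")]
public static string Format(DateTime date, string format)
{
    throw new NotSupportedException();
}
```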
it also requires that we register it explicitly in our model, for that, we override the OnModelCreating method and add a custom convention:
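The override itself is short; FormatFunctionConvention is a hypothetical name for the custom convention:

```csharp
protected override void OnModelCreating(DbModelBuilder modelBuilder)
{
    base.OnModelCreating(modelBuilder);

    // register the convention that adds FORMAT to the store model
    modelBuilder.Conventions.Add(new FormatFunctionConvention());
}
```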
The convention being:
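A sketch of a store-model convention that adds FORMAT to the store model (SSDL), so that Entity Framework knows how to translate calls to it. The payload values - schema, composability, parameter types - are assumptions for SQL Server's FORMAT; production code would likely resolve proper store types through the provider manifest:

```csharp
public class FormatFunctionConvention : IStoreModelConvention<EdmModel>
{
    public void Apply(EdmModel item, DbModel model)
    {
        var dateTimeType = PrimitiveType.GetEdmPrimitiveType(PrimitiveTypeKind.DateTime);
        var stringType = PrimitiveType.GetEdmPrimitiveType(PrimitiveTypeKind.String);

        var parameters = new[]
        {
            FunctionParameter.Create("VALUE", dateTimeType, ParameterMode.In),
            FunctionParameter.Create("FORMAT", stringType, ParameterMode.In)
        };

        var returnValue = FunctionParameter.Create("RESULT", stringType, ParameterMode.ReturnValue);

        // the namespace must match the one used in the DbFunction attribute
        var function = EdmFunction.Create("FORMAT", "CodeFirstDatabaseSchema", DataSpace.SSpace,
            new EdmFunctionPayload
            {
                Schema = "dbo",
                IsBuiltIn = true,       // FORMAT is a built-in function, not a UDF
                IsComposable = true,    // it can be used inside queries
                ParameterTypeSemantics = ParameterTypeSemantics.AllowImplicitConversion,
                Parameters = parameters,
                ReturnParameters = new[] { returnValue },
                StoreFunctionNameAttribute = "FORMAT"
            }, null);

        item.AddItem(function);
    }
}
```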
Now we have the FORMAT function available to LINQ:
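For example (again with hypothetical context and entity names):

```csharp
// translated to FORMAT(..., 'yyyy-MM-dd') in the generated SQL
var labels = ctx.Orders
    .Select(o => DateFunctions.Format(o.CreatedAt, "yyyy-MM-dd"))
    .ToList();
```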
Now, I hear you ask: why, for SOUNDEX, do we just need to add a simple attribute, while for FORMAT we need so much more? Well, it just happens that SOUNDEX is defined in the Entity Framework SQL Server provider manifest - see it here. All of the functions in SqlFunctions are present in the manifest, but the opposite is not true - not all functions in the manifest are in SqlFunctions - but that's the way it is! Thanks to @divega for the explanation.
Some things worth mentioning:
You cannot specify two functions with the same name and different parameters.
There is an open request to add the FORMAT function, through the SqlFunctions class, to the list of functions supported out of the box by Entity Framework Code First: https://entityframework.codeplex.com/workitem/2586. In the meantime, this might be useful!
Since version 4, ASP.NET offers an extensible mechanism for encoding the output - that is, the content that will be returned to the browser. I already referred to it in Providers.
The actual implementation to use can be configured by code or XML configuration (the Web.config file).
The default implementation is always available through the read-only HttpEncoder.Default property.
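By code, it might look like this; a custom encoder subclasses HttpEncoder (in System.Web.Util) and overrides one or more of its virtual methods, and the current instance is set early in the application's lifetime (this sketch assumes HttpEncoder.Current is settable at startup):

```csharp
public class MyEncoder : HttpEncoder
{
    protected override void HtmlEncode(string value, TextWriter output)
    {
        // delegate to the default behavior; customize as needed
        base.HtmlEncode(value, output);
    }
}

// e.g., in Application_Start
HttpEncoder.Current = new MyEncoder();
```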
If you prefer to change the Web.config file, you need to set the encoderType attribute of the httpRuntime section:
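Something like this (the type and assembly names are placeholders):

```xml
<configuration>
  <system.web>
    <httpRuntime encoderType="MyNamespace.MyEncoder, MyAssembly" />
  </system.web>
</configuration>
```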
It is a nice addition, especially together with the validation provider model introduced with ASP.NET 4, which will be the topic of my next post on ASP.NET Web Forms extensibility.
In case you ever want to have a look at the generated SQL before it is actually executed, you can use this extension method:
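A sketch for NHibernate 3.x/4.x; the internal types used here (DefaultQueryProvider, NhLinqExpression, ASTQueryTranslatorFactory) have changed visibility and signatures between versions, so treat this as a starting point rather than a drop-in implementation:

```csharp
public static class QueryableExtensions
{
    public static string ToSql<T>(this IQueryable<T> queryable)
    {
        // grab the session and factory from the LINQ provider
        var provider = (DefaultQueryProvider)queryable.Provider;
        var session = (ISessionImplementor)provider.Session;
        var factory = session.Factory;

        // turn the expression tree into an NHibernate query expression
        var nhLinqExpression = new NhLinqExpression(queryable.Expression, factory);

        // let the HQL AST translator produce the SQL
        var translatorFactory = new ASTQueryTranslatorFactory();
        var translators = translatorFactory.CreateQueryTranslators(
            nhLinqExpression, null, false, session.EnabledFilters, factory);

        return translators[0].SQLString;
    }
}
```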
Parameters such as 100 will be located in the nhLinqExpression.ParameterValuesByName collection.
This is part of a series of posts about NHibernate Pitfalls. See the entire collection here.
Suppose you have some properties of an entity, including its id, and you want to refresh its state from the database:
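The scenario might be sketched like this (hypothetical entity and property names):

```csharp
// a detached instance with just the id set
var customer = new Customer { CustomerId = 100 };
session.Refresh(customer);

// the non-lazy state was reloaded, but...
var image = customer.Image;   // null: no proxy was created for the lazy property
```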
Image is a Byte[] and is configured as lazy. The problem is that NHibernate is not capable of returning a proxy for it, because the entity itself is not a proxy.
This does not happen for associated entities (one-to-one, many-to-one, one-to-many and many-to-many):
In this case, Customer is another entity, and NHibernate can assign the Order.Customer property a proxy to it.
Because of this problem, I created a simple extension method that loads all properties. It is even smart enough to use proxies, if we so require it:
Of course, it requires that at least the id property is set. It can be used as:
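For example (LoadProperties is a hypothetical name for the extension method described above):

```csharp
var customer = new Customer { CustomerId = 100 };

// populate all mapped properties from the database
session.LoadProperties(customer);
```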
ASP.NET includes a valuable yet not well known extension point called the page parser filter. I once briefly talked about it in the context of SharePoint, but I feel a more detailed explanation is in order.
A page parser filter is a class with a public parameterless constructor that inherits from PageParserFilter and is registered in the Web.config file, in the pages section, through the pageParserFilterType attribute. It is called when ASP.NET compiles a page, which only happens the first time the page is requested. There can be only one page parser filter per web application.
Why is it a parser? Well, it parses – or, better, receives a notification for – every control declared on the markup of a page (those with runat=”server” or contained inside one), as well as all of the page’s directives (<%@ … %>). The control declarations include all of its attributes and properties, the recognized control type and any complex properties that the markup contains. This allows us to do all kinds of crazy stuff:
Inspect, add and remove page directives;
Set the page’s compilation mode;
Insert or remove controls or text literals dynamically at specific places;
Add/change/remove a control’s properties or attributes;
Even (with some reflection magic) change a control’s type or tag.
So, how do we do all this? First, the parser part. We can inspect all page directives by overriding the PreprocessDirective method. This is called for all page directives:
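A sketch of such an override; the attribute manipulation shown is just an example:

```csharp
public class MyPageParserFilter : PageParserFilter
{
    public override void PreprocessDirective(string directiveName, IDictionary attributes)
    {
        // e.g., force tracing on every page
        if (directiveName == "page")
        {
            attributes["trace"] = "true";
        }

        base.PreprocessDirective(directiveName, attributes);
    }
}
```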
The page’s compilation mode is controlled by GetCompilationMode:
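For example:

```csharp
public override CompilationMode GetCompilationMode(CompilationMode current)
{
    // always compile; the other options are Auto and Never
    return CompilationMode.Always;
}
```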
As for adding controls dynamically, we make use of the ParseComplete method:
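A minimal sketch; ParseComplete hands us the root ControlBuilder of the page, to which we can append content:

```csharp
public override void ParseComplete(ControlBuilder rootBuilder)
{
    // append a literal at the end of the page's control tree
    rootBuilder.AppendLiteralString("<p>Added by the page parser filter!</p>");

    base.ParseComplete(rootBuilder);
}
```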
Same for changing a control’s properties:
And even changing the control’s output tag or instance type:
Why would we want to change a control’s type? Well, think about generics, for one.
And now the filtering part: why is it a filter? Because it allows us to filter and control a number of things:
The allowed master page, base page class and source file;
The allowed controls;
The total number of controls allowed on a page;
The total number of direct and indirect references on a page;
Allow or disallow code and event handler declarations;
Allow or disallow code blocks (<%= … %>, <%: … %>, <% … %>);
Allow or disallow server-side script tags (<script runat=”server”>…</script>);
Allow, disallow and change data binding expressions (<%# … %>);
Add, change or remove event handler declarations.
All of the filtering methods and properties described below return a Boolean flag, and their base implementations may or may not be called, depending on the logic that we want to impose.
Allowing or disallowing a base page class is controlled by the AllowBaseType method (the default is to accept):
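For example (MyBasePage is a hypothetical base class):

```csharp
public override bool AllowBaseType(Type baseType)
{
    // only allow pages inheriting from our own base class
    return typeof(MyBasePage).IsAssignableFrom(baseType);
}
```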
For master pages, user controls or source files we have the AllowVirtualReference virtual method (again, the default is true):
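A sketch (the master page path is a placeholder):

```csharp
public override bool AllowVirtualReference(string referenceVirtualPath, VirtualReferenceType referenceType)
{
    // e.g., only allow a specific master page
    if (referenceType == VirtualReferenceType.Master)
    {
        return referenceVirtualPath.EndsWith("Site.Master", StringComparison.OrdinalIgnoreCase);
    }

    return true;
}
```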
Controls are controlled (pun intended) by AllowControl, which also defaults to accept:
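For example:

```csharp
public override bool AllowControl(Type controlType, ControlBuilder builder)
{
    // e.g., keep a specific server control out of our pages
    return controlType != typeof(GridView);
}
```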
This may come in handy to disallow the usage of controls in ASP.NET MVC ASPX views!
The number of controls and dependencies on a page is defined by NumberOfControlsAllowed, NumberOfDirectDependenciesAllowed and TotalNumberOfDependenciesAllowed. Interestingly, the default for all these properties is 0, so we have to return –1:
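Like so:

```csharp
public override int NumberOfControlsAllowed
{
    get { return -1; }   // -1 = unlimited
}

public override int NumberOfDirectDependenciesAllowed
{
    get { return -1; }
}

public override int TotalNumberOfDependenciesAllowed
{
    get { return -1; }
}
```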
Direct dependencies are user controls directly declared in the page and indirect ones are those declared inside other user controls.
Code itself, including event handler declarations, is controlled by AllowCode (default is true):
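For example:

```csharp
public override bool AllowCode
{
    get { return true; }   // return false to forbid code on pages
}
```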
If we want to change a data binding expression, we resort to ProcessDataBindingAttribute, which also returns true by default:
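A sketch:

```csharp
public override bool ProcessDataBindingAttribute(string controlId, string name, string value)
{
    // inspect or veto a data binding attribute; return false to drop it
    return true;
}
```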
For intercepting event handler declarations, there’s the ProcessEventHookup method, which likewise returns true by default:
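For example:

```csharp
public override bool ProcessEventHookup(string controlId, string eventName, string handlerName)
{
    // e.g., inspect or veto declarative wiring such as OnClick="Button_Click"
    return true;
}
```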
And finally, for code blocks, server-side scripts and data binding expressions, there’s the ProcessCodeConstruct method, which also allows everything by default:
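A sketch:

```csharp
public override bool ProcessCodeConstruct(CodeConstructType codeType, string code)
{
    // e.g., disallow <% ... %> code blocks but allow everything else
    if (codeType == CodeConstructType.CodeSnippet)
    {
        return false;
    }

    return base.ProcessCodeConstruct(codeType, code);
}
```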
This was by no means an in-depth description of page parser filters; I just meant to give you an idea of their (high) potential. They are very useful for restricting what end users can place on their pages (SharePoint style), as well as for adding controls programmatically at specific locations of the page, before it is actually built.
As usual, let me hear your thoughts!
NHibernate’s HiLo (High-Low) id generation algorithm is one of the most commonly used, and for good reasons:
- It is database-independent, that is, it does not rely on any database-specific functionality such as SQL Server’s IDENTITY or Oracle’s SEQUENCE;
- It allows batching of inserts;
- It complies with the Unit of Work pattern, because it sends all writes at the same time (when the session is flushed);
- Your code does not need to know or care about it.
Now, this post does not intend to explain this algorithm in depth, for that I recommend the NHibernate HiLo Identity Generator article or Choosing a Primary Key: Natural or Surrogate?, for a more in-depth discussion of id generation strategies. Here I will talk about how to make better use of the NHibernate implementation.
First of all, you can configure the max low value for the algorithm, using mapping by code, like this:
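Something like this (Order and OrderId are hypothetical entity and property names):

```csharp
public class OrderMapping : ClassMapping<Order>
{
    public OrderMapping()
    {
        Id(x => x.OrderId, m => m.Generator(Generators.HighLow,
            g => g.Params(new { max_lo = 100 })));
    }
}
```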
The default max low value is 32767. When choosing a lower or a higher value, you should take into consideration:
The next high value is updated whenever a new session factory is created, or the current low reaches the max low value;
If you have a big number of inserts, it might pay off to have a higher max low, because NHibernate won’t have to go to the database when the current range is exhausted;
If the session factory is frequently restarted, a lower value will prevent gaps.
There is no magical number, you will need to find the one that best suits your needs.
One Value for All Entities
With the default configuration of HiLo, a single table, row and column will be used to store the next high value for all entities using HiLo. The by code configuration is as follows:
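That is, without any parameters:

```csharp
Id(x => x.OrderId, m => m.Generator(Generators.HighLow));
```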
The default table is called HIBERNATE_UNIQUE_KEY, and its schema is very simple:
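Roughly:

```sql
CREATE TABLE HIBERNATE_UNIQUE_KEY
(
    next_hi INT
)
```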
Whenever NHibernate wants to obtain and increment the current next high value, it will issue SQL like this (for SQL Server):
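Something along these lines (the exact parameter names will differ):

```sql
SELECT next_hi
FROM HIBERNATE_UNIQUE_KEY WITH (updlock, rowlock)

UPDATE HIBERNATE_UNIQUE_KEY
SET next_hi = @p0
WHERE next_hi = @p1
```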
There are pros and cons to this default approach:
Each record will have a different id, there will never be two entities with the same id;
Because of the sharing between all entities, the ids will grow much faster;
When used simultaneously by several applications, there will be some contention on the table, because it is being locked whenever the next high value is obtained and incremented;
The HIBERNATE_UNIQUE_KEY table is managed automatically by NHibernate (created, dropped and populated).
One Row Per Entity
Another option to consider, which is supported by NHibernate’s HiLo generator, consists of having each entity storing its next high value in a different row. You achieve this by supplying a where parameter to the generator:
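For example (the discriminator column name, entity_type, is an assumption; note the @ prefix, since where is a C# keyword):

```csharp
Id(x => x.OrderId, m => m.Generator(Generators.HighLow,
    g => g.Params(new { max_lo = 100, @where = "entity_type = 'ORDER'" })));
```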
In it, you would specify a restriction on an additional column. The problem is, NHibernate knows nothing about this other column, so it won’t create it.
One way to go around this is by using an auxiliary database object (maybe a topic for another post). This is a standard NHibernate functionality that allows registering SQL to be executed when the database schema is created, updated or dropped. Using mapping by code, it is applied like this:
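A sketch; the discriminator column name (entity_type) and seed value are assumptions, and Dialect.GetDialect resolves the configured dialect so that the column type name is cross-database:

```csharp
var dialect = Dialect.GetDialect(configuration.Properties);
var stringType = dialect.GetTypeName(SqlTypeFactory.GetAnsiString(32));

var createScript =
    "ALTER TABLE HIBERNATE_UNIQUE_KEY ADD entity_type " + stringType + "; " +
    "INSERT INTO HIBERNATE_UNIQUE_KEY (next_hi, entity_type) VALUES (1, 'ORDER')";

configuration.AddAuxiliaryDatabaseObject(
    new SimpleAuxiliaryDatabaseObject(createScript, null));
```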
Keep in mind that this needs to go before the session factory is built. Basically, we are issuing a SQL ALTER TABLE that adds another column to the default HiLo table, to serve as the discriminator, followed by an INSERT statement that populates it. For making it cross-database, I used the registered Dialect class.
Its schema will then look like this:
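Roughly (again assuming entity_type as the discriminator column):

```sql
CREATE TABLE HIBERNATE_UNIQUE_KEY
(
    next_hi INT,
    entity_type VARCHAR(32)
)
```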
When NHibernate needs the next high value, this is what it does:
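Something like this, with the where restriction applied to both statements:

```sql
SELECT next_hi
FROM HIBERNATE_UNIQUE_KEY WITH (updlock, rowlock)
WHERE entity_type = 'ORDER'

UPDATE HIBERNATE_UNIQUE_KEY
SET next_hi = @p0
WHERE next_hi = @p1 AND entity_type = 'ORDER'
```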
This approach only has advantages:
The HiLo table is still managed by NHibernate;
You have different id generators per entity (of course, you can still combine multiple entities under the same where clause), which will make them grow more slowly;
No contention occurs, because each entity is using its own record on the HIBERNATE_UNIQUE_KEY table.
One Column Per Entity
Yet another option is to have each entity using its own column for storing the high value. For that, we need to use the column parameter:
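For example (the column name, order_next_hi, is an assumption):

```csharp
Id(x => x.OrderId, m => m.Generator(Generators.HighLow,
    g => g.Params(new { max_lo = 100, column = "order_next_hi" })));
```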
Like in the previous option, NHibernate does not know and therefore does not create this new column automatically. For that, we resort to another auxiliary database object:
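A sketch, again assuming a column named order_next_hi (a cross-database version would obtain the column type from the Dialect, as before):

```csharp
configuration.AddAuxiliaryDatabaseObject(new SimpleAuxiliaryDatabaseObject(
    "ALTER TABLE HIBERNATE_UNIQUE_KEY ADD order_next_hi INT", null));
```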
The schema, with an additional column, would look like this:
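Roughly:

```sql
CREATE TABLE HIBERNATE_UNIQUE_KEY
(
    next_hi INT,
    order_next_hi INT
)
```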
And NHibernate executes this SQL for getting/updating the next high value:
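Something like:

```sql
SELECT order_next_hi
FROM HIBERNATE_UNIQUE_KEY WITH (updlock, rowlock)

UPDATE HIBERNATE_UNIQUE_KEY
SET order_next_hi = @p0
WHERE order_next_hi = @p1
```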
The only advantage in this model is having separate ids per entity; contention on the HiLo table will still occur.
One Table Per Entity
The final option to consider is having a separate table per entity (or group of entities). For that, we use the table parameter:
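For example (the table name is an assumption):

```csharp
Id(x => x.OrderId, m => m.Generator(Generators.HighLow,
    g => g.Params(new { max_lo = 100, table = "ORDER_UNIQUE_KEY" })));
```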
In this case, NHibernate generates the new HiLo table for us (together with the default HIBERNATE_UNIQUE_KEY, if any entity still uses it), with exactly the same schema:
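Roughly:

```sql
CREATE TABLE ORDER_UNIQUE_KEY
(
    next_hi INT
)
```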
And the SQL is, of course, also identical, except for the table name:
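That is:

```sql
SELECT next_hi
FROM ORDER_UNIQUE_KEY WITH (updlock, rowlock)

UPDATE ORDER_UNIQUE_KEY
SET next_hi = @p0
WHERE next_hi = @p1
```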
Again, all pros and no cons:
Table still fully managed by NHibernate;
Different ids per entity or group of entities means they will grow slower;
Contention will only occur if more than one entity uses the same HiLo table.
As you can see, NHibernate is full of extensibility points. Even when it does not offer out of the box what we need, we usually have a way around it.
Let me hear from you!
Entity Framework 6 introduced a feature known as connection resiliency. Basically, when EF is trying to connect to a database, it will retry a number of times before giving up. After each unsuccessful attempt, it will wait some time and then try again. As you can imagine, this is very useful, especially when we are dealing with cloud storage.
NHibernate does not natively offer this; however, because it is highly extensible, it isn’t too hard to build such a mechanism, which is what I did.
The code is below; as you can see, it consists of a custom implementation of DriverConnectionProvider, the component of NHibernate that opens connections for us.
The code wraps the attempt to open a connection and retries it a number of times, with some delay in between.
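A sketch of such a provider; the exact GetConnection signature differs between NHibernate versions (older ones return IDbConnection, newer ones DbConnection), and the setting names read in Configure are my own invention:

```csharp
public class ResilientDriverConnectionProvider : DriverConnectionProvider
{
    public ResilientDriverConnectionProvider()
    {
        MaxTries = 3;
        DelayBetweenTries = TimeSpan.FromSeconds(5);
    }

    public int MaxTries { get; set; }

    public TimeSpan DelayBetweenTries { get; set; }

    public override void Configure(IDictionary<string, string> settings)
    {
        base.Configure(settings);

        // hypothetical setting names
        string value;

        if (settings.TryGetValue("connection.max_tries", out value))
        {
            MaxTries = int.Parse(value);
        }

        if (settings.TryGetValue("connection.delay_between_tries", out value))
        {
            DelayBetweenTries = TimeSpan.Parse(value);
        }
    }

    public override IDbConnection GetConnection()
    {
        for (var attempt = 1; ; attempt++)
        {
            try
            {
                return base.GetConnection();
            }
            catch (Exception)
            {
                if (attempt >= MaxTries)
                {
                    throw;   // give up after the last attempt
                }

                Thread.Sleep(DelayBetweenTries);
            }
        }
    }
}
```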
The way to configure this, in fluent configuration, would be:
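Something like:

```csharp
cfg.DataBaseIntegration(db =>
{
    db.ConnectionProvider<ResilientDriverConnectionProvider>();
});
```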
Or if you prefer to use string properties, in either XML or fluent configuration, you can do:
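For example, by code:

```csharp
cfg.SetProperty(NHibernate.Cfg.Environment.ConnectionProvider,
    typeof(ResilientDriverConnectionProvider).AssemblyQualifiedName);
```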
From looking at the class, you can see that it supports two properties:
MaxTries: the maximum number of connect attempts;
DelayBetweenTries: the amount of time to wait between two connection attempts.
It is possible to supply these values by configuration:
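Assuming the provider reads these (hypothetical) setting names in its Configure override:

```csharp
cfg.SetProperty("connection.max_tries", "5");
cfg.SetProperty("connection.delay_between_tries", "00:00:10");
```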
As usual, hope you find this useful!
NHibernate allows executable queries, that is, plain old UPDATEs, INSERTs and DELETEs. This is great, for example, for deleting an entity by its id without actually loading it. Note that the following won’t give you that:
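A sketch of the naive approach (hypothetical entity name):

```csharp
// Load returns a proxy without hitting the database, but Delete
// causes the entity to be loaded before it is deleted
session.Delete(session.Load<Order>(100));
session.Flush();
```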
NHibernate will load the proxy before it actually deletes it. But the following does work perfectly:
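An executable HQL query does the trick (entity and property names are hypothetical):

```csharp
session.CreateQuery("delete from Order o where o.OrderId = :id")
    .SetParameter("id", 100)
    .ExecuteUpdate();
```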
A single DELETE SQL command is sent to the database with this approach.
In the .NET world, all HTTP requests, whether they be for web services (XML, WCF, Web API), pages (Web Forms and MVC), etc, are processed by a handler. Basically, a handler is a particular implementation of the IHttpHandler interface, and requests are routed to a particular handler class in one of four ways:
- An entry on the Web.config file, on the httpHandlers section;
- An instance returned from a Handler Factory;
- A route handler, like in MVC or Dynamic Data;
- Explicitly requested by the URL, in the case of ASHX generic handlers.
The httpHandlers section can specify either a handler or a handler factory for a specific URL pattern (say, for example, /images/*.png), which may be slightly confusing. I have already discussed handler factories in another post, have a look at it if you haven’t already. A simple registration would be:
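Something like this (the type and assembly names are placeholders; under IIS 7+ integrated mode, the equivalent registration goes in system.webServer/handlers instead):

```xml
<system.web>
  <httpHandlers>
    <add verb="GET" path="images/*.png" type="MyNamespace.ImageHandler, MyAssembly" />
  </httpHandlers>
</system.web>
```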
Another option is through a route. The IRouteHandler interface defines a method GetHttpHandler which returns the route handler that will handle the request. You can register a IRouteHandler instance for a specific route by setting the RouteHandler property inside the Route class.
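A sketch (ImageHandler is a hypothetical IHttpHandler implementation):

```csharp
public class ImageRouteHandler : IRouteHandler
{
    public IHttpHandler GetHttpHandler(RequestContext requestContext)
    {
        return new ImageHandler();
    }
}

// at application startup
RouteTable.Routes.Add(new Route("images/{file}", new ImageRouteHandler()));
```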
Finally, there’s another kind of handler that doesn’t need registering and that is called explicitly: generic handlers. These are .ASHX markup files without any user interface elements that merely reference a code-behind class, which must implement IHttpHandler (you can also place code in the .ASHX file, inside a <script runat=”server”> declaration). Here’s an example:
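The markup side is just a directive pointing at the class (names are placeholders):

```aspx
<%@ WebHandler Language="C#" Class="MyNamespace.ImageHandler" %>
```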
Having said that, what is a handler good for? The IHttpHandler interface only defines one method, ProcessRequest, and a property, IsReusable. As you can tell, this is considerably simpler than, for example, the Page class, with its myriad of virtual methods and events, which, of course, is also an implementation of IHttpHandler. Because of that, it is much more useful for handling requests that do not need a complex lifecycle. Some scenarios:
Downloading a file;
Uploading a file;
Generating content dynamically, such as images.
The IsReusable property indicates to the ASP.NET infrastructure whether the handler’s instance can be reused across requests or if a new instance needs to be created for each one. If you don’t store state on the handler’s class, it is safe to return true.
As for the ProcessRequest method, a simple implementation might be:
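A sketch along the lines described (sizes, font and query string key are arbitrary choices; the image is rendered to a MemoryStream first, since PNG encoding requires a seekable stream):

```csharp
public class TextImageHandler : IHttpHandler
{
    public bool IsReusable
    {
        get { return true; }   // no per-request state is kept
    }

    public void ProcessRequest(HttpContext context)
    {
        var text = context.Request.QueryString["text"] ?? String.Empty;

        using (var bitmap = new Bitmap(300, 50))
        using (var graphics = Graphics.FromImage(bitmap))
        using (var font = new Font("Verdana", 16))
        using (var stream = new MemoryStream())
        {
            graphics.Clear(Color.White);
            graphics.DrawString(text, font, Brushes.Black, 10, 10);

            bitmap.Save(stream, ImageFormat.Png);

            // always set the content type, so the browser knows what it is getting
            context.Response.ContentType = "image/png";
            context.Response.BinaryWrite(stream.ToArray());
        }
    }
}
```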
This will create an image with a text string that is obtained from the query string, that is, anything in the URL after the ? symbol.
Don’t forget to always return the appropriate content type, because the browser won’t know how to handle the content you send without it.
One final note: from a handler you normally don't have access to the session - the Session property is null. If you need to use it, you must declare that your handler implements IRequiresSessionState or IReadOnlySessionState, the latter for read-only access. That's basically what ASP.NET does when, on your page's markup, you place an EnableSessionState attribute.
Updated on November 17th