Gunnar Peipman's ASP.NET blog

ASP.NET, C#, SharePoint, SQL Server and general software development topics.

September 2010 - Posts

Bing Maps: Adding and tracking pushpins using JavaScript

I played with the Bing Maps AJAX-based API and found it very easy to use once you know the basic objects and how to use them. In this posting I will show you a simple example of how to add pushpins to a map and how to show the pushpin coordinates to users.

Our Bing Maps solution

Before we dig into the pretty simple code, let's see the end result so we are motivated to read on. On this map I tracked one of my walking paths that I take when I want some good coffee and to clear my mind. Click on the image to see it at original size.

Bing Maps interactive map

To add a pushpin to the map I just double-click on the point where I want it. A new row is added to the table on the right automatically.

Mark-up

Here is the HTML for my simple map solution.


<div id="myMap" class="mapLayer"></div>
<div id="routeLayer">
    <table id="routeTable">
        <tr>
        <th>LAT</th>
        <th>LON</th>

        </tr>
    </table>
</div>
<div style="clear:both;">&nbsp;</div>

That's all the additional mark-up we need. Of course, I modified the styles of the default MVC application a little bit, but nothing worth showing here. When my app is ready to use I will publish it to my Visual Studio experiments page at GitHub. You will hear more about my solution on my new blog that I will announce during the next couple of months. Meanwhile, let it be a secret, okay? :)

Including API and loading map

The following JavaScript block shows how to include the Bing Maps AJAX API on the page and how to load the map. One more thing – the div where the map is shown is 800 pixels wide and 600 pixels high; this is set in the style sheet.


<script type="text/javascript"
   src="http://ecn.dev.virtualearth.net/mapcontrol/mapcontrol.ashx?v=6.2">
</script>

<script type="text/javascript">
var map = null;

function GetMap() {
    map = new VEMap('myMap');
    map.AttachEvent("ondoubleclick", AddPushpin);
    map.LoadMap(new VELatLong(59.43655681809183, 24.75275516510011),
                15, VEMapStyle.Road, false);
}
 
onload = GetMap;
</script>

NB! To get the map working right away, make sure the second line of the GetMap() method – the AttachEvent call – is commented in!

The first script reference loads the Bing Maps API. The GetMap() function initializes the map with coordinates and a zoom level. At the end of the script block the window's onload event is wired to the GetMap() method. This is how we get the map working and showing what the user should see when it loads.

Adding pushpins to map

In the GetMap() method we registered a double-click event for our map. The double-click event is handled by the AddPushpin() method, which adds a new pushpin to the map.


<script type="text/javascript">
function AddPushpin(e)
{
    var pixel = new VEPixel(e.mapX, e.mapY);
    var point = map.PixelToLatLong(pixel);

    var pin = new VEShape(VEShapeType.Pushpin, point);
    pin.SetTitle("Double Click");
    pin.SetDescription("DoubleClick Event");
    map.AddShape(pin);

    AddPointToRoute(point);

    return true;
}
</script>

To get the pushpin to the correct location and to find out its coordinates we use Bing Maps API objects: point is a point on the map in map coordinates, and pin is the pushpin object that is added to the map. The last method called – AddPointToRoute() – adds the coordinates of the new pushpin to the table.

Adding pushpin coordinates to table

To add pushpin coordinates to the table, I call the AddPointToRoute() method from the double-click handler. In this method I get the table from the DOM, create a new row and add cells to it with the latitude and longitude values. The code is here.


<script type="text/javascript">
function AddPointToRoute(point)
{
    var tr = document.createElement("tr");

    // createTextNode works in all browsers (innerText is IE-only)
    var td = document.createElement("td");
    td.appendChild(document.createTextNode(point.Latitude));
    tr.appendChild(td);

    td = document.createElement("td");
    td.appendChild(document.createTextNode(point.Longitude));
    tr.appendChild(td);

    var table = document.getElementById('routeTable');
    table.appendChild(tr);
}
</script>

One note to the holy warriors of JavaScript selectors – I know jQuery and I know very well how to use it. In the current case I see no need to include another load of JavaScript on my page if all I win is a couple of lines of code.

Conclusion

This posting showed how easy it is to use Bing Maps and its client-side API to handle and control maps. To get the map working as expected we only needed a few rows of HTML and some short JavaScript methods. All the complexity is hidden behind the Bing Maps API; all we use is a simple interface that is very well documented and illustrated.

Announcing public source code repository with my blog samples

My blog is about three years old and I must say that it has been a very interesting and exciting time with you, guys! Now it is time to add something new to make it easier for you to use this blog and the knowledge you can find here. This posting is specially dedicated to all my frequent readers, and if you are one of them then your feedback is extremely important to me. Now, let's get to the topic – better availability of sample code is coming soon!

Some background

Over the years I have published many code samples here. Most are fragments of something bigger, but they give you an idea of how things work and how I solved one problem or another. That is fine for seasoned professionals who have enough free time to mess with different coding stuff and educate themselves.

But we also have novice programmers here who are eager to learn but who don't have as much coding experience as dinosaurs like me. For them, those examples may be too lifeless and far away from the problems they solve at work. I also have to admit that high-level professionals sometimes need examples that are packaged better and ready to run.

I don't plan to make this blog a beginners' playground – far from it. I stated those problems because I see a real need for better organized, ready-to-run sample code. It doesn't affect the technical level of this blog in any way – don't worry about that. I still work hard to keep the technical level in the range from 300 to 500 (okay, joke, the sky is the limit :P). But I feel like we need some new quality.

Public source code repository is here!

To keep my experimental and sample code better organized and to make new code available immediately, I created a new GitHub repository where you can find all my experiments and samples. I still have a lot of work to do to get the more popular samples packaged as solutions and documented to a level where you can use them by yourselves. This is not just a plan – I am telling you what is happening right now. :)

My experiments @ GitHub

Currently I have 8 projects listed in my repository. During the next weeks the number of projects should rise to 30, and by the end of this year it should be about 50 or 60. All these sample solutions should be easy to use and download'n'go by the end of this year. I have also opened a wiki for the experimental and sample solutions to provide notes and guides with additional information.

As you have seen from my last postings, I have published two links with each posting:

  • download link for source and binaries package,
  • URL to experiments repository at GitHub.

Right now I publish releases as part of my blog postings, but I hope I can find a better solution.

Experiments will be public right from the start

GitHub is a great social environment, and Git saves me a lot of time when packaging new releases. I love this system and I think it is a good plan to keep my experiments there. From now on I will publish all my new experimental and sample projects to my experiments repository as soon as there is something to commit. I will make these public commits in the very early stages of experiments, so you can be sure you can take the code and play with it too.

Want to be notified when repository changes?

GitHub provides a very good way for people to stay in touch with each other and monitor activities. GitHub users can follow other developers and watch repositories. Those who are not GitHub users can subscribe to my GitHub RSS feed, which is updated as soon as something changes in the repository or the wiki.

Feedback

As I am still working on all the code and organization details to get everything done the best way I can imagine, I need your feedback at this early stage.

  • What do you think about the ideas introduced so far?
  • Do you have suggestions for me?
  • Do you have any ideas about how I should organize source code and release availability?
  • What do you expect from sample solutions?
  • Do you have any other feedback?

Please let me know guys – together we can make the tech world better!

ASP.NET MVC: Using asynchronous controller to call web services

Lately I wrote about how to make a lot of asynchronous web service calls during ASP.NET page processing. Now it's time to make the same thing work with ASP.NET MVC. This blog post shows you how to use asynchronous controllers and actions in ASP.NET MVC, including a more complex scenario where we need to gather the results of different web service calls into one result set.

Experiments.WebAppThreading.2.zip (VS2010 solution, 113KB)
Source code repository @ GitHub

Asynchronous controllers

ASP.NET MVC has its own mechanism for asynchronous calls. I don't know why, but it seems strange when you start with it and completely logical when you finish. So prepare yourself for disappointment – there is nothing complex here. My experiment was mainly guided by the MSDN Library page Using an Asynchronous Controller in ASP.NET MVC.

First, let's compare two controllers that have basically the same meaning and, in the default case, the same behavior. The first one is our classic synchronous controller.


public class HomeController : Controller
{
    public ActionResult Index()
    {
        return View();
    }
}

And here is the asynchronous version of the same thing.


public class HomeController : AsyncController
{
    public void IndexAsync()
    {            
    }
 
    public ActionResult IndexCompleted()
    {
        return View();
    }
}

In the case of an asynchronous controller we have to use the Async and Completed suffixes on our controller methods. Instead of one method we have two – one that starts asynchronous processing and another that is called when all processing is done.

AsyncManager

ASP.NET MVC solves asynchronous processing using its own mechanism. One thing we cannot miss is the class called AsyncManager. The documentation says this class provides asynchronous operations for controllers. Well, not much said, but let's see how we use it. One thing that seems a little bit dangerous to me is that we have to deal with some kind of counter. Take a look at the following code and see how the Increment() and Decrement() methods are called.


public void IndexAsync()
{            
    AsyncManager.OutstandingOperations.Increment();
 
    var service = new DelayedHelloSoapClient();
    service.HelloWorldCompleted += HelloWorldCompleted;
    service.HelloWorldAsync(delay);
}
 
public ActionResult IndexCompleted()
{
    return View(result);
}
 
void HelloWorldCompleted(object sender, HelloWorldCompletedEventArgs e)
{
    AsyncManager.OutstandingOperations.Decrement();            
}

These methods belong to the OperationCounter class that maintains the count of pending asynchronous operations. Increment() raises the count and Decrement() decreases it. The reason I don't like having to call these methods is that the calls are easy to forget – believe me, I managed to forget them more than once within five minutes. Forget them and you may see mysterious, hard-to-catch errors.
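These two calls are easy to balance automatically. Here is a small disposable wrapper – my own sketch, not part of the framework; OperationScope is a hypothetical name, and OperationCounter below is a minimal stand-in for the framework class – that pairs every Increment() with a guaranteed Decrement():

```csharp
using System;
using System.Threading;

// Minimal stand-in for AsyncManager.OutstandingOperations so the sketch
// is self-contained; the real framework class behaves similarly.
public class OperationCounter
{
    private int _count;
    public int Count { get { return _count; } }
    public void Increment() { Interlocked.Increment(ref _count); }
    public void Decrement() { Interlocked.Decrement(ref _count); }
}

// Increments on creation and decrements exactly once on Dispose, so a
// using block or try/finally always leaves the counter balanced.
public class OperationScope : IDisposable
{
    private readonly OperationCounter _counter;
    private bool _disposed;

    public OperationScope(OperationCounter counter)
    {
        _counter = counter;
        _counter.Increment();
    }

    public void Dispose()
    {
        if (_disposed) return;
        _disposed = true;
        _counter.Decrement();
    }
}
```

The idea would be to create the scope when starting an operation and dispose it at the end of the completion handler (in a finally block), so even an exception cannot leave the counter unbalanced.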

Parameters collection

AsyncManager also has a parameters collection. This is a solution I like a lot: instead of keeping some controller-wide field, I can save values from asynchronous calls to AsyncManager and thus avoid a mess in my controller code.

The parameters collection is a brilliant idea in my opinion. Parameters are indexed by name, and when the completed part of the controller action is called, those parameters are passed to the method as arguments. This is how I add results to the AsyncManager parameters collection.


if (AsyncManager.Parameters.ContainsKey("result"))
{
    list = (List<string>) AsyncManager.Parameters["result"];
}
else
{
    list = new List<string>();
    AsyncManager.Parameters["result"] = list;
}

In my case I have only one parameter, named result, and this is how my completed method looks.


public ActionResult IndexCompleted(List<string> result)
{
    return View(result);
} 

Asynchronous methods fill the list as shown in the previous code fragment, and the parameter called result is given to the IndexCompleted() method. My only job was to get the values (and of course keep those damn decrements in mind).

Example of asynchronous controller

Here is the source of my asynchronous controller. In the IndexAsync() method I make 100 calls to my dummy web service. With each call I increment the pending-operations counter by one (okay, it is also possible to do this once before the for loop, because Increment() and Decrement() have overloads with a count parameter).

The HelloWorldCompleted() method is called when an asynchronous call is done and the data has arrived. In this method I add a new line to the result element in the AsyncManager property bag, and I decrement the counter because when this method finishes there is one pending asynchronous call fewer than before.


public class HomeController : AsyncController
{
    public void IndexAsync()
    {            
        var random = new Random();
 
        for (var i = 0; i < 100; i++)
        {
            AsyncManager.OutstandingOperations.Increment();
 
            var delay = random.Next(1, 5) * 1000;
            var service = new DelayedHelloSoapClient();
            service.HelloWorldCompleted += HelloWorldCompleted;
            service.HelloWorldAsync(delay);
 
            Debug.WriteLine("Started: " + service.GetHashCode());
        }
    }
 
    public ActionResult IndexCompleted(List<string> result)
    {
        return View(result);
    }
 
    void HelloWorldCompleted(object sender, HelloWorldCompletedEventArgs e)
    {
        var hash = 0;
        var service = e.UserState as DelayedHelloSoapClient;
        if (service != null)
            hash = service.GetHashCode();
 
        List<string> list;
        if (AsyncManager.Parameters.ContainsKey("result"))
        {
            list = (List<string>) AsyncManager.Parameters["result"];
        }
        else
        {
            list = new List<string>();
            AsyncManager.Parameters["result"] = list;
        }
 
        list.Add(e.Result);
 
        Debug.WriteLine("Finished: " + hash + " " + e.Result);
        AsyncManager.OutstandingOperations.Decrement();            
    }
 
    public ActionResult About()
    {
        return View();
    }
}

Also notice that I did nothing to bind AsyncManager.Parameters["result"] to the result parameter of the IndexCompleted() method. This little thing is automated by the ASP.NET MVC framework.

Conclusion

ASP.NET MVC supports asynchronous processing very well. It has a thinner interface for it than ASP.NET Web Forms, and we have to write less code. The only not-so-good thing is incrementing and decrementing the pending-operations counter, but after 15 minutes you don't forget these two necessary lines anymore. AsyncManager's parameters collection is a far better solution than keeping class-level variables in the controller, because variables used by one asynchronous method may clash with other methods that need their own class-scope variables. In conclusion, while writing this sample I again found some brilliant ideas behind the ASP.NET MVC framework.

Using MemBus for messaging between application components

Sometimes we need publisher/subscriber messaging in our applications to broadcast messages to different parts of the system in real time. We could always build our own solution for this, but we can also use something that already exists. In this posting I show how to use MemBus to send messages from an MDI parent form to MDI child forms.

Experiments.MemBus.Forms.zip (VS2010 solution, 136KB)
Source code repository @ GitHub

What is MemBus?

As we can read from the MemBus introduction:

“it is messaging framework … that utilizes the semantics of sending and receiving messages. Those messages are not meant to leave the AppDomain where the Bus lives in. There is no durability - when the AppDomain is gone, the Bus is gone.”

So it seems we have an interface for sending and receiving messages between our system components. Now let's build something that demonstrates how to use MemBus for something useful.

Design draft

Before going into details, let's take a look at my design draft. This is just a drawing that gives you an idea of the logical parts of the system we are building. How we implement the application is another topic, but this is the concept we will try to follow.

MemBus Forms: Design draft

We need something that receives events and lets the MDI parent window know there is new data. The MDI parent window constructs a message and sends it to the bus. The bus has subscribers implemented as MDI child windows; these windows implement the IObserver interface. When a new child window is opened it is registered as a subscriber to the bus, and when it is closed it is unregistered. So, there is nothing complex.

Sample application

Before sending and receiving messages we need some type to send. Well, string is the simplest one, but let's write a more interesting application that we can extend into a cool demo. So, instead of simple types I will use my own class called GeoLocationItem. Here is the class.


public class GeoLocationItem
{
    public string Title { get; set; }
    public DateTime Time { get; set; }
    public decimal Latitude { get; set; }
    public decimal Longitude { get; set; }
}

Now we need an event receiver. It is possible to build a more complex receiver, but let's make something primitive that doesn't need much code or maintenance. I will use a Timer on my MDI parent form; every three seconds it sends a new message to the bus.


private void LocationTimerTick(object sender, EventArgs e)
{
    var item = new GeoLocationItem();
    item.Time = DateTime.Now;
 
    var secondString = item.Time.Second.ToString();
    item.Title = "Car " + secondString[secondString.Length - 1];
    item.Longitude = item.Time.Second;
    item.Latitude = item.Time.Millisecond;
 
    _bus.Publish(item);
}

This code creates a GeoLocationItem with some pseudo-random data, simulating how cars could be tracked on their routes.

We also have an IBus instance defined at form scope.


public partial class MainForm : Form
{
    private readonly IBus _bus;
 
    public MainForm()
    {
        InitializeComponent();
 
        _bus = BusSetup.StartWith<Fast>().Construct();
    }
 
    // more code here
}

When a new MDI child window is opened it is also registered with the bus. When the child window is closed it is removed from the bus so it no longer receives events.


private void NewWindowToolStripMenuItemClick(object sender, EventArgs e)
{
    var child = new ChildForm {MdiParent = this};
 
    var observable = _bus.Observe<GeoLocationItem>();
    observable.Subscribe(child);
 
    child.Tag = observable;            
    child.FormClosed += ChildFormClosed;
    child.Show();
}
 
static void ChildFormClosed(object sender, FormClosedEventArgs e)
{
    var form = (Form)sender;
    var observable = form.Tag as IObservable<GeoLocationItem>;
    if (observable != null)
    {
        observable.Subscribe(null);
    }
}

The tricky part here is how the form is introduced to the bus. I ask the bus for a new observable object and then register my child window as a subscriber to it. When the child window is closed I subscribe null as the observer, and the bus stops sending messages to that instance. If you don't unsubscribe, the form is not destroyed and keeps receiving messages in the background – you probably don't want that.
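By the standard IObservable&lt;T&gt; contract, Subscribe() returns an IDisposable that represents the subscription, so an alternative to subscribing null is to keep that IDisposable and dispose it when the form closes. Here is a sketch of how that works – note that SimpleObservable and CountingObserver below are my own hand-rolled stand-ins for demonstration, not MemBus classes:

```csharp
using System;
using System.Collections.Generic;

// Tiny hand-rolled observable so the sketch is self-contained; MemBus's
// Observe<T>() returns a standard IObservable<T> that honors the same contract.
public class SimpleObservable<T> : IObservable<T>
{
    private readonly List<IObserver<T>> _observers = new List<IObserver<T>>();

    public IDisposable Subscribe(IObserver<T> observer)
    {
        _observers.Add(observer);
        // The returned IDisposable removes the observer – disposing it
        // is the standard way to unsubscribe.
        return new Unsubscriber(() => _observers.Remove(observer));
    }

    public void Publish(T value)
    {
        foreach (var o in _observers.ToArray())
            o.OnNext(value);
    }

    private class Unsubscriber : IDisposable
    {
        private readonly Action _remove;
        public Unsubscriber(Action remove) { _remove = remove; }
        public void Dispose() { _remove(); }
    }
}

// Minimal observer that just counts received values, used to show
// that disposing the subscription stops delivery.
public class CountingObserver : IObserver<int>
{
    public int Seen;
    public void OnNext(int value) { Seen++; }
    public void OnError(Exception error) { }
    public void OnCompleted() { }
}
```

In the forms code this would mean storing the IDisposable returned by observable.Subscribe(child) in child.Tag and calling Dispose() on it in ChildFormClosed, instead of calling Subscribe(null).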

Here is the child form. It implements the IObserver interface, and when a new message is received it invokes a delegate on the form's own thread. Otherwise we would get cross-thread errors and no feedback would be shown on the child forms.


public partial class ChildForm : Form, IObserver<GeoLocationItem>
{
    private delegate void AddDataItemDelegate(GeoLocationItem item);
    private readonly AddDataItemDelegate _addDataItem; 
 
    public ChildForm()
    {
        InitializeComponent();
 
        _addDataItem = new AddDataItemDelegate(SetValue);
    }
 
    public void OnNext(GeoLocationItem value)
    {
        Invoke(_addDataItem, new object[] { value });
    }
 
    private void SetValue(GeoLocationItem value)
    {
        var item = new ListViewItem();
        item.Text = value.Time.ToString();
 
        item.SubItems.Add(value.Title);
        item.SubItems.Add(value.Latitude.ToString());
        item.SubItems.Add(value.Longitude.ToString());
 
        BusDataList.Items.Insert(0, item);
    }
 
    public void OnError(Exception error)
    {
        throw new NotImplementedException();
    }
 
    public void OnCompleted()
    {
 
    }
}

Now let’s try to run our program.

And here is the result

I opened some windows and tiled them in the MDI parent. Here is the program in action.

MemBus Forms: MDI parent with children

Okay, that's it for now. You can see that all eight child forms are receiving messages from the parent through MemBus, and this was our goal.

Conclusion

Although we could create our own messaging solution where some clients publish and others subscribe to messages, we can avoid that task and use an existing solution. In this posting I demonstrated how to use MemBus to organize communication between an MDI parent and its child forms. We used a nice standardized interface without any ugly hacks and got a clean, working solution.

Posted: Sep 23 2010, 02:25 PM by DigiMortal | with 7 comment(s)
Unity, Castle Windsor, StructureMap, Ninject – who has best performance?

I made a quick performance comparison of four DI/IoC containers. I measured Unity, Castle Windsor, StructureMap and Ninject in two scenarios: resolving objects with an empty constructor, and resolving objects with parameters injected into the constructor. The results are here.

First of all, I carefully made sure that objects are actually created when the resolve methods are called. The results for resolving an object with a default constructor by interface are here.

4 DI/IoC containers: creating objects with default constructor

In the second round I let the containers resolve a type that needs constructor parameters, with those parameters injected by the DI/IoC containers. The times changed a little but the top four stayed the same.

4 DI/IoC containers: creating complex objects

Here is a comparison chart of the results for both measurements.

4 DI/IoC containers: performance comparison chart

StructureMap, an old veteran in the .NET field, gives the best results. Unity and Castle Windsor also perform well, although Unity works faster in the more complex scenario. Ninject was the slowest, even though it uses lightweight code generation (LCG). You can also read about my object-to-object mapper experiment and LCG to get a better idea of what LCG is all about.

Currently, I think, I will stick with StructureMap, as it also has a pretty good fluent API that was easy to learn and easy to use. I also hope the other competitors give better results in their next versions.

Update 1: Autofac gave me the following results – 9 and 16 for the first and second test respectively. So it seems they are the fastest. Thanks for the suggestion, Robert!

Update 2: The units on the charts are ticks – the average over one million calls to a pre-warmed resolve.
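For reference, the measurement loop looked roughly like this – a sketch under the stated methodology (warm up first, then average Stopwatch ticks over one million resolves); MeasureAverageTicks and the resolve delegate are illustrative names, not the exact benchmark code:

```csharp
using System;
using System.Diagnostics;

public static class ResolveBenchmark
{
    // Returns the average number of Stopwatch ticks per call,
    // measured over `iterations` calls after a short warm-up.
    public static double MeasureAverageTicks(Action resolve, int iterations)
    {
        // Warm up so JIT compilation and container caching do not
        // distort the measurement.
        for (var i = 0; i < 1000; i++)
            resolve();

        var watch = Stopwatch.StartNew();
        for (var i = 0; i < iterations; i++)
            resolve();
        watch.Stop();

        return (double)watch.ElapsedTicks / iterations;
    }
}
```

Each container would then be measured with something like MeasureAverageTicks(() => container.Resolve&lt;ISomeService&gt;(), 1000000).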

Generating data for tables in table per class inheritance tree using data generation plans

I am using a Visual Studio database project and a data generation plan to populate my database with random data. My database has tables for table-per-class inheritance mapping, and some additional steps are required to get these tables filled correctly. In this posting I describe the process of generating data for table-per-class inheritance tables and give some hints on making this data valid for O/R mappers.

Table per class inheritance mapping

Let's start with a simple class diagram. If you are not familiar with the party abstraction, you can read more about it in my blog posting Modeling people and organizations: Class Party.

Classes to be mapped

In the database I have a table for each class shown in the diagram. Company and Person share a primary key with Party. Party defines the primary key, and if the object is of type Person then the Person table has a row with the same primary key as the corresponding row in the Party table. So the connections are 1:1 relationships between the primary keys of the tables. Here is the fragment of my database diagram that illustrates parties.

Tables for party classes

This inheritance strategy is supported by NHibernate and Entity Framework but not by LINQ to SQL. You can read more about inheritance mapping in the article Mapping Objects to Relational Databases: O/R Mapping In Detail by Scott W. Ambler. I also strongly recommend his books Agile Database Techniques: Effective Strategies for the Agile Software Developer and Building Object Applications That Work.

Data generation plan

I need a lot of data to test my database and see how the mappers perform, and I use data generation plans to simplify this task. Although the data generation plan does a good job, it has no idea about my mapping strategy, so I need to fix the data whenever new data is generated. The following screenshot shows a plan with a modest amount of data.

Data generation plan - Medium

I tried different tricks to get the data generated correctly, but I failed at every attempt. The problem is that there will be primary keys in the party table that exist in both the person and company tables. On the object level this means an object has two types and two identities at the same time, which is impossible, and mappers throw errors when they discover this situation. I also cannot allow rows in the party table that have no related row in the person or company table – remember, we cannot instantiate abstract classes.

Cleaning data

To avoid instantiating abstract classes, I let the plan generate data into the person and company tables; it is not a problem if data is duplicated. I then wrote a simple stored procedure that deletes 50% of the rows from the person table and all the rows from the company table that are also represented in the person table. This way I have 10000 rows in both tables but their identities are different. Here is my stored procedure.


CREATE PROCEDURE CleanData
AS
BEGIN
    SET NOCOUNT ON;

    -- Delete parties that have no rows in person and company tables
    DELETE FROM party
    WHERE party_id NOT IN (SELECT party_id FROM person)
      AND party_id NOT IN (SELECT party_id FROM company)

    DECLARE @i INTEGER
    DECLARE @sql NVARCHAR(MAX)

    -- Delete 50% of rows from person
    SELECT @i = COUNT(*) / 2 FROM person
    SELECT @sql = N'DELETE FROM person WHERE party_id IN (SELECT TOP ' +
        CAST(@i AS NVARCHAR(MAX)) + ' party_id FROM person)'

    EXEC sp_executesql @sql

    -- Delete rows from company that also exist in person table
    DELETE FROM company
    WHERE party_id IN (SELECT party_id FROM person)
END

After generating new data into the database, I run this stored procedure to avoid mapping errors.

Conclusion

Data generation plans are powerful, flexible tools for generating data so you can test your application in close-to-real-life situations. They still lack some support for object-relational mapping needs, but as we saw, it was very easy to work around this problem. The same procedure can be extended or generalized to handle more than one cleaning scenario.

Returning paged results from repositories using PagedResult<T>

During my large database experiment I wrote a simple solution for paged queries that I can use in my repositories. So far my experiments have shown pretty good results, and I think it is time to share some of my code with you. In this posting I show how to create paged results using NHibernate and Entity Framework based repositories.

Repositories and classes

Before going to the repositories, let's sketch my solution architecture a little. We will cover only the orders part of my model here. Here is a class diagram that illustrates how the order repositories are defined and how the PagedResult&lt;T&gt; class fits into the picture.

Class diagram for repositories

These two order repositories are different implementations of the same IOrderRepository interface – one for NHibernate and the other for Entity Framework 4.0.

Here are some classes from the domain model. Take a look at them so you have a better idea of what I am working with.

Class diagram for domain model

As you can see, there is nothing complex so far. The classes are simple and the repositories are still small.

PagedResult<T>

To return paged results from my repositories I defined a class called PagedResult&lt;T&gt; in my core library. It defines all the important properties we need when dealing with paged results. The code is here.


public class PagedResult<T>
{
    public IList<T> Results { get; set; }
    public int CurrentPage { get; set; }
    public int PageCount { get; set; }
    public int PageSize { get; set; }
    public int RowCount { get; set; }
}

The Results property contains the objects on the current page, and the other properties give information about the result set. RowCount tells how many rows there were without paging. Although it is possible to give the properties better names, we are happy with the current ones for now.
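The only derived value is PageCount, which is a ceiling division of RowCount by PageSize – the same calculation both repositories perform. Here is a tiny helper sketch of it (the CalculatePageCount name is mine, not from the repository code):

```csharp
using System;

public static class Paging
{
    // Number of pages needed to show rowCount rows, pageSize rows at a time.
    // Math.Ceiling rounds up so a partially filled last page still counts.
    public static int CalculatePageCount(int rowCount, int pageSize)
    {
        return (int)Math.Ceiling((double)rowCount / pageSize);
    }
}
```

For example, 45 rows with a page size of 10 give 5 pages, while an exact multiple such as 50 rows also gives 5 pages.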

Now let's see how we can use PagedResult&lt;T&gt;. There is a method called ListOrdersForCustomer() that takes a customer ID and returns orders with paging. There is one improvement I made in both repositories: because more methods return PagedResult&lt;T&gt;, I moved all the paging logic into private methods. This way the query methods stay short and there is less duplicated code.

NHibernate: ListOrdersForCustomer()

Here is my NHibernate implementation of IOrderRepository. Okay, it is a partial implementation, but you get my point. What the ListOrdersForCustomer() and GetPagedResultForQuery() methods do is simple:

  • create criteria for the required result type,
  • add conditions to the query,
  • create a count query based on the criteria,
  • create a multi query containing our real query and the count query,
  • send the queries to the database,
  • get the results and initialize PagedResult<T>.

The code is here. Sorry if it looks a little messy.


public class OrderRepository : IOrderRepository
{
    private readonly ISession _session;

    public OrderRepository(NhSession session)
    {
        _session = (ISession)session.CurrentSession;
    }

    public PagedResult<Order> ListOrdersForCustomer(Guid customerId,
        int page, int pageSize)
    {
        var criteria = _session.CreateCriteria<Order>();
        criteria.Add(Criterion.Restrictions.Eq("Customer.Id", customerId));

        var result = GetPagedResultForQuery(criteria, page, pageSize);
        return result;
    }

    private PagedResult<Order> GetPagedResultForQuery(ICriteria criteria,
        int page, int pageSize)
    {
        var countCriteria = CriteriaTransformer.TransformToRowCount(criteria);
        criteria.SetMaxResults(pageSize)
                .SetFirstResult((page - 1) * pageSize);

        var multi = _session.CreateMultiCriteria()
                            .Add(countCriteria)
                            .Add(criteria)
                            .List();

        var result = new PagedResult<Order>();
        result.CurrentPage = page;
        result.PageSize = pageSize;
        result.RowCount = (int)((IList)multi[0])[0];
        var pageCount = (double)result.RowCount / result.PageSize;
        result.PageCount = (int)Math.Ceiling(pageCount);
        result.Results = ((ArrayList)multi[1]).Cast<Order>().ToList();
        return result;
    }
}

In the same way I can implement the other methods that return paged results. If you look at the ListOrdersForCustomer() method you see it is pretty short. So, besides readability, we also avoided code duplication.

Entity Framework: ListOrdersForCustomer()

ListOrdersForCustomer() for Entity Framework follows the same pattern. You may wonder why there is no base repository to handle paged results for all repositories. Yes, it is possible, but as it is not a problem right now, I will deal with this issue later.

For Entity Framework we need to specify a sort order for the results because otherwise it is not able to skip rows – Skip() is only supported for sorted input in LINQ to Entities. A pretty weird limitation, but we have to live with it right now. Here is the code.


public class OrderRepository : IOrderRepository
{
    private readonly Context _context;
 
    public OrderRepository(Context context)
    {
        _context = context;
    }
  
    public PagedResult<Order> ListOrdersForCustomer(Guid customerId,
        int page, int pageSize)
    {
        var results = from o in _context.Orders
                      where o.Customer.Id == customerId
                      orderby o.Id // required, otherwise Skip() fails
                      select o;
 
        var result = GetPagedResultForQuery(results, page, pageSize);
        return result;
    }
 
    private static PagedResult<Order> GetPagedResultForQuery(
        IQueryable<Order> query, int page, int pageSize)
    {
        var result = new PagedResult<Order>();
        result.CurrentPage = page;
        result.PageSize = pageSize;
        result.RowCount = query.Count();

        var pageCount = (double)result.RowCount / pageSize;
        result.PageCount = (int)Math.Ceiling(pageCount);

        var skip = (page - 1) * pageSize;
        result.Results = query.Skip(skip).Take(pageSize).ToList();
 
        return result;
    }
}

The Entity Framework code is a little bit simpler to read, but we cannot forget the mandatory sorting. It is a little bit annoying, but we can survive it.
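To illustrate how either implementation is consumed, here is a hedged sketch of calling code written against the IOrderRepository interface (the method and property names come from this posting; the rest is my illustration):


```csharp
using System;

public static class PagingDemo
{
    // The repository can be the NHibernate or the Entity Framework
    // implementation - the calling code does not care which one.
    public static void PrintOrdersPage(IOrderRepository repository,
        Guid customerId)
    {
        // Ask for the second page, ten orders per page.
        var orders = repository.ListOrdersForCustomer(customerId, 2, 10);

        Console.WriteLine("Page {0} of {1} ({2} orders in total)",
            orders.CurrentPage, orders.PageCount, orders.RowCount);

        foreach (var order in orders.Results)
            Console.WriteLine(order.Id);
    }
}
```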

Conclusion

We saw how to implement paged queries using a PagedResult&lt;T&gt; class and we saw two implementations of IOrderRepository: one for NHibernate and the other for Entity Framework. Using a private helper method for paged results made our code easier to read and also helped us avoid duplicated code.

Using timer based Unit of Work and Command classes to measure repository performance

In my last post, Find out how long your method runs, I introduced a way to measure the speed of code using actions. In this posting I go a step further and give you some ideas about how to create easily measurable code units and how to build measurable scenarios from them. As a foundation I use simplified versions of patterns like Command and Unit of Work.

Here is what I am doing and what I want to achieve:

  • I have a database with many rows,
  • I have a core library that defines interfaces for repositories,
  • I have two implementations of repositories – NHibernate and Entity Framework,
  • I have a simple project for playing with the database and repositories,
  • I want to be able to write simple functional units that perform tasks using repositories,
  • I want to measure how my repositories and tasks perform and how fast their code runs.

I have to warn you about something too…

NB! All the code here is experimental and under heavy construction. Take this code as a snapshot and don’t consider it a complete framework for more serious diagnostics and performance measuring. Over time I will add more commands to my current solution, and it is 100% sure that all things you find here are subject to change.

NB! Patterns of Enterprise Application Architecture by Martin Fowler describes these patterns in the context of business applications, and his patterns are also ready for locks and transactions. I have no support for locks and transactions here, as this code is still experimental and I have not started with them yet. What I have here is a way simpler and very basic implementation of the patterns mentioned.

Here is simple class diagram generated by Visual Studio illustrating some classes I have.

My classes

Now you have some idea of what I have, and it is time to get our hands dirty.

Commands

Let’s start with a short discussion of the Command pattern. The Command pattern allows us to handle code units offering different functionality as commands with the same interface. In its minimal form this interface defines just one method, so commands can be executed.

Here is the interface I created for commands.


public interface ICommand
{
    void Execute();
    bool RequiresWarmupCall { get; }
}

RequiresWarmupCall is a nasty hack that should live somewhere else, but right now it can stay where it is. Be warned that I plan to move it to some other place very soon.

To get some information about how one or another command performed, I defined the interface IDatabaseCommand, which has additional members for the returned row count and the row count of the unpaged result. Here is how IDatabaseCommand is defined.


public interface IDatabaseCommand : ICommand 
{
    long RowsReturned { get; }
    long RowsInUnpagedResult { get; }
}

And here is an example of one of my commands. This command calls the ListOrders() method of the given repository and cycles through all returned results, so I can be sure the results are actually retrieved even when a repository supports lazy loading (and they all do).


public class ListOrdersCommand : IDatabaseCommand
{
    private readonly IOrderRepository _repository;
    private readonly int _page;
    private readonly int _pageSize;
 
    public ListOrdersCommand(IOrderRepository repository,
        int page, int pageSize)
    {
        _repository = repository;
        _page = page;
        _pageSize = pageSize;
    }
 
    public void Execute()
    {
        var results = _repository.ListOrders(_page, _pageSize);
 
        // Empty loop body on purpose: touching every order line
        // forces lazy-loaded collections to actually load.
        foreach (var order in results.Results)
            foreach (var line in order.OrderLines)
                ;
 
        RowsReturned = results.Results.Count;
        RowsInUnpagedResult = results.RowCount;
    }
 
    public bool RequiresWarmupCall
    {
        get { return true; }
    }
 
    public long RowsReturned
    {
        get;
        private set;
    }
 
    public long RowsInUnpagedResult
    {
        get;
        private set;
    }
}

In the Execute() method I can, of course, do more than I did here. This is just one of the simpler commands.

Unit of Work

To be able to run a sequence of commands I defined my own Unit of Work. Okay, it is a simple and non-transactional unit of work, but later I will add transaction support there too. Here is my Unit of Work as it is defined.


public class UnitOfWork
{
    private readonly List<ICommand> _commands = new List<ICommand>();
 
    public virtual void Commit()
    {
        foreach (var command in Commands)
            command.Execute();
    }
 
    public IList<ICommand> Commands
    {
        get { return _commands; }
    }
 
    public void Merge(UnitOfWork uow)
    {
        foreach (var command in uow.Commands)
            Commands.Add(command);
    }
}

It is simple and straightforward, wide open to the world, and I am sure we can find more issues with it if we want. But for now it is enough, and as I am the only customer of this code, I will modify it later when there is a need for it.

To measure the work units, I extended UnitOfWork and created the class TimerBasedUnitOfWork.


public class TimerBasedUnitOfWork : UnitOfWork
{
    public override void Commit()
    {
        var stopper = new Stopwatch();
 
        Time = 0;
        TimeTable = new Dictionary<ICommand, long>();
 
        foreach(var command in Commands)
        {
            if (command.RequiresWarmupCall)
                command.Execute();
 
            stopper.Reset();
            stopper.Start();
            command.Execute();
            stopper.Stop();
 
            TimeTable.Add(command, stopper.ElapsedMilliseconds);
            Time += stopper.ElapsedMilliseconds;
        }
    }
 
    public long Time { get; private set; }
    public Dictionary<ICommand, long> TimeTable { get; private set; }
}

This class has the same functionality as its parent, but it is also able to measure the time that commands take to run. It supports warm-up calls too: before measuring a command that reads data from the database, I can make one unmeasured call during which the mapper and the database can do all their preparation work.

Comparing mappers performance

Now let’s see how I compare the mappers’ performance in my test application.


static void Main()
{
    var uow = OrdersForProduct(ProductId);
    uow.Commit();
 
    foreach (var entry in uow.TimeTable)
    {
        var command = entry.Key as IDatabaseCommand;
        if (command == null)
            throw new ArgumentNullException();
 
        Console.Write(command.GetType());
        Console.Write(": ");
        Console.Write(entry.Value);
        Console.Write(" ms, rows: ");
        Console.Write(command.RowsReturned);
        Console.Write(" / ");
        Console.WriteLine(command.RowsInUnpagedResult);
    }
 
    Console.WriteLine("\r\nPress any key to exit ...");
    Console.ReadLine();
}
 
private static TimerBasedUnitOfWork OrdersForProduct(Guid productId)
{
    var nhOrderRepository = new NhRepository.OrderRepository(Session);
    var efOrderRepository = new EfRepository.OrderRepository(Context);
 
    var nhCommand = new OrdersForProductCommand(nhOrderRepository, productId);
    var efCommand = new OrdersForProductCommand(efOrderRepository, productId);
 
    var uow = new TimerBasedUnitOfWork();
    uow.Commands.Add(nhCommand);
    uow.Commands.Add(efCommand);
 
    return uow;
}

Okay, this code should be illustrative enough to give you an idea of what I have built.

How to use it?

So what can I do using my Unit of Work and ICommand? I can do a lot of things:

  • I am able to measure how long my units run,
  • I can define simple commands that carry one specific functionality,
  • I can use different repositories with my commands,
  • I can combine my commands to units of work and handle sets of commands as one unit to execute and measure,
  • I can merge separate units of work to create more complex API usage scenarios.

For example, I can create the following commands:

  • ListOrdersCommand – lists the first page of the orders result set,
  • ListOrdersForCustomerCommand – lists orders for a given customer and selects the second page,
  • SearchProductByNamePart – returns all products that contain a given string in their names,
  • AddOrderForCustomerCommand – adds a new order for a customer.

From these commands I can build up a scenario like inserting a new order:

  • user opens orders form and first 10 orders are listed,
  • user filters orders by customer and first 10 orders for customer are listed,
  • user opens new order form and selects product for order line,
  • user saves new order,
  • user is returned back to orders list where active filter is restored.
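A hedged sketch of how this scenario could be composed from the commands above (only the command class names come from the list; the constructor signatures and the IProductRepository parameter are my guesses for illustration):


```csharp
private static TimerBasedUnitOfWork InsertOrderScenario(
    IOrderRepository orders, IProductRepository products, Guid customerId)
{
    var uow = new TimerBasedUnitOfWork();

    // user opens orders form - first 10 orders are listed
    uow.Commands.Add(new ListOrdersCommand(orders, 1, 10));
    // user filters orders by customer - first 10 orders for customer
    uow.Commands.Add(new ListOrdersForCustomerCommand(orders, customerId, 1, 10));
    // user opens new order form and searches for a product
    uow.Commands.Add(new SearchProductByNamePart(products, "coffee"));
    // user saves the new order
    uow.Commands.Add(new AddOrderForCustomerCommand(orders, customerId));
    // user is returned to the orders list with the filter restored
    uow.Commands.Add(new ListOrdersForCustomerCommand(orders, customerId, 1, 10));

    return uow;
}
```

Committing the returned unit of work then runs and measures the whole scenario in one go.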

I have no need for a user interface: I can easily define functional units to perform different tasks, and I can measure how well or badly my repositories perform. So far the most time-consuming part has been messing with NHibernate mappings.

Find out how long your method runs

I am making some experiments with a large database and different O/R-mappers. To make it easier to measure the time that code takes to run, I wrote a simple command class that uses the Stopwatch class and measures how long an action takes to run. In this posting I will show you my class and explain how to use it.

Here is the code of my class. You can take it and put it in some temporary project to play with it.


/// <summary>
/// Class for executing code and measuring the time it
/// takes to run.
/// </summary>
public class TimerDelegateCommand
{
    private readonly Stopwatch _stopper = new Stopwatch();
 
    /// <summary>
    /// Runs the given action and measures time. Inherited classes
    /// may override this method.
    /// </summary>
    /// <param name="action">Action to run.</param>
    public virtual void Run(Action action)
    {
        _stopper.Reset();
        _stopper.Start();
 
        try
        {
            action.Invoke();
        }
        finally
        {
            _stopper.Stop();
        }
    }
 
    /// <summary>
    /// Static version of the action runner. Can be used for "one-line"
    /// measurements.
    /// </summary>
    /// <param name="action">Action to run.</param>
    /// <returns>Returns the time that the action took to run,
    /// in milliseconds.</returns>
    public static long RunAction(Action action)
    {
        var instance = new TimerDelegateCommand();
        instance.Run(action);
        return instance.Time;
    }
 
    /// <summary>
    /// Gets the action running time in milliseconds.
    /// </summary>
    public long Time
    {
        get { return _stopper.ElapsedMilliseconds; }
    }
 
    /// <summary>
    /// Gets the Stopwatch instance used by this class.
    /// </summary>
    public Stopwatch Stopper
    {
        get { return _stopper; }
    }
}

And here are some examples of how to use it. Notice that I don’t have to write a method for every piece of code I want to measure – I can also use anonymous delegates. And one more note: the Time property returns time in milliseconds!


static void Main()
{
    long time = TimerDelegateCommand.RunAction(MyMethod);
    Console.WriteLine("Time: " + time);
 
    time = TimerDelegateCommand.RunAction(delegate
            {
                // write your code here
            });
    Console.WriteLine("Time: " + time);
 
    Console.WriteLine("\r\nPress any key to exit ...");
    Console.ReadLine();
}

You can also extend this command and write your own logic for running and measuring code. In my next posting I will show you how to apply the Command pattern for some more powerful measuring of processes.
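As a quick example of such an extension (my own illustration building on the class above, not part of the original), a subclass could log every measurement as it happens:


```csharp
using System;

public class LoggingTimerCommand : TimerDelegateCommand
{
    public override void Run(Action action)
    {
        base.Run(action); // measure exactly as the base class does
        Console.WriteLine("Action took {0} ms", Time);
    }
}
```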

Experiment: List<T> internals and performance when adding new elements

Lists and their performance have been a hot discussion topic for years. I have seen many examples and read a lot of stories about the List&lt;T&gt; class, and my conclusion is that there are too many pointless opinions about it. In this posting I will show you how List&lt;T&gt; performs when adding elements to it and how it works internally. Be prepared for an interesting discussion!

First I will show how I measured List&lt;T&gt; performance and what results I got. After that we will see how List&lt;T&gt; is optimized internally. Some changes happen during these optimizations, and sometimes people consider them scary. I will calm you all down: nothing as horrible is going on as some authors say. At the end of the posting I will draw some quick conclusions from my experiment and investigations.

My experiment

I wrote a for loop that adds 10 million integers to a List&lt;int&gt;. I made eight experiments with different initial List&lt;T&gt; capacities to see how initial capacity affects performance. In all of my tests I made sure I didn’t make the mistakes I described in my previous posting, Common mistakes made when measuring the speed of code. Here is the code I used to measure List&lt;T&gt; performance.


static void Main()
{
    var watch = new Stopwatch();
    var cycles = Math.Pow(10, 2);
    var elements = Math.Pow(10, 7);
    var listCapacity = 10000000; // (int)elements;
    var times = 0D;
 
    for (var cycle = 0; cycle < cycles; cycle++)
    {
        var inputList = new List<int>(listCapacity);
 
        watch.Reset();
        watch.Start();
 
        for (var i = 0; i < elements; i++)
        {
            inputList.Add(i);
        }
 
        watch.Stop();
 
        times += watch.ElapsedMilliseconds;
    }
 
    Console.WriteLine("Time: " + times / cycles);
    Console.ReadLine();
}

With initial capacity 1 I got 0.78 seconds on average to add 10 million elements to the List&lt;T&gt;, and with initial capacity 10 million I got 0.33 seconds on average. So there is a twofold difference, although in absolute terms the difference is so small that we hardly notice it. Here are the results.

Adding elements to List<T>: Initial capacity and elements adding time

As you can see, it does not matter very much how we choose the initial capacity of the List&lt;T&gt;. Whether the initial capacity is small or large enough to hold all elements without any internal buffer resizing, the difference stays modest. By default, the initial capacity is set to four elements.

How is List&lt;T&gt; optimized?

The source code of the .NET Framework is available for everyone, and my Visual Studio 2010 is configured to download .NET Framework symbols and source code when I debug my code. The Add() method of List&lt;T&gt; is as follows.


public void Add(T item)
{
    if (_size == _items.Length) EnsureCapacity(_size + 1);
    _items[_size++] = item;
    _version++;
}

Before adding an element to the internal buffer there is a capacity check. If the element count reaches the capacity of the list, the capacity is increased automatically. Finding the new capacity happens in an internal method called EnsureCapacity().


private void EnsureCapacity(int min)
{
    if (_items.Length < min)
    {
        int newCapacity;
        if (_items.Length == 0)
            newCapacity = _defaultCapacity;
        else
        {
            newCapacity = _items.Length * 2;
        }
        if (newCapacity < min) newCapacity = min;
        Capacity = newCapacity;
    }
}

As we can see, the new capacity is twice the previous capacity. But as the diagram above showed, we don’t lose much even if we don’t set the capacity to the final length of the list in the first place.
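We can also watch this doubling from the outside by checking the Capacity property while adding elements (a quick sketch; the sequence 4, 8, 16, … is what the EnsureCapacity() code above produces for a list created without an initial capacity):


```csharp
using System;
using System.Collections.Generic;

class CapacityDemo
{
    static void Main()
    {
        var list = new List<int>();
        var lastCapacity = list.Capacity; // 0 for an empty list

        for (var i = 0; i < 100; i++)
        {
            list.Add(i);
            if (list.Capacity != lastCapacity)
            {
                // prints capacities 4, 8, 16, 32, 64, 128
                Console.WriteLine("Count {0}: capacity {1}",
                    list.Count, list.Capacity);
                lastCapacity = list.Capacity;
            }
        }
    }
}
```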

What happens when list capacity changes?

When the capacity is increased, something happens in the setter of Capacity. In my opinion these operations should be part of EnsureCapacity() and not hidden in a setter. Here is how the internal buffer is resized to match the new capacity.


public int Capacity
{
    get
    {
        return _items.Length;
    }
    set
    {
        if (value != _items.Length)
        {
            if (value > 0)
            {
                T[] newItems = new T[value];
                if (_size > 0)
                {
                    Array.Copy(_items, 0, newItems, 0, _size);
                }
                _items = newItems;
            }
            else
            {
                _items = _emptyArray;
            }
        }
    }
}

We can see that this is the place where the infamous array copying happens. Now we know how list capacity is optimized and how the internal buffer is increased. Let’s see how the number of array copies changes with the element count of the list.

Adding elements to List<T>: Initial capacity and array copies count

Comparing this diagram with the previous one raises a new question: how can array copying affect overall performance so little? We can practically see a correlation only in the last part of the diagrams. What is going on?

How is the resizing of the internal array implemented?

Okay, let’s open Reflector and hunt Array.Copy() as far down as we can. Pretty soon our journey ends with a call declared like this.


[MethodImpl(MethodImplOptions.InternalCall)]
[ReliabilityContract(Consistency.MayCorruptInstance, Cer.MayFail)]
internal static extern void Copy(Array sourceArray, int sourceIndex,
                                 Array destinationArray, int destinationIndex,
                                 int length, bool reliable);

This is the final point we can reach. As a picture, we can show it this way.

How List&lt;T&gt; increases internal buffer size

The big question is what happens in this external method. I found a hint when I measured the performance of different array copying methods. All safe managed code methods gave me pretty similar results; the extreme performance gain appeared only when I moved to unsafe code and pointers.

When I compare the performance of Array.Copy() with my own pointer-based copy method, their performance is almost equal. As I know no other technique that ends up in external calls and gives extremely fast results, I am quite sure that internally the .NET Framework copies arrays using something similar to what I tried out with pointers.

Conclusions

Based on previous discussion I have three conclusions.

  • The initial capacity of a list affects its performance. When we avoided internal buffer resizing we got almost twice better results than with buffer corrections. Although the performance is better, it is not so much better that we should work hard to find out the final capacities of our lists at any price.
  • List&lt;T&gt; is optimized very well. When the list capacity is reached, the capacity is doubled, and that leaves enough room to add new elements without resizing the internal buffer. For 10 million elements with initial capacity 1 we got only 24 array copy operations.
  • Array copying is not a real problem. Array copying in the .NET Framework is (expectedly) implemented in unsafe code using pointers. You can try out how fast Array.Copy() is compared to custom code that copies elements from one array to another. Array copying is extremely fast and it is not the bottleneck in List&lt;T&gt;’s work.
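If you want to try that comparison yourself, here is a rough sketch (not a rigorous benchmark; timings will of course vary by machine):


```csharp
using System;
using System.Diagnostics;

class ArrayCopyComparison
{
    static void Main()
    {
        var source = new int[10000000];
        var target = new int[source.Length];
        var watch = new Stopwatch();

        // Framework-provided copy
        watch.Start();
        Array.Copy(source, 0, target, 0, source.Length);
        watch.Stop();
        Console.WriteLine("Array.Copy: {0} ms", watch.ElapsedMilliseconds);

        // Naive element-by-element copy in managed code
        watch.Restart();
        for (var i = 0; i < source.Length; i++)
            target[i] = source[i];
        watch.Stop();
        Console.WriteLine("Manual loop: {0} ms", watch.ElapsedMilliseconds);
    }
}
```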