Fetch performance of various .NET ORM / Data-access frameworks

Update:

I've added an additional test result, namely for Linq to Sql with change tracking switched off (in the answers, at the bottom of the article). I've also updated the graph so it's now partitioned: the frameworks which do change tracking and the ones which don't are now grouped separately. DbDataAdapter with DataTable is added to the change tracking set, as a DataTable does change tracking.

Original post:

I thought long and hard about whether I should blog about this, but in the end I decided it would benefit everyone if I posted about it and was as open as possible. This article is about fetch performance, namely how quickly a given ORM or data-access framework can fetch a given set of data from a database into a set of objects. The results presented here are obtained using publically available code, run on a specific setup which is explained below. I'll go deeper into the results as well and will try to counter some obvious remarks which are always made when benchmarks are presented.

I'll stress again: this benchmark is solely a result of fetch performance measurement, using simple queries which should barely take any time in the DB itself. If your favorite framework gets slaughtered here because it's very slow, and you simply don't believe it, you can run the tests yourself by cloning the repository linked above.

One thing to keep in mind as well is that the actual raw numbers given are unique to the test setup described below. That's OK, as the actual raw numbers aren't that important; what's important is how the frameworks relate to one another. If the hardware is very fast, the numbers will be lower, but the slowest framework will still be the slowest and the fastest framework will still be the fastest, with the same relative margins.

The setup

As this benchmark is about measuring fetch performance, the code run to fetch data should not be constrained by the database taking away cycles on the system. Therefore the benchmark obtains the data from a database on the network. Nothing used in the setup is particularly new or high-end, except the code of the frameworks of course. Whether a part is high-end or not isn't really important either, as every tested framework has the same elements to work with: the same network, the same database, the same .NET framework, the same hardware.

Server: Windows XP, SQL Server 2005, running in a VM with 1 CPU and 1.5GB RAM on VMWare Server on a Core2Duo box.

Client: Windows 8.0 32bit, .NET 4.5.1 on a Core2Quad @ 2.4GHz with 4GB RAM

Network: 100BASE-T

I admit, that's not state-of-the-art material, but I assure you, it doesn't matter: all frameworks have to work with this setup, and slowness of a framework will still show itself, relative to the others. If framework X is 5 times slower than fetching the data using a DataTable, it will still be 5 times slower than that on the fastest hardware, as the DataTable fetch will also be much faster on the fastest hardware.

The bottlenecks in the frameworks will show their ugly heads anyway, whether the hardware is the latest and greatest or a VM. For the people wondering why I picked XP: it's my old dev partition which I transformed into a VM to keep the complete environment intact for support for older LLBLGen Pro versions, in this case v1 and v2, and it happens to have a good AdventureWorks example DB installed. The query spends barely 2ms in the DB, so it's more than capable of serving our tests.

As database I've chosen to use AdventureWorks on SQL Server. It has a reasonable amount of data to work with and more than a couple of tables, so it's not a simple demo / mickey mouse database but closer to a database used in real-life scenarios.

The test consists of fetching all 31465 rows from a single table, SalesOrderHeader. This table was chosen because it has more than a few columns and a variety of types, so the materializers have to work more than with a 3-column table with only integer values. It also has a couple of relationships with other tables which makes it a good candidate to see whether the ORM used has a problem with relationships in the context of a set fetch when entities were mapped on these tables.

I also chose this table because the amount of data isn't that small, but not massive either, making it a good average test set for performance testing: if there are slow parts in the fetch pipeline of a framework, this kind of entity will put the spotlight on them easily.

All tests were run on .NET 4.5.1 using release builds. No ngen was used, just the default assemblies. The database was warmed up, as well as the CLR, with fetches whose results were thrown away. For the averages, the fastest and the slowest times were discarded.
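The averaging used can be sketched in a few lines; a minimal illustration (a hypothetical helper, not the actual benchmark code) of discarding the single fastest and slowest time before averaging:

```csharp
using System;
using System.Linq;

public static class Timings
{
    // Averages a set of measured run times after discarding the single
    // fastest and the single slowest run, as done for the benchmark results.
    public static double TrimmedAverage(double[] millisecondsPerRun)
    {
        if (millisecondsPerRun.Length < 3)
            throw new ArgumentException("Need at least 3 runs to trim both ends.");
        var kept = millisecondsPerRun
            .OrderBy(ms => ms)
            .Skip(1)                                  // drop the fastest run
            .Take(millisecondsPerRun.Length - 2);     // drop the slowest run
        return kept.Average();
    }
}
```

This way a one-off cold start (or a suspiciously fast cached run) doesn't skew the reported average.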

The included ORM / Data-access frameworks

I've included the following ORM / Data-access frameworks and have specified my reasons why:

  • LLBLGen Pro v4.1. Reason for inclusion is because I wrote it and this benchmark was started to see how fast my fetch logic improvements were stacking up against the competition.
  • Entity Framework v6, latest release build from nuget. Reason for inclusion is because Microsoft presents it as the standard for data-access in .NET so it's fair to include it to see whether it is indeed the standard to beat when it comes to fetch performance.
  • NHibernate v3.3.3.4, latest release build from nuget. Reason for inclusion is because a lot of people still use it and it was once the most used ORM on .NET.
  • Linq to Sql. Latest version included in .NET 4.5.1. Reason for inclusion is because it was Microsoft's first ORM shipped for .NET and still a go-to data-access framework for many people as it's simple and highly optimized.
  • Dapper. Version from October 2013. Reason for inclusion is that it is considered (one of) the fastest Micro-ORMs on .NET.
  • DbDataAdapter with DataTable. Latest version included in .NET 4.5.1. Reason for inclusion is because it's still a core part of how a lot of developers do data-access in .NET, working with typed datasets and stored procedures, and also because of its speed of fetching sets of tabular data.
  • Hand-coded fetch with DbDataReader and a hand-written POCO. Latest version of the DbDataReader code included in .NET 4.5.1. Reason for inclusion is because it shows how fast hand-written, hand-optimized code is and thus in theory how much performance a full ORM framework spills elsewhere on features, overhead or slow code in general.

The DataTable and hand-written fetches are more or less guidelines to see what to expect and how close the other frameworks come to their results. They give a good insight into what's possible and thus show that e.g. expecting way faster fetches is not reasonable, as there's not much more to gain than a tight loop around a DbDataReader. In the results you'll see that Dapper did manage to get very close, and in the raw results you'll see it sometimes was faster, which is a remarkable achievement.
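To illustrate that 'tight loop', here's a minimal sketch of what such a hand-coded fetch looks like; only a few of SalesOrderHeader's columns are shown and the class and query are illustrative, not the actual benchmark code:

```csharp
using System;
using System.Collections.Generic;
using System.Data.SqlClient;

// Hand-written POCO; remaining columns omitted for brevity.
public class SalesOrderHeader
{
    public int SalesOrderId { get; set; }
    public DateTime OrderDate { get; set; }
    public decimal TotalDue { get; set; }
}

public static class HandCodedFetcher
{
    public static List<SalesOrderHeader> FetchAll(string connectionString)
    {
        var result = new List<SalesOrderHeader>();
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "SELECT SalesOrderID, OrderDate, TotalDue FROM Sales.SalesOrderHeader",
            connection))
        {
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    // Ordinal-based access: no name lookups, no boxing
                    // beyond what the provider does internally.
                    result.Add(new SalesOrderHeader
                    {
                        SalesOrderId = reader.GetInt32(0),
                        OrderDate = reader.GetDateTime(1),
                        TotalDue = reader.GetDecimal(2)
                    });
                }
            }
        }
        return result;
    }
}
```

There's essentially nothing left to remove here, which is why this code sets the practical lower bound the frameworks are compared against.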

All used ORM frameworks have the full AdventureWorks model mapped as entities, all tables have a corresponding entity class mapped and all relationships are modeled as well. No inheritance is modeled, as not all of the frameworks support it (there are some inheritance relationships present in the AdventureWorks schema) and the used entity isn't in an inheritance hierarchy so it didn't make much sense to add it.

The results

The raw results are located here. I've replicated the average times below in tabular and graph form. Each operation consisted of fetching 31465 entities from the database with one query and materializing them into individual objects which were stored in a collection. The average times given are the averages of 10 operations per framework, where the slowest and fastest operation were ignored.

ORM / Data-access framework Average time (ms)
Entity Framework v6 3470.22
Entity Framework v6 with AsNoTracking() 689.75
Linq to Sql 746.88
LLBLGen Pro v4.1 786.89
LLBLGen Pro v4.1 with resultset caching 311.50
Dapper 600.00
NHibernate 3.3.3.4 4635.00
Hand-coded into custom objects 587.00
DbDataReader into DataTable 598.38

(Graph: average fetch times per framework, partitioned by change tracking support)

The results are not pretty, at least not for Entity Framework and especially not for NHibernate. The micro-ORM Dapper and the DataTable / hand-written fetch code are very fast, as expected; all full ORMs are slower than those three. The resultset caching result, which is by far the fastest, fetches the resultset once and then re-materializes the objects from that set, so it still loops through the raw rows obtained from the database to materialize new entities; it just doesn't have to go to the DB again and doesn't suffer from the network / DB latency.

A couple of things stand out. I'll address them below.

  • NHibernate is extremely slow at fetching a set of entities. When I add just one entity type to the model (SalesOrderHeader) instead of the full model, the average is around 1500ms, so there's definitely something going on with respect to having a model with relationships like AdventureWorks and fetching entities in NHibernate. I did a profile run to see whether I made a terrible mistake somewhere or the NHibernate team messed up big time. The short answer is: it's not me. See below for more details.
  • Entity Framework is also extremely slow. I reported this to the Entity Framework team some time ago; they have acknowledged it and have created a workitem for it. It turns out this issue is also present in Entity Framework 5. When there's just 1 entity in the model, Entity Framework manages to get close to an 1100ms average, so similar to NHibernate there's something going on with respect to relationships in a model and fetching. The AsNoTracking call bypasses all benefits of the ORM and simply converts the rows into objects, like Dapper does, and shows what potential speed Entity Framework has under the hood if it doesn't have to do anything expected from a good ORM.
  • Linq to Sql and LLBLGen Pro are close. I am very pleased with this result, considering the amount of work I've put into optimizing the query pipeline in the past years. I'm also pleased because the amount of features our framework has doesn't make it much slower than a framework which offers much less, like Linq to Sql. To materialize an entity, LLBLGen Pro has to do the following for each row:

    • call into the Dependency Injection sub system to inject objects;
    • call into a present authorizer to test whether the fetch of this row is allowed;
    • call into a present auditor to log the fetch of a given row;
    • call into the uniquing sub system to make sure no duplicates are materialized;
    • call into the conversion sub system to convert any DB values to specific .NET types if type conversions are defined;
    • call into the string cache to avoid duplicate instances of the same string in memory, avoiding memory bloat;
    • create a new entity class instance and store the values of the entity into it.

    What's also interesting (not shown) is that if there's just one entity in the Linq to Sql model, it's very close to Dapper's speed. So a bigger model causes a speed penalty there too, however not as big as with Entity Framework or NHibernate.
  • With a somewhat slow connection to the DB, it can be efficient to cache a resultset locally. With faster network connections, this is of course mitigated to a certain point.
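For clarity, the difference between the two Entity Framework rows in the results table is a single call on the query. A minimal sketch of both variants; the context and entity classes are illustrative names, not the actual benchmark code:

```csharp
using System;
using System.Data.Entity;   // EntityFramework 6 NuGet package
using System.Linq;

// Illustrative entity; remaining columns omitted for brevity.
public class SalesOrderHeader
{
    public int SalesOrderID { get; set; }
    public DateTime OrderDate { get; set; }
}

// Illustrative code-first context mapped to AdventureWorks.
public class AdventureWorksContext : DbContext
{
    public DbSet<SalesOrderHeader> SalesOrderHeaders { get; set; }
}

public static class EfFetch
{
    // Normal fetch: every materialized entity is attached to the change
    // tracker so later modifications can be persisted.
    public static int FetchTracked(AdventureWorksContext ctx)
    {
        return ctx.SalesOrderHeaders.ToList().Count;
    }

    // AsNoTracking(): rows are converted straight into objects, bypassing
    // the change tracker, similar to what a micro-ORM like Dapper does.
    public static int FetchUntracked(AdventureWorksContext ctx)
    {
        return ctx.SalesOrderHeaders.AsNoTracking().ToList().Count;
    }
}
```

The untracked variant is read-only by nature: the fetched entities aren't known to the context, so saving changes to them later requires attaching them first.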

Answers to some expected remarks / questions

Below I'll try to answer as many questions you might have after reading this, as well as some remarks which are to be expected. This isn't the first benchmark I've done in the long time I've been writing ORM frameworks.

"You're biased!"

The framework I work on full time is indeed included in the test. However, I didn't skew the test to make it win: I picked a reasonable scenario which will show performance bottlenecks in code used in a lot of LoB apps every day. I also disclosed the full sourcecode archive on github so you can see for yourself what I used for testing and whether I made terrible mistakes.

"Would you have published the results if your own framework was dead last?"

I don't think so; I am honest about that. But perhaps another person would have. This benchmark was started to see whether a prototype I made had a chance to be successful; LLBLGen Pro v3.5 code was much slower than the results offered by v4.1 (around 1280ms on average), and I was pleased with the work done to make fetching data much faster in v4.0. It took months to get there, but the changes were worth it: from 1280ms on average to 790ms on average was better than I'd hoped for. To show the world what my hard work had resulted in, I published these results, to inform database-using developers that reality is likely different from what they think it is.

"Why this particular test? No-one else tests this"

This test is nothing new. In the earlier days of .NET ORMs we used these kinds of fetch benchmarks all the time, especially because early on Microsoft pushed typed datasets and stored procedures, which leaned a lot on the tabular data fetch performance of the DbDataAdapter. To get close to that performance as an ORM, one had to optimize the pipeline a lot. It suggests the Entity Framework and NHibernate teams have gotten lazy in this respect, as they apparently never tested fetch performance of their frameworks in this way. Solid, good fetch performance is essential for an ORM to be taken seriously by a developer, as every bottleneck in an ORM will be a potential bottleneck in the application using it. Performance testing is therefore key for good quality control. Personally I'm very surprised that the two biggest ORMs on .NET perform so poorly.

"I never fetch that many entities, so it doesn't apply to me"

Actually it does. You might not fetch 31000 entities in one go, but fetching 3 entities 10000 times gives you the same performance degradation: if your application does a lot of fetching, every fetch that is slow slows the application down. If this happens in an application which relies on good data-access performance, it can bog down the application under load when the data-access performance is sub-par or downright poor, when it didn't have to be.

"Linq to Sql also has a mode where you can switch off change tracking"

The Linq to Sql result with change tracking disabled was measured after the rest of the article was already written, so it would have been unfair to add it as-is without rerunning all the numbers. Instead I'll briefly mention the numbers here: in a new run, Linq to Sql normal: 738.63ms; Linq to Sql without change tracking: 649.25ms.
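For those who want to try it: switching off change tracking in Linq to Sql is a single property on the DataContext. A minimal sketch with an illustrative entity mapping (not the benchmark code):

```csharp
using System;
using System.Data.Linq;
using System.Data.Linq.Mapping;
using System.Linq;

// Illustrative mapped entity; remaining columns omitted for brevity.
[Table(Name = "Sales.SalesOrderHeader")]
public class SalesOrderHeader
{
    [Column(IsPrimaryKey = true)]
    public int SalesOrderID { get; set; }

    [Column]
    public DateTime OrderDate { get; set; }
}

public static class LinqToSqlFetch
{
    public static int FetchReadOnly(string connectionString)
    {
        using (var context = new DataContext(connectionString))
        {
            // Must be set before any query executes; entities fetched this
            // way can no longer be used to write changes back via this context.
            context.ObjectTrackingEnabled = false;
            return context.GetTable<SalesOrderHeader>().ToList().Count;
        }
    }
}
```

As with EF's AsNoTracking, this trades the ORM's persistence features for raw materialization speed, which is why it belongs in the non-change-tracking group in the graph.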

"What should I learn from these results?"

Firstly, you can see what the relative performance of framework X is compared to framework Y in this particular situation of fetching a set of entities. Secondly, you can see that there is a hard limit on what performance you can expect from a data-access framework: reading data into entity objects takes time, no matter what you use. This also means that if your application needs to read data faster than e.g. Dapper can give it to you, you likely will not be able to solve that problem in a regular way. Thirdly, if your application feels 'slow' when obtaining data from the database while you're using a framework which is actually pretty fast at fetching data, it might very well be that the slowness is elsewhere and not in the data-access / ORM framework used.

"I wrote my own ORM, it's likely faster"

I've been doing this for a very long time and I've dealt with many developers who wrote ORMs in the past decade or so. Unless you've invested a lot of time in optimizing the fetch pipeline, chances are you made all the rookie mistakes we all made in the beginning. At least that's what I've learned and seen in the past 10-12 years of writing ORMs for .NET. But if you're confident, fork the github code and add your solution to it so others can see how quick your code is.

"You didn't include caching for NHibernate"

I indeed didn't, as the test wasn't meant to show how fast caching is in various systems, but how fast entity materialization is. An entity cache could help with NHibernate, but it wouldn't make the slow materialization of entities any less slow. I included the resultset caching results of LLBLGen Pro to show how fast it is in this context, as it does materialize entities again, but it's not the main focus of the benchmark.

"NHibernate is very fast, you messed up somewhere"

Well, I knew they weren't the fastest, but even I didn't expect them to be this slow. So to be certain it wasn't me who made a terrible mistake somewhere, I profiled the NHibernate query execution using dotTrace 4.5.1. One look at the profile results explains it: the code internally is so over-engineered, there's no real overview of what's going on, as it calls code all over the place and it calls a lot of methods a lot of times: there are 31K entities in this set, yet it manages to call a rather expensive method over half a million times. Here's a screenshot of the trace. It's a 'warm call', so pure fetching, no mapping file import done anymore: (click to see the full image)

(Screenshot: dotTrace profile of the NHibernate fetch)

Having high performance means you have to engineer for that too. It doesn't come out of the box. If you never profile your code, you don't know what's going on at runtime, especially with code that's so fragmented in methods all over the place like with NHibernate. If someone from the NHibernate team wants the dotTrace profile snapshot file (15MB), let me know.  

"You made a mistake with framework X"

Please show me what I did wrong and send me a pull request so I can correct it.

"Did you take into account start-up times of framework X?"

Yes, the slowest and fastest time for any framework are ignored, so startup time isn't taken into account in the averages. 

"Why didn't you include framework Y? It's way faster!"

I tried to include the most used frameworks, and there aren't a lot left anymore. I could have added some more, but I leave that to the owners of the frameworks to do via the github repository.

"Isn't it true that the less features you have, the faster you can be?"

Fetching entities fast comes down to a couple of things: creating an object to store the data in should be very fast, and the overhead per row and per field value should be very low. It's tricky to get all three down to the bare minimum in generic code, but one way to do it is to do very little or nothing at all per row or per field. As soon as you add more features, each row is affected and this adds up, which is then extrapolated into the results of fetching many rows. So having fewer features means less overhead per row, which means faster fetch speeds. This doesn't mean a fast framework is feature-less, though it might be. I know that I will never get LLBLGen Pro as fast as Dapper, simply because Dapper does way fewer things than LLBLGen Pro does with each entity materialization cycle, but I try ;)

"Lies, damn lies and benchmarks"

True, benchmarks often turn out to be equal to lies or marketing if they're given without context, without all the surrounding information about what exactly is benchmarked, and why. As I mentioned earlier, the results in this test are not usable without the context in which they were created: it's a single test, highlighting a single feature, performed on a single setup. The numbers only have meaning with respect to that setup; however, the relative performance differences are usable outside the setup: Entity Framework is ~5 times slower than LLBLGen Pro when it comes to fetching sets of entities, but how slow exactly in milliseconds depends on the particular setup your application runs on.

I hope this all helps open developers' eyes to how fast some of the frameworks out there really are with respect to entity fetch operations. If you have more questions, please post them in the replies so I can answer them.

37 Comments

  • I would be interested in seeing the results on a newer version of SQL Server. I know EF has been optimized for 2008R2 and newer.

  • You should have tried PLINQO as well.

  • @H Lord: that won't make a difference: the query generated is the same (a simple select of field names from a table) and the code used to fetch the data is the same as well. The performance bottleneck they have isn't related to the SQL Server version used, but to the work they do per row with respect to the entity model in memory: the more relationships and entity types, the slower it gets. A profile of the code reveals that. The EF team is aware of this and, as you can see in the linked issue on codeplex, they have scheduled an update on this for 6.1. Personally I think it might be hard to fix this 100% as it does look like it's inherent to their internal design, but we'll see.

  • @Blake: I answered that in the answers given at the end of the article: I haven't included every framework on the planet, just a few I found important. If you want it to be included in a follow up article, please fork the code on github, add the code and send me a PR so it's included in a follow up article with new numbers.

  • Thanks for posting this.

    It would be interesting to see not only the performance when requesting all the records, but also increments along the way. What I mean is, the performance when requesting, 10%, 20%, 50%, 75% and 100% of the records.

  • @Asher: The only difference would be the time generating the query, as paging on the server has no effect in slowness on the client with respect to materializing entities. The paging query would simply return the entity rows to materialize, e.g. the top 50% and the real performance bottleneck is then still in place: converting rows into objects.

  • Very interesting, indeed.

  • Didn’t really read all of it, but:

    The test consists of fetching all 31465 rows from a single table, SalesOrderHeader. This table was chosen because it has more than a few columns and a variety of types, so the materializers have to work more than with a 3-column table with only integer values. It also has a couple of relationships with other tables which makes it a good candidate to see whether the ORM used has a problem with relationships in the context of a set fetch when entities were mapped on these tables.

    If that’s true, the test is pointless. ORMs are good at managing the Object-Relational Mismatch, which means loading smaller sets of rows from multiple tables into a set of entities. Loading tens of thousands of rows is a pointless exercise. I bet Dapper, PetaPoco or massive would be even faster than anything on this chart 

    See also:

    http://ayende.com/blog/4122/benchmarks-are-useless-yes-again

    This whole ‘performance’ thing has been hashed out a million times on the web and IMO it’s completely irrelevant. Performance will be reasonable for most ORMs. I’m more concerned about features, maturity and flexibility.

  • @Jon: You indeed didn't read all of it, as I addressed what you said in the article, in the answers section. It's not pointless: reading 3 entities 10000 times gives you the same slowness as reading 30K entities 1 time. Performance ISN'T reasonable for EF and NHibernate; they are terribly slow with fetches compared to the rest. Linq to Sql and my own framework show that you don't have to sacrifice everything for performance like Dapper does, so the excuse of 'I use NHibernate and it's slower because it has more features' is not valid: it is unreasonably slow, same as EF. One likely expects them to be fast, but they're not.

    Your remark about dapper etc. being even faster than this chart shows you have not read much at all about the article, why it's written and what's answered already. Shame.

  • @Frans,

    This debate has been rehashed a million times and is really becoming laughable. What is the point of benchmarking change tracking if you don't then go and write back to the database? Every so often a new benchmark is released that uses 1% of an ORM and then draws all kinds of conclusions from it. I'm tired of wasting brain cycles arguing about what really amounts to a solved problem.

  • I find it interesting that you include EF without change tracking, but did not include NH without change tracking. Are you aware that feature exists?

  • @jon: So you're saying I don't know what to test to make it clear an essential part of an ORM (fetching entities!) is slow or fast? Read the article, you make all kinds of assumptions I carefully addressed. Or don't, and save your brain cycles ;)

  • @Eric: no, I didn't know that feature existed. I'll add it in a follow up article. EF without change tracking was present already in the benchmark code after MS suggested it, to see whether that at least made it fast again. To be honest, it likely won't change that much, as the time spent by NHibernate is more than 50% in the entity creation code (see profile) and not in the identity map, which will likely be skipped with no change tracking. But we'll see :) thanks for the suggestion. I reckon it's easy to find this setting? (as that's not a given with NHibernate ;))

  • This discussion reminds me of the flamewar started a few years ago when Alex Yanukin started his infamous 'ORMBattle' site.. :-)

  • I am interested to see details about OpenAccess ORM.

  • @Tudor oh that site, indeed :) With this difference that that benchmark was carefully designed to make their orm look good, which is not what this test is all about (I tried everything to avoid that :))

  • @John: if Telerik wants to add a test to the code at github, I'll include it in a follow up.

  • I issued a pull request to include OrmLite on GitHub.

  • @Matt: thanks! I'll merge the code tomorrow (tuesday)

  • Hi Frans,

    Thanks for the very interesting post. It sounds like the Entity Framework bug is related to the total number of foreign key relationships in the model. Could you run the test with a model that only includes the tables used?

    Thanks,
    Don

  • @Don: with just 1 table/entity and no relationships, the speed of EF is around 1100ms, so much quicker.

  • You should also test NHibernate IStatelessSession.

    Comparable to EF AsNoTracking()

  • A nitpick: "publically" should be "publicly." I noticed yesterday somebody misspelling it, too (not Mary Jo Foley, but one of the other MS-affiliated women), so I thought I'd mention it, before it becomes like "lose" and "loose" (seemingly everybody now spells "lose" as "loose" which is the opposite of tight, not the opposite of win).

  • Thank you, Frans--I appreciate seeing things like this from time to time--even when it makes me question having chosen NHibernate about two years ago. :)

  • If your test suite is still warm it would be worth it to rerun with EF6.0.2 Beta 1. Curious if some of the performance issues are really fixed.

  • @B. Clay: thanks for the correction! I'll take that into account next time :)

    @Zech: the issue EF runs into with this test is not really related to the ones fixed in 6.0.2 beta, though I'll see if it makes a difference with the follow up.

  • @Jonny

    The one which shipped with 2005/2008, which is a little different from the one on codeplex. The code will be refactored to match the one on codeplex. The source on codeplex does have the same db, the download however is different. It's a bit of a mess indeed. I was under the assumption the db was equal, which wasn't the case. It will be fixed, today or later this week. Keep an eye on the github repo.

  • Which version of AdventureWorks database did you use and is it available for download somewhere?
    The AdventureWorks database on CodePlex http://msftdbprodsamples.codeplex.com/releases/view/93587 does not contain ContactID field on the SalesOrderHeader nor a [Person].[Contact] table.

  • LLBLGen v4 is fast, I agree, but working with it is a pain (you know where). It doesn't even support enums, can't do complex mappings not to mention things like POCO. When I look at the classes generated by the tool it makes me puke. It reminds me of the first version of EF for which I had to write custom code generator to be able to reasonably use it. Apart from things it does not support it's bloody buggy! Small example: if you use discriminator field for inheritance and have self referencing table you're gonna have exceptions thrown while doing queries. Also there's no way to do group by count which is also a pain. (I'm not sure if this is bug or yet another missing feature). If you want to know more about how dissatisfied I am with LLBLGen - feel free to email me (name@gmail). The only thing that I am quite happy about is that it can run fairly complex queries and there's no problem with generating entities from views.

  • @kubal5003

    > LLBLGen v4 is fast, I agree, but working with it is a pain (you know where).

    No, I don't, please enlighten me.

    > It doesn't even support enums, can't do complex mappings not to mention things like POCO.

    It supports enums out of the box since v3 and before that through type converters since v2. What do you mean with complex mappings? Poco's are not supported, that's a design choice. I wonder why you want poco's, as it doesn't really matter much in practice.

    > When I look at the classes generated by the tool it makes me puke. It reminds me of the first version of EF for which I had to write custom code generator to be able to reasonably use it.

    Why? because it's commented sourcecode that works out of the box so you don't have to write it by hand, and you hate it when a system does your job?

    > Apart from things it does not support it's bloody buggy!

    What things that it doesn't support? Buggy? where? we fix issues within 24 hours, if you report an issue, it gets fixed, and we have no open known bugs we need fixing. If you report an issue, instead of trolling here on this blog, we can actually fix it.

    > Small example: if you use discriminator field for inheritance and have self referencing table you're gonna have exceptions thrown while doing queries.

    No you don't. Post a true example of what you run into on our forums and we'll look into it and fix it, whatever you run into.

    > Also there's no way to do group by count which is also a pain. (I'm not sure if this is bug or yet another missing feature).

    We have 3 query systems: low-level api, linq and queryspec. In all 3 you can do group by count (and whatever aggregate there is, even more than ef/NH). See the example queries shipped with the installer. You're sure you're ranting about the right tool? ;)

    > If you want to know more about how dissatisfied I am with LLBLGen - feel free to email me (name@gmail).

    No, I have seen enough troll BS for one day, sorry. If you really want us to help you, simply post a message on our forum, we'll then look at your problems and get you up and running.

    > The only thing that I am quite happy about is that it can run fairly complex queries and there's no problem with generating entities from views.

    At least that's something :) Hey, Kubal, I think you should inform yourself a bit about what it really can do, instead of claiming things here which aren't true. Have a nice weekend! :)

  • @FransBouma
    What you call trolling is just the frustration of working with this tool for half a year. Sorry that I wrote it here, but I accidentally found your post - so you happened to be the first LLBLGen related person I found. I'll definitely post bugs on the LLBLGen forum.

    I also realize that you as a developer of LLBLGen want to be proud of what you do. The thing is that not everyone is that attached to this code and other people might not like your design decisions even if they are best effort. We were forced to use LLBLGen by inheriting an old codebase (with version 3, we switched to 4) so this wasn't our choice.

    About the experience & generator & design decisions: the way I was used to working was a convention-based system - while developing, the only thing I needed to do was create a class/add a property to a class/etc. This automatically triggered a migration to be run when I ran the application. That covered 80% of development. For tougher things I needed to write an explicit migration using SQL and of course make the appropriate changes in the class. By tougher I mean e.g. deleting a column and adding a new one.
    This integrates nicely with version control.

    Now compare this to the way you work in a team with LLBLGen. Every time I need simple changes to the database I need to create a change script, open LLBLGen, check out the folder from TFS, generate the classes, and make sure to include everything in TFS. This is not all! The fun part starts when you want to merge branches - you need to regenerate the LLBLGen files, otherwise you end up with errors. Do you have a solution to ease this process?

  • @kubal5003

    > What you call trolling is just the frustration after working with this tool for half a year. Sorry that I wrote it here, but I accidentally found your post - so you happened to be the first LLBLGen-related person I found. I'll definitely post bugs on the LLBLGen forum.

    I'm sorry to hear you've had a frustrating experience. I called it trolling because you claimed all kinds of things which aren't true. I get that you are frustrated, but claiming untrue things is not the way to go.

    > I also realize that you as a developer of LLBLGen want to be proud of what you do. The thing is that not everyone is that attached to this code, and other people might not like your design decisions even if they are your best effort. We were forced to use LLBLGen by inheriting an old codebase (with version 3; we switched to 4), so this wasn't our choice.

    One can't please everyone, so either way it's a gamble whether the user is pleased or not. If you re-read your first post, where you claim my code is something that made you puke and that we can't do group by (which has been supported since day 1, now more than 10 years ago), I don't know whether to take you seriously or not, sorry.

    > About the experience & generator & design decisions: the way I was used to working was a convention-based system - while developing, the only thing I needed to do was create a class/add a property to a class/etc. This automatically triggered a migration to be run when I ran the application. That covered 80% of development. For tougher things I needed to write an explicit migration using SQL and of course make the appropriate changes in the class. By tougher I mean e.g. deleting a column and adding a new one.

    You can do that in the designer too: work model first. Add the field to the entity, use forward mapping to auto-create the relational model elements for you, create an update script and the classes, done. It can also take care of the more difficult things, like changing foreign keys, adding/removing unique constraints, etc.

    > Now compare this to the way you work in a team with LLBLGen. Every time I need simple changes to the database I need to create a change script, open LLBLGen, check out the folder from TFS, generate the classes, and make sure to include everything in TFS. This is not all! The fun part starts when you want to merge branches - you need to regenerate the LLBLGen files, otherwise you end up with errors. Do you have a solution to ease this process?

    Firstly, that TFS is not great is not my problem (it has a bug where it sometimes doesn't add files when it should). Secondly, why add generated code to source control if you don't add code to the generated code project? Just add the llblgenproj file to source control. The thing is that you likely also don't add the code generated by asp.net (it generates a lot too ;)) to source control. All generated code comes from one project anyway: the llblgenproj file. That file is in XML, and designed to be easy to merge in source control.

    If you want to have the generated code in source control (e.g. because you extend it), you don't have to deal with merges: merge the llblgen project file and regenerate. That's the final code to commit. It's generated code, so the origin it comes from is what's important, not the generated code itself: if there are several llblgenproj files in use, merge them, then generate code from the result.

    Hope this helps. And please, read the manual. It's there to help you, so you can avoid claiming things which aren't true, OK? Thanks :)

  • @Kubal5003

    btw, we ship command line tools in source code (see the source code archive in the customer area) for refresh and generation as well, so you can, if you really want to, automate things to a great degree. Additionally, you can create plugins (which are easy to create, see the source code in the source archive) which act on events in the designer, so you can automate things that way as well.
    Btw2: your TFS problem is likely the same as this one (with solution): http://www.llblgen.com/tinyforum/Messages.aspx?ThreadID=8962


    I’ve interviewed a couple of candidates recently and found that the majority of them don’t have any clue what an ORM like EF or NHibernate is good for. Concepts like the Identity Map and second-level caching are important for an ORM, both in terms of application design and performance. That goes beyond just fetching data.
    It seems that most people just use the ORM for data retrieval. If that is the case, then yes, your performance comparison really matters to them.
    The essence of your post is: a car is a faster truck. With some tweaks, you can make a truck faster, but it will never be as fast as a car.

  • @Carsten
    There are many aspects of an ORM, e.g. the ones you mention and others. One of the things an ORM also must do is be fast, or better: not be a performance hog, as it's an important part of an application. In this article I've looked at a single fragment of what a data-access framework can offer, so it ignores the rest. That's not to say the rest isn't important, on the contrary, it's just not measured in this test.

  • @FransBouma
    That’s what I’m saying: if you just compare the driving speed of cars and trucks, cars will be faster.
    The problem is more with the audience and how they perceive your message: they don’t read the entire post; they make their conclusion without having considered the boundaries of your comparison. They go to bed saying: “NHibernate and EF are bad in performance. In my next project, I am not going to use any of these tools”.

  • @Carsten

    On the other hand, if I hadn't written the article, these folks might have gone to bed thinking 'I'll use EF because it's from Microsoft and their stuff is always fast'. In the early days one would perhaps pick an ORM because of performance alone, but nowadays people look at the complete picture, as they know they'll have to use it for several years to come. Performance is one of those things, but not the only one.

    Besides, there are many people out there who immediately blame the ORM when their app is slow. That was also a reason why I wrote the benchmark: to show that an ORM can be fast, and that if your app using it is slow, it's not always because the ORM is a slow piece of junk but likely because of other factors.

Comments have been disabled for this content.