Why use the Entity Framework? Yeah, why exactly?

Danny Simmons wrote a marketing piece about the project he's been working on for so long: "Why use the Entity Framework?". I don't expect Danny to be unbiased towards his own work, so at first I just ignored it: Microsoft produces these kinds of 'use our stuff, it's better than sliced bread' articles a dozen times a day. However, this particular article has become a discussion subject and is echoed by non-Microsoft people on other blogs, so it's time to stop ignoring it and start refuting its contents, even though it's marketing. After all, it doesn't look like marketing, which is exactly why people take it at face value.

I've spent the entire last 6 years of my life on something called Object-Relational Mapping, so I think I can comment on Danny's claims a bit. Object-Relational Mapping, or O/R mapping, or 'ORM' (for the people who aren't aware that ORM is also the acronym of Object Role Modelling), can be implemented in a lot of ways, and it is always used to solve a mismatch between two projections of an abstract entity model: the projection onto a relational schema and the projection onto an object-oriented language. For more details, read my essay about this subject (don't let the title feed you with presumptions about the contents). As this is rather abstract, let's use an example: a very simple Order system.

When you're creating a system for a client, that system has to represent functionality which is usable in the reality the client lives in. In short this comes down to the fact that the functionality and features of the system have to connect to what the client does and have to help the client solve problems / overcome challenges s/he would otherwise run into without the system (otherwise, why bother using it, right?). To be successful in this, the system should work with elements which are recognizable in the reality of the client: if the client works with customers who file orders for products, the system should work with customers who file orders for products (this is oversimplified, but it's for the example in a blogpost, not a book). But, what are these 'Customer', 'Order' and 'Product' exactly?

When discussing the project with the client, the client will tell you how s/he sees 'Customer', 'Order', 'Product' and how they relate to each other: which information elements they consist of, how long they'll live inside the reality of the client, who must alter them, etc. This information is abstract, i.e. it's not physically available to you. To get a deeper understanding of it, you'll create a model out of this information, in one way or the other: the model contains the information obtained from the client and shows you the definitions of Customer, Order and Product and how they relate to each other, perhaps other information as well if it benefits the model and its purpose. Now, the word 'Model' will make a lot of people think about Visio diagrams or otherwise a picture of some sort. But that doesn't have to be the case. It can be anything, as long as it represents exactly the information you obtained from the client (or for DDD enthusiasts: the Domain Expert), so if you like to write long-winded Word documents, or write everything down in a text-based DSL, it's up to you.

As I also described in my essay linked above, you'll soon find out that a 'Customer', 'Order' and 'Product' in the reality of the client (and thus the system you're creating for this client!) are actually names for different groups of data elements. In other words: the instances of these three elements are tuples of data. If in your abstract model, the definitions of 'Customer', 'Order' and 'Product' are called entity definitions, the tuples are entity instances. So if I say: "ALFKI, Alfreds Futterkiste, Maria Anders, Sales Representative etc.", what does that mean? For the Northwind impaired: not much. For the people who recognize the first Customer record in Northwind's Customer table, it means: "It's a customer!". Well, almost. It's a Customer instance.

The system you're creating will deal with these entity instances in memory but also has to store them in a persistent storage (e.g. a database). If a database is used for persistent storage, it means that these entity instances flow from memory to database table (insert/update) and back (fetch). If we define the reality of the application to be the in-memory state inside the application, we can define that an entity instance should be the same data tuple, identified by the same identifying attribute (e.g. primary key, Id), whether we save it or fetch it: if we fetch the ALFKI customer instance from Northwind on Monday morning at 8 am, it has to be the same instance when we fetch it on Thursday afternoon at 4 pm (unless it's been deleted by someone, of course). Some attributes (fields) might have changed value, but it is the same instance.
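As an aside, here is a minimal sketch (in C#, not any particular framework's API) of how an application or O/R mapper can keep that promise in memory: an identity map keyed on the identifying attribute, so the same key always yields the same instance within a session. The generic type and the loader delegate are purely illustrative assumptions.

```csharp
using System;
using System.Collections.Generic;

// Minimal identity-map sketch: within one session, the same identifying value
// (e.g. primary key "ALFKI") always yields the same in-memory instance.
public class IdentityMap<TKey, TEntity> where TEntity : class
{
    private readonly Dictionary<TKey, TEntity> _known = new Dictionary<TKey, TEntity>();

    // Returns the one instance for this key; the loader is only consulted the
    // first time the key is seen, later calls return the already-known instance.
    public TEntity Get(TKey key, Func<TKey, TEntity> loadFromStore)
    {
        TEntity instance;
        if (!_known.TryGetValue(key, out instance))
        {
            instance = loadFromStore(key);
            _known[key] = instance;
        }
        return instance;
    }
}
```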

However, looking at the data tuple's contents, they're just a bunch of strings and other constants. So this data tuple only becomes an entity instance if there's a valid entity definition in the same space (e.g. memory, database, application) to give it context. In other words: if you want to be able to talk about Customer, Order and Product in the application, as well as in the database, you have to have a definition of these entities available where you work with the data tuples.

As we've already made the abstract entity model (in the form you like, e.g. a chalkboard drawing with foodstamps, knock yourself out, as long as it represents the exact information it should represent), why not use that information as the source of the entity definitions we need to give the data tuples context? This is called projection: we project this information onto a different space (e.g. a programming language, a storage structure) and the result of that projection is the element we can use inside that space. The advantage of this is that the projection result isn't something that fell out of the sky: it is based on the result of the analysis done with the client or Domain Expert. As 'projection' sounds rather abstract, what is it exactly? Think of it as a transformation of a definition from one domain to another.

Take our abstract Customer entity definition. It's a definition of the attributes (fields/data elements) which together form the Customer: some ID which is unique, a company name, a contact person name, the title of the contact person, etc. If you see that entity definition, could you write a .NET class which represents that definition? I think you can. That's called a projection: you projected the abstract entity definition onto a .NET language. So a C#/VB.NET class which is the projection of an abstract entity is the projection result of that entity and can be used inside that space (C#/VB.NET/.NET) as the representative of that abstract definition. This means that we can use Customer, Order and Product classes, which are the projections of these abstract entities onto for example C#/VB.NET, in our C#/VB.NET application to give a Customer, Order and Product instance (data tuple) meaning. If we load the same data into an instance of a random other class (or an object array, for example), you'll see a bunch of constants, values. But does it mean the same? You can interpret the data as if it's a Customer instance, but is that correct? In other words: you need the definition of the Customer to give the data meaning: create an instance of the projection result (the class) and store the data tuple (the customer entity instance) inside that class instance, so the data inside the class instance has meaning: it is a Customer instance.
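To make the contrast concrete, here is a sketch of such a projection in C#, using the Northwind attributes mentioned above; the exact class shape is an assumption, since a real projection follows your own abstract entity model.

```csharp
// The projection of the abstract Customer entity definition onto C#.
public class Customer
{
    public string CustomerId { get; set; }    // "ALFKI" - the identifying attribute
    public string CompanyName { get; set; }   // "Alfreds Futterkiste"
    public string ContactName { get; set; }   // "Maria Anders"
    public string ContactTitle { get; set; }  // "Sales Representative"
}

public static class ProjectionExample
{
    public static Customer GiveMeaning()
    {
        // The same data tuple without an entity definition is just a bag of constants:
        object[] tuple = { "ALFKI", "Alfreds Futterkiste", "Maria Anders", "Sales Representative" };

        // Stored inside an instance of the projection result, the tuple becomes
        // a Customer instance: the definition gives the data its meaning.
        return new Customer
        {
            CustomerId = (string)tuple[0],
            CompanyName = (string)tuple[1],
            ContactName = (string)tuple[2],
            ContactTitle = (string)tuple[3]
        };
    }
}
```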

The same can be said about the persistent storage. Let's assume the data tuples are stored in a relational database. Because data tuples are just that, groups of constants, we need an entity definition to give them context: to give them meaning. In the relational model, these definitions are tables, views and select queries (as per Codd/Chen's definitions, a query is also an entity). So if we project our abstract entity definitions onto the relational model, we get representing elements which we can use in that space. Some will pick the table as the form they want to work with, others will pick a view. Both have the same characteristic feature: they represent the abstract entity definition they're a projection of, and they give meaning to entity instances of the entity they're a projection of.

For your Order system you need two projections of the same entity definitions: one on a .NET language, and another one on the relational model used in the relational database of choice (e.g. Oracle). As we've seen above, this results in Customer, Order and Product classes and Customer, Order and Product tables (or views, if you like views). Both projection results live inside a reality with its own rules and boundaries: the .NET classes live in the OO world of .NET, the tables live in the world of relational schemas, algebra and set theory.

Earlier in this post we've seen that an entity instance (data tuple) has the same meaning in the application's space whatever you do: working with the Customer instance represented by the ID 'ALFKI' means you're working with that instance, not with a random instance but with the instance. Inside the persistent storage, the instance is stored in a row which is defined by the table (or view) it is part of, i.e. the table definition (which happens to be the projection result of the Customer entity definition onto the relational model!). In memory the instance is stored in an instance of the result of the projection of the same definition onto the .NET language of choice, i.e. the Customer class.

However, these two worlds don't live together in the same space: transferring an entity instance from its entity class instance (the customer object) to the table row inside the database and back can be seen as a transformation: perhaps the entity is stored inside the database in two or more tables. Perhaps the projection onto the .NET language resulted in multiple classes. For the application however, they must look like they live together: the transformation between the two worlds should just be there, it should 'just work', so the developer writing the system for the client doesn't have to worry about it. This is the service provided by an O/R mapper: it makes sure that the entity instances can be transported to the persistent storage (where they're stored in instances of the projection result on the relational model, i.e. table/view) and back, and that they keep the same meaning.
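To show what that service takes off your hands, here is the same transformation done by hand with plain ADO.NET, re-using the Customer class from the earlier sketch; the table and column names, the query and the SQL Server provider are assumptions. An O/R mapper performs this row-to-instance transformation (and the reverse, for inserts and updates) for you, for whole graphs of entities.

```csharp
using System.Data.SqlClient;

public static class CustomerStore
{
    // Fetch: the row (projection onto the relational model) is transformed into
    // an instance of the class (projection onto .NET). Same entity instance,
    // different space. An O/R mapper automates exactly this, plus the reverse
    // direction for inserts/updates.
    public static Customer Fetch(string connectionString, string customerId)
    {
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "SELECT CustomerID, CompanyName, ContactName, ContactTitle " +
            "FROM Customers WHERE CustomerID = @id", connection))
        {
            command.Parameters.AddWithValue("@id", customerId);
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                if (!reader.Read())
                {
                    return null;
                }
                return new Customer
                {
                    CustomerId = reader.GetString(0),
                    CompanyName = reader.GetString(1),
                    ContactName = reader.GetString(2),
                    ContactTitle = reader.GetString(3)
                };
            }
        }
    }
}
```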

If a class definition C is the projection of an abstract entity E and a table definition T is also a projection of the same entity E, isn't it possible to project C out of T? Or T out of C? Given the rules and boundaries of the spaces T and C live in, respectively, and the projection rules of E onto C and T, one can define a projection from T to C and from C to T. This is what most of you are doing today: you pick an O/R mapper, you start with an abstract entity model in one form or the other, work with it to create either tables or classes, and tell the O/R mapper of choice to produce the other side. There are some variants on this, but it more or less comes down to that, or, in the ideal situation where you start from scratch, to using the abstract entity model to produce both the tables and the classes. Some people will now argue that their .NET classes are way different from any table, and that the classes follow the application's needs, but frankly that's not true: the classes written in such an application haven't fallen out of the sky either. If a Customer, Order or Product class has to be created, how is it decided which fields are defined in these classes? Exactly: by projecting the abstract entity model.
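As a rough illustration of projecting C out of T, the sketch below reads a table definition from the standard INFORMATION_SCHEMA catalog views and emits a class skeleton. The type mapping is deliberately naive and the whole routine is an assumption about how such tooling could start, not how any particular designer or code generator works.

```csharp
using System.Data.SqlClient;
using System.Text;

public static class ClassProjector
{
    // Reads the table definition (the projection of the entity onto the
    // relational model) and emits a C# class skeleton (the projection onto .NET).
    public static string EmitClassForTable(string connectionString, string tableName)
    {
        var code = new StringBuilder();
        code.AppendLine("public class " + tableName);
        code.AppendLine("{");
        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(
            "SELECT COLUMN_NAME, DATA_TYPE FROM INFORMATION_SCHEMA.COLUMNS " +
            "WHERE TABLE_NAME = @table ORDER BY ORDINAL_POSITION", connection))
        {
            command.Parameters.AddWithValue("@table", tableName);
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    string column = reader.GetString(0);
                    string dbType = reader.GetString(1);
                    // Deliberately naive type mapping; real designers cover far more.
                    string clrType = dbType == "int" ? "int" : "string";
                    code.AppendLine("    public " + clrType + " " + column + " { get; set; }");
                }
            }
        }
        code.AppendLine("}");
        return code.ToString();
    }
}
```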

The Entity Framework is just an O/R mapper
Now that we understand how entity definitions, entity classes, tables and views and entity instances relate to each other, that there's a need to make them all work together, and that that need is fulfilled by the service provided by an O/R mapper, the question arises: why does Danny Simmons argue that the Entity Framework is so much more than an O/R mapper? What does it provide beyond the service an O/R mapper provides? After all, isn't what we needed, namely making the two projection results work together, already provided by an O/R mapper? Why is a system needed which apparently can do more (whatever that is)?

The truth is: you don't need more. What you need is an O/R mapper which provides a rich service to make classes work together with tables, so that your entity instances are the same whatever you do and you are able to work with entity instances without worries, and above all: one which fits how you want to work. As discussed above: whether you want to start with the abstract entity model, or with one projection result, or with two projection results, that's up to you: pick the O/R mapper which fits the way you want to work, the way you deal with the abstract entity definitions. Because that's its purpose: providing the service to make the two spaces work together as if they're one space, so you only have to worry about the functionality you have to write for your client. Everything else is overhead, plumbing, and above all: your client pays you to build functionality for the sole benefit of that client, s/he doesn't pay you to build overhead/plumbing if that's already available.

So what does Danny say about the Entity Framework and why does that make the article marketing? Let's quote a couple of snippets:

The big difference between the EF and nHibernate is around the Entity Data Model (EDM) and the long-term vision for the data platform we are building around it. The EF was specifically structured to separate the process of mapping queries/shaping results from building objects and tracking changes. This makes it easier to create a conceptual model which is how you want to think about your data and then reuse that conceptual model for a number of other services besides just building objects.

First this one. I'm not going into detail about the comparisons between Entity Framework and ADO.NET (apples vs. oranges), nor Entity Framework vs. Linq to SQL ('We deliberately limit framework B and we're comparing our other framework A with it to make A look good!'). Danny uses NHibernate as the metaphor for comparing the Entity Framework with a 3rd party O/R mapper. I'm not sure why he uses NHibernate in particular, perhaps because Microsoft thinks it's an open source framework which isn't owned by any company (so bashing it in a comparison matrix doesn't run the risk of a lawsuit). It is in fact owned by a company (JBoss Inc., which is owned by Red Hat), but that's not important. The important bit is in that last sentence.

I've tried to explain in short what the purpose of an O/R mapper is, and why it is needed in the first place. For example, an application which is written entirely inside the same space as the tables doesn't need an O/R mapper, because the projection result onto the relational schema (the table/view definition) is re-usable to give meaning to an entity instance used in that application. An application which uses an ODBMS (object database) also doesn't need an O/R mapper, as the entity definition's projection onto the OO language is usable inside the persistent storage as well. Danny now paints the picture that with the Entity Framework it is easier to create a 'conceptual model which is how you want to think about your data'. But, Danny, isn't the way we want to think about our data already defined in our abstract entity model? You know, the model which is used as the projection source for our projections onto a .NET language and the relational model? If I use that abstract entity model to produce classes in the .NET space, and use an O/R mapper to make sure that what's inside the instances of these classes represents the entity instances I work with, so the data tuples inside these class instances have meaning, what else is there? Isn't that exactly what I need to create an application which can work with these entity instances?

Danny hints in that last sentence that there are other services besides building objects in which the Entity Framework can assist. But... are these services only usable, in such a way that the data tuples have the same meaning in those services as in your application, if the Entity Framework is used? Or is another O/R mapper, say LLBLGen Pro or NHibernate, also usable for that? Of course these other O/R mappers are suitable for that too: they're transformation services which make it possible to work with entity instances in your .NET language. If you for example want your Astoria (ADO.NET Data Services) service to work with another O/R mapper, you can: simply because there's no conceptual element required for these services which isn't available in the group of mature O/R mappers out there. In fact, it's the Entity Framework which lacks some conceptual elements when you compare it to mature O/R mapper frameworks like LLBLGen Pro: distributed scenarios, for example, or multi-database models.

Danny's last sentence is worth quoting:

So the differentiator is not that the EF supports more flexible mapping than nHibernate or something like that, it's that the EF is not just an ORM--it's the first step in a much larger vision of an entity-aware data platform.

I don't get this: how can a person like Danny Simmons, who has worked on the Entity Framework for so long, ignore the fact that any O/R mapper is about entity awareness? What's described in that last sentence is exactly an O/R mapper's sole purpose: it's there so that the developer can work with entity instances in the OO language and store these instances in a non-OO environment like a relational database, and vice versa. What larger vision is there to have, if all there is is the abstract entity model and its projections? Tooling perhaps? To make it easier for the developer to create these projections and position the O/R mapper service in the application code?

If that's so, then why is the Entity Framework designer such a pain to use? And why does it lack true maintenance features, like true maintenance of the projection of T to C, for example? After all, the core point of the Entity Framework seems to be that a conceptual model can be defined on top of a set of table/view definitions. But if these table/view definitions change, who does the maintenance? Isn't that the purpose of the Entity Framework, as the developer works with the abstraction created in the form of the conceptual model on top of these tables/views? Then why doesn't the tooling for the Entity Framework take care of it? If I can create a designer which can find inheritance hierarchies automatically, which can deal with thousands of entities, which does maintain your model after the table definitions have changed, which actually can deal with UDT types written in C#, and which can deal with multiple catalogs/schemas in a single model, why can't MS? Taking that into account, what's left of that much larger vision if we look at the bits today, if we look at what Danny said the Entity Framework apparently is: not just an O/R mapper but part of a much larger vision?

I'm sure the Entity Framework is built by competent people, who are very smart and know what O/R mapping is. What I'm not sure about is whether these competent people actually have windows in their building, whether they have an internet connection, and whether someone has told them that it's no longer 1994, that times have changed and that there are numerous people out there who have solved the same problem they've been trying to solve for so long. That these people have mature solutions at their hands now, which match what developers need. That these mature solutions solve the exact same problem. That these mature solutions don't need a corporate spin to make the real problem (the mismatch between two projections) look like some kind of different 'problem' which can only be solved by the product sold by the same corporation.

So for the people who echo the Microsoft spin: you too should look further than the shiny brochure on your desk. There's no 'bigger vision', as there's nothing bigger to vision. If there's a reason the Entity Framework can only work with technology ABC, the reason is artificial: the bottom line is that there are just entity definitions, entity instances and projections of entity definitions to store the entity instances. Nothing more: just data, the definitions to give that data context and meaning, and a service to make that happen.

30 Comments

  • Frans, interesting post, and good to see someone call out the BS when it is posted as fact - but do you have a source for your quote

    "'We deliberately limit framework B and we're comparing our other framework A with it to make A look good!'"

    I recently saw an EF demo followed by a LQ-SQL demo and was a bit confused by the difference in performance, is it just this that you refer to, or that the LQ-SQL only provides support for SQL Server when it appears that most of the work using an uber-provider already seems to have been done with EF?

  • RJ: see Matt Warren's (http://blogs.msdn.com/mattwar/) latest blogpost, where he states:
    "LINQ to SQL was actually designed to be host to more types of back-ends than just SQL server. It had a provider model targeted for RTM, but was disabled before the release. Don’t ask me why. Be satisfied to know that is was not a technical reason. Internally, it still behaves that way"
    So if it's not a technical reason... we all know what kind of reason it was. :)

  • I browsed that article a few days ago. I seem to remember the author pointing out that NHibernate was a more mature product with many more features than EF. I came away thinking it was a fairly balanced piece. Then again, I'm not an ORM aficionado.

  • Frans,

    I was at dev connections a few weeks ago and spent almost 14 hours in EF sessions. As a matter of fact Danny was in most of them.

    Your comments make sense coming from someone who has not spent much time with the Entity Framework. I also think Danny confuses people when he says the EF is not "just" an O/R Mapper.

    What Danny is doing in one place is separating the Entity Framework from the Entity Model. Then, in another place, he refers to both as the Entity Framework.

    Yes, the Entity Framework is an O/R mapper. There is no doubt about that. The Entity Framework leverages the Entity Data Model. This is the .edmx file that you create when you say new "Entity Data Model" in VS. This entity data model is the definition of your conceptual model, your physical model and a mapping between them.

    As you also said, once you have designed your EDM you can project your .NET classes. In release 1 you can't project the relational model, but Danny assures me that is on the roadmap for a .Next version.

    Once you have your Entity Data Model the Entity Framework object services use the data to provide O/R mapping services. No more and no less.

    Danny's point is that there are (will be) added services which also leverage the EDM so you get a value add when you spend the time to create it. One example perhaps is SQL Reporting Services which could allow you to design reports against the EDM rather than the relational tables.

    So, it is not "Entity Framework" that is "more" than an O/R mapper... it is infact the Entity Data Model that once in place can be leveraged by many services that you would prefer work at the conceptual model layer rather than the relational layer that many of them work against now.

    BOb

  • Great article, I also really liked the OR/M intro you gave.

    Maybe Danny chose NHibernate because it is the most widely known OR/M mapper?

    Danny Simmons wrote:
    "In fact, it is certainly true that nHibernate is a more mature product and in many ways has more ORM features than the EF."
    And after that:
    "So the differentiator is not that the EF supports more flexible mapping than nHibernate or something like that, it's that the EF is not just an ORM--it's the first step in a much larger vision of an entity-aware data platform."
    First he mentions the reason not to use EF, and then he comes up with a 'reason' out of thin air to use the EF. Has anyone ever used a less featured product just because 'it is the first step of a larger vision'?

    Gotta give him credit for his marketing though :)

  • I'm still trying to work out what I can apparently do with an EDM that I can't do with a PONO (Plain ol' .NET objects) library.
    Isn't that my version of the EDM?

    I'm also unconvinced by the vapourware reasons for switching. I'm sticking with NHibernate for now.

  • Bob: I understand what MS wants with the entity framework, but the point is: there's no need for this conceptual model abstraction layer: that's the abstract entity model you already have.

    This means that also in the situation of other O/R mapper frameworks, you can re-use your model (as you re-use your projections!) for other applications/services. This isn't something that's unique for the entity framework, it's done this way for years.

  • It seems like Microsoft is doing one of the things they do best. They see a need that is being fulfilled by competent software manufacturers and they copy it, then they produce marketing spin.

  • It's interesting that the parrots simply point to Simmons' post and don't really add any value, while the rest of the community, which does have a problem with EF, is pointing out what the problems are and trying hard to get people like Danny to understand what OOD is about.

    EF supports procedural programmers with a data-centric view of software development.  There's just so much that has happened in the .NET space in the last few years (that echoes what has happened in the other mainstream communities well before that) that is simply counter to what EF lets you do.

    Why can't MS reach out to the knowledge in these other communities and leverage that knowledge?

  • Hey guys,

    I appreciate the strong opinions about this and the open debate. Lots of viewpoints make for better products in the long-run. I'll also say that if you find that nHibernate or LLBLGen or some other product work well for you, then you should use them. Productive developers creating great software is my goal, and I don't pretend to have a lock on the only way to get there. The title of my blog post was questionable from the time I made it because it turns it more into a marketing spin than I wanted--probably a better title would have been "What differentiates EF from other ways of doing data access x,y,z".

    OK. Enough of all that, I only wanted to try to clarify one point: The idea that there's something different here with the EDM and the EF that makes it possible to reuse your conceptual model in other contexts is that there are cases where you wouldn't want to materialize objects even though you do want to reshape / map your data. Reporting services is a great example of this in my opinion... If you write high-volume reports, you need to be able to design your report in terms of the concepts the customer understands (your conceptual or object model), but you don't want to materialize objects as you produce each report because the perf will kill you.

    - Danny

  • I think that last comment from Danny sums it up quite well. The most common way to look at the Entity Framework is as serving up object instances from a conceptual model. In fact that's the Object Services layer that handles that part. The Object Services layer implements the Identity Map / Unit of Work patterns and it works against the conceptual model.

    What Danny is getting at is that the framework at the base level deals with the three XML files: csdl, msl and ssdl. Those three files combined are your entity model. Today there is an Object Services layer that consumes that model to provide persistence and O/R functionality.

    Tomorrow there could be a Reporting Services layer that targets the exact same model and spits out reports rather than objects. In short, the Entity Framework is just that: a framework that provides a conceptual view of a store. What you do with that conceptual view is completely up to you... add O/R services, reporting services, workflow services, whatever.


  • In trying to read between the lines here, there are some truths we can derive:

    1) Frans' comments should all be viewed from the perspective of someone who has a financial interest in developers choosing another O/R product

    2) Daniel says whatever tools developers want to use is fine with him, but the frustrating reality is there is no reason for him to care. EF will have such large momentum from being an MS tool that his team can afford to have many critical features "not make it into v1". In the real world (if Daniel's company sold or even gave away EF as a product) EF would be DOA, because it's not competitive today.

    3) The scariest thing about all this functionality is the maintenance of it, which is what you really deal with most during the life of a project. How can anyone want to use EF in v1 if, at minimum, you can't use a designer and propagate changes throughout with a single change? For now it seems all EF lets us do is "invest" (use unnecessary time) in the future. Forget about versions not yet delivered - no, not yet started - and let's compare the productivity we get today.

    4) Notwithstanding point (1), it was a great blog post and I agree with it. Why couldn't MS have bought LLBLGen and been done with it?

  • Like many products/technologies to come out of Redmond, I think EF provides a great benefit that is hard for others to provide: for many companies if Microsoft builds a particular technology, then it validates the need for that type of solution. EF will cause a lot of developers to start investigating OR/M. Microsoft will spend millions of dollars marketing the need for OR mapping. Much of the marketing speak will be stripped away by blogs like this one. The end result will be that some developers will stick with EF while a large group will migrate to other more mature products.

  • > That these people have mature solutions at their hands now, which match what developers need.
    Bingo!! You got it exactly right. Why do I need this slow monster with its own model when NHibernate solves my problems today? Yes, LINQ is sweet, but Ayende is adding it to NHibernate so I don't need the huge elephant of EF for that. Besides, EF does a pretty poor job of it compared to LINQ to SQL.

    > I'm not going into detail about the comparisons between Entity Framework and ADO.NET (apples vs. oranges) nor Entity Framework vs. Linq to SQL ('We deliberately limit framework B and we're comparing our other framework A with it to make A look good!').

    But that is good for NHibernate and even your LLBLGen :-). These guys are showing up with an 18-wheeler rig for the Indy 500 (and one held together by rope). If Oracle and IBM are smart, they will stay clear of hitching this huge trailer to their trucks. It can only slow down even a decent database.

  • @Joe: good point!

    @Danny:
    "The idea that there's something different here with the EDM and the EF that makes it possible to reuse your conceptual model in other contexts is that there are cases where you wouldn't want to materialize objects even though you do want to reshape / map your data. Reporting services is a great example of this in my opinion... If you write high-volume reports, you need to be able to design your report in terms of the concepts the customer understands (your conceptual or object model), but you don't want to materialize objects as you produce each report because the perf will kill you."
    But that's nothing new to the Entity Framework. Today, modern O/R mappers re-use entity mapping meta-data to allow projections of resultsets, formed from multiple entity sets, in code. With LINQ you can go even a step further: you get typed resultsets, formed from entity data based on entity meta-data. These are ideal for reporting: grouping, aggregates etc. All doable, and built directly on top of the code + O/R mapper meta-data. Modern O/R mappers also allow you to project these 'dynamic resultsets' onto classes so you can extend them, similar to what the EF offers.

    After all, it doesn't matter whether you model these in a designer and generate code (like the Entity Framework does, and how LLBLGen Pro does this with Typed List definitions based on entity meta-data for these purposes (pre-.NET 3.5 compatible)), or whether you define this in a couple of routines in your codebase, as with O/R mappers like NHibernate, LLBLGen Pro etc., or with any O/R mapper with a decent LINQ provider.

  • I think it hardly needs to be said that all other O/R mappers are inferior when compared to LLBLGen. It's nice to see you still have the energy to refute the junk published by bloggers :)

  • Hello all,

    This is an interesting discussion.  Having dabbled in the ORM space myself (not really realizing it till a bit later) I have been questioning my methods lately.  I tried NHibernate but quickly came to the conclusion that you need to develop your system with it in mind.

    The current system I am working on is a re-write of a Perl online airline booking and administration system to C# using the existing Sybase database --- and that was the sticking point.

    An ORM provides one with a great deal of functionality. Maybe, in some cases, one does not need it. In other cases it definitely falls short of the mark. MS is, as always (and as is the case for any product), trying to satisfy everyone's needs. This is, indeed, a difficult task.

    EF feels heavy.  What NHibernate does behind the scenes seems heavy.  Compared to custom code both will *always* be slower.

    But getting all the functionality one may want (not necessarily need) may be restrictive: lazy-loading, hydrating of domain entities / value objects, dirty flags. All these persistence / infrastructure elements add weight. I want a lightweight domain class that does the voodoo that it is supposed to do. But that darn persistence with the raggedy broken edges of the object graph: *that* is what leaves me queasy.

    So now I am back to doing the mapping myself, in code.  No lazy-loading. No dirty flags. No magical hydration.  I get what I want, when I want it and work with it.

    Regards,

    Eben

  • Thanks for your article.

    It's all in that sentence: There's no 'bigger vision', as there's nothing bigger to vision.

    Regards,
    André "Switched on LLBLGen"

  • I've been following the ORM debate for over 5 years. I've written various code generation tools working off database and custom metadata to shortcut persistence plumbing. These do exactly what I want.

    But I want ORM to be just another bit of my standard development environment that other developers understand out of the box. One reason why I want to use EF.

    Frans, you obviously have a great deal of knowledge about ORM, I just wish what you say did not always come with such bitterness, to make it a bit more palatable for us developers to consume. I've been put off ever using your products by your attitude. One more reason to use EF.

  • Andy: why is explaining what the reality is something that comes out of bitterness? I guess you'd rather listen to the marketing machine from MS; well... it's your call of course. :)

    The point of this article was that the EF isn't the best thing since sliced bread, as MS wants you to believe. If you think that's a bitter view of reality and therefore an attitude you should reject... I have nothing further to say...


  • Tools for sure; Microsoft's marketing for developers is always about how to create software without writing actual code.

    What Microsoft wants is that customers start using EF now, despite any shortcomings, and we must hope they fix those in the next release. The dev groups are probably bigger than in your company and have much stricter rules than open source project groups (layers of managers and a legal department), so they move slower. What the designer might lack today could be released next year. That's what the blog is alluding to, a sort of long term vision... of products to come.

  • I'm only familiar with the NHibernate side of the fence within this discussion, but here is the conversation I have read so far:

    Microsoft people: It's different from NHibernate because the mapping logic is decoupled from the object instantiation/lifecycle logic.

    NHibernate people: We don't need this / we already have this.

    What I sense is that there is an ulterior motive behind the EF, and it is the same motive behind ASP.NET MVC, the same motive behind the VS testing suite, and the same motive behind ObjectBuilder etc...

    By Microsoft-ifying the ORM, MS will once again write a clone of a popular open source effort into their feature stack.

    Basically, you can have everything you used to have 2 years ago in the castle project now with a MS logo on it.

  • It is impossible to read this post objectively knowing that your company sells a competitor to the entity framework. Still a good read, but obviously biased.

  • Frans:

    "I have nothing further to say... "

    Somehow I find that *very* hard to believe :)

  • "It is impossible to read this post objectively knowing that your company sells a competitor to the entity framework. Still a good read, but obviously biased."
    Why is it biased? Isn't it so that O/R mapper developers are the people who actually know a lot about the inner workings of an o/r mapper and what it should do, why it's there etc.?

  • it's hardly obviously biased; frans didn't mention llblgen once even. it is totally acceptable for someone with knowledge in a field to comment on it. afai read he wasn't even disparaging EF much, just commenting on the marketing crap they put out.

  • I took LINQ to SQL for a spin a few days ago for the first time. After a few minutes (about 10) I hit my first wall. There's no way to refresh the model. At first I couldn't believe it, then I was disappointed, and next I tossed it aside into the toy bin. I'm really not looking forward to taking EF for a spin; I somehow get this gut feeling, don't know why, that it's going to be a waste of time.

  • Bouma, I think you are a little too touchy about the Entity Framework. L2S or EF doesn't seem to be aimed at other ORMs like LLBLGen and NHibernate. Linq has some basic ORM ability, but it's mostly a new language feature that runs queries over a variety of data (SQL Server only one of them) rather than a dedicated ORM solution. A big part of EF is trying to bring the data out online as a reliable, scalable web-based data service; in other words EF is the data engine of Microsoft's cloud computing solution. Since cloud computing is becoming an inevitable trend and eating away the market share of traditional desktop-based apps, Microsoft has decided to take it on with EF-based data services, Live Mesh, Silverlight, etc. EF belongs to a set of weapons Microsoft has been developing to compete with Google, Sun, Amazon and so on. For LLBLGen to achieve more success, I think you too could work on how to enable customers to access their data online more easily.

  • New to O/RM - ISV looking for database independence.

    We are looking @ EF because L2S was SQL Server specific, the reason being database independence. After spending 5 minutes with it, how are you supposed to separate the conceptual model from the storage and mapping when they are all in the same file, the edmx? Apparently it is possible because it builds 3 files, but I haven't a clue how you are meant to use it; perhaps we have to generate our edmx n times for all databases!!!!???

    I thought the provider model was meant to be the main starting point for EF; what you can end up with now is L2S, L2Oracle, L2MySql, not conceptual mapping to physical.

    NHibernate seems to offer this, as do many others.

  • There may be a bit of marketing spin in Danny Simmons' article, I will grant you that. But don't let Microsoft-bashing blind you to the fact that you provide a competing product and are in a way doing exactly the same thing he's doing. Your point that the conceptual model is not needed is debatable; actually, once we start seeing implementations of data services (Analysis, Reporting...) against it (which were normally done against the relational model), we may have a different view/opinion. I would like to call on devs to stop and think, instead of going straight to bashing Microsoft / taking the easy route.
