Shared Code, the GAC and Applications

A recent developer mailing list discussion questioned the efficiency of shared code, argued that the GAC should never be used, and claimed that (enterprise) applications should never be installed on the client.

For posterity, I thought I would include my response here.

> "Efficient (shared code) ? ... Saving disk space is no longer an issue. Keep a thousand copies of the assembly on the drive.

Yes, sharing code is efficient - it is a core feature of most operating system architectures.  Why share code?  Fundamentally, because all computer systems are limited in resources.  Can you load a thousand copies of a code module into memory?  What about copies of all those modules' dependencies?  What about on a CE device, a PDA or an intelligent watch?

If we raise the level of abstraction to .NET, the same architectural limitations exist.  In .NET, does the CLR load a new instance of a versioned (GAC) assembly a thousand times, or does it use the one already loaded in memory?  What about a non-versioned (privately deployed) copy?  What are the performance and working-set implications of the CLR resolving a single set of JIT-compiled code, rather than a thousand separate but identical sets?
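To make that concrete, here is a minimal C# sketch against the .NET Framework reflection API.  It shows that asking the CLR to load an assembly identity it has already loaded hands back the same in-memory instance, and that Assembly.GlobalAssemblyCache reports whether the assembly was resolved from the GAC:

```csharp
using System;
using System.Reflection;

class LoadContextDemo
{
    static void Main()
    {
        // Grab the strong-named System.Xml assembly via a type it contains.
        Assembly first = typeof(System.Xml.XmlDocument).Assembly;

        // Ask the CLR to "load" the same assembly identity again.
        Assembly second = Assembly.Load(first.FullName);

        // The CLR returns the assembly already in memory, not a second copy.
        Console.WriteLine(ReferenceEquals(first, second));  // True

        // True when the assembly was resolved from the GAC.
        Console.WriteLine(first.GlobalAssemblyCache);

        // The physical location the assembly was mapped from.
        Console.WriteLine(first.Location);
    }
}
```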

The reality is that the majority of code running right now on YOUR system IS shared - from the operating system to installed services, server products and, of course, .NET itself.  What they all have in common are large code bases, shared functionality and the need to maximise the efficient use of limited resources.  Why use the GAC?  To implement .NET applications that meet the same criteria.

> If MS recommends placing assemblies in the GAC, they are dead wrong. You shouldn't use the GAC for *anything* - it just repeats the errors of \system32 (maybe slightly better).

Many in the .NET community do argue that shared .NET code in the GAC should be limited to "system" type code.  If you had stated that, I might have gone some way towards agreeing with you.  But to say that NOTHING should be installed in the GAC suggests that you are coming from a particular architectural context, with one deployment scenario in mind.  To disregard all others, and the GAC along with them, is at best limiting.

However, that doesn't mean that sharing code is without issues - most notably that of versioning.  I suspect your issue isn't really the GAC, but rather the question ... "How do I update shared code without breaking existing code?"

While not perfect, side-by-side deployment of versioned assemblies, together with application/publisher/machine configuration-file policy, goes a long way towards ensuring that each .NET application IS running against the same assemblies it was built against - or against specific assemblies explicitly chosen by the publisher or administrator.
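By way of illustration, here is what that policy looks like in practice: an application configuration file redirecting a dependency from the version the application was built against to a newer servicing release.  The assembly name and public key token below are hypothetical:

```xml
<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <!-- Hypothetical shared assembly installed in the GAC. -->
        <assemblyIdentity name="Acme.SharedLib"
                          publicKeyToken="32ab4ba45e0a69a1"
                          culture="neutral" />
        <!-- Applications built against 1.0.0.0 are redirected to the
             serviced 1.0.1.0 build, which can sit side by side in the
             GAC with any other versions still in use. -->
        <bindingRedirect oldVersion="1.0.0.0"
                         newVersion="1.0.1.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>
```

The same element can appear in publisher policy (shipped by the assembly's author) or in machine.config (set by an administrator), which is how a publisher or administrator can explicitly determine the version an application binds to.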

> "I dont think that you should be installing (enterprise) applications on client machines anyway ... run them from servers ..."

The last time I heard someone say that was Larry Ellison of Oracle, when talking about his thin client (network computer).  He cited many of the same advantages you do.  About the only person who got really excited by that was Oprah ;-)

I never bought Scott McNealy's "the network is the computer" line either.  Proposing that the answer to the "rich" vs. "reach" dilemma is to surrender entirely to the "reach" camp risks repeating Ellison's mistake.
