Thoughts on SharePoint Application Pools, Recycling and "JIT Lag"

What are Application Pools?  

Application pools are an IIS construct: each pool runs as its own worker process (w3wp.exe), which hosts an instance of the Common Language Runtime (CLR) executing managed .NET code. Each application pool in IIS hosts one or more web applications, and the usual recommendation is to stay under 10 pools per server. That recommendation dates from 32-bit days, and other considerations like 32- vs. 64-bit, available RAM, and I/O (bandwidth and disk usage) really take over as you add application pools. With some planning, the right horsepower and usage characteristics, and a healthy dose of monitoring, this is a "soft" limit and it is possible to grow beyond 10.

 

What happens when an application pool recycles?

Recycling is like rebooting: the worker process is stopped and started fresh. The CLR reloads, the assemblies in the GAC are re-read, and the application is ready to respond to requests. When the first request comes through, the application looks at the page (aspx), web service (asmx), or whatever resource was requested, checks whether it was pre-compiled or needs to be JIT-compiled, and loads the assemblies required to serve it (running the same JIT-compilation checks on them).

Why all the JITter? In a nutshell, when you compile an assembly in Visual Studio you're compiling to MSIL (which is processor-agnostic), not to native machine code (which is processor-specific). The CLR translates that MSIL to native code "just in time," the first time each method is actually called. This JIT compilation is why the first request to a page takes longer than subsequent requests - the first request has extra work to do.
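To see the effect in miniature, here is a small, hypothetical console sketch: the first call to a method pays the JIT-compilation cost, while the second call reuses the native code already produced. The absolute numbers vary by machine; the point is the relative difference.

```csharp
// A minimal sketch of "JIT lag" at the method level: the first call to
// Work() includes the cost of compiling its MSIL to native code, the
// second call runs the already-compiled code. Timings will vary.
using System;
using System.Diagnostics;

class JitLagDemo
{
    static double Work(double x)
    {
        return Math.Sqrt(x) + Math.Log(x) + Math.Sin(x);
    }

    static void Main()
    {
        Stopwatch first = Stopwatch.StartNew();
        Work(42.0);                  // pays the JIT-compilation cost
        first.Stop();

        Stopwatch second = Stopwatch.StartNew();
        Work(42.0);                  // reuses the native code
        second.Stop();

        Console.WriteLine("First call:  {0} ticks", first.ElapsedTicks);
        Console.WriteLine("Second call: {0} ticks", second.ElapsedTicks);
    }
}
```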

If you take the heavy-handed approach of resetting IIS (IISRESET) rather than recycling an individual application pool, there is much more to be torn down and restarted. Developers quickly learn the speed advantage of recycling a single application pool, and administrators quickly learn the value of "warming up" pages so the JIT compilation is done by the time users make their page requests.
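As a rough illustration of what a warm-up does, the sketch below simply requests a handful of pages so the compilation work happens before real users arrive. The URLs and the use of default Windows credentials are placeholders; substitute your own.

```csharp
// A hypothetical warm-up sketch: request key pages after a recycle so the
// JIT-compilation cost is paid up front. The URLs below are placeholders.
using System;
using System.Net;

class WarmUp
{
    static void Main()
    {
        string[] urls =
        {
            "http://intranet/Pages/Default.aspx",
            "http://intranet/sites/team/default.aspx"
        };

        using (WebClient client = new WebClient())
        {
            client.UseDefaultCredentials = true;          // authenticate as the running account
            foreach (string url in urls)
            {
                try
                {
                    client.DownloadString(url);           // the response is discarded; the
                    Console.WriteLine("Warmed {0}", url); // request itself triggers compilation
                }
                catch (WebException ex)
                {
                    Console.WriteLine("Failed {0}: {1}", url, ex.Status);
                }
            }
        }
    }
}
```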

 

Why do Application Pools need to recycle?

For one, an application pool recycles whenever web.config changes; this immediately enforces whatever changes are made. (Strictly speaking, a web.config change restarts the application domain inside the worker process rather than the whole pool, but the effect on the next request is the same.) When you update an assembly in the GAC you also need to recycle for your new version to be "seen," since the GAC is only read when the application pool starts up - it isn't "watched" for changes. So developers tend to recycle the pools often by hand, whether through IIS, a script, or Spence's Application Pool Recycle Utility. To understand the other reasons we need to do more digging.
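If you prefer code to clicking through IIS, here is a minimal sketch of recycling a single pool, assuming IIS 7 or later, a reference to Microsoft.Web.Administration.dll, and administrative rights. The pool name "SharePoint - 80" is only a placeholder.

```csharp
// A sketch of recycling one application pool from code (IIS 7+,
// Microsoft.Web.Administration referenced, run elevated).
using System;
using Microsoft.Web.Administration;

class RecyclePool
{
    static void Main()
    {
        using (ServerManager serverManager = new ServerManager())
        {
            ApplicationPool pool = serverManager.ApplicationPools["SharePoint - 80"]; // placeholder name
            if (pool == null)
            {
                Console.WriteLine("Pool not found.");
                return;
            }
            pool.Recycle();   // restarts just this worker process; other pools are untouched
        }
    }
}
```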

On 32-bit systems each worker process gets a 4 GB virtual address space, of which 2 GB is user-addressable by default, and in practice the CLR would hit out-of-memory issues somewhere between 1.2 and 1.4 GB of usage. This is because the CLR itself takes up a certain amount of space, let's say ~200 MB, the assemblies loaded from the GAC take up space, and whatever is left is available to the application(s) in the pool. IIS lets you set recycling when memory load reaches a certain point, so ~800 MB was a common setting for recycling MOSS applications if RAM was no issue (i.e. you had 4 GB or more). Lower limits would be set to throttle individual pools so they might behave well together given physical memory constraints. In 32-bit days, the point of setting limits was generally to divide available memory among application pools so all could happily co-exist.
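For reference, that kind of memory-based threshold can also be set from code rather than the IIS console. This is a sketch under the same assumptions as above (IIS 7+, Microsoft.Web.Administration, run elevated); note that the PrivateMemory limit is expressed in kilobytes.

```csharp
// A sketch of setting a private-memory recycle threshold (~800 MB) on a pool.
// The pool name is a placeholder; PrivateMemory is in KB.
using Microsoft.Web.Administration;

class SetMemoryLimit
{
    static void Main()
    {
        using (ServerManager serverManager = new ServerManager())
        {
            ApplicationPool pool = serverManager.ApplicationPools["SharePoint - 80"]; // placeholder name
            pool.Recycling.PeriodicRestart.PrivateMemory = 800 * 1024;                // recycle at ~800 MB of private bytes
            serverManager.CommitChanges();
        }
    }
}
```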

On 64-bit systems like those hosting SPS 2010, the address space becomes huge, and you'd be tempted to think "well, now I can just let SharePoint use as much memory as it needs, limited only by the physical memory of the server." And like most other common-sense thoughts with respect to SharePoint, you would be wrong. The correct answer is still somewhere between 1.2 and 1.4 GB.

As pages are opened, objects are created and destroyed, and particularly as cached objects are created and expired, the garbage collector does its thing to reclaim memory. Just as a hard drive becomes fragmented over time with writes, reads and deletes, so too does memory become fragmented over the life of an application pool. As fragmentation increases, more garbage collection is necessary. The .NET runtime stores objects larger than about 85 KB (85,000 bytes) on the large object heap (LOH), which is not compacted, so it is particularly expensive to deal with once it becomes fragmented (which is why I called out cached objects above). When .NET needs more memory it allocates address space in 64 MB chunks; if memory is fragmented to the point that 64 MB of contiguous space isn't free, an "out of memory" exception is thrown.
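A tiny sketch makes the 85 KB threshold concrete: an allocation at or above it lands on the LOH (which reports as generation 2, since the LOH is collected along with gen 2), while a smaller allocation starts life in generation 0.

```csharp
// A small demo of the LOH threshold: objects of 85,000 bytes or more go to
// the large object heap, which GC.GetGeneration reports as generation 2.
using System;

class LohDemo
{
    static void Main()
    {
        byte[] small = new byte[80 * 1024];   // below the threshold: small object heap
        byte[] large = new byte[85000];       // at the threshold: large object heap

        Console.WriteLine("small is in generation {0}", GC.GetGeneration(small)); // typically 0
        Console.WriteLine("large is in generation {0}", GC.GetGeneration(large)); // 2
    }
}
```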

At some point, performance is as dependent on fragmentation and reorganizing memory as it is on the number of reads and writes. This inflection point is a good time to recycle the application pool and start over with a clean slate. So the purpose of recycling application pools for 64-bit applications is as much about defragmenting memory - in particular the large object heap - as it is about "assigning" memory per application pool.

 

How do you avoid JIT Lag?

There are two practical choices for recycling: daily at a preset time, or when memory hits a certain load. The advantage of daily recycling is that you can predictably "warm up" your application(s) once they restart - you might even script the two operations together as a scheduled task (as in the sketch below) to minimize the chance that user requests are JIT-lagged. The advantage of recycling based on memory load is that it's less arbitrary and maximizes performance, though there is no obvious way to automatically warm up pages afterward to eliminate JIT lag.
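Here is one hedged sketch of tying the two operations together - recycle, pause while the new worker process starts, then request the key pages - intended to run from a scheduled task at the chosen recycle time. The pool name, URL, and ten-second pause are all assumptions to tune for your own farm.

```csharp
// A sketch of a combined recycle-and-warm-up task (IIS 7+,
// Microsoft.Web.Administration referenced, run elevated on a schedule).
using System;
using System.Net;
using System.Threading;
using Microsoft.Web.Administration;

class RecycleAndWarm
{
    static void Main()
    {
        using (ServerManager serverManager = new ServerManager())
        {
            serverManager.ApplicationPools["SharePoint - 80"].Recycle(); // placeholder pool name
        }

        Thread.Sleep(TimeSpan.FromSeconds(10));   // give the new w3wp.exe a moment to start

        using (WebClient client = new WebClient())
        {
            client.UseDefaultCredentials = true;
            foreach (string url in new[] { "http://intranet/Pages/Default.aspx" }) // placeholder URL
            {
                client.DownloadString(url);       // first hit pays the JIT cost, not your users
            }
        }
    }
}
```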

For a nice round-up of warm-up scripts available, head to EndUserSharePoint:

https://www.nothingbutsharepoint.com/sites/itpro/Pages/Roundup-SharePoint-Warm-Up-Scripts.aspx 

 

Conclusions

- It's a bad idea to prevent application pools from recycling, and a good idea to think about how best to recycle yours.

- 32-bit applications like MOSS 2007 should be set to recycle based on the maximum amount of memory you want available to each application pool but never less than 200 MB, and anywhere from 800 MB to about 1.2 GB if you have RAM to spare.

- 64-bit applications like SPS 2010 are more tolerant of a daily recycle schedule, though to maximize performance the recommendation is to set the pool to recycle at 1.2 or 1.4 GB.

- I can't believe that "JIT lag" didn't enter ASP.NET terminology a long time ago, or at least I can't find any sign of it. Death to JIT lag!

- And yes, I hope to write more this year, be seeing you!

 

Follow-up Q&A:

Q: How do I figure out whether memory fragmentation is an issue for me?

A: Check out this MSDN article by the guys who know this stuff best. In addition to explaining the "why" in more detail, it tells you which PerfMon counters to enable. "% Time in GC" (under the .NET CLR Memory category), for example, is fine around 10% but should not get over 50%.

Link: http://msdn.microsoft.com/en-us/magazine/cc163528.aspx
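If you'd rather sample that counter from code than from the PerfMon UI, something like the following sketch works. The "w3wp" instance name is an assumption: a server running several application pools exposes multiple instances (w3wp, w3wp#1, and so on), so pick the one belonging to your pool.

```csharp
// A sketch of reading "% Time in GC" for a worker process. The instance
// name "w3wp" is a placeholder; counters need two samples to report a value.
using System;
using System.Diagnostics;
using System.Threading;

class GcTimeCheck
{
    static void Main()
    {
        using (PerformanceCounter counter =
            new PerformanceCounter(".NET CLR Memory", "% Time in GC", "w3wp"))
        {
            counter.NextValue();              // first read primes the counter
            Thread.Sleep(1000);
            float percent = counter.NextValue();
            Console.WriteLine("% Time in GC: {0:F1}", percent);
            // Around 10% is healthy; sustained values approaching 50% suggest
            // serious collection/fragmentation pressure.
        }
    }
}
```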

 

Q: How much of a performance hit are we talking about?

A: Garbage collection doesn't happen on every page request, so the answer is somewhere between "I don't know" and "it depends." If the 50% figure in the article above is a typical symptom, that would seem to indicate that 40 to 50% of your CPU cycles are going to garbage collection rather than to responding to requests.

 

Other Resources:

Stefan Goßner post: "Dealing with Memory Pressure Problems in MOSS/WSS." Had I seen this earlier I might have posted a link rather than written much of the above, but it is nice to see a post that corroborates a recommendation - in this case the 800 MB recycle setting for 32-bit IIS.

MSDN Article: "Introduction to Using Disposable SharePoint Objects"
