Collections and caching redux.

In feedback to the Collections and caching post, ServProvAdmin asked enough quick questions to fill a whole post.

"What about usage in large web farms where there may be 200+ applications? What affinity might an admin run into?"

Caching is well-suited to static data; dynamic data should be distributed for scalability in other ways. If you must cache dynamic data, then you need to build your own invalidation mechanism across machines, or design your site and business processes around the issue. For example, Amazon generates pages based on cached inventory data, so the site might read “Usually ships within 24 hours” plus the warning “Only 3 left in stock.” When you actually place an order, Amazon sends out an e-mail confirmation only after availability is confirmed against the real, live data. This guarantees both a responsive site (pages are generated against a fast cache) and reliable business processes (actual orders go through a managed queue).
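
Here's a minimal sketch of that split, with a plain Hashtable standing in for the near inventory cache and a plain Queue standing in for the managed order queue (the class and member names are mine, not Amazon's):

    using System;
    using System.Collections;

    public class StoreFront
    {
        // sku -> units on hand; stands in for a real near cache.
        private Hashtable inventorySnapshot = new Hashtable();
        // Stands in for a managed message queue such as MSMQ.
        private Queue pendingOrders = new Queue();

        public string RenderAvailability(string sku)
        {
            // Page generation reads the fast, possibly-stale snapshot.
            object cached = inventorySnapshot[sku];
            int onHand = (cached == null) ? 0 : (int)cached;
            if (onHand > 0)
                return String.Format(
                    "Usually ships within 24 hours. Only {0} left in stock.", onHand);
            return "Temporarily out of stock.";
        }

        public void PlaceOrder(string sku)
        {
            // The order is queued; the confirmation e-mail goes out only
            // after a back-end process verifies availability against the
            // real, live data.
            pendingOrders.Enqueue(sku);
        }
    }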

Page, page-fragment and data caching are ideal for large server farms and should be encouraged wherever appropriate; they can help apps seriously scale. Page caching is especially powerful. All page requests go through two levels -- first the HttpModules and then the HttpHandlers. The HttpHandler level is where the ASP.NET filter kicks in to process the .aspx and render output (read through machine.config to confirm this for yourself). But it's the earlier HttpModule level that looks for pages already in the output cache. If a page is found there, the lower-level processing never needs to be called; the HTML can be served straight up. That's a serious speed advantage. For further reading, check out the excellent Rob Howard article on Page Output Caching.
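
Turning it on takes a single directive at the top of the .aspx; a minimal sketch (the 60-second duration is an arbitrary choice):

    <%-- Cache this page's rendered HTML for 60 seconds; one copy serves
         all query strings. Repeat requests are answered at the HttpModule
         level before the page handler ever runs. --%>
    <%@ OutputCache Duration="60" VaryByParam="None" %>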

Let's get back to the Cache class. You save resources when you use the Cache class to store the result of a process that is performed over and over again -- a database call, say, or the generation of a frequently-used portion of a page. You're trading a costly (far) operation for a cheap (near) cache operation. You don't need partial caching for static pieces like HTML headers and footers since no processing is required. That would be trading a non-operation for a cache call, and that would be a bad trade. But if your headers are dynamic -- for example, if you pass a page title or the current user's name to a header control -- then you can benefit from caching that fragment.
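
In code, the trade is the classic check-then-populate pattern. Here's a sketch; the "Products" key, the twenty-minute lifetime, and the loader stub are my own illustrative choices:

    using System;
    using System.Data;
    using System.Web;
    using System.Web.Caching;

    public class ProductData
    {
        public static DataSet GetProducts()
        {
            Cache cache = HttpContext.Current.Cache;
            DataSet products = (DataSet)cache["Products"];
            if (products == null)
            {
                // The costly (far) operation: hit the database once...
                products = LoadProductsFromDatabase();
                // ...then keep the result (near) for twenty minutes.
                cache.Insert("Products", products, null,
                             DateTime.Now.AddMinutes(20),
                             Cache.NoSlidingExpiration);
            }
            return products;
        }

        private static DataSet LoadProductsFromDatabase()
        {
            // Stub: in real code this would run a SELECT through ADO.NET.
            return new DataSet("Products");
        }
    }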

This sort of caching is not ideal for dynamic session-bound data in a server farm. It might be fine for static content like a fragment displaying the user's name (where a copy cached on each machine is peachy), but would be impractical for a shopping cart. A fast database local (near) to the cluster would be more appropriate for the cart, while another (far) database might be used for actual orders as in the earlier Amazon example. A lot of data is static enough to be cached near. User names, page titles, and drop-down list fodder like region and country lists rarely change. A daily refresh is fine for most catalogues and an obvious choice for weather forecasts or movie listings. The trick is to build an infrastructure appropriate to the rhythm of the site.
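
As a sketch, matching expirations to that rhythm might look like this (the keys and loader methods are hypothetical):

    Cache cache = HttpContext.Current.Cache;

    // Country list: practically static, so a full day in the cache is safe.
    cache.Insert("Countries", LoadCountries(), null,
                 DateTime.Now.AddHours(24), Cache.NoSlidingExpiration);

    // Daily catalogue: expire at midnight so tomorrow's refresh is picked up.
    cache.Insert("Catalogue", LoadCatalogue(), null,
                 DateTime.Today.AddDays(1), Cache.NoSlidingExpiration);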

Developers need to know when they're writing for a cluster (or not). It's as basic and as critical to performance as knowing whether you're building against Oracle or SQL Server. Certainly, some apps (think Classic ASP apps which rely on Session vars for state) need to be pinned to a specific box because they were designed that way. While ASP.NET makes it easier to build scalable sites, it still takes a bit of planning if you need session-bound state data.
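
In ASP.NET, moving session state off the local machine is a web.config switch, so any box in the cluster can serve any request. A minimal sketch (the state server address is a placeholder):

    <configuration>
      <system.web>
        <!-- Session lives in a separate state service instead of in-process
             memory, so requests need not be pinned to one machine. -->
        <sessionState mode="StateServer"
                      stateConnectionString="tcpip=10.0.0.5:42424"
                      timeout="20" />
      </system.web>
    </configuration>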

"Memory? - can its usage be controlled or configured? Is the L2 cache used?"

The Cache class is managed by ASP.NET; it doesn't have anything to do with the hardware (L2) cache. Cached objects are subject to eviction (and then garbage collection) when memory runs short, using a least-recently-used (LRU) algorithm. To keep rarely-used but expensive-to-load items from being evicted this way, you can take advantage of the CacheItemPriority parameter of Cache.Insert.
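
For example, the full Cache.Insert overload lets you mark an item as not removable (the "RateTables" key and loader are hypothetical):

    Cache cache = HttpContext.Current.Cache;
    cache.Insert("RateTables", LoadRateTables(),
                 null,                            // no file/key dependencies
                 Cache.NoAbsoluteExpiration,
                 Cache.NoSlidingExpiration,
                 CacheItemPriority.NotRemovable,  // exempt from scavenging
                 null);                           // no removal callback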

"What happens in a multi processor box when 200 apps are all trying to pull stuff from the cache? what happens?"

The Cache class has its own automatic locking mechanism, unlike the Application object which must be manually locked and unlocked. If you have 200 apps on a box, each would manage its own cache. Concurrent access is not an issue.
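
Inside a page, the contrast looks like this (the key names are arbitrary; note that while individual Cache reads and writes are thread-safe, a read-modify-write sequence is still not atomic):

    // Application demands explicit locking around updates...
    Application.Lock();
    int hits = (Application["HitCount"] == null)
        ? 0 : (int)Application["HitCount"];
    Application["HitCount"] = hits + 1;
    Application.UnLock();

    // ...while individual Cache reads and writes need no manual lock.
    Cache["LastVisitor"] = User.Identity.Name;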

One thing that alarms me about this feedback is the implication that "200+ applications" on every server in a "large web farm" is normal. This is not a normal way to load-balance a network, and it's a sure way to burn memory on every machine. Much better to break the farm into clusters of logical machines and assign each app to a single cluster, with some decision process that balances traffic among the clusters. The exception would be a single app that requires many logical machines (clusters), either because its database is partitioned across those logical machines or because user sessions are pinned not to single machines but perhaps to a cluster with shared near storage.

 
