It is a common misconception that cloud-hosted services and infrastructure (SaaS and IaaS) are turn-key propositions that require less skill than self-hosted platforms to plan, deploy, operate, and maintain. The reality is that a well-rounded book on the topic - like MSFT O365 Administration - must spend roughly 200 pages discussing planning tools and directory services before even touching on O365 management. If you're considering an O365 deployment, you want this book close at hand.
The book is written
for "Information Technology (IT) system architects who need to integrate
Office 365 with existing on-premises technologies," but it is equally
useful to everyone in the traditional TechNet audience, including IT Operations
teams and infrastructure experts (IT Pros). Extensive attention is given to
System Center monitoring and management (via SCOM, DPM, VMM, SCO, etc.), and
SCOM alerts. One chapter helps you install Orchestrator; another covers Service Manager automation. By this point we're 400 pages deep and getting into remote administration with PowerShell.
But wait, there's
more! At this point the tenancy is configured, complete with monitoring and
management, but we haven't gone deep on any of the actual services - SharePoint
Online, SkyDrive Pro, Exchange, and Lync. The content is roughly proportional
to the pain: Exchange integration gets the heaviest treatment, followed by
SharePoint and Lync.
The SharePoint Online coverage is terrific in describing SP's topology and constraints, and the App Store model used to add features. The book does not attempt to cover user-scoped topics like governance, information architecture, or social features (perhaps because it was mostly written during beta, prior to Microsoft's announcements about Yammer integration). So you also won't see
discussion of common issues with hybrid scenarios, including Search and Social.
So what you have is
a great book with the insights of experience to get you into the game, plus a
wealth of tips and tooling to ease the road ahead. The book is (almost
surprisingly) well-written and concise with many step-by-step sections
including full screen shots. I didn't work through them all so there may be
corrections as these products are updated, but in the cases I read, the
high-level plan is discussed first, terms are explained, risks are identified
and troubleshooting guidance is provided; minor changes in the UI should not
trip up anyone using this book as a reference. With a service established and
integrated, the team can then look to other resources to go further into
planning governance, information architecture, user experience, and App
development. Well done and strongly recommended.
[In response to a question on #SPYAM I wrote this update of an article from 2010 titled "The Relative Effort of SharePoint 2010 vs. 2007." -Eli.]
SharePoint is the best demo-ware ever, and that is why it is a multi-billion dollar product. It's like going to the pet store and seeing a great dog that does backflips and all kinds of tricks - and it really is a smart dog and it does all those tricks - but when you get it home you realize that what you need is a dog that hunts. SharePoint can be trained, but is fundamentally a platform where Microsoft's priority was first to get the foundations right - to make it trainable and extensible - and today their priority is to make it work and scale in the cloud - their cloud.
If Microsoft's O365 scenarios are not your scenarios, then it is again time to fill the gaps with custom solutions and Apps. You need an experienced architect because solution design matters. You need to know what infrastructure you need to support your solutions. You need to know what components are out of scope for your business case so you do not provision needless infrastructure. If you want a hybrid of cloud and on-premises in any way, you need knowledge of both.
And fundamentally, you need to understand what specific business need you are solving so an appropriate solution can be delivered to meet it. "We need SharePoint" always ends in low adoption. "We need a generic template that works for both per-client CRM and project execution" always ends in low adoption. "We need a website where we can share schedules and designs with clients to support construction projects" is a specific need that can be designed and delivered. Solutions with a concise purpose and audience are seeded to succeed.
SharePoint is complex. There is no substitute for the knowledge and skill needed to design and deliver efficient, maintainable, and extensible solutions. If we were talking about a brand new paradigm with its own model - like when Facebook or Twitter were first released - I might agree with your executive - go ahead and kick the tires. But we're talking about your business, so unless you're okay to proceed out-of-box and without any competitive advantages, that dog will not hunt. Get some experts on your team.
At this year's SharePoint Conference there was an active Community Zone where people could learn more about user groups, MVPs, and current events in the SharePoint community. I answered questions in the Community Zone on Monday and the most frequent was "How do you start a User Group?" Here are my thoughts, and I'd love to hear yours so please comment.
The first and only requirement is a location; once you have one you can book a speaker and make your announcement. Find a location where you can get consistent access as often as you plan to meet (e.g. monthly).
It helps to choose a consistent day each month (e.g. "the third Wednesday of each month") and be ready to announce your next meeting's speaker at each meeting. If you can't plan that far ahead, give members at least a week's notice. The longer notice you can give, the more people you will attract.
Build a website. I like Meetup.com because it's free for small groups and inexpensive as you grow (~$75/year). You can host your own SharePoint site but be aware this is a significant commitment that introduces risks that a volunteer-run UG doesn't always have the resources to support. Meetup.com takes that whole headache away and lets your planners focus on planning (find ours at http://www.tspug.com).
At your first meeting (or every meeting) ask people what they want to learn about. I write the ideas on a board or flip chart and then run through the list a second time to vote on topics. For the top 3 to 5, ask if anyone there would like to present. You can often plan 2-3 meetings at a time like this. The point is to involve people, and asking them what they want their group to be is a good step towards being able to ask them to contribute.
A box of reusable name tags goes a long way.
After your first meeting register the group with MSTC: https://www.technicalcommunity.com/
A user group exists for the community of its members, not its sponsors. While all UGs rely on sponsors for facilities, speakers, giveaway items, catering and other perks, you will build a stronger community with a policy of inclusion rather than exclusion. If you ever have to ask or answer who is "allowed" to be a sponsor or speaker, know that you are walking a slippery slope. Diversity strengthens.
Your own members are your best speakers. Develop them. A night of 10-minute talks is a great way to encourage new speakers to share their stories.
At TSPUG we start meetings by having new people introduce themselves and doing a general Q&A. People are often there to find answers, and if you can supply them right away they can sit back and relax for the rest of the night. User groups started as gatherings to talk about work. While those are officially "SharePints" now, always set aside time for people to chat and get to know each other.
Let your local Microsoft office know you exist and ask for their support. Get your group announced in their regular missives to partners and customers (e.g. MSDN Flash), ask for giveaways, ask them to present.
As you grow, delegate. Different people can confirm the facility a few days ahead, coordinate speakers, check the registration count for catering, answer queries on the website, act as emcee, and tabulate evals.
Speaking of evaluations - use them to get feedback about what people want to see, and get comments and scores to your speakers so they can improve. To encourage people to fill them out, use evals to raffle off the prizes you've collected from sponsors.
Sit back and enjoy your new user group!
What did I miss? Do you have a story to share? Comment!
If our ex-mayor coaches football the way he runs City Hall it probably goes something like this. . .
Coach: All right team, this is the league final, the game we've played so hard to get to all season. Out there are provincial and national scouts with contracts in their pockets, all you need to do is take the title. Here's the gameplan: Get out there and play the best soccer of your lives.
Player: Hey coach, no disrespect but uh, we're a football team.
Coach: Not tonight boys. Tonight we're mavericks playing by our own rules, and soccer is just another name for football. We're going to win this title on our own terms and pity the fools who say otherwise.
Player: What about the refs? If we kick the ball around and don't stop play on whistles, our guys are going to get kicked off the field. How do we make any points without TDs and field goals?
Coach: Let the other suckers read their left wing commie rule books, and see how far that gets 'em. Hah, because we'll be playing soccer! One step ahead! It's brilliant! We don't need no stinking refs. Most of those losers are dead weight and the rest are politically motivated. How can you know you're going to win if you play by the same rules as the other team? We're underdogs! Mavericks! We make points by putting the ball into the goal. Heck we'll bring our own ball, and our own goal. And dictate the rules as we go. If I couldn't do that I wouldn't be coach - I took this job on my own terms and those are them. I make the gold, I make the rules. It's not how you play the game, it's whether you win or lose in your own mind.
Surprisingly, coach's tactics did not go over well with the pinko refs, er, judge.
Ford pledges to fight the decision "tooth and nail" rather than with "knowledge and common sense," leading inside sources to believe that his attention was simply still on lunch rather than the legal battle at hand.
Here in Canada, and particularly in south Ontario we're lucky to have an exceptionally strong SharePoint community. With the publication this month of Ruven Gotz's Practical SharePoint 2010 Information Architecture I count at least 6 books that were either written by, or contain contributions by our local SharePoint MVPs. Bookmark this post or watch my tweets for updates as I post reviews and add other local titles.
Practical SharePoint 2010 Information Architecture by Ruven Gotz
Not yet reviewed.
Professional SharePoint 2010 Development 2nd Ed., co-authored by Reza Alirezaei, co-technical editor Eli Robillard
I'm a biased reviewer of this volume having contributed to the 2007 version and provided technical editing for several chapters of this release. That said, this is a great developer reference written by the experts who know these topics best. I wasn't aware of who the authors were during the editing process, and with each chapter I wondered, "was this written by the product team?" Seriously good insights, highly recommended.
SharePoint 2010 Enterprise Architect's Cookbook co-authored by Reza Alirezaei
Not yet reviewed.
Real World SharePoint 2010, co-authored by Reza Alirezaei
This is a terrific collection of chapters by 22 SharePoint MVPs; basically 22 experts writing about the subject areas they live and breathe daily.
Microsoft SharePoint 2010 Development Cookbook by Ed Musters
This book is like a self-paced introductory class in SharePoint 2010 development led step-by-step by an experienced, thoughtful, well-spoken instructor, which is exactly who Ed is. Every topic Ed covers is treated in terrific detail, and that said, my only caveat would be to check the table of contents to be sure that the topics you are interested in are covered. This is a focused, hands-on introduction to standard skills - building a development machine, columns and content types, event receivers, web parts, packaging, basic workflow, basic branding, the client object model and more. And once you absorb this one you will be ready for more advanced material.
Expert SharePoint 2010 Practices, co-authored by Ed Musters
Not yet reviewed.
And one more by the Easterners: Beginning SharePoint 2010: Building Business Solutions with SharePoint co-authored by Amanda and Shane Perran
Not yet reviewed.
Thanks to all my local colleagues for taking the time to share their insights with the world-wide community. This collection represents hundreds of hours sacrificed from family, friends and work, and multiples of that in practical experience. Have I missed any? Do you have reviews of your own? Let me know in the comments.
Thanks to Danny, Reza, all the speakers and the rest of the team for hosting another great SharePoint conference in Toronto. Also a shout-out to the SharePoint Blues Band for putting on a great show Tuesday night, and for the privilege of joining them on-stage to play one of my originals. It was a lot of fun, and I hope we can do it again next year or sooner!
Also a big thanks to everyone who attended my session on Large-scale SharePoint Architecture. I've attached the deck to this post and added some of my thoughts in the notes, so it should be a good read even if you couldn't be there in person [Download].
Also as a special bonus I've uploaded my personal collection of Visio Shapes. I've been growing this template for several years now and use it almost daily, and I'd be interested to read or see how you use it too [Download].
2012 SharePoint Summit Toronto
@SP_Summit Twitter Feed
Conference wrap-up message from Danny
The question was asked, "how hard is it to configure FAST and what does that effort give you?" The none-too-helpful answer is that with every search product you get what you give. FAST happens to have more substance so logically there will be more to configure than some alternatives, and you can get more out of it in the long run and continuously grow its ROI as you learn its ropes.
First ask what you're searching for - know your corpus. How large is the corpus, how many users will you have in years 1/2/3, is this a single farm or several geo-distributed farms, and is there non-SharePoint content to index (databases, public web sites, file shares, etc.)? Just as important is your organization's previous experience with search - do people wince when search is mentioned? Have you used search appliances? Do you have librarians, taxonomy managers, or a dedicated search curator? Have you customized or built apps on top of search? If what you have now brings the shivers, then it's easy to win friends and influence people just by delivering search that doesn't suck.
And now we arrive at the central question: Are you willing to manage search as an application? Apps get attention. When they don't work, the business has no problem kicking butts until they're fixed. So when search is more helpish than helpful, why does no one fix it? Garbage in, garbage out. Search needs to be managed as an application.
No search product is great when left with a default configuration. All are built to be improved, and along with the sexiness of the UX, this is where you will find the important differences in features. Capture that in your evaluations. FAST is better than many - it does some automatic tuning - but I would never bet a career or reputation on out-of-box FAST. No application or product does an acceptable job in the long run with a "set it and forget it" mentality. You would never treat your LoB applications this way. Nor does it work for Google.com, the Google appliances, FAST search, or SharePoint Enterprise Search. Users can scream at these products to serve better results all day (and they do), and the net effect will range from zero to nil.
Even a part-time assignment - a half day every other week - will make a sizable difference to user satisfaction. That time is spent reading feedback (you do have a feedback link on your results page, right?), investigating the search logs to see if people are finding what they're looking for (and configuring synonyms and "Best Bets" if not), and building "canned queries" and other ways to ease the experience. This is how the nifty "metadata-based navigation" in SharePoint 2010 came to be, and I see other managed search applications serve magic in their companies year after year.
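The triage pass described above can be scripted. This is a hedged sketch, not a real SharePoint or FAST API: the log format and the names here (`triage_search_log`, the tuple fields) are hypothetical. The idea is the same, though - surface zero-result queries as synonym candidates and queries with results but no clicks as Best Bet candidates.

```python
from collections import Counter

def triage_search_log(entries):
    """Group query-log entries into curation candidates.

    `entries` is a list of (query, result_count, clicked) tuples - a
    hypothetical, simplified log format; real search logs are richer.
    """
    zero_results = Counter()   # nothing found: synonym candidates
    no_clicks = Counter()      # found but ignored: Best Bet candidates
    for query, result_count, clicked in entries:
        q = query.strip().lower()
        if result_count == 0:
            zero_results[q] += 1
        elif not clicked:
            no_clicks[q] += 1
    return zero_results.most_common(10), no_clicks.most_common(10)

log = [
    ("vacation policy", 0, False),   # maybe map to "leave policy"?
    ("vacation policy", 0, False),
    ("expense form", 12, False),     # results shown, none clicked
    ("expense form", 12, True),
    ("org chart", 3, True),
]
synonyms, best_bets = triage_search_log(log)
print(synonyms)    # [('vacation policy', 2)]
print(best_bets)   # [('expense form', 1)]
```

Even a crude report like this tells the curator where a synonym mapping or a hand-picked Best Bet would pay off first.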
So is FAST for you? It depends. If your corpus is under a million documents, your current offering is weak, and you can only afford minimal management, then out-of-box SharePoint Enterprise search is a great choice. If your indexing requirements are more demanding, or users more accustomed to a great experience, then FAST is a great choice to build on. If you're prepared to make any meaningful investment in people to manage search, start with FAST because it provides a great growth path for features and scalability. The question remains: To what extent are you willing to manage search as an application?
Common Sense and Opt-In: http://weblogs.asp.net/erobillard/archive/2003/05/08/6680.aspx
Eight years ago I wrote a brief piece on cookie management proposing that preferences be remembered by default with an opt-out option. The part that got the most feedback was this:
The act of remembering preferences in the form of cookies is not gathering information on surfing habits. If the issue is the perception of privacy, then educate your users about cookies. If you care about privacy, provide a button to delete cookies previously stored by your site.
Eight years later, these basic principles are reflected in the EU cookie law (http://eucookiedirective.com/) with the notable exception of opt-in vs. opt-out. People should know and care what's being stored on their machines, and as a principle transparency should always win.
The other idea worth a second thought is to make it easy for people to delete their cookies. Give them the ability to say "I'll use the site now, but to give me control over my own privacy, let me delete any cookies when I'm done."
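Mechanically, a "delete my cookies" button has only one lever on the server side: re-send each cookie with an expiry date in the past so the browser discards it. A minimal stdlib-only sketch, with hypothetical cookie names and helper:

```python
EXPIRED = "Thu, 01 Jan 1970 00:00:00 GMT"  # any past date expires the cookie

def delete_cookie_headers(cookie_names, path="/"):
    """Build Set-Cookie headers that tell the browser to discard each cookie.

    A server can't reach into the browser's cookie jar directly; honouring a
    "delete my cookies" button means re-sending each cookie with an empty
    value and a past expiry. Cookie names here are hypothetical.
    """
    return [
        ("Set-Cookie", f"{name}=; Path={path}; Expires={EXPIRED}")
        for name in cookie_names
    ]

# The handler behind the button would emit these headers with its response:
for header, value in delete_cookie_headers(["prefs", "last_search"]):
    print(f"{header}: {value}")
```

Note the `Path` must match the path the cookie was originally set with, or the browser will treat it as a different cookie and keep the original.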
Are these ideas still controversial? Always curious to hear.
What are Application Pools?
Application pools are an IIS construct; each pool runs a worker process hosting an instance of the Common Language Runtime (CLR) to execute managed .NET code. Each application pool in IIS hosts one or more web applications, and the recommendation is to stay under 10 pools per server. This recommendation was made in 32-bit days, and other considerations like 32 vs. 64-bit, available RAM, and I/O (bandwidth and disk usage) really take over as you add application pools. With some planning, the right horsepower and usage characteristics, and a healthy dose of monitoring, this is a "soft" limit and it is possible to grow beyond 10.
What happens when an application pool recycles?
Recycling is like rebooting. The process is stopped and started fresh. The CLR reloads, the assemblies in the GAC are re-read, and the application is ready to respond to requests. When the first request comes through, the application looks at the page (aspx), web service (asmx), or whatever resource was requested, checks whether it was pre-compiled or needs to be JIT-compiled, and reads the assemblies required to serve it (running the same JIT-compilation checks on them) before serving the response.
Why all the JITter? In a nutshell, when you compile an assembly in Visual Studio you're compiling to MSIL (which is processor-agnostic), and not to native machine code (which is processor-specific). This JIT compilation is why the first request to a page takes longer than subsequent requests - the first request has extra work to do.
If you take the heavy-handed approach of resetting IIS (IISRESET) rather than recycling an individual application pool, there is much more to be torn down and restarted. Developers quickly learn the speed advantage of resetting single application pools, and administrators quickly learn the value of "warming up" pages so the JIT-compilation is done by the time users make their page requests.
Why do Application Pools need to recycle?
For one, an application pool recycles whenever web.config changes; this immediately enforces whatever changes are made. When you update an assembly in the GAC you also need to recycle for your new version to be "seen," since the GAC is only read when the application pool starts up - it isn't "watched" for changes. So developers tend to recycle pools often, whether through IIS, a script, or Spence's Application Pool Recycle Utility. To understand the other reasons we need to do more digging.
On 32-bit systems you could address up to 4 GB including 2 GB of user-addressable space per application pool, and the CLR would hit out-of-memory issues somewhere between 1.2 and 1.4 GB of usage. This is because the CLR itself takes up a certain amount of space, let's say ~200 MB, the assemblies pre-loaded in the GAC take up space, and whatever is left is available to the application(s) in the pool. IIS lets you set recycling when memory load reaches a certain point, so ~800 MB was a common setting for recycling MOSS applications if RAM was no issue (i.e. you had 4 GB or more). Lower limits would be set to throttle individual pools so they might behave well together given physical memory constraints. In 32-bit days, the point of setting limits was generally to divide available memory among application pools so all could happily co-exist.
On 64-bit systems like those hosting SPS 2010, the address space becomes huge, and you'd be tempted to think "well now I can just let SharePoint use as much memory as it needs, limited only by the physical memory of the server." And like most other common-sense thoughts with respect to SharePoint, you would be wrong. The correct answer is still somewhere between 1.2 and 1.4 GB.
As pages are opened, objects are created and destroyed, and particularly as cached objects are created and expired, the garbage collector does its thing to reclaim memory. The same way that a hard drive becomes fragmented over time with writes, reads and deletes, so too does memory become fragmented over the life of an application pool. As fragmentation increases, more garbage collection is necessary. The .NET garbage collector uses the large object heap (LOH) to store objects larger than 85 KB, and it is particularly expensive to reorganize these when the LOH becomes fragmented (which is why I called out cached objects above). When .NET needs more memory it allocates blocks in 64 MB chunks. If memory is fragmented to the point that 64 MB of contiguous memory isn't free, an "out of memory" exception is thrown.
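A toy model makes the contiguity point concrete. This is not how the CLR allocator actually works - just a sketch showing that total free memory can be ample while no single 64 MB run is available:

```python
def largest_contiguous_free(blocks):
    """Given a memory map as a list of (size_mb, in_use) segments, return
    (total_free_mb, largest_free_run_mb). A toy model of heap fragmentation,
    not the CLR's actual allocator."""
    total = largest = run = 0
    for size, in_use in blocks:
        if in_use:
            run = 0            # a live object breaks the free run
        else:
            total += size
            run += size
            largest = max(largest, run)
    return total, largest

# Plenty of memory is free in total, but live objects chop it into small
# runs, so a request for one contiguous 64 MB block would still fail:
heap = [(32, False), (16, True), (48, False), (8, True), (40, False)]
total_free, largest_run = largest_contiguous_free(heap)
print(total_free, largest_run)  # 120 48 -> 120 MB free, no 64 MB run
assert largest_run < 64
```

Recycling the pool is the equivalent of wiping this map clean: all memory is free and contiguous again.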
At some point, performance is as dependent on fragmentation and reorganizing memory as it is on the number of reads and writes. This inflection point is a good time to recycle the application pool and start over with a clean slate. So the purpose of recycling application pools for 64-bit applications is as much about defragmenting memory - in particular the large object heap - as it is about "assigning" memory per application pool.
How do you avoid JIT Lag?
There are two choices for recycling: daily at a preset time, or when memory hits a certain load. The advantage of daily recycling is that you can predictably "warm up" your application(s) once they restart. You might even script the two operations together on a timer job to minimize the chance that user requests are JIT-lagged. The advantage of recycling based on memory load is that it's less arbitrary and maximizes performance, though there is no obvious way to automatically warm up pages to eliminate JIT lag.
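A warm-up pass can be as simple as requesting each key page once after the scheduled recycle, so JIT compilation happens before real users arrive. A hedged sketch - the URLs are hypothetical, and the fetch step is injectable so you can dry-run it without a server:

```python
from urllib.request import urlopen

def warm_up(urls, fetch=None):
    """Request each URL once and report the status per URL.

    `fetch` defaults to a simple urllib GET; pass your own callable to
    add authentication, retries, or a dry run.
    """
    fetch = fetch or (lambda url: urlopen(url, timeout=30).status)
    results = {}
    for url in urls:
        try:
            results[url] = fetch(url)
        except Exception as exc:       # keep warming the rest on failure
            results[url] = f"failed: {exc}"
    return results

if __name__ == "__main__":
    pages = [
        "http://intranet.example.com/",          # hypothetical site URLs
        "http://intranet.example.com/search/",
    ]
    # Stubbed fetch for a dry run; drop the fetch= argument to hit the sites.
    print(warm_up(pages, fetch=lambda url: 200))
```

Scheduled right after the nightly recycle (a Windows scheduled task works fine), this turns JIT lag into a non-event for users.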
For a nice round-up of warm-up scripts available, head to EndUserSharePoint:
- It's a bad idea to prevent application pools from recycling, and good to think of how best to recycle yours.
- 32-bit applications like MOSS 2007 should be set to recycle based on the maximum amount of memory you want available to each application pool - never less than 200 MB, and anywhere from 800 MB to about 1.2 GB if you have RAM to spare.
- 64-bit applications like SPS 2010 are more tolerant of a daily recycle schedule, though to maximize performance the recommendation is to set the pool to recycle at 1.2 or 1.4 GB.
- I can't believe that "JIT lag" didn't enter ASP.NET terminology a long time ago, or at least I can't find any sign of it. Death to JIT lag!
- And yes, I hope to write more this year, be seeing you!
Q: How do I figure out whether memory fragmentation is an issue for me?
A: Check out this MSDN article by the guys who know this stuff best. In addition to explaining "why" in more detail, it tells you which PerfMon counters to enable. "Time spent in garbage collection" for example is fine around 10%, but should not get over 50%.
Q: How much of a performance hit are we talking about?
A: Garbage collection doesn't happen on every page request, so the answer is somewhere between "I don't know" and "it depends." If the 50% figure in the above article is a typical symptom, that would seem to indicate 40 to 50% of your CPU cycles are not being used to respond to requests.
Stefan Goßner post: "Dealing with Memory Pressure Problems in MOSS/WSS". Had I seen this I might have posted a link rather than written much of the above, but it is nice to see a post that corroborates a recommendation, in this case the 800 MB setting for 32-bit IIS.
MSDN Article: "Introduction to Using Disposable SharePoint Objects"
The 5th Annual Toronto SharePoint Camp was last Saturday and it was another terrific success. Thanks to the TSPUG executive committee and the small army of volunteers who made it happen, and to the smiling faces of this year's 200+ attendees for making it all worthwhile.
BIG Congratulations to the recipient of our first ever Toronto SharePoint Community Champion Award: Brian Lalancette.
Brian was nominated by members of TSPUG and selected from all nominees by the TSPUG Executive for his tireless work in creating, maintaining, and shepherding the fantastic AutoSPInstaller project. You can congratulate him over Twitter with a shout-out to @brianlala, or check out his latest musings in his Lala Land blog.
Also thanks to those who came to my session on Large-scale SharePoint Architecture. This is a brand new presentation and to improve it for future audiences I'd love to hear your thoughts, comments and suggestions. While I won't be sharing the sample documents shown during the session, for the first time I've posted the original deck rather than a slide-show version. This version contains all my own annotations, so even if you weren't there you will be able to read through the speaking points on each slide. As promised, the deck contains a second presentation on infrastructure considerations for large-scale deployments for those of you wanting an IT Pro / infrastructure session. Larger-scale infrastructure is a session of its own and I'll look into scheduling it at either the Toronto or Hamilton SPUG.
My presentation deck: Large-scale SharePoint Architecture - Eli Robillard (PowerPoint deck)