The third annual Toronto SharePoint Camp will deliver over 20 sessions by the best Canadian and international SharePoint experts on a wealth of topics. Whether you're a developer, server administrator, architect, power user, or business sponsor; whether you're learning about SharePoint for the first time or a seasoned pro; whether you're migrating, developing, designing, or planning; this is the event for you! FREE Registration includes lunch.

Thanks to our wonderful Diamond and Platinum sponsors for keeping the Toronto SharePoint Camp free for all: Microsoft Canada, KwizCom, Infusion Development, and Navantis. 

[Toronto SharePoint Camp Website] 

[Register for the Camp!] 

 

Toronto SharePoint Camp 2010 Schedule
 

Tracks: Developer - What's New | Developer - How To | Developer - Broad Areas | Administrator | Champion & Architect. Session levels (100-400) are shown in parentheses.

8:00 Registration
8:40 Announcements
9:00 Sessions
  • (200) What's New for Developers in SP 2010 - Ambreen Shahid
  • (100) SharePoint Designer for Forms and Visio 2010 for Workflow in SP 2010 - Amit Vasu
  • (200) Enterprise Content Management in SP 2010 - Eli Robillard
  • (100) Best Practices for Architecting, Deploying and Optimizing for SP 2010 - Chris Foreman
  • (100) Writing a SharePoint Success Story - Richard Harbridge
10:15 Sessions
  • (300) Developing with VS 2010: Tips, Tricks, and Upgrading Code - Shai Patel
  • (300) Poor Man's Dashboard: JQuery, SharePoint Web Services and Charts - Scott Zimpfer
  • (200) Introduction to Business Connectivity Services in SP 2010 - Jason Bero
  • (200) City of Brampton: A Case Study - Nadeem Mitha
11:30 Lunch
12:30 Sessions
  • (400) Best Practices for Development in SharePoint 2010 - Reza Alirezaei
  • (100) SP 2010 Branding for Beginners: How to do it, how to upgrade - Kanwal Khipple
  • (200) SharePoint Administration - Bill Brockbank
  • (200) Explaining Metadata to Stakeholders (and Understanding it Yourself) - Ruven Gotz
1:45 Sessions
  • (300) Silverlight 4 and SP 2010 - Mano Kulasingam
  • (400) Advanced SharePoint Web Part Development - Rob Windsor
  • (200) New Social Features in SP 2010 - Andy Nogueira
  • (200) SharePoint Administration with PowerShell - Craig Lussier
  • (300) Striking a Balance Between Navigation and Search - Ravi Vijay
3:00 Sessions
  • (200) What's New with SharePoint Lists Including Cross-List Capabilities - Roy Kim
  • (300) Build a Feature: Upload and Download as Zip - Oguz Demirel
  • (300) Web Content Management in SP 2010 - Mike Maadarani
  • (300) Planning and Measuring Performance of a SharePoint Farm - Ivan Neganov
  • (200) Train Your SharePoint End Users - Joe Seguin
4:15 Prizes and Closing
5:00 Tear Down

The third annual Toronto SharePoint Camp is scheduled for March 20, 2010. To be considered, please read the Call for Speakers (attached to this post, below) and submit your abstract(s) using the form provided by midnight on Friday, February 12.  

This year we plan to add one room and expand the number of sessions slightly, from about 20 to about 25. As in past years, while everyone is welcome to submit more than one abstract for consideration, each speaker will be selected for a single session in order to involve more people and encourage first-time speakers. Also as in past years, we're looking for more content for attendees representing IT Pros (server administration), Architects, and "The Business." You can learn more about our tracks and audiences in the Call for Speakers document.

 

If your company is interested in sponsorship opportunities during the TSP Camp, please contact me through this site and I will connect you with sponsorship chair Ruven Gotz. Registration will also begin this month; stay tuned for details.

 

We do not provide travel or accommodation, but do provide lunch, a speaker shirt, local fame for a full hour, and a lifetime of fond memories. Hope to see you there!

 

 

I was going to hold off on posting this until the public beta drops, but anyone preparing for the drop will want to get the right hardware, OS and optionally virtualization in place now. Then when the beta drops I'll write more about specific steps to get SQL, SharePoint and your development tools installed.

Hardware and Operating System

The "official" hardware or virtualization requirements for running either SharePoint Server 2010 or WSS 4.0 are: a 64-bit dual-core 3 GHz CPU, 4 GB of RAM, an 80 GB hard drive, and a DVD drive (presumably for installation). My first virtual environment ran fine with 2 GB RAM and a 20 GB (virtual) hard drive, though as always more is better. Remember that, like any product, SharePoint and Visual Studio 2010 are not fully optimized during the betas, and performance will continue to improve through RTM.

You do need a 64-bit OS, and for virtualized hosting you therefore have the choice between Microsoft Hyper-V, VMware Workstation, and Sun's VirtualBox (and likely other third-party apps, though I know these work). Out of the game, and being deprecated in favour of Hyper-V, are Virtual PC and Virtual Server, neither of which supports a 64-bit guest OS. There are a few options for the operating system, with Windows Server 2008 R2 the clear favourite, though Windows 7 and Vista - non-server operating systems, for the first time - are supported for WSS 4.0 and SharePoint Server development.

Minimum SQL flavours are SQL Server 2005 with SP2 or SQL Server 2008 Standard Edition, though standalone installations can run on SQL Server 2008 Express (included). Note that some features (like converting a Site Collection to read-only mode) require SQL Server 2008 Standard. While for performance reasons it is not generally recommended to run SQL Server (or the SharePoint Index role) virtualized, it's normal in a development environment.

Additional requirements included in (or downloaded by) the installer are: .NET 3.5 SP1, Silverlight 2, IDFX "Geneva", PowerShell 1.0, and IIS 7.0. Supported client browsers are IE 7, Firefox 3 and Safari 3.

Windows Server 2008 R2 with Hyper-V is a good choice for virtualization on desktop or server installs, but is not supported by Microsoft for use on laptops. Hyper-V was never really intended for laptops, so conveniences like Sleep and Hibernation aren't there. Other notable things that won't work on Windows Server 2008 R2 include Bluetooth, the Zune software and Call of Duty, among others. I really like Windows 7 but I don't want to do development on my daily-use OS, so third-party virtualization like VirtualBox or VMware - both of which support a 64-bit guest OS - is the clear winner for me. That said, many of the SharePoint MVPs who presented at SharePoint Conference 2009 (#spc09) use a configuration that can dual-boot into either Windows Server for development and presentations, or into Windows 7 for personal use. The fact that boot times are great for both minimizes their dependency on Sleep.

Virtual hard disks run large, so virtualization solutions that support differencing disks - Hyper-V and VMware - have a clear advantage. A differencing disk lets you create a single base image (a virtual hard disk) that you can then branch into many additional drive images - for example, one for every project you work on. So your base image with SharePoint Server, SQL Server, Visual Studio 2010, and Office might be 20 to 30 GB, but additional machines will only be as large as your project and content, perhaps 5 to 10 GB apiece.

And for your physical hard disk, do yourself a favour and get an Intel SSD (solid-state) drive installed. The average sizes are smaller (~120 GB is large as of this writing), but performance is absolutely killer, with reports of up to 40% improvements in boot times and regular operations; these drives are the reason people don't mind using Hyper-V to boot into their dev images. It's also nice that my laptop doesn't sound like a small aircraft anymore. The concerns about virtualizing on SSDs when they first appeared on the market have been wiped away, so if you can get one, do it. As always any reliable 7200 RPM drive will perform well, but SSD is definitely the new "best-of" around. For external storage, my "building a machine" post for 2007 still applies, with a 7200 RPM eSATA drive through an ExpressCard interface probably your best bang-for-the-buck.

Popular Hardware

This section describes a few known configurations used by early adopters; feel free to comment with your own!

Hewlett-Packard HP8530. Quad-core with up to 8 GB RAM and 7200 RPM disk(s), running Windows Server 2008 R2 and Hyper-V. Recommend dedicating 2 cores and 6 GB to the guest OS (either Windows Server 2008 R2 or Windows 7).

Dell M6400. Intel Core 2 Duo P8600 (2.40 GHz, 3 MB L2 cache, 1067 MHz) dual core, up to 16 GB RAM, 17" screen, SSD optional, ~8.5 lbs [Full specs].

IBM Lenovo T500 and W500. Intel Core 2 Duo T9600 / 2.8 GHz dual core, up to 8 GB RAM, 15.4" screen, SSD optional, ~6.2 lbs. These are nearly the same laptop; the W500 has a better screen and video sub-system (find a detailed comparison here). The differences weren't important to me, so I'm using the T500 and it's been great. My current virtual machine is assigned a 20 GB hard drive, 4 GB RAM and one processor (I tried two, but since the host only has two cores it was unstable, at least with VirtualBox; VMware generally does better sharing CPU cycles with the host).

Servers and Desktop Workstations. Given relative costs, remoting into a dedicated machine or a virtual server hosting many guests sounds better and better these days, especially for development and test farms. The principles are the same, the requirements are the same, but it no longer matters how or from where you connect, and hardware costs are significantly lower. For anyone who doesn't need to demo or present on a laptop, it's worth serious consideration.

Until the Public Beta

With any luck that should get you started, have fun!

 

First Looks at Microsoft® SharePoint® Server 2010
Presented by Savash Alic, Principal Specialist – SharePoint TSP, Microsoft Canada

Join us for a special live meeting on Wednesday, October 28th, 2009 where Microsoft Canada’s Savash Alic will present Canada’s first look at Microsoft® SharePoint® Server 2010.

Savash is a SharePoint loyalist who has been dedicated to the product since its very early days in 2001, implementing solutions. He has been selling Microsoft SharePoint in a technical capacity for the last four years at Microsoft Canada, and is a technologist at heart who started his professional career in '94 and worked at several Microsoft Gold Partners delivering and managing projects.

A special thank-you to our sponsors for this meeting: Microsoft Canada, Nexient Learning, and Non-linear Creations.

Schedule
6:00 Meet and Greet
6:30 Q&A
6:45 LiveMeeting Starts
7:00 Feature Presentation
8:30 Closing

Registration is FULL, but we have room for hundreds more on the Webcast! Register now, see you tonight!

Someone recently asked about test plans and how to test components during development so you can be comfortable they'll perform well when hosted on large farms. The short answer is that you want to create the best simulation you can: build a test farm as close to production as possible, and test scenarios with patterns and data as close to production as possible. With mission-critical apps the test environment should be identical to production, but in most cases it won't be. Recent versions of LoadRunner do well for building the tests; earlier versions have issues (e.g. with JavaScript and with scripting against dynamically named or generated file sets). Visual Studio 2010 contains load-testing tools that work great against SharePoint 2010, and I'm really looking forward to trying these when beta 2 is released next month. The Developer Dashboard is another great tool for breaking down the load times of each component on your page, the performance of methods in your call stack, and the latency of calls to background services; it will be an indispensable tool for checking performance.

Another great but under-used way to test is the million-monkey method - get as many people in a room hammering at the test farm as you can. The first client I know of to identify what turned out to be a SharePoint performance bottleneck (row-level locks escalating to table-level locks) tested this way with just 14 people. Happily, this and many other database performance issues are resolved in SharePoint 2010, but this targeted brute-force testing laid them bare where they hadn't come up at all in months of automated testing at other companies.

As an example of why you want the test farm to be identical to production: a colleague built and tested a SharePoint app at a large bank, where they had the benefit of an identical test farm. The app was stress tested over a full 24-hour period, and issues only showed up in the ninth or tenth hour. While that translated to the issue occurring only after weeks of actual production use, it would have ground the system to a halt had they not caught it before release. When SharePoint becomes mission-critical, building the right test farm is worth the investment.

But because duplicating production can cost a lot, few companies actually do it. When the test farm doesn't match production, you can only guarantee that tests will provide a baseline of performance to compare with other apps or versions run in the same test environment; they really don't tell you much about how the app will behave in production or how many users it will handle. You can make estimates in the ballpark - within an order of magnitude - and extrapolate to a degree, but too many factors conspire against a reliable interpretation; there's simply no such thing. The number of servers in different farm roles, the use of virtualized vs. real servers for testing, differences in SQL implementation (is it clustered or mirrored, how many spindles are available to split OS/logs/data, are other apps sharing the SAN, etc.), the effects of load-balancing schemes (and admins whose understanding stops at round-robin), the availability and latency of network services, and the actual behaviour of users (it's hard to guess what peaks will be until the app's been in the wild for a while) all work against relying on extrapolated results. Experience helps make better predictions, but it equally tells you that it's hard to guess where an app will fail until it does. And then service packs and upgrades inevitably throw your baselines off, unless you retest every application after every upgrade.

So unless you can stress test a farm identical to production, you just try to get in the ballpark and pick other practical goals:

  • Are web parts or application pages running logic that kills performance?
  • How does the farm respond to a mix of operations?
  • Are web service calls stalling pages?
  • Where should you be caching parts or data, and where could you benefit from AJAX?
  • How does response time on the local subnet compare to requests from around the world, and how are these affected by authentication and the location of the domain controllers?
  • How does the response time of your custom pages compare with out-of-box pages?
  • Do use-of-memory patterns show leaks?

Once an app is in production for a while you can get a few numbers to inform the tests – how often do you see peak usage, how many concurrent users does that mean, and how long does it typically last. What’s the mix of reader / writer / administrative operations, what pages do users hit (e.g. list views vs. home pages vs. app pages), where are users calling from, and what’s the mix of file sizes and types being uploaded or read. All of these help you build tests to more accurately simulate production use.
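To make that concrete, here's a minimal sketch of turning production observations into a load-test target. Every number here is hypothetical - substitute your own measurements:

```csharp
using System;

class LoadTargets
{
    static void Main()
    {
        // Hypothetical production observations - examples, not measurements.
        int peakConcurrentUsers = 800;           // observed peak concurrency
        double requestsPerUserPerMinute = 2.0;   // average request rate at peak
        double readMix = 0.85, writeMix = 0.15;  // observed reader/writer mix

        // Target request rate for a sustained load test.
        double requestsPerSecond =
            peakConcurrentUsers * requestsPerUserPerMinute / 60.0;

        Console.WriteLine(
            "Simulate {0:F1} requests/sec ({1:P0} reads, {2:P0} writes), " +
            "then push 2-3x beyond that to find where the farm breaks.",
            requestsPerSecond, readMix, writeMix);
    }
}
```

The arithmetic is trivial, but writing it down forces you to state the assumptions (concurrency, click rate, operation mix) that your test is actually simulating.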

So test early, test often, and gather as much information as you can in order for your tests to approximate "the truth." Where you can't get enough information, or the hardware does not match production, be realistic about what testing will prove. And even if you can't predict the circumstances where production will fail, you can still use load and stress tests to build a better product.

The default experience when you press F5 in Visual Studio 2010 is to Create, Build, Package and Deploy your solution, all at once, automagically - pretty cool, as long as you don't want to control that process. But wait, you can do that too. You can customize exactly what happens when you press F5 to meet your own needs: copying things into specific locations, retracting or installing solutions, calling MSBuild to do something special, resetting the application pool, stopping and restarting services - whatever your heart and project desire or require. This is huge, and goes far beyond the pre- and post-build actions we had before. You save your custom Deployment Configuration in the Project Properties panel, in a new SharePoint tab.

So if you want to Quick Deploy a web part into the application \bin or the GAC, or update your application page without having to recycle the app pool, you can write a Deployment Configuration for any of these scenarios. You can build Deployment Configurations to copy specific files for particular environments (DEV/ACC/PRD), or to do your packaging and FTP the files out for code review without adding a Deployment step. Truly cool. You get two for free: Default deployment (the aforementioned Create, Build, Package and Deploy), and No Activation, which does pretty much what the name implies.

You can take this a step further by creating custom actions to reuse in any Deployment Configuration, perhaps to execute SharePoint commands. Since VS is 32-bit and SharePoint is 64-bit, for this you'll use a new thing called a SharePointCommand that marshals your code into a 64-bit thread and executes it on your behalf. It actually binds to a thread identical to the one that serves the URL your project is affiliated with. There's some cool stuff going on here - you don't need to attach the debugger to the worker process (w3wp.exe) anymore; that wire-up happens for you. And the same goes for code you marshal into that context using a SharePointCommand - any VS extensions you care to write can execute code in the SharePoint context associated with your project. Ted Pattison's session also went deep on the next steps: packaging the new component (in this case a Deployment Configuration option) into a VSIX file for passing on to anyone else who wants to use your option in their IDE.

More great nuggets

  • No more DDF files! The Package Builder takes care of all the gory details.
  • Automatic generation of Features, Manifests, .webpart files and supporting files.
  • While the Feature manifest generates all you need, you can provide your own nodes to merge into the generated manifest, or override the process completely (though why would you?)

All in all, there's some pretty awesome control to be had over the deployment process.

* Notes based on a presentation by Eric Shupps at SharePoint Conference 2009 (#SPC09)

Extensibility points

  • Already had: Macros, add-ins and packages
  • New extensions based on MEF
  • VSIX model simplifies distribution and deployment

VSIX Package

  • A zip package
  • Contains an XML manifest
  • Install by double-clicking

Managed Extensibility Framework (MEF)

  • Part of .NET 4.0
  • An extensible app "imports" functionality
  • An extensible component "exports" its functionality
  • An application catalog tracks instances of imported components
  • An application is composed by dynamically loading components

SharePoint Tools export extensibility interfaces

  • The interfaces: ISharePointProjectExtension, IProjectItemExtension, IDeploymentStep, IExplorerNodeTypeExtension (though these may change for beta 2 release)
  • Your custom extensions export these interfaces, e.g. :

[Export(typeof(ISharePointProjectExtension))]
internal class MyProjectExtension : ISharePointProjectExtension
{
    public void Initialize(ISharePointProjectService projectService)
    {
        // Wire up event handlers against the project service here, e.g.:
        projectService.ProjectMenuItemsRequested += ProjectService_ProjectMenuItemsRequested;
    }

    private void ProjectService_ProjectMenuItemsRequested(object sender,
        SharePointProjectMenuItemsRequestedEventArgs e)
    {
        // Add your options to the project menu
    }
}

  • Requires applying MEF attributes to export each interface
  • MEF attributes written to the catalog at installation

Calls to SharePoint

  • The trick is that VS is 32-bit and SharePoint is 64-bit, so if you want VS to execute SharePoint operations, you need to use a special SharePointCommand type that will marshal the code for you for execution in a 64-bit thread. The Client OM would be a simpler option if you don't need to elevate permissions.
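A minimal sketch of such a command, based on the beta-era tooling (the type and member names here come from the pre-release documentation and may change for beta 2, so treat them as assumptions):

```csharp
using Microsoft.SharePoint;
using Microsoft.VisualStudio.SharePoint.Commands;

internal static class ProjectCommands
{
    // Runs inside the 64-bit host process, where the server object model
    // is available; the command ID string is how the 32-bit VS side
    // invokes it.
    [SharePointCommand("Contoso.GetWebTitle")]
    private static string GetWebTitle(ISharePointCommandContext context)
    {
        // context.Web is the SPWeb associated with the project's site URL.
        return context.Web.Title;
    }
}
```

On the Visual Studio side you would invoke the command by its ID through the project service; check the released documentation for the exact call, since this is pre-release.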

 Designers

  • You can build WPF extensions to VS2010 and there will be a number of community projects that do exactly that, or provide templates for custom designers.

* Notes from a SharePoint Conference 2009 (#spc09) presentation by Ted Pattison and John Flanders.

Look out TSPUG, there's a new user group in town! Led by Ray Outair, all the pieces are finally in place and the first meeting is:

This Monday from 6 to 8:30 at Microsoft Canada's Mississauga office!

An Overview of SharePoint 2010

Presented by Rob Windsor (ObjectSharp)

SharePoint 2010 is being unveiled this week at the Microsoft SharePoint Conference. This session will provide an overview of the product with a particular focus on what’s new for developers. The tools included in Visual Studio 2010 for SharePoint 2010 make developers more productive and new hosting options for SharePoint solutions provide more flexibility in deployment. This talk will cover the new designers, explorers and templates and overall developer experience for SharePoint 2010. Along the way we’ll see several of the enhancements to the end-user experience including the ribbon, in-place editing, and the new page and dialog interface model.

[Register Now!]

Yes, MSPUG (and this is their temporary site) is getting the scoop on TSPUG, and this will be Canada's first look at SharePoint 2010. The TSPUG meeting next Wednesday will also be a SharePoint 2010 presentation, delivered by Savash Alic of Microsoft Canada. However, our registration is over capacity (full up, SOLD OUT), so we're pleased to announce that we'll be streaming it live over LiveMeeting. Don't miss out: register for next Wednesday's TSPUG LiveMeeting feed today! So there you go, next week will be full of great SharePoint content. There's so much in SharePoint 2010 that I guarantee Rob and Savash won't be covering the same ground, so sign up, and see you all live or virtually next week.

And a big bold thank-you to Ray, Wanda Yu and Damir Bersinic at Microsoft Canada, Bill Brockbank, and all the other volunteers and sponsors who made this new user group a reality. I know it's going to be a great one.

There are three scenarios or scopes that the team designed for: the library, the document repository, and large-scale repositories. The third isn't covered specifically here, but it's basically an architectural strategy that uses many components (e.g. the Content Organizer and FAST) to manage millions of files across sites. Other highlights are described in the sections below.

Team Library

  • A list template 
  • Typically 100 to 200 files
  • Used for small projects and teams
  • Performance is much improved, with few real limits. Indexes are generated automatically, though you can still manage them yourself.
  • Multiple document selection and operations now standard
  • Nice Ribbon UI to aid management at the document and library scope

Document Center

  • A site template 
  • 500 to 500,000 files
  • Typically used as a central document repository
  • Nothing you can't do with team libraries, but a nice package 

Remote BLOB Storage (RBS)

  • Configured per Content DB
  • Better CAPEX for large scale deployments (>5 TB)
  • Better OPEX for fault tolerance, backup/restore and geo-replication
  • Requires SQL 2008 R2

Document IDs

  • Unique identifier across the company
  • Fully pluggable - you can plug in your own web service to generate your IDs
  • Can move files around and always locate a file with its document number
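As a sketch of what "fully pluggable" means: a custom provider derives from the DocumentIdProvider base class in the server object model. The class and member names here are from the beta and the generation scheme is entirely hypothetical, so verify against the released API:

```csharp
using System;
using Microsoft.Office.DocumentManagement;
using Microsoft.SharePoint;

// Hypothetical provider that stamps IDs from a company-wide scheme;
// GenerateDocumentId could just as easily call your own web service.
public class ContosoIdProvider : DocumentIdProvider
{
    public override string GenerateDocumentId(SPListItem listItem)
    {
        return "CONTOSO-" + Guid.NewGuid().ToString("N");
    }

    public override string[] GetDocumentUrlsById(SPSite site, string documentId)
    {
        // Return the URL(s) of documents matching this ID, or an empty
        // array to fall back to the default search-based lookup.
        return new string[0];
    }

    public override string GetSampleDocumentIdText(SPSite site)
    {
        // Placeholder shown to users in the Document ID search box.
        return "CONTOSO-00000000";
    }

    public override bool DoCustomSearchBeforeDefaultSearch
    {
        // False = let the default (search-based) lookup run first.
        get { return false; }
    }
}
```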

Term Sets

  • A hierarchical taxonomy, finally a way to do cascading lookups out-of-box!
  • Can be defined as open or closed; open allows users to add new elements to the taxonomy
  • There is a central library of term sets, or you can create a local term set
  • Multi-lingual! You don't need to choose one language or another, you can implement several

 Metadata-based navigation

  • Provides a tree-view that lets you drill down into files by their metadata the same way you would navigate to them if they were stored in folders with those names
  • Can be combined with filters
  • Works within a site collection

Document Sets

  • A way to relate files together, like a folder but with different powers
  • For example, a sales proposal with a document, a spreadsheet, and a presentation
  • Can perform operations on the entire set, for example conversion to PDF (document assembly)
  • A set can have a Document ID, as can all the files within it.
  • You can customize the landing page for a Document Set, for example to instruct people on the sales proposal process

Content Type Syndication

  •  You can enable the Managed Metadata service and then publish a content type for syndication to other site collections or farms

Content Organizer

  • Basically the record routing from the Records Center has been unleashed for any library.
  • Allows you to specify rules so that when files enter the drop-off folder (or library), rules are applied to route the file(s) or Document Set into the correct location
  • Eli's Note: This doesn't mean that Microsoft is advocating the proliferation of folders rather than use of metadata. But it does better support the scenarios where folders (or libraries) each have distinct security settings, and then users still use the generated navigation or flat views of the library. You could also use this for document management scenarios where you want to simply route files into the appropriate fiscal year folder, and then have retention policies apply to all library content. The best practice will be to architect libraries and folders for document management and security, and provide the metadata-based navigation and other conventional views to users working with the libraries - they don't need to be aware of the underlying structures.

Scale

  • Compound index support
  • Automatic index management
  • Query throttling with fallback
  • Scale targets: a million items per folder, tens of millions in a library, hundreds of millions in a large repository
  • FAST search is used to retrieve content

Miscellany

  • Great support for folksonomies - users can build their own tags like "I liked this" or "How-To" and then get a view of all the files they've tagged.
  • Location-based defaults. Can set default values for properties in any given folder
  • CMIS Support - An OASIS committee with participation from IBM, EMC, Microsoft and 14 other vendors to rationalize the interoperability between repositories. This allows applications to target one or more ECM repositories. Multiple protocols supported including REST and SOAP. Announced for the first time today! There will be a pack available to provide CMIS producer and consumer capabilities

Information based on sessions by Adam Harmetz and Ryan Duguid at SharePoint Conference 2009 (#SPC09).

 

Partial Trust or "Sandboxed" Solutions

  • Runs in a separate process
  • Everything in the WSP is deployed to a special repository managed by Central Administration. There is a new compilation model to support this repository (which you thankfully don't need to learn about - it "just works" - though when the decks are released you'll see all the excellent secure detail).
  • PTS should be the preferred method of provisioning solutions
  • Sandboxed solutions are restricted by CAS and the API subset
  • Fully supported tooling in VS 2010
  • You can switch your project back and forth between PTS and Full Trust. Note that in Full Trust you can see where in the 14 hive your files will be deployed - valuable for new developers learning how SharePoint "works" - and then switch back to PTS for packaging.
  • Sandboxed solutions are managed in Central Administration

Supported elements

  • Content Types, Site Columns
  • Custom Actions
  • Declarative Workflow
  • Event receivers, feature receivers
  • InfoPath Form Services
  • [A couple others I missed]

Partially Trusted Solutions (PTS) can run in two modes

Local Mode

  • Executes code on the WFE
  • Lower administration overhead

Remote Mode

  • Executes on a back-end farm machine
  • Load-balanced distribution of code execution requests
  • Can create custom load balancers

Solution Monitoring

  • Farm administrators set absolute limits
  • Site administrators identify expensive solutions
  • Monitored server resources: CPU, memory, SQL, exceptions, critical errors, handles, threads
  • You can throttle an application with a Resource Quota so that after using up your "points" worth of resources in a day, you're cut off.

Solution Validators

  • Allow custom validation of a solution, installed at the farm scope
  • Installed in a FeatureActivated event
  • Once deployed, when you attempt to deploy a solution that breaks a validation rule, an error is displayed