Kind of a crazy name for a piece of software (in this politically correct world, the use of "screw" doesn't go over very well with some management), but a really great example of Open Source in action.
I was hunting around for a wiki for our development documentation and standards. My first thought was SharePoint, but we haven't rolled out 2007 yet and I didn't want to bank on that. I wasn't quite sure what I was looking for, but I needed a wiki that handled the basic features and had a few "must-have" extras (like AD integration and content approval). A great site for checking out and comparing wikis is WikiMatrix. This site lets you compare all of the wiki software packages out there and even includes a wizard that steps you through what you're looking for (OSS, platform, history, etc.) and gives you a nice side-by-side comparison page (much like comparing cars) to help you select a package.
First I took a look at FlexWiki, which was fairly popular and easy to set up. I had set it up on my laptop before when I was toying around with using a wiki as my home page. FlexWiki was simple and, more importantly (for me anyway), it was C# and Windows-based, so if I wanted to extend it, play around, write extensions, etc., that would be a bonus. Flex is nice and, if you don't look at anything else, probably suits the purpose (although CSS-style customization seems to be pretty complex). While I was leaning towards C#-type wikis, I knew that the best and most mature ones were PHP/MySQL-based (like MediaWiki, the one Wikipedia runs on). However, I just didn't want to introduce another stack of technology at my client just for the purpose of documentation.
Finally I stumbled across ScrewTurn Wiki. Like Flex, it was easy to set up and, like my favorite blogging software (dasBlog), it can be file based so you can just set it up and go. I installed ScrewTurn and messed around with it and it worked well. We handed the duties of really digging into it over to a co-op student we have for the summer and he's really gone to town with it. AD integration was added (it was always there, I just didn't enable it) and he's found some plugins and even written some code to extend it. What's very cool about ScrewTurn is that the common pages are written in C# and live as .cs files on your server. You just edit them and override methods, introduce new ones, whatever. New functionality without having to recompile assemblies or anything (everything is just JIT'd on the fly).
Anyways, ScrewTurn looks like a very good IIS-based wiki if that's your thing. I find it more mature than Flex; it's written in C# 2.0 and has a lot of great features. Like I said, if you have a LAMP environment in your world then you might want to look at something like MediaWiki, but for a Microsoft world, ScrewTurn is da bomb. The plugin support is great and I'm hoping that the community will step up and build new plugins for the system so it can grow into something special.
So you might want to give ScrewTurn a try if you're looking for a simple documentation system for your team.
Great stuff from the guys that make the cool tools: JetBrains announces the release of the latest beta of ReSharper 3.0.
This release now includes refactoring for both C# and Visual Basic .NET. The C# side has been beefed up so it gives you code suggestions that you may or may not choose to implement. Also in this release are XML and XAML support (handy when working with Silverlight now), a neat "Go to Symbol" navigation feature which I'm preferring over "Go to Declaration", a smart TODO list, and a reworked Unit Test Runner (although I still prefer TestDriven.NET).
You can grab the beta from here. I'll see if I can find some time and put together some screenshots or (gasp) a webcast on the features as talking about them is rather boring. Enjoy!
There have been two great posts on the CAB debate recently that were interesting. Jeremy Miller had an excellent post over the brouhaha, citing that he really isn't going to be building a better CAB but supports the new project we recently launched, SCSFContrib. I think Jeremy's excellent "Roll your own CAB" series is good, but you need to take it in context and not look at it as "how to replace CAB" but rather "how to learn what it takes to build CAB". Chris Holmes posted a response called Tools Are Not Evil to Oren's blog entry about CAB and EJB (itself in response to Glenn Block's entry; yeah, you really do need a roadmap to follow this series of blog posts).
Oren's response to Chris Holmes post got me to write this entry. In it he made a statement that bugged me:
"you require SCSF to be effective with CAB"
Since this morning it looks like he might have updated the entry, saying he stands corrected on that statement, but I wanted to highlight the difference between being efficient with a tool and being effective with the technology the tool is supporting.
Long before SCSF appeared, I was grokking CAB as I wanted to see whether it was useful for my application and what it was all about. That took some time (as any new API does) and there were some concepts that were alien, but after some pain and suffering I got through it. Then SCSF came along and it enabled me to be more efficient with CAB in that I no longer had to write my own controller or implement an MVP pattern myself. This could be done by running a single recipe. Even the entire solution could be started for me with a short wizard, saving me a few hours I would have taken otherwise. Did it create things I don't need? Probably. There are a lot of services there that I simply don't use, but I'm not bothered by that and ignore them (sometimes just deleting them from the project afterwards).
The point is that SCSF made me more efficient in how I could leverage CAB, just like ReSharper makes me a more efficient developer when I do refactorings. Does it teach me why I need to extract an interface from a class? No, but it does it in less time than it would take manually. When I mentor people on refactoring, I teach them why we do the refactoring (using the old-school manual approach, going step by step much like how the refactorings are documented in Martin Fowler's book). We talk about why we do it and what we're doing each step of the way. After doing a few this way, they're comfortable with what they're doing; then we yank out ReSharper and accomplish 10 minutes of coding in 10 seconds and a few keystrokes. Had the person not known why they're doing the refactoring (and what it is), just right-clicking and selecting Refactor from the menu would mean nothing.
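To make the example concrete: extract interface is just pulling a type's public surface into an abstraction so callers can depend on that instead of the concrete class. Here's a minimal language-neutral sketch in Python (the sender/notify names are my own illustration, not from ReSharper or Fowler's catalog) of the before and after that the tool automates:

```python
from abc import ABC, abstractmethod

# Before: callers depend on the concrete class directly.
class EmailSender:
    def send(self, to: str, body: str) -> str:
        return f"sent to {to}: {body}"

# After "extract interface": the abstraction carries the public surface.
class IMessageSender(ABC):
    @abstractmethod
    def send(self, to: str, body: str) -> str:
        ...

class SmtpSender(IMessageSender):
    def send(self, to: str, body: str) -> str:
        return f"sent to {to}: {body}"

def notify(sender: IMessageSender, to: str) -> str:
    # Callers coded against the interface accept any implementation,
    # including a fake one in a presenter test.
    return sender.send(to, "build complete")
```

The tool does the mechanical part in seconds; knowing why you did it (so, for instance, presenters can be tested against a fake sender) is the part the tool can't teach.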
ReSharper (and other tools) make me a more efficient developer, but you still need to know the what and the why behind the scenes in order to use the tools. I compare it to race car driving. You can give someone the best car on the planet, but if they just floor it they'll burn the engine out, and any good driver worth his salt in any vehicle could drive circles around them. Same with development. I can code circles around guys that use wizards when they don't know what the wizard produces or why. Knowing what is happening behind the scenes, and the reason behind it, makes using tools like ReSharper that much more value-added.
SCSF does for CAB what ReSharper does for C# in my world, and I'll take anyone that knows what they're doing over guys with a big toolbox and no clue why they're using it any day.
Or do they? In the past couple of years, Google, Microsoft, and Yahoo have been buying up little niche products and technologies and absorbing them into their collectives like mad little scientists working in a lab. It reads like a who's who of the IT world.
Feedburner - Very recent purchase and something many of us bloggers use. Good move Google!
Blogger - An awesome concept, but I think it fell by the wayside as Blogger became the MySpace for blogger-wanna-bes. Some good bloggers still use it, but I don't think it's panned out the way Google wanted it to.
Picasa - Neat desktop app that I tried out a few times and well done. Hopefully they'll do something with this in the near future.
YouTube - The biggest acquisition that I know of (who has this much money besides Microsoft?), but with thousands of videos being pulled every day by Viacom or someone else threatening to sue, I wonder what the future holds.
DoubleClick - Have no idea what this is all about as DoubleClick is the evil of the Internet. Maybe they bought it to kill it off (doubt it).
Flickr - Probably the best photo site out there; many new features are being added all the time and nobody else in this space is nearly as interesting as these guys.
Konfabulator - Never really caught on, and too many people compared it to the Mac desktop (which already had this capability OOTB). Windows Gadgets tries to be like this, but again no huge community has swelled up around it.
del.icio.us - Next to Flickr, one of the best buys for Yahoo and the best social bookmarking system out there.
Connectix - A huge boost in the virtualization space, although I think it still trails behind VMware.
Groove - What put Ray Ozzie on the map is now part of Office 2007 and still growing.
Winternals - The best move MS has made in a long time; awesome tools from guys who know the inner workings of Windows better than Microsoft in some cases.
FolderShare - A great peer-to-peer file sharing system, but it hasn't really taken off, has it?
There's a bunch more but I didn't want to get too obscure here. There's a very cool graph here that will show you the acquisitions and timelines.
And here's the hot ticket items these days that are still blowing in the wind. It's anyone's guess who goes up on the block next and who walks away with the prize.
Facebook - Whoever gets this gets gold at 100,000 new members a day (!). My money is on MS to pull out the checkbook any day now.
Digg - Kevin Rose, who's already probably laughing his way to the bank will cash in big time on this if someone grabs it. Maybe Google to offset the Yahoo del.icio.us purchase?
Slashdot - Yeah, like anyone would want this except to hear Cowboy Neal talk about himself (don't worry, Slashdotters don't read my blog - I hope)
(SharePointKicks... yeah I wish)
Maybe it's good, maybe it's bad; my question is who will end up with the most toys? Or maybe once all the little ducks are bought up, the three top dogs will duke it out with one winner walking away. UFC at the Enterprise level, kids. Should be a fun match.
I'm pleased to announce the startup of a new project on CodePlex. I'm very happy to be part of the team to bring you SCSFContrib.
What is SCSFContrib? If you're familiar with NAntContrib where members of the community contribute extensions to NAnt (specific NAnt tasks) then you're on the same wavelength. SCSFContrib is very similar, extra goodness for CAB/SCSF with a few differences:
- It is based on the Smart Client Software Factory (SCSF) and the Composite Application UI Block (CAB)
- It allows you, the community, to contribute to an effort that extends patterns and practices deliverables
- It shortens the time it takes for contributions/changes/extensions to SCSF/CAB to make it into the public. Rather than waiting for a drop from the patterns and practices team, our team will help manage these and make them available through the CodePlex site
- It provides guidance to the patterns and practices team as to where gaps exist in the current factory and how they can make improvements in the core
There are three Contrib projects in motion, SCSFContrib being one of them. The other two are EntLibContrib and WCSFContrib (Web Client Software Factory) which allow contributions to each of those projects.
Note that this project does not allow you to contribute code directly to the core application blocks. We're talking about extensions here (for example, there's an Outlook Bar extension that will be one of the first ones we release under the SCSFContrib project), but that doesn't preclude you from creating your own version of a core block. For example, you could replace the Dependency Injection block completely if you wanted, but we won't be replacing it directly in the Factory. It could be enabled as part of a recipe (use ObjectBuilder or XYZ for Dependency Injection). Of course everything generated has to work with the changes, but that would be up to you. In addition, the one small thing we ask for is full unit tests with any new development (although Alpha/Beta projects won't require these).
While this was initiated by Glenn Block and the Patterns and Practices team (thanks guys!), the SCSFContrib project is run by community folks (myself included). Here's who's on the team:
Kent is the author of WPFCAB and has done some great work in the CAB space by developing a WPF version of our Windows CAB extensions. He's graciously created an unmodded version that will be included in the contrib. This product is in production in several environments today. Kent's blog is at http://kentb.blogspot.com/.
Ward is the product manager for IdeaBlade, one of Microsoft's key partners out there spreading CAB adoption through their Dev Force solution. IdeaBlade was one of the pioneers of smart client software development in the .NET space. Ward is also extremely seasoned in the industry, with 20+ years of experience. Ward's blog is at http://neverindoubtnet.blogspot.com/
Matias works at Southworks. Matias and the whole Southworks gang are truly gurus at everything related to CAB, SCSF and WCSF as they helped Microsoft write it. Matias is also the author of the Outlook bar extensions for CAB. Matias’ blog is at http://blogs.southworks.net/blogs/matiaswoloski/default.aspx
And me, Bil Simser, major geek and general all-around nice guy.
Here's to the successful launch of another P&P Contrib project; hopefully you'll find a use for SCSFContrib in your own solutions. You can check out the CodePlex site here, and be sure to voice your opinion via the discussion forums or issue tracker as to what you're looking for (or contribute something you've built with CAB/SCSF if that's your thing).
I just realized, after running Windows XP for 5+ years, that there's no built-in alarm clock. I needed one as I was just dozing on the couch and couldn't be bothered to go to the bedroom and I had no other way to wake up. I figured I would just use XP. I mean, it must have an alarm clock after all. Nope. Nothing that I can find. Had to download a cheap freebie. Does Windows Vista have one? Does nobody really need one except for me? Hmmm... maybe another WPF weekend project I could do to pass the time.
I'm addicted to crack. That crack is called Facebook.
At first it was a silly thing. A social networking site with very little geek factor. It's fun to connect with old friends, make new ones, and generally keep on top of where people are and what they're doing. However, I felt empty. A site like Facebook is just ripe for tearing into, presenting and using the information you want the way you want to. The REST-like access to it seemed kind of clunky and you had to log in via a web page to obtain a session (there's a bit of a hack to do an infinite session, but it's just that, a hack). So I wasn't too interested in what it could provide.
Now my crack addiction has a proper API and a developer toolkit. Finally I can actually do something with my addiction rather than just admire it. The toolkit requires a developer key (which you can get from Facebook for free) and the .NET 2.0 Framework. You can grab the toolkit here. There's also a developer wiki you can check out with lots of QuickStarts, videos, walkthroughs, tutorials, and discussions. Is it just me, or is everything here very MS-centric? Maybe MS should just buy Facebook (as everyone else is buying everything else out there) and call it a day. Of course they would have to rewrite it since it seems to run on PHP, but with dynamic languages and the .NET Framework in the pipeline it could probably just be converted on the fly.
I'm still waiting for my invite to come through for Popfly, but in the meantime this will keep me happy as I write up some cool new Silverlight/Facebook apps on SharePoint. Yeah, nothing like mashing up all kinds of new stuff together to see how it works.
Every so often, a topic brushes by my RSS feeds that I have to jump into and comment on. The latest foray is a conversation between Chris Holmes, Jeremy Miller, and Oren Eini. It started with Oren and a post about not particularly caring for what the Microsoft Patterns & Practices guys are producing (EntLib, CAB, SCSF, etc.) and ballooned here, here, and here. Oren started down the path that CAB (and other components produced by P&P) was overly complex and unnecessary. I'll focus on CAB, but there are other smatterings of things from EntLib here. The main points Oren was getting across (if I read him correctly) were a lack of real-world applications backing what P&P is producing and overly complex solutions for simple(r) problems. Oren put together his version of the policy injection block (a recent addition to EntLib) in 40 minutes. Last night I was reading Jeremy Miller's response and needed to chime in, as I'm very passionate about a few things, namely Agile software development and CAB.
I'll be the first to admit that CAB is complex. EntLib is large. There is a lot there. As Chris said this morning in what I think was an excellent response to the entire discussion, CAB, for example, is not just about building maintainable WinForm apps. I like CAB as it gives me a bunch of things and they all work together in a fairly harmonious way. EventBroker is a nice way to message between views while keeping the views separate; CommandHandlers allow me to hook up UI elements indirectly to the code that executes them; the ActionCatalog lets me security-trim my commands (and in turn my UI); and the implementation of the MVP pattern using views lets me write presenter tests and keep my UI thin. This all makes me feel good. Did it take me a while to get here? Absolutely. I've spent the better part of a year learning CAB, EntLib, ObjectBuilder, WorkItems, and all that jargon, but it's no different than learning a dozen different 3rd party libraries. I simply chose the MS path because it was there and everything was in one neat package. If you packaged up Castle, NHibernate, StructureMap, and others together in a single package, maybe I would have chosen that path (and are there really two different paths here? I use both sets of tools together anyways).
Oren's defense is around the fact that he (and Jeremy) follow the guideline of evolving a framework from your application needs, not building one (like what the P&P guys have done). Okay, that's fair, but at some point you have to stop building things over and over again. So when does your own work become a framework that you reuse? Is it as lean and mean as you want it to be? Sure, you can put together the basic needs of an IoC container in half a day (half a day Bil time, 40 minutes Oren time), but that's just the beginning. It serves the need you have today and the problems you might be facing right now. I would argue that if you took something like StructureMap and evolved it to handle scenarios that you're not dealing with today, you would be starting to build your own implementation of EntLib.
We all want lean software that does the job; however, I subscribe to the mentality that if you can leverage something else (aka not reinventing the wheel) then do so, as long as it doesn't come at a cost higher than doing it yourself. That's a hard decision to make as you don't want to get too predictive about what the future may hold (do we need logging, security, etc. in the future?), but you gauge your response based on current affairs and what feels best. It's more of an art than a science. When I first looked at CAB I thought it was huge, but once I sat down to grok the pieces and how it all fit together, it made sense. EntLib and CAB do include everything and the kitchen sink, and you do need to get past the learning curve, but in the end it's a good collection of tools that you can have in your toolbox. Unfortunately it's not something I could introduce at a conference or User Group session and describe the entire stack in an hour, so I tend to avoid showing off applications and concepts using it, as it just turns into a discussion of what [SmartPart] means instead of the main goal, like describing MVP, which I can do with my own code.
Is EntLib/CAB/etc. doing too much maybe? Perhaps, but then if I choose the 3rd party elements I want and wire them together to suit my needs, what kind of Frankenstein have I built in the process? When I look at CAB holistically, there's a lot there but it's not a bad implementation. I don't think Oren or Jeremy are saying the P&P guys did a bad job on it; they just choose to evolve their own solutions using a minimalist approach. I'm all for that. It's very TDD-like. When I build systems I start by writing single tests against my domain and only doing what I need at the time (the YAGNI principle). However, at some point you end up with a very rich domain, hundreds (or thousands) of unit tests, dozens (or hundreds) of classes and methods, and a lot of functionality. I argue that is in fact what EntLib and CAB have become. They're rich, re-usable tools that do a lot, but frankly you can still use just what you need. Maybe you'll deploy all the EntLib assemblies with your application and only use the logging feature, but so what? As an example, I had to implement NHibernate in an application recently to apply persistence to my domain. When I ran some db unit tests, I found out that I needed the NHibernate assemblies, log4net, and an assembly from Castle to make it work. Disk space is dirt cheap, so having the extra there means nothing (except a few extra seconds of download time).
I'll cite Rocky and his excellent CSLA.NET as an example. It's a large framework: lots of classes, lots of functionality. That's what frameworks are about. However, while I like what Rocky's done and he's had great success with it, I don't subscribe to the approach he took. I'm not a fan of the ActiveRecord pattern and don't like how business objects are tied to data implementation (even if there's a level of abstraction there). I simply cannot use CSLA with DDD. Is the framework a bad product? No way. Would I recommend it to others? Absolutely. Would I use it myself? Nope, but it's a good piece of software and I wouldn't discount it.
CAB follows similar concepts, and it's big and ugly in some places (like ObjectBuilder). Sure, I could use Castle to do better (real) dependency injection, but if I don't buy into the MS song and use CAB and EntLib to their full extent, I end up with bits and pieces of goo all over the place. Like I mentioned with NHibernate, I needed to deploy log4net because NHibernate needs it, even though I didn't turn on that feature. At least with EntLib, if I'm not using security, for example, I don't need to deploy the security module. In my case now, I have EntLib logging deployed and a second logging system deployed because NHibernate dragged it along for the ride. Eventually I could have a really ugly monster on my hands, with copies of Castle, StructureMap, CAB, EntLib, NHibernate, log4net, and who knows what else all living (hopefully) together in happy existence. I don't want that.
CAB gives me most of what I need (except O/R mapping and persistence), so I leverage as much as I can from CAB and EntLib and fill in the gaps with things like NHibernate for persistence. I could use EntLib's database factory, but then I'm rolling my own DAL and that's not a path I want to take, so I choose to ignore the EntLib database component. The nice thing is that I don't have to deploy it, so as long as my code doesn't call it, I'm golden.
As Jeremy put it, the P&P guys are a good thing, as they're out there getting the Agile word out to many more people than we can. While they do produce large(r) tools, frameworks, and components that include perhaps more complexity than you need at the time, at the end of the day you'll probably end up using it. IMHO I'm happy with what CAB and EntLib provide. Could I get the same functionality from the alternatives? For sure; however, I would probably be writing more code to wire things together than I would with CAB. For that reason, I like what the P&P guys do and look forward to how they'll evolve, hoping these kinds of discussions will help adjust their path towards a better end game for all of us.
Hmmm, more odd things happening at ASP.NET Weblogs, even after playing Halo 3 for 4 hours (and boy are my thumbs killing me).
I noticed I was getting trackbacks. Nothing new here, I get them all the time. Except I was getting them from myself. Huh?
Yeah, the last two posts I made created a trackback, to itself. Sigh. More email, more notifications, less sleep...
Update: Now my RSS feeds are only partial. Grrr. Argh.
This makes the ASP.NET Weblogs upgrade and me not being at DevTeach all that much better.
Sleep, I knew you well...
Agile teams are all about co-location and communication. We have a wall where tasks are posted. The wall is life. It is the source of truth. From the wall, the ScrumMaster (me generally) enters in the hours remaining for tasks and updates some backend system (in our case, VSTS with the Scrum For Team System templates).
There are many tools out there to do Scrum, XP, etc. and keep track of your items. I think I did a roundup of the tools out there, but I missed one: SharePoint. Yup, my two favorite topics, SharePoint and Agile, come together.
A friend pointed me to an article on Redmond Developer News (a new feed I didn't even know about and one that looks pretty cool) by David Christiansen called Building a Virtual Bullpen with Microsoft SharePoint. Basically he walks you through creating a digital bullpen, complete with product backlogs and sprint backlogs all powered by SharePoint. And easy to do, with a few custom views and all standard web parts and controls.
I remember Scott Hanselman mentioning that they used SharePoint for Scrum awhile back on an episode of Hanselminutes. He said it worked well for them. I've set up clients using standard out-of-the-box lists to track Product Backlog items and such. The only thing 2003 won't give you is burndown charts. With Excel Services, a little bit of magic, and MOSS 2007 behind the scenes, this now becomes a simple reality.
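In case you've never built one, the math behind a burndown chart is trivial; it's the visibility that matters. A quick sketch of the numbers behind the chart (my own illustration, nothing from the article or Excel Services):

```python
def ideal_burndown(total_hours: float, days: int) -> list[float]:
    """The straight 'ideal' line: work burning down evenly to zero."""
    return [total_hours - total_hours * d / days for d in range(days + 1)]

def remaining(tasks_by_day: list[list[float]]) -> list[float]:
    """The actual line: sum the hours-remaining column off the wall each day."""
    return [sum(day) for day in tasks_by_day]

ideal = ideal_burndown(100, 10)   # a 100-hour, 10-day sprint
actual = remaining([[40, 35, 25], [40, 30, 20], [35, 25, 15]])
# Plotting 'actual' against 'ideal' shows at a glance whether the
# sprint is ahead of or behind plan.
```

Those two series are all a SharePoint list plus a chart web part (or Excel) really needs to render.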
Check out the article to get your virtual bullpen setup and drop me a line if you need a hand (or just want to share with the rest of the class).
I see (or guess) that ASP.NET Weblogs (where this blog is hosted) upgraded to a newer version of Community Server but boy it doesn't look good. Besides the change in the control panel and things, the look is pretty different on my blog, the sidebar has a few broken things now that I had to remove, but most importantly http://weblogs.asp.net isn't showing any content. Hopefully they'll have this fixed soon. Normally they announce major upgrades and such, but I guess you get what you pay for (free) so I can't complain too much.
Update: Seems a lot of people are complaining about the upgrade. Things are a little messed up here as the CSS has changed. I use the stock Marvin3 from the old .Text blog but it changed (or something around it) so there's additional white space and padding everywhere on the site. Other blogs that are using custom skins/css are really messed up. I noticed Frans' tags are just plain ugly and unreadable.
I caught a comment by Rob Howard on another blog saying that emails had been sent out regarding the upgrade, but only 115 went out then suddenly stopped, as if thousands of processes suddenly cried out in terror and were suddenly silenced. What a mess.
I'm not getting it. I'm seeing a lot of posts about "Feature Driven Development" (or FDD for short) but I'm just not getting it. All I see is Scrum with different terminology. I was reading the Igloo Boy's blog where he's off at DevTeach 2007 (man, I'm so jealous; Montreal in the summertime with geeks) and he posted his review of an FDD session with Joel Semeniuk, and I just don't see the brouhaha about FDD.
FDD bills itself as a process defined and proven to deliver frequent, tangible, working results repeatedly. In other words, exactly what we try to achieve when using Scrum in software development.
FDD's characteristics include minimal overhead and disruption; frequent, tangible, working results; an emphasis on quality at each step; and a highly iterative approach. Again, Scrum on all fronts.
FDD centers around working on features (Product Backlog Items in Scrum) which have a naming convention like:
<action> the <result> <by|for|of|to> a/an <object>
Like user stories where:
As a/an <role> I would like to <action> so that <business benefit>
FDD Feature Sets are groupings of features that are combined in a business sense. In Scrum we've called those Themes.
So am I way off base here or are we just putting lipstick on a pig? Are we just packaging up Scrum with a different name in order to sell it better? Wikipedia lists FDD as an iterative and incremental software development process and a member of the Agile methods for software delivery (which includes Scrum, XP, etc.).
There are differences here between Scrum and FDD, like reports being more detailed than a burndown chart (however for me, a burndown chart was more than enough information to know where we were and where we were headed). Practices include Domain Object Modelling (DDD?) and teams centered around Features, but again this is just (to me) Scrum organized a certain way. I would hazard to say I already do FDD, because to me it's all about the domain and business value.
Or maybe this is a more refined take on Scrum. Scrum with some more rigor around focusing on the goal? A rose by any other name... I must be missing something here.
If you're struggling with getting in touch to deliver what your customers really want, try this. To me, this is what Agile is all about.
Print out the big version of this (available here), put it up on your wall (in your face) and read it every morning before you start. Really.
Hugh is brilliant.
I'm a big fan of the Planning Poker technique for estimating. It's basically a process where everyone in the room gets together with cards and estimates effort for user stories. Each card has a number on it (from a modified Fibonacci sequence): 0, 1, 2, 3, 5, 8, 13, 20, 40, and 100. Everyone reveals their estimate for a given story at the same time. Any estimates on the fringe are challenged and justified, an estimate is arrived at, and the process is repeated for the next user story.
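Just to illustrate the mechanics of a round (a back-of-the-napkin sketch of my own, not anything from a planning poker tool; the snap-to-card helper and names are invented for illustration):

```python
CARDS = [0, 1, 2, 3, 5, 8, 13, 20, 40, 100]  # the modified Fibonacci deck

def nearest_card(value: float) -> int:
    """Snap an arbitrary number (say, an average of votes) to the closest card."""
    return min(CARDS, key=lambda c: abs(c - value))

def play_round(estimates: dict[str, int]) -> dict:
    """Everyone reveals at once; the fringe (lowest and highest)
    estimators are the ones asked to justify their numbers."""
    consensus = len(set(estimates.values())) == 1
    low = min(estimates, key=estimates.get)
    high = max(estimates, key=estimates.get)
    return {"consensus": consensus, "discuss": [] if consensus else [low, high]}

votes = {"Ann": 5, "Bob": 3, "Cal": 8, "Dee": 20}  # Bob and Dee get challenged
result = play_round(votes)
```

The loop is simply: reveal, discuss the outliers, re-vote until the round reaches consensus (or close enough), then move to the next story.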
Mike Cohn and the Mountain Goat Software people have put together a fabulous website to solve a problem with planning poker: remote users. Planning poker doesn't help if the users are off in another city, so the site solves that. You create an account (free, and it only requires 4 fields of information) and log in. Then someone creates a game and everyone participates online. It's a great way of doing this, and you can export the results to HTML and CSV at the end of the session. There's even a 2-minute timer that you can invoke once you've discussed a story enough and are ready to estimate. Some people have even used it internally, with everyone bringing their laptops to the session rather than using physical cards.
So check out Planning Poker for yourself, as it might be useful in your next planning session. Here are some screenshots that walk you through creating user stories and estimating them with the planning poker site.
When you log into the site you can view the active or complete games. Complete games can be re-opened if you need to do them across a few days:
To create a new game, click on the link and fill in the name and description. If you have stories already ready to estimate, you can paste them into the field here from a spreadsheet. The first row should contain the field names.
To add a story for estimating, just enter it in the form "As a/an <role>, I would like to <function> so that <business value>". There's also room for notes for anything else you want to capture, but keep it light; this isn't the place for requirements-gathering details.
Once you've added a story, the estimating game begins. Select a card from the screen for that story.
Then you can accept or play again with that estimate. Your estimate shows up along with others (if they're logged into your game).
If you were wrong with your original estimate or there's debate on something and you really do agree it's larger/smaller, click play again and select a different estimate.
When all the estimates are done and the game is complete you can view all of the estimates online.
Finally, if you want to take this information elsewhere, you can export it to HTML for viewing/publishing/display or to a CSV file for importing into another tool.
Note that Planning Poker doesn't work very well under IE7 on Windows XP but the guys are working on it. I flipped over to use Firefox for the screenshots and use FF when I do my sessions using the tool.
Not sure what the problem is, but if you subscribe to my feed (via my FeedBurner URL, http://feeds.feedburner.com/bsimser) you may have noticed that the feeds around here haven't been updated. In fact there hasn't been a new item since May 1st.
The FeedBurner feed is stagnant, but what's more disturbing is that FeedBurner is working correctly; it's the source feed from Community Server and weblogs.asp.net that isn't updating. I checked the private feed (sans FeedBurner) and it also shows May 1st as the last post.
I sent a note off to the weblogs.asp.net guys but haven't heard back. I'm posting this here in the hopes that someone on weblogs.asp.net is seeing the same problem (and it's not just me) and maybe something gets done about it.
Maybe I forgot to pay the rent on my site? ;)
Update: I've been clicking on other people's RSS links and not seeing items from the past week for many blogs. So either the feeds are not getting through on my end (DNS problem? I doubt it) or something is messed up on weblogs.asp.net/Community Server.
As I continue to cleanup my projects on CodePlex, I've posted the latest version of the source code for my SharePoint Forums Web Part. This is version 1.2 that was released in August (better late than never?).
If you're interested in contributing to or enhancing the project, please do so. Right now I'm juggling a bunch of projects with lots of team members, so it might take some time to get you added to the team. If you are interested in modifying the codebase then Scott Hanselman has a great blog entry here on creating patches. You can simply submit a patch file to me (via email) and I'll add it to the codebase. This way you don't have to sign up for a CodePlex account and go through setting up all those tools. Your choice, but please consider contributing to the project.
The source code does not include the 2007 version, as that will be released under the Community Kit for SharePoint (CKS) project, which also lives on CodePlex (surprise, surprise). I'm donating the 2007 version to CKS, but in reality it's just being hosted under that project. It'll be the same Web Part; hopefully we'll have some more bodies working on it under CKS.
You can download the source code directly from here (sort of direct since direct file links don't work anymore on CodePlex) or through a TFS client (Teamprise, Team Explorer, etc.) if you're signed up on the site via the latest change set here.
For those that have been playing along, CodePlex suffered a bit of a hiccup awhile ago and some data was lost.
The Tree Surgeon project was one of the casualties (along with some of my other projects). The CodePlex team got the work items restored, but the change sets and source tree were lost. Luckily we had two backups: the zip file, and several local copies on various hard drives and backups. I've rebuilt the latest change set on CodePlex so you can hook up and grab the source if you're a member of the team (or just grab the latest change sets as we get more work done).
I'm just spending some time tonight to update other projects and get new or updated source code uploaded to CodePlex so watch for more announcements over the next few days.
A useful little class that you might find you need someday. It converts a weakly-typed list (like an IList of items) into a strongly typed one based on the type you feed it.
It's super simple to use. For example let's say you get back an IList<People> from some service (a data service, web service, etc.) but really need it to be a List<People>. You can just use this class to convert it for you.
I know, silly little example but just something for your code snippets that you can squirrel away for a rainy day.
public class ListToGenericListConverter<T>
{
    /// <summary>
    /// Converts a non-typed collection into a strongly typed collection. This will fail if
    /// the non-typed collection contains anything that cannot be cast to type T.
    /// </summary>
    /// <param name="listOfObjects">A <see cref="ICollection"/> of objects that will
    /// be converted to a strongly typed collection.</param>
    /// <returns>Always returns a valid collection - never returns null.</returns>
    public List<T> ConvertToGenericList(IList listOfObjects)
    {
        ArrayList notStronglyTypedList = new ArrayList(listOfObjects);
        // Note the cast must be to T[], not T: ArrayList.ToArray(Type) returns an array.
        return new List<T>(notStronglyTypedList.ToArray(typeof(T)) as T[]);
    }
}
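To see it in action, here's a minimal, self-contained sketch (the Demo class and its string list are made up for illustration, and the converter is repeated so the sample compiles on its own):

```csharp
using System.Collections;
using System.Collections.Generic;

// The converter from the post, repeated here so this sample stands alone.
public class ListToGenericListConverter<T>
{
    public List<T> ConvertToGenericList(IList listOfObjects)
    {
        // Copy into an ArrayList, then a typed array, then a List<T>.
        // Throws if any element can't be cast to T.
        ArrayList notStronglyTypedList = new ArrayList(listOfObjects);
        return new List<T>(notStronglyTypedList.ToArray(typeof(T)) as T[]);
    }
}

public class Demo
{
    public static List<string> Run()
    {
        // A weakly typed list, like one handed back from an old-style service.
        IList weaklyTyped = new ArrayList();
        weaklyTyped.Add("alpha");
        weaklyTyped.Add("beta");

        ListToGenericListConverter<string> converter =
            new ListToGenericListConverter<string>();

        // A real List<string> containing "alpha" and "beta".
        return converter.ConvertToGenericList(weaklyTyped);
    }
}
```

Here `Demo.Run` just stands in for wherever your weakly typed list comes back from; swap in your own service call.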
I thought I would start off the week by airing my dirty laundry, that laundry being one of the projects I'm Scrum Master and Architect on.
It's been 6 months of month-long iterations, and the project is currently on hold as we shifted resources around to a few "higher priority" ones. I'm looking back at the iterations and the burndown charts that we pulled out (via Scrum for Team System).
It's not pretty but it's real and it's interesting to see the progress (or sometimes lack of it) along the way. Here we go...
The sprint starts and we're not off to a bad start. In fact, it's one of the better sprints we had, even if it was the first. A little bit of churn the first few days, but that's normal. In fact it was a collision between myself and the PM, who decided to enter all the tasks again his way. After a few days we got that part straightened out and the rest of the sprint seemed to go pretty well. I was pretty happy so far.
Another sprint that didn't go too badly. The team (myself and one developer) had some momentum and we were moving along at a nice pace. However, technical debt had already built up and we ended the sprint with about 40 hours of work left. Still, overall I was pretty happy and we seemed to have hit our stride. We also picked up a new team member, so there was that integration that had to happen, but it worked well for the team.
Third sprint, 3 months into the project and we were moving along. Sometime around the middle of the sprint we were going like gangbusters and realized we were going to end early. That's the big dip around the 17-20th of November 2006. Once we got back on the Monday (the 20th) we decided to add more work to the sprint, otherwise we were going to be twiddling our thumbs for 2 weeks. It worked out well for this sprint as we finished without too much overhead (about 12 hours but some of that was BA or QA work which didn't directly affect the project).
Ugh. This is ugly, but bonus points for the first person to know why (other than those who were on the team). The main cause for the burndown going flatline here is Christmas. Yup, with people on holidays and not really wanting to end the sprint in early January right when everyone got back, we decided to push the sprint out a little to make up for the lost time over the Christmas season. In addition, during the first week of this sprint one of the main developers came down with the flu and was out of commission for almost a whole week. That crippled us. By the 22nd or 23rd of January we decided we had to drop a whack of scope from the sprint (which is the sudden drop at the end you see) and we would have to make it up the next sprint, somehow. Even with that adjustment we were still running about 225 hours over at the end of the sprint. Not a good place to be to start your next iteration.
Doesn't look good for the team that was doing so well. This sprint started off with a couple of hundred hours of deferred backlog items, then ballooned up with more technical debt and decomposition of new tasks. The team was larger now, but we obviously bit off more than we could chew. In fact I remember going in saying that, but I was shot down by certain PTB who said "it'll work itself out". Don't believe them! If your burndown charts are looking like this the first week in (and you can tell the first week in), you're certain not to succeed on the iteration. Hands down. I like to take a CSI approach to iterations: let the facts show what's going on, not people's opinions. If your burndown is burning up, you need to make adjustments and not "ride it out", because unless you have magical coding elves come in late at night (and I'm not talking about geeky coders who like to keep odd hours) you're not going to make it, and it's pretty obvious.
This sprint was just a death march: 800 hours of work, which included 80 hours for a task we outsourced to a different group (which really turned into 200 hours of work, as that person never gave any real estimate of how long it would take) and probably 200 hours of technical debt that had been building for 4 months. We actually got a lot done this sprint, about 200 hours worth of work, which isn't bad for 3 developers, 1 QA, and 1 BA, but it looks bad here.
This is how we ended the project until it went stealth. No, we didn't shut the project down because the progress was horrible. As I said, it slipped down the priority chain and we, as an organization, felt it was better to staff a project with 4-6 developers and bring it home rather than 2-3 developers keeping it on life support.
Hopefully this reality trip was fun for you and might seed a few things in your own iterations. Overall a few things to keep in mind on your own projects following Scrum:
- No matter what tool you use, try to get some kind of burndown out of the progress (even if it's being drawn on a whiteboard). It's invaluable to know early on in a sprint what is going on and where things are headed.
- If you finish a sprint with backlog items, make sure you're killing them off the first thing next sprint. Don't let them linger.
- Likewise on technical debt, consider it like real debt. The longer you don't pay it down, the more interest and less principle you end up paying and it will cost you several times over.
- If you're watching your sprint and by the end of the first week (say on a 2-3 week iteration) you're heading uphill, put some feelers out for why. Don't just admire the problem and hope it will go away. It might be okay to start a sprint not knowing what your tasks are (I don't agree with this, but reality sometimes doesn't afford you the choice), but if you're still adding tasks mid-sprint and you're already not looking like you're going to finish, don't. It doesn't take a genius to figure out that if you can't finish what you've got on your plate you shouldn't be going back to the buffet.
- Be the team. Your team is responsible for the progress of the sprint, not one individual so you succeed as a team and fail as a team. Don't let one individual dictate what is right or wrong in the world. As a team, if the sprint is going out of control, fix it. If a PM says "don't worry" when you can see the iceberg coming, don't sit back and wait for it to hit, steer clear because you know it's coming.
Our 11th episode of the best damn podcast in the greater Calgary area, Plumbers @ Work, is now online.
In this episode, we talk about Silverlight, the Calgary Code Camp, Silverlight, GoDaddy refunds, Silverlight, Rhino Mocks, Silverlight, the Entity Framework, Silverlight, and Halo 2. We finally wrap up the show by talking about Silverlight.
Magrathea? Maybe I should have called this post "09 F9 11 02 9D 74 E3 5B D8 41 56 C5 63 56 88 C0" just to get in the game.
An oldie but a goodie that I thought I would share with the class.
For user stories:
Stories are easiest to work with if they are independent. That is, we'd like them to not overlap in concept, and we'd like to be able to schedule and implement them in any order.
A good story is negotiable. It is not an explicit contract for features; rather, details will be co-created by the customer and programmer during development. A good story captures the essence, not the details. Over time, the story may acquire notes, test ideas, and so on, but we don't need these to prioritize or schedule stories.
A story needs to be valuable. We don't care about value to just anybody; it needs to be valuable to the customer. Developers may have (legitimate) concerns, but these should be framed in a way that makes the customer perceive them as important.
A good story can be estimated. We don't need an exact estimate, but just enough to help the customer rank and schedule the story's implementation. Being estimable is partly a function of being negotiated, as it's hard to estimate a story we don't understand. It is also a function of size: bigger stories are harder to estimate. Finally, it's a function of the team: what's easy to estimate will vary depending on the team's experience.
Good stories tend to be small. Stories typically represent at most a few person-weeks worth of work. (Some teams restrict them to a few person-days of work.) Above this size, it seems to be too hard to know what's in the story's scope. Saying "it would take me more than a month" often implicitly adds "as I don't understand what-all it would entail." Smaller stories tend to get more accurate estimates.
A good story is testable. Writing a story card carries an implicit promise: "I understand what I want well enough that I could write a test for it." "Testability" has always been a characteristic of good requirements; actually writing the tests early helps us know whether this goal is met. If a customer doesn't know how to test something, this may indicate that the story isn't clear enough, or that it doesn't reflect something valuable to them, or that the customer just needs help in testing.
A task needs to be specific enough that everyone can understand what's involved in it. This helps keep other tasks from overlapping, and helps people understand whether the tasks add up to the full story.
The key measure is, "can we mark it as done?" The team needs to agree on what that means, but it should include "does what it is intended to," "tests are included," and "the code has been refactored."
The task owner should expect to be able to achieve a task. Anybody can ask for help whenever they need it; this certainly includes ensuring that task owners are up to the job.
Every task should be relevant, contributing to the story at hand. Stories are broken into tasks for the benefit of developers, but a customer should still be able to expect that every task can be explained and justified.
A task should be time-boxed: limited to a specific duration. This doesn't need to be a formal estimate in hours or days, but there should be an expectation so people know when they should seek help. If a task is harder than expected, the team needs to know it must split the task, change players, or do something to help the task (and story) get done.
From Bill Wake, XP123.com
We ran into a problem using NHibernate to persist our domain. Here's an example of a domain object: an Order class with a collection of OrderLine objects to represent the lines in each order placed in the system. We want to be able to check if an order exists or not, so we use an anonymous delegate as a predicate on the OrderLine collection:
class Order
{
    private IList<OrderLine> Lines;

    public Order()
    {
        Lines = new List<OrderLine>();
    }

    public bool DoesOrderExist(string OrderNumber)
    {
        return ((List<OrderLine>)Lines).Exists(
            delegate(OrderLine line)
            {
                if (line.OrderNumber == OrderNumber)
                    return true;
                return false;
            });
    }
}

class OrderLine
{
    public string OrderNumber;
    public int Quantity;
    public string Item;
    public double Cost;
}
This is all fine and dandy, but when we reconstitute the object from the back-end data store using NHibernate, it blows its head off with an exception saying it can't cast the collection to a list. Internally NHibernate creates a PersistentBag object (which implements IList) but can't be directly cast to a List, so we can't use our predicate.
There's a quick fix we came up with which is to modify the DoesOrderExist method to look like this instead:
public bool DoesOrderExist(string OrderNumber)
{
    List<OrderLine> list = new List<OrderLine>(Lines);
    return list.Exists(
        delegate(OrderLine line)
        {
            if (line.OrderNumber == OrderNumber)
                return true;
            return false;
        });
}
This feels dirty and smells like a hack to me. Rebuilding the list from the original one every time we want to find something? Sure, we could do this and cache it (so we're not recreating it every time) but that just seems ugly.
Any other ideas about how to keep our predicates intact when reconstituting collections?
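One alternative worth considering (just a sketch; the AddLine helper is mine for illustration, and I'm assuming the mapped collection honours the IList<T> contract): skip the cast entirely and iterate the interface, which should behave the same whether Lines is a real List<OrderLine> or NHibernate's persistent bag.

```csharp
using System.Collections.Generic;

class OrderLine
{
    public string OrderNumber;
    public int Quantity;
    public string Item;
    public double Cost;
}

class Order
{
    // NHibernate may swap this for its own persistent collection at runtime;
    // we only lean on the IList<T> contract, never on List<T> itself.
    private IList<OrderLine> Lines = new List<OrderLine>();

    // Hypothetical helper so the sketch is exercisable.
    public void AddLine(OrderLine line)
    {
        Lines.Add(line);
    }

    // Iterating the interface avoids the invalid cast when the
    // collection has been reconstituted by NHibernate.
    public bool DoesOrderExist(string orderNumber)
    {
        foreach (OrderLine line in Lines)
        {
            if (line.OrderNumber == orderNumber)
                return true;
        }
        return false;
    }
}
```

You lose the predicate syntax, but you also lose the copy and the cast; the loop is what Exists does under the hood anyway.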
I got an email from GoDaddy today, where most of my domains are hosted, about a reduction of ICANN fees. I must say that GoDaddy is absolutely brilliant in crediting me the overpayment I've made as a result of the reduction of the fees:
Dear Bil Simser,
The Internet Corporation for Assigned Names and Numbers (ICANN®) recently agreed to reduce their Registrar Transaction Fee from $.25 to $.22. What does this mean for you?
Good news. You have been credited $.03/yr for each domain name you registered or renewed dating back to July 1, 2006* -- $.15 has been placed into your Go Daddy® account with this customer number: 9999999.
Your in-store credit will be applied to your purchases at GoDaddy.com® until it's gone or for up to 12 months, whichever comes sooner. If you have any questions, please contact a customer service representative at 480-505-8877.
As always, thank you for being a Go Daddy customer.
CEO and Founder
Wow. 15 cents for all my domains. What should I buy with this windfall first? A new laptop? A 42" LCD TV? I understand that it's the law, and that without sending this note out they would probably have me complaining about them stealing my $0.15, but it's akin to a bank sending you a cheque for $0.02 interest, and it's somewhat funny (even if others think it's not).