Archives / 2004 / March
  • Customer Experience and Customer Expectations

    I recently had a conversation with my cell phone carrier that went something like this:

    Me: Hello, my phone sometimes won't receive calls until I power it off and then on again even though I am in a location where I receive calls all the time.

    Rep: I see. How many times per day do you turn your phone off and on again?

    Me: Unless I have to, I never turn my phone off.

    Rep: I think you should turn your phone off and then on again at least 3-4 times per day.

    Me: (incredulous) 3-4 times per day! Why?

    Rep: The longer your phone stays within 1 cell tower the lower your signal strength gets. When you power your device on and off it re-registers you on the network and increases your signal strength.

    Me: (still incredulous) Shouldn't the system take care of that? Why should staying near 1 tower cause me to not receive calls?

    Rep: You should do it 3-4 times per day and that should solve your problems.

    Me: I understand what you're saying, but I don't think this is a reasonable solution.

    Rep: (exasperated) Do you leave your computer on all the time or do you shut it down?

    Me: Actually I leave it on 24 hours a day.

    Rep: But don't you find that it slows down the longer you leave it on? It's kind of like that.

    Me: (several seconds of befuddled silence trying to decide how deep I should get into this)

    Me: Yes, I see what you are trying to say, but do you think that is the way things should be?

    Rep: Is there anything else I can help you with?

    Me: No thanks.

    Rep: Thanks for using T-Mobile


    So rather than commiserating with me about this unfortunate “feature” and telling me why it might happen (I'm guessing it allows more phones per tower or something like that), she tried to tell me some lame story about another industry that has had a poor customer experience, as if that justified what she was telling me. I expect my phone to work without “rebooting” it every 2 hours. My expectation is that it works the same as my land line. I know that technologically they don't have anything in common except a speaker and a microphone, but that is my expectation. And normally my expectation is met, which is why I'm frustrated by this issue.


  • Multiprojecting

    Johanna Rothman has written a nice article on the pitfalls of multiprojecting. While others such as Gerald Weinberg, Jim Highsmith, and Mary Poppendieck have correctly pointed out similar issues, what is missing from these discussions is why managers try to do this in the first place, and how project staff can “manage upwards” to correct, or at least lessen, the impact of this poor practice.

    The primary reason I've seen for managers inflicting multiprojecting/multitasking is fear that employees won't have enough to do, because the managers know that the resources needed to complete any one project are never completely ready. In one extreme case I know of, a team of about 10 usually had 12 simultaneous projects assigned. Management specifically stated they needed to assign so many projects because history had shown that the project team was not always able to get answers from project stakeholders when required. Management wanted to make sure the team always had something to do (i.e., wasn't wasting time waiting). In this case management correctly identified the problem (stakeholders not being available to answer questions), but rather than fix it by truly prioritizing projects, establishing project communities through chartering, or insisting that stakeholders stay involved or lose their project's place in the schedule, they heaped more projects on the already overloaded team.

    Another common reason is that managers think they are mitigating risk by having one team work on multiple projects simultaneously. The reasoning goes like this: “if one of these projects fails, at least some work got done on the others.” A COO I worked for once told me this with a straight face! In that case the team naturally self-organized into smaller teams that each focused on one project at a time. This approach only worked because the COO didn't have any day-to-day oversight of the tasks being worked on, which allowed the team to cope the best way they knew how. If the project manager had been assigning tasks rather than the team members self-selecting them, the story would have been very different.

    Even with these two tactics for coping, the real solution to multiprojecting is educating your manager. Show her articles on the topic. Try to convince her to let you work on a single project for a while, and to deal with the underlying issues at a higher level (systems thinking) rather than going back to the band-aid.

    Let me know your successes and your failures. Let's learn about this together rather than just commiserating together.

  • When Tests Aren't Tests

    There has been a great discussion on the XP and XP-Testing mailing lists about what “tests” are and how Exploratory Testing relates to XP/Agile development.

    According to James Bach, the mission of a tester is “to learn enough important information about the product so that our clients (developers and managers, mainly) can make informed decisions about it.”

    Does this sound anything like the purpose of what XP calls “tests”?

    My assertion is that “tests” in the XP sense (i.e. unit/programmer & acceptance/customer tests) are really about design and communication, not testing. Others are starting to see this as well.

    Unit tests (or TDD) are for fleshing out micro-design during initial code creation. They then fall into the role of supporting design iterations through the refactoring safety net. Finally, they show the intent of how the code is to be used (communication).
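
    To make this concrete, here is a minimal, hypothetical sketch (the Stack class and test names are invented for illustration, not from any real project) of what a test-first unit test looks like when it plays all three roles: driving the design, guarding refactoring, and documenting intent.

    ```python
    # Hypothetical sketch: a Stack whose design was driven by the tests
    # below. The test names read as a specification of intended behavior,
    # and the assertions show callers exactly how the class is meant to
    # be used.

    class Stack:
        """Minimal implementation, just enough to satisfy the tests."""

        def __init__(self):
            self._items = []

        def push(self, item):
            self._items.append(item)

        def pop(self):
            return self._items.pop()

        @property
        def empty(self):
            return not self._items

    def test_new_stack_is_empty():
        assert Stack().empty

    def test_pop_returns_last_pushed_item():
        stack = Stack()
        stack.push("a")
        stack.push("b")
        assert stack.pop() == "b"

    # Run the tests; they double as executable documentation and as a
    # safety net for later refactoring of the internal representation.
    test_new_stack_is_empty()
    test_pop_returns_last_pushed_item()
    ```

    Notice that a tester reading this would not call it thorough testing (no error cases, no boundary probing); its value is that the design and the intended usage are pinned down.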

    Acceptance tests are for communicating to the developers what needs to be built and communicating back to the customers how much has been correctly built.
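
    A hypothetical sketch of that two-way communication (the business rule, the numbers, and the function name here are all invented for illustration): the customer states the rule in their own vocabulary, and a passing assertion reports back that this much of the feature has been built correctly.

    ```python
    # Hypothetical acceptance/customer test. The rule is assumed for
    # this example: members get 10% off orders of $100 or more.

    def order_total(subtotal, member):
        """Sketch of the feature under test, built to the rule above."""
        if member and subtotal >= 100:
            return round(subtotal * 0.90, 2)
        return subtotal

    # "Given a member with a $120 order, the total is $108."
    assert order_total(120, member=True) == 108.00

    # "Non-members pay full price."
    assert order_total(120, member=False) == 120.00

    # "Members under the $100 threshold pay full price."
    assert order_total(99, member=True) == 99
    ```

    Each assertion maps to a sentence the customer could have written; each green check tells the customer one more piece of the story is done.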

    One result of this viewpoint is that developers are responsible not only for creating XP unit/programmer tests but also for creating XP acceptance/customer tests. Testers will have significant input into these tests (especially the customer tests), but their job is different from just automating acceptance test cases. In fact, most testers look at customer tests and say, “that's a good start; where are the rest of the tests?”

    Has your concept of programmer/customer tests changed since starting XP? Have you talked to testers about what they think of XP “tests”? What did you find out?

  • Hosting Company with Customer Service

    If any of you out there want a recommendation for a hosting company that actually takes care of its customers, check out CyberGate Web Hosting. I had the opportunity to put them to the test last week and they came through with flying colors. No .NET hosting (it's all Red Hat), but very inexpensive and great uptimes.

  • NAnt Vault Tasks Updated

    Jonathan Cogley asked and posted about needing an update to the NAnt Vault tasks. I can't say everything is perfect, but I have updated the code so that it compiles. I don't have everything I need to fully test the changes (primarily the updates to NAnt 0.8.4 and Vault 2.0), and I had to remove the Label task since it isn't obvious how to implement it with the new Vault APIs.

    If you haven't already checked it out and you are using Vault and NAnt, take a look.